Poisson Point Processes

Matt Lukac (mlukac@uoregon.edu)
December 27, 2018

Recall that a realization of the basic noise used for Gaussian processes looked like Figure 1. Now the arrows are either muted or (rarely) point up; see Figure 2.

Figure 1: A realization of the basic noise used to construct a Gaussian process.
Figure 2: A realization of the basic noise used to construct a Poisson process.

1 Motivation

Suppose in some space X we lay down a large number of LED lights, each with its own battery, with density given by a σ-finite measure μ. We do this in such a way that, for each region A ⊆ X, we put down about Mμ(A) lights in that region, where M is some large number. Independently, we turn on each light with probability M⁻¹ and leave it off otherwise.

We would like to answer the following question: how many lights in A are on? To that end, let N(A) denote the number of lights on in A and compute

\[
\mathbb{E}[N(A)] = \mathbb{E}\Bigl[\sum_{\text{lights in } A} \mathbb{1}\{\text{light on}\}\Bigr] = \sum_{\text{lights in } A} \mathbb{P}\{\text{light is on}\} = M\mu(A)\cdot\frac{1}{M} = \mu(A). \tag{1}
\]

Thus μ gives the expected density for the set of lights that are on in A. By construction, we know N(A) ∼ Binom(Mμ(A), M⁻¹), and hence the distribution of N(A) is approximately Pois(μ(A)). To see this, put L = Mμ(A) and observe,

\[
\begin{aligned}
\mathbb{P}\{N(A)=n\} &= \binom{L}{n}\Bigl(\frac{1}{M}\Bigr)^{\!n}\Bigl(1-\frac{1}{M}\Bigr)^{\!L-n} &&\text{(2)}\\
&= \frac{L(L-1)\cdots(L-n+1)}{n!\,M^{n}}\Bigl(1-\frac{1}{M}\Bigr)^{\!L-n} &&\text{(3)}\\
&= \frac{1}{n!}\Bigl(\frac{L}{M}\Bigr)^{\!n}\exp\Bigl(-\frac{L}{M}\Bigr) + \mathcal{O}\Bigl(\frac{1}{M}\Bigr) &&\text{(4)}\\
&\longrightarrow \frac{\mu(A)^{n}}{n!}\,e^{-\mu(A)} \quad\text{as } M\to\infty. &&\text{(5)}
\end{aligned}
\]
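Before formalizing this, here is a quick numerical sanity check of the limit (2)–(5), assuming numpy and scipy are available; the value μ(A) = 3 and the truncation at n < 50 are our own illustrative choices.

```python
# Compare the Binom(M mu(A), 1/M) pmf to its Pois(mu(A)) limit as M grows.
import numpy as np
from scipy import stats

mu_A = 3.0
n = np.arange(50)  # both pmfs are negligible beyond this range for mu(A) = 3
for M in (10, 100, 10_000):
    L = int(M * mu_A)                 # number of lights laid down in A
    binom = stats.binom(L, 1.0 / M)   # N(A) ~ Binom(M mu(A), 1/M)
    pois = stats.poisson(mu_A)        # the claimed limit
    tv = 0.5 * np.abs(binom.pmf(n) - pois.pmf(n)).sum()  # total variation distance
    print(f"M = {M:>6}: TV distance = {tv:.5f}")
# The distance shrinks like O(1/M), matching the error term in (4).
```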

This motivates the following definition.

Definition. Let μ be a σ-finite measure on some space X. A Poisson Point Process (PPP) on X with mean measure (or intensity) μ is a random point measure N such that:

  1. For any Borel set A ⊆ X, we have N(A) ≥ 0 and N(A) ∼ Pois(μ(A)), i.e.

\[
\mathbb{P}\{N(A)=n\} = \frac{\mu(A)^{n}}{n!}\,e^{-\mu(A)}. \tag{6}
\]
  2. If A and B are disjoint Borel subsets of X, then N(A) and N(B) are independent random variables.

Recall a point measure is just a measure whose mass is atomic. That is, if {x_i} ⊆ X then a point measure is of the form

\[
\mu = \sum_{i} a_{i}\,\delta_{x_{i}}, \tag{7}
\]

where δ_x is the unit point mass at x.
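The definition also yields a direct simulation recipe on a bounded region: draw the Poisson count N(A) ∼ Pois(μ(A)), then scatter that many i.i.d. points according to the normalized mean measure μ/μ(A). Below is a minimal sketch, assuming numpy, for a homogeneous PPP on a square; the intensity 50 and the helper name sample_ppp_box are our own choices.

```python
# Simulate a homogeneous PPP(c * lambda) on the box [0, side]^2 directly
# from the definition: Poisson count first, then i.i.d. uniform locations.
import numpy as np

rng = np.random.default_rng(0)

def sample_ppp_box(intensity, side=1.0):
    """Return the atoms x_i of one realization N = sum_i delta_{x_i}."""
    n = rng.poisson(intensity * side**2)        # N(box) ~ Pois(mu(box))
    return rng.uniform(0.0, side, size=(n, 2))  # uniform, since mu is a multiple of lambda

pts = sample_ppp_box(50.0)
left = (pts[:, 0] < 0.5).sum()    # count in the left half  ~ Pois(25)
right = (pts[:, 0] >= 0.5).sum()  # count in the right half ~ Pois(25), independent of left
print(len(pts), left, right)
```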

2 PPP Properties

It is sometimes useful to think of a PPP as a random collection of points. With this in mind, we list some important properties of N ∼ PPP(μ) on some space X:

  • Enumeration: It is always possible to enumerate the points of N, i.e. there is a random collection of points {x_i} ⊆ X such that

\[
N = \sum_{i} \delta_{x_{i}}. \tag{8}
\]
  • Mean measure: If f: X → ℝ then

\[
\mathbb{E}\Bigl[\int f(x)\,dN(x)\Bigr] = \int f(x)\,d\mu(x). \tag{9}
\]

Note: This is a more general property of point processes, as any point process has a mean measure. To see that (9) holds without needing N to be a Poisson point process, let f be a simple function, i.e.

\[
f(x) = \sum_{i=1}^{n} f_{i}\,\mathbb{1}_{A_{i}}(x), \qquad\text{where } X = \bigcup_{i} A_{i},\quad A_{i}\cap A_{j} = \emptyset \text{ for } i\neq j. \tag{10}
\]

    Then we compute

\[
\mathbb{E}\Bigl[\int_{X} f(x)\,dN(x)\Bigr] = \mathbb{E}\Bigl[\sum_{i} f_{i}\,N(A_{i})\Bigr] = \sum_{i} f_{i}\,\mathbb{E}[N(A_{i})] = \sum_{i} f_{i}\,\mu(A_{i}) = \int_{X} f(x)\,d\mu(x). \tag{11}
\]

This can then be extended to arbitrary measurable functions through the standard limiting procedure. (A numerical check of (9) appears in the first sketch following this list.)

  • Thinning: Independently discard each point of N, discarding a point at x ∈ X with probability 1 − p(x). The result is a PPP(ν), where

\[
\nu(A) = \int_{A} p(x)\,d\mu(x). \tag{12}
\]

In other words, if N = ∑_i δ_{x_i} and, independently for each i, A_i = 1 with probability p(x_i) and A_i = 0 otherwise, then

\[
\tilde{N} = \sum_{i} A_{i}\,\delta_{x_{i}} \sim \mathrm{PPP}(\nu). \tag{13}
\]
  • Additivity: If N₁ ∼ PPP(μ₁) and N₂ ∼ PPP(μ₂) are independent on X, then N₁ + N₂ ∼ PPP(μ₁ + μ₂); both thinning and additivity are illustrated in the second sketch after this list. In particular, if X ∼ Pois(λ) and Y ∼ Pois(ν) are independent, then

\[
\mathbb{P}\{X+Y=n\} = \frac{(\lambda+\nu)^{n}}{n!}\,e^{-(\lambda+\nu)}. \tag{14}
\]
  • Labeling: For each point of a PPP, attach an independent label from a space Y drawn according to some probability distribution ν. That is, let N = ∑_i δ_{x_i} with {x_i} ⊆ X, and let G₁, G₂, … ∈ Y be i.i.d. with distribution ν. Then

\[
\bar{N} := \sum_{i} \delta_{(x_{i},\,G_{i})} \sim \mathrm{PPP}(\mu\times\nu) \tag{15}
\]

    on X×Y.
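The following is the numerical check of the mean measure identity (9) promised above: a minimal Monte Carlo sketch, assuming numpy, for a homogeneous PPP(c·λ) on [0, 1]; the intensity c = 10 and the test function f(x) = x² are our own illustrative choices.

```python
# Monte Carlo check of the mean measure identity (9) for N ~ PPP(c * lambda) on [0,1].
import numpy as np

rng = np.random.default_rng(1)
c, trials = 10.0, 100_000

def f(x):
    return x**2

totals = np.empty(trials)
for k in range(trials):
    pts = rng.uniform(0.0, 1.0, rng.poisson(c))  # atoms of one realization of N
    totals[k] = f(pts).sum()                     # int f dN = sum over atoms f(x_i)

print(totals.mean())  # ~ int_0^1 f(x) * c dx = c/3 = 3.333...
```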
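The second sketch illustrates thinning (12)–(13) and additivity, again for homogeneous PPPs on [0, 1] and again assuming numpy; the retention probability p(x) = x and all intensities are arbitrary choices.

```python
# Thinning and additivity for homogeneous PPPs on [0,1].
import numpy as np

rng = np.random.default_rng(2)
c = 1000.0

# Thinning: keep a point at x with probability p(x) = x.
pts = rng.uniform(0.0, 1.0, rng.poisson(c))  # N ~ PPP(c * lambda)
kept = pts[rng.random(pts.size) < pts]       # one independent coin flip per point
# By (12), nu([0,1]) = int_0^1 c x dx = c/2, so kept.size is one Pois(500) draw.
print(kept.size)

# Additivity: superposing independent PPPs adds their intensities, as in (14).
n1 = rng.poisson(300.0)  # N1([0,1]) for N1 ~ PPP(300 * lambda)
n2 = rng.poisson(200.0)  # N2([0,1]) for N2 ~ PPP(200 * lambda)
print(n1 + n2)           # one draw of (N1 + N2)([0,1]) ~ Pois(500)
```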

3 Examples

Henceforth, let λ denote Lebesgue measure.

Example. Let N ∼ PPP(λ) on [0, ∞). As before, we think of the points of N as ‘lights’, here positioned on the positive reals.

  1. How far until the first light?

  2. Suppose each light is independently either red or green with probability 1/2. How far until the first red light?

Solution.

Let N = ∑_i δ_{x_i} and put T = min{x_i}. Using (6) we compute

\[
\mathbb{P}\{T>t\} = \mathbb{P}\{N([0,t])=0\} = e^{-t}. \tag{16}
\]

This solves part 1. For colorblind readers, it also solves part 2.

Now let {x̃_i} ⊆ {x_i} be the (random) set of red lights and define Ñ = ∑_i δ_{x̃_i}, the point process of the red lights from N. By the thinning property (12), Ñ ∼ PPP(λ/2). Similarly define T̃ = min{x̃_i} and observe

\[
\mathbb{P}\{\tilde{T}>t\} = \mathbb{P}\{\tilde{N}([0,t])=0\} = e^{-t/2}, \tag{17}
\]

thus part 2 is solved. ∎
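Both answers are easy to confirm by simulation: sample the lights on a long window, thin them into red lights, and average the first arrivals. A sketch assuming numpy; the window length 50 is our truncation, chosen so an empty window is vanishingly unlikely.

```python
# Estimate E[T] and E[T~] for the example above by direct simulation.
import numpy as np

rng = np.random.default_rng(3)
trials, horizon = 100_000, 50.0

T, T_red = [], []
for _ in range(trials):
    pts = np.sort(rng.uniform(0.0, horizon, rng.poisson(horizon)))  # PPP(lambda) on [0, horizon]
    red = pts[rng.random(pts.size) < 0.5]  # thinning: each light is red w.p. 1/2
    if red.size:                           # no red light at all only w.p. e^{-25}
        T.append(pts[0])
        T_red.append(red[0])

print(np.mean(T), np.mean(T_red))  # ~ 1 and ~ 2, the Exp(1) and Exp(1/2) means
```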

Example.

Rain falls for 10 minutes on a large patio at a rate of ν = 5000 drops per minute per square meter. Each drop splatters to a random radius R having an Exponential distribution with mean 1 cm, independently of the other drops. Assume the drops are 1 mm thick and that the set of drop-center locations is a PPP.

  1. What are the mean and variance of the total amount of water falling on a square with area 1 m²?

  2. A very small ant is running around the patio; see Figure 3. What is the chance the ant gets hit?

Solution.

Let N = ∑_i δ_{(x_i, y_i)}, where (x_i, y_i) is the center of the ith drop. Take N ∼ PPP(νλ), treating ν for simplicity as the total number of drops per square meter over the storm, and let M denote the number of drops in [0,1]², so that M = N([0,1]²) ∼ Pois(ν). Then the total volume V is

\[
V = \sum_{i=1}^{M} \frac{\pi}{10^{3}}\,R_{i}^{2}, \tag{18}
\]

where R_i is the radius of the ith drop, so each term is the splatter area πR_i² times the 1 mm = 1/10³ m thickness. Note this is a sum of random variables in which the number of terms is itself random. Thus we use Wald’s equation (28) to obtain

\[
\mathbb{E}[V] = \frac{\pi}{10^{3}}\,\mathbb{E}[M]\,\mathbb{E}[R_{1}^{2}] = \frac{\pi}{10^{3}}\cdot\nu\cdot\frac{2}{100^{2}} = \frac{2\pi}{10^{7}}\,\nu. \tag{19}
\]

The second step in (19) used the fact that an exponentially distributed random variable X with mean β⁻¹ (here β = 100, since R has mean 1/100 m) has moments given by

\[
\mathbb{E}[X^{n}] = \frac{n!}{\beta^{n}}. \tag{20}
\]

This is proved by an iterated application of integration by parts, and the result gives rise to

\[
\mathrm{var}[X^{n}] = \mathbb{E}[X^{2n}] - \mathbb{E}[X^{n}]^{2} = \frac{(2n)! - (n!)^{2}}{\beta^{2n}}. \tag{21}
\]

The n=2 case will turn out to be useful when computing the variance of V.
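Formulas (20) and (21) are easy to spot-check by Monte Carlo. A small sketch assuming numpy; note that numpy’s exponential sampler is parametrized by the mean β⁻¹, not the rate β.

```python
# Check E[X^n] = n!/beta^n for X ~ Exp(rate beta), with beta = 100 as in the example.
import math
import numpy as np

rng = np.random.default_rng(4)
beta = 100.0
x = rng.exponential(1.0 / beta, size=1_000_000)  # mean 1/beta = 0.01 m

for n in (1, 2, 4):
    print(n, (x**n).mean(), math.factorial(n) / beta**n)  # Monte Carlo vs. exact
```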

Indeed, to compute the variance we use the variance decomposition formula (the law of total variance). Observe,

\[
\begin{aligned}
\mathrm{var}[V] &= \mathbb{E}\bigl[\mathrm{var}[V\mid M]\bigr] + \mathrm{var}\bigl[\mathbb{E}[V\mid M]\bigr] &&\text{(22)}\\
&= \mathbb{E}\Bigl[M\Bigl(\frac{\pi}{10^{3}}\Bigr)^{\!2}\mathrm{var}(R^{2})\Bigr] + \mathrm{var}\Bigl[M\,\frac{\pi}{10^{3}}\,\mathbb{E}[R^{2}]\Bigr] &&\text{(23)}\\
&= \nu\Bigl(\frac{\pi}{10^{3}}\Bigr)^{\!2}\frac{20}{100^{4}} + \nu\Bigl(\frac{\pi}{10^{3}}\Bigr)^{\!2}\Bigl(\frac{2}{100^{2}}\Bigr)^{\!2} &&\text{(24)}\\
&= \Bigl(\frac{\pi}{10^{3}}\Bigr)^{\!2}\frac{24}{100^{4}}\,\nu. &&\text{(25)}
\end{aligned}
\]

This solves part 1.
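The mean and variance just computed are straightforward to confirm by simulation. A sketch assuming numpy, with ν = 5000 as given; the trial count is an arbitrary choice.

```python
# Simulate V = sum_{i=1}^M (pi/10^3) R_i^2 with M ~ Pois(nu), R_i ~ Exp(mean 0.01 m).
import numpy as np

rng = np.random.default_rng(5)
nu, trials = 5000.0, 20_000

V = np.empty(trials)
for k in range(trials):
    r = rng.exponential(0.01, rng.poisson(nu))  # radii of the M drops in [0,1]^2
    V[k] = (np.pi / 1e3) * (r**2).sum()         # (18): 1 mm thickness = 1/10^3 m

print(V.mean())  # ~ 2 pi nu / 10^7 = 3.14e-3 m^3
print(V.var())   # ~ (pi/10^3)^2 * (24/100^4) * nu = 1.18e-8
```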

Now, part 2 can be solved by way of the labeling property. Here, we use the radius R_i of the ith drop to label the point (x_i, y_i). Recall the density of an Exponential random variable with mean 0.01 is 100·e^{−100r} dr. So we define a measure μ on X := ℝ² × [0, ∞) by

\[
\mu(A) = \int_{A} 100\,\nu\,e^{-100r}\,dx\,dy\,dr. \tag{26}
\]

We think of X as the (closed) upper half-space of ℝ³ whose third coordinate is a realization of R. By the labeling property (15), N̄ := ∑_i δ_{(x_i, y_i, R_i)} ∼ PPP(μ) on X. For the ant to remain dry, every drop of radius r must land outside the circle of radius r centered at the ant. Viewed in the space X, the offending drops form the cone A with its tip at the ant, whose horizontal cross-section at height r is a disc of radius r; the ant stays dry precisely when N̄(A) = 0. From this we compute

\[
\mathbb{P}\{\text{ant is dry}\} = \mathbb{P}\{\bar{N}(A)=0\} = e^{-\mu(A)} = \exp\Bigl(-100\pi\nu\int_{0}^{\infty} r^{2}e^{-100r}\,dr\Bigr) = \exp\Bigl(-\frac{\pi\nu}{5000}\Bigr). \tag{27}
\]

Plugging in the given value of ν yields ℙ{ant is dry} = e^{−π} ≈ 0.0432. The ant had better grab an umbrella! ∎
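The cone computation can also be checked by Monte Carlo: the ant gets wet precisely when some drop center lands within its own splatter radius of the ant. A sketch assuming numpy; the truncation radius 0.2 m is our choice, harmless since ℙ{R > 0.2} = e⁻²⁰ is negligible.

```python
# Monte Carlo estimate of P(ant stays dry), with the ant placed at the origin.
import numpy as np

rng = np.random.default_rng(6)
nu, trials, rad = 5000.0, 50_000, 0.2

dry = 0
for _ in range(trials):
    m = rng.poisson(nu * np.pi * rad**2)  # drop centers within distance rad of the ant
    d = rad * np.sqrt(rng.random(m))      # their distances (centers uniform in the disc)
    r = rng.exponential(0.01, m)          # their splatter radii
    dry += not np.any(r > d)              # dry iff no splatter reaches the origin

print(dry / trials, np.exp(-np.pi))  # both ~ 0.0432
```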

3.1 Wald’s Equation

The following statement of Wald’s equation is taken from Wikipedia, where a proof can also be found.

Theorem (Wald’s Equation). Let (X_n)_{n∈ℕ} be a sequence of real-valued, independent and identically distributed random variables, and let N be a nonnegative integer-valued random variable that is independent of the sequence (X_n)_{n∈ℕ}. Suppose that N and the X_n have finite expectations. Then

\[
\mathbb{E}\Bigl[\sum_{i=1}^{N} X_{i}\Bigr] = \mathbb{E}[N]\,\mathbb{E}[X_{1}]. \tag{28}
\]
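A quick numerical illustration of (28), assuming numpy; taking N ∼ Pois(7) independent of X_i ∼ Exp(mean 2) is an arbitrary choice.

```python
# Wald's equation: E[sum_{i=1}^N X_i] = E[N] E[X_1].
import numpy as np

rng = np.random.default_rng(7)
sums = [rng.exponential(2.0, rng.poisson(7.0)).sum() for _ in range(200_000)]
print(np.mean(sums))  # ~ E[N] * E[X_1] = 7 * 2 = 14
```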
Figure 3: A realization of the ant from the rain example. Looks like he had an umbrella after all.