Table of Contents
RANDOM FIELDS
AND THEIR GEOMETRY
Robert J. Adler
Faculty of Industrial Engineering and Management
Technion – Israel Institute of Technology
Haifa, Israel
e-mail: robert@ieadler.technion.ac.il
Jonathan E. Taylor
Department of Statistics
Stanford University
Stanford, U.S.A.
e-mail: jtaylor@stat.stanford.edu
December 24, 2003
Contents
1 Random fields 1
1.1 Random fields and excursion sets . . . . . . . . . . . . . . . 1
1.2 Gaussian fields . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 The Brownian family of processes . . . . . . . . . . . . . . . 6
1.4 Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4.1 Stochastic integration . . . . . . . . . . . . . . . . . 13
1.4.2 Moving averages . . . . . . . . . . . . . . . . . . . . 16
1.4.3 Spectral representations on RN . . . . . . . . . . . . 19
1.4.4 Spectral moments . . . . . . . . . . . . . . . . . . . 24
1.4.5 Constant variance . . . . . . . . . . . . . . . . . . . 27
1.4.6 Isotropy . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.4.7 Stationarity over groups . . . . . . . . . . . . . . . . 32
1.5 Non-Gaussian fields . . . . . . . . . . . . . . . . . . . . . . 34
2 Gaussian fields 39
2.1 Boundedness and continuity . . . . . . . . . . . . . . . . . . 40
2.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.2.1 Fields on RN . . . . . . . . . . . . . . . . . . . . . . 49
2.2.2 Differentiability on RN . . . . . . . . . . . . . . . . . 52
2.2.3 Generalised fields . . . . . . . . . . . . . . . . . . . . 54
2.2.4 Set indexed processes . . . . . . . . . . . . . . . . . 61
2.2.5 Non-Gaussian processes . . . . . . . . . . . . . . . . 66
2.3 Borell-TIS inequality . . . . . . . . . . . . . . . . . . . . . . 67
2.4 Comparison inequalities . . . . . . . . . . . . . . . . . . . . 74
2.5 Orthogonal expansions . . . . . . . . . . . . . . . . . . . . . 77
2.5.1 Karhunen-Loève expansion . . . . . . . . . . . . . . 84
2.6 Majorising measures . . . . . . . . . . . . . . . . . . . . . . 87
3 Geometry 95
3.1 Excursion sets. . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.2 Basic integral geometry . . . . . . . . . . . . . . . . . . . . 97
3.3 Excursion sets again . . . . . . . . . . . . . . . . . . . . . . 103
3.4 Intrinsic volumes . . . . . . . . . . . . . . . . . . . . . . . . 112
3.5 Manifolds and tensors . . . . . . . . . . . . . . . . . . . . . 117
3.5.1 Manifolds . . . . . . . . . . . . . . . . . . . . . . . . 118
3.5.2 Tensors and exterior algebras . . . . . . . . . . . . . 122
3.5.3 Tensor bundles and differential forms. . . . . . . . . 128
3.6 Riemannian manifolds . . . . . . . . . . . . . . . . . . . . . 129
3.6.1 Riemannian metrics . . . . . . . . . . . . . . . . . . 129
3.6.2 Integration of differential forms . . . . . . . . . . . . 133
3.6.3 Curvature tensors and second fundamental forms . . 138
3.6.4 A Euclidean example. . . . . . . . . . . . . . . . . . 142
3.7 Piecewise smooth manifolds . . . . . . . . . . . . . . . . . . 146
3.7.1 Piecewise smooth spaces . . . . . . . . . . . . . . . . 148
3.7.2 Piecewise smooth submanifolds . . . . . . . . . . . . 154
3.8 Intrinsic volumes again . . . . . . . . . . . . . . . . . . . . . 157
3.9 Critical Point Theory . . . . . . . . . . . . . . . . . . . . . 160
3.9.1 Morse theory for piecewise smooth manifolds . . . . 161
3.9.2 The Euclidean case . . . . . . . . . . . . . . . . . . . 166
4 Gaussian random geometry 171
4.1 An expectation meta-theorem . . . . . . . . . . . . . . . . . 172
4.2 Suitable regularity and Morse functions . . . . . . . . . . . 186
4.3 An alternate proof of the meta-theorem . . . . . . . . . . . 190
4.4 Higher moments . . . . . . . . . . . . . . . . . . . . . . . . 191
4.5 Preliminary Gaussian computations . . . . . . . . . . . . . 193
4.6 Mean Euler characteristics: Euclidean case . . . . . . . . . . 197
4.7 The meta-theorem on manifolds . . . . . . . . . . . . . . . . 208
4.8 Riemannian structure induced by Gaussian fields . . . . . . 213
4.8.1 Connections and curvatures . . . . . . . . . . . . . . 214
4.8.2 Some covariances . . . . . . . . . . . . . . . . . . . . 216
4.8.3 Gaussian fields on RN . . . . . . . . . . . . . . . . . 218
4.9 Another Gaussian computation . . . . . . . . . . . . . . . . 220
4.10 Mean Euler characteristics: Manifolds . . . . . . . . . . . . 222
4.10.1 Manifolds without boundary . . . . . . . . . . . . . 223
4.10.2 Manifolds with boundary . . . . . . . . . . . . . . . 225
4.11 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
4.12 Chern-Gauss-Bonnet Theorem . . . . . . . . . . . . . . . . 235
5 Non-Gaussian geometry 237
5.1 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.2 Conditional expectations of double forms . . . . . . . . . . 237
6 Suprema distributions 243
6.1 The basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.2 Some Easy Bounds . . . . . . . . . . . . . . . . . . . . . . . 246
6.3 Processes with a Unique Point of Maximal Variance . . . . 248
6.4 General Bounds . . . . . . . . . . . . . . . . . . . . . . . . . 253
6.5 Local maxima . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6.6 Local maxima above a level . . . . . . . . . . . . . . . . . . 258
7 Volume of tubes 263
7.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . 264
7.2 Volume of tubes for finite Karhunen-Loève Gaussian processes 264
7.2.1 Local geometry of Tube(M,ρ) . . . . . . . . . . . . 266
7.3 Computing F∗_{j,r}(Ω_r) . . . . . . . . . . . . . . . . . . . 269
7.3.1 Case 1: M̂ = R^l . . . . . . . . . . . . . . . . . . . . 270
7.3.2 Case 2: M̂ = S_λ(R^l) . . . . . . . . . . . . . . . . . 276
7.3.3 Volume of tubes for finite Karhunen-Loève Gaussian
processes revisited . . . . . . . . . . . . . . . . . . . 278
7.4 Generalized Lipschitz-Killing curvature measures . . . . . . 280
References 281
1
Random fields
1.1 Random fields and excursion sets
If you have not yet read the Preface, then please do so now.
Since you have read the Preface, you already know two important things
about this book:
• The “random fields” of most interest to us will be random mappings from subsets of Euclidean spaces or, more generally, from Riemannian manifolds to the real line. However, since it is often no more difficult to treat far more general scenarios, they may also be real valued random mappings on any measurable space.
• Central to much of what we shall be looking at is the geometry of excursion sets.
Definition 1.1.1 Let f be a measurable, real valued function on some measurable space and T be a measurable subset of that space. Then, for each u ∈ R,

(1.1.1)  A_u ≡ A_u(f,T) ≜ {t ∈ T : f(t) ≥ u} ≡ f^{-1}([u,∞)),

is called the excursion set of f in T over the level u. We shall also occasionally write excursion sets as f^{-1}_{|T}[u,∞).
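For concreteness, the excursion set of Definition 1.1.1 is easy to compute once T is discretised. The following sketch (the grid and the function f are arbitrary choices, not taken from the text) extracts A_u(f,T) as a boolean mask:

```python
import numpy as np

# Numerical illustration of Definition 1.1.1 (the grid and the function f
# are arbitrary choices, not taken from the text): on a discretised
# parameter set T, the excursion set A_u(f, T) = {t in T : f(t) >= u}
# becomes a boolean mask over the grid.
t = np.linspace(0.0, 2.0 * np.pi, 2001)       # discretised T
f = np.sin(3.0 * t) + 0.5 * np.cos(t)         # a smooth stand-in for a field
u = 0.8                                       # the level

mask = f >= u                                 # indicator of A_u(f, T)
A_u = t[mask]                                 # grid points in the excursion set
print("fraction of T above the level:", mask.mean())
```

For a random field f, both the mask and quantities such as the fraction above the level become random, which is exactly the kind of geometric functional studied later.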
Of primary interest to us will be the setting in which the function f is a
random field.
Definition 1.1.2 Let (Ω,F,P) be a complete probability space and T a topological space. Then a measurable mapping f : Ω → R^T is called a real valued random field¹. Measurable mappings from Ω to (R^T)^d, d > 1, are called vector valued random fields. If T ⊂ R^N, we call f an (N,d) random field, and if d = 1 simply an N-dimensional random field.
We shall generally not distinguish between

f_t ≡ f(t) ≡ f(t,ω) ≡ (f(ω))(t),

etc., unless there is some special need to do so. Throughout, we shall demand that all random fields are separable, a property due originally to Doob [22], which implies conditions on both T and f.
Definition 1.1.3 An R^d-valued random field f, on a topological space T, is called separable if there exists a countable dense subset D ⊂ T and a fixed event N with P{N} = 0 such that, for any closed B ⊂ R^d and open I ⊂ T,

{ω : f(t,ω) ∈ B, ∀t ∈ I} ∆ {ω : f(t,ω) ∈ B, ∀t ∈ I∩D} ⊂ N.
Here ∆ denotes the usual symmetric difference operator, so that

(1.1.2)  A∆B = (A ∩ B^c) ∪ (A^c ∩ B),

where A^c is the complement of A.
Since you have read the Preface, you also know that most of this book
centres on Gaussian random fields. The next section is devoted to defining
these and giving some of their basic properties. Fortunately, most of these
have little to do with the specific geometric structure of the parameter
space T, and after decades of polishing even proofs gain little in the way
of simplification by restricting to special cases such as T = R^N. Thus, at least for a while, we can and shall work in the widest possible generality.
Only when we get to geometry, in Chapter 3, will we need to specialise,
either to Euclidean T or to Riemannian manifolds.
1.2 Gaussian fields
The centrality of Gaussian fields to this book is due to two basic factors:
• Gaussian processes have a rich, detailed and very well understood
general theory, which makes them beloved by theoreticians.
¹ On notation: While we shall follow the standard convention of denoting random variables by upper case Latin characters, we shall use lower case to denote random functions. The reason for this will become clear in Chapter 3, where we shall need the former for tangent vectors.
• In applications of random field theory, as in applications of almost any theory, it is important to have specific, explicit formulae that allow one to predict, to compare theory with experiment, etc. As we shall see, it will be only for Gaussian (and related, cf. Section 1.5) fields that it is possible to derive such formulae in the setting of excursion sets.
The main reason behind both these facts is the convenient analytic form of the multivariate Gaussian density, and the related definition of a Gaussian process.
A real-valued random variable X is said to be Gaussian (or normally distributed) if it has the density function

ϕ(x) ≜ (1/(√(2π) σ)) e^{−(x−m)²/2σ²},  x ∈ R,

for some m ∈ R and σ > 0. It is elementary calculus that the mean of X is m and the variance σ², and that the characteristic function is given by

φ(θ) = E{e^{iθX}} = e^{iθm − σ²θ²/2}.
We abbreviate this by writing X ∼ N(m,σ²). The case m = 0, σ² = 1 is rather special and in this situation we say that X has a standard normal distribution. In general, if a random variable has zero mean we call it centered.
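As a quick sanity check (an illustration, not part of the text), the characteristic function formula can be verified by integrating ϕ(x)e^{iθx} numerically over a wide grid and comparing with the closed form; the parameter values below are arbitrary:

```python
import numpy as np

# Numerical sanity check (not part of the text): integrating phi(x)*exp(i*theta*x)
# over a wide grid should reproduce the closed form exp(i*theta*m - sigma^2*theta^2/2).
# The parameter values are arbitrary.
m, sigma, theta = 1.5, 0.7, 2.0
x = np.linspace(m - 12 * sigma, m + 12 * sigma, 200_001)
dx = x[1] - x[0]
phi = np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

numeric = np.sum(phi * np.exp(1j * theta * x)) * dx   # Riemann sum for E{e^{i theta X}}
closed = np.exp(1j * theta * m - sigma ** 2 * theta ** 2 / 2)
print(abs(numeric - closed))   # tiny discretisation error
```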
Since the indefinite integral of ϕ is not a simple function, we also need notation (Φ) for the distribution and (Ψ) for the tail probability functions of a standard normal variable:

(1.2.1)  Φ(x) ≜ 1 − Ψ(x) ≜ (1/√(2π)) ∫_{−∞}^{x} e^{−u²/2} du.
While Φ and Ψ may not be explicit, there are simple, and rather important, bounds which hold for every x > 0 and become sharp very quickly as x grows. In particular, in terms of Ψ we have²

(1.2.2)  (1/x − 1/x³) ϕ(x) < Ψ(x) < (1/x) ϕ(x).
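The bounds (1.2.2) are easy to confirm numerically. In the sketch below (an illustration, not from the text), the tail Ψ is evaluated through the complementary error function, Ψ(x) = erfc(x/√2)/2:

```python
import math

# Numerical check (an illustration, not from the text) of the bounds (1.2.2):
#   (1/x - 1/x^3) phi(x) < Psi(x) < (1/x) phi(x)   for x > 0,
# where the standard normal tail is evaluated via the complementary error
# function: Psi(x) = erfc(x / sqrt(2)) / 2.
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Psi(x):
    return math.erfc(x / math.sqrt(2)) / 2

for x in (1.0, 2.0, 4.0, 8.0):
    lower = (1 / x - 1 / x ** 3) * phi(x)
    upper = phi(x) / x
    print(x, lower < Psi(x) < upper, upper - lower)   # bounds hold; gap shrinks fast
```

The relative gap between the two bounds is of order 1/x², which is the sense in which they "become sharp very quickly".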
An R^d-valued random variable X is said to be multivariate Gaussian if, for every α = (α_1,…,α_d) ∈ R^d, the real valued variable ⟨α,X′⟩ = Σ_{i=1}^d α_i X_i is Gaussian³. In this case there exists a mean vector m ∈ R^d
² The inequality (1.2.2) follows from the observation that

(1 − 3/x⁴) ϕ(x) < ϕ(x) < (1 + 1/x²) ϕ(x),

followed by integration over x.
³ Note: Throughout the book, vectors are taken to be row vectors and a prime indicates transposition. The inner product between x and y in R^d is denoted by ⟨x,y⟩ or, occasionally, by x·y.
with m_j = E{X_j} and a non-negative definite⁴ d×d covariance matrix C, with elements c_ij = E{(X_i − m_i)(X_j − m_j)}, such that the probability density of X is given by

(1.2.3)  ϕ(x) = (1/((2π)^{d/2} |C|^{1/2})) e^{−(x−m)C^{−1}(x−m)′/2},
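One elementary consistency check of this density (an illustration, not from the text): for a diagonal covariance matrix it must factor into a product of univariate Gaussian densities. The helper name `mvn_density` below is hypothetical, introduced only for this sketch:

```python
import numpy as np

# Sanity check (an illustration, not from the text) of the multivariate
# density (1.2.3): for a diagonal covariance matrix the joint density must
# equal the product of the univariate Gaussian densities.
def mvn_density(x, m, C):
    d = len(m)
    diff = np.atleast_2d(x - m)                       # row vector, as in the text
    quad = (diff @ np.linalg.inv(C) @ diff.T)[0, 0]   # (x-m) C^{-1} (x-m)'
    return np.exp(-quad / 2) / ((2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(C)))

m = np.array([1.0, -2.0, 0.5])
C = np.diag([0.5, 2.0, 1.0])      # diagonal: coordinates are independent
x = np.array([0.3, -1.0, 1.2])

product = np.prod([
    np.exp(-(x[i] - m[i]) ** 2 / (2 * C[i, i])) / np.sqrt(2 * np.pi * C[i, i])
    for i in range(3)
])
print(abs(mvn_density(x, m, C) - product))   # ~0
```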
where |C| = det C is the determinant⁵ of C. Consistently with the one-dimensional case, we write this as X ∼ N(m,C), or X ∼ N_d(m,C) if we need to emphasise the dimension.
In view of (1.2.3) we have that Gaussian distributions are completely
determined by their first and second order moments and that uncorrelated
Gaussian variables are independent. Both of these facts will be of crucial
importance later on.
While the definitions are fresh, note for later use that it is relatively straightforward to check from (1.2.3) that the characteristic function of a multivariate Gaussian X is given by

(1.2.4)  φ(θ) = E{e^{i⟨θ,X′⟩}} = e^{i⟨θ,m′⟩ − θCθ′/2},

where θ ∈ R^d.
One consequence of the simple structure of φ is the fact that if {X_n}_{n≥1} is an L² convergent⁶ sequence of Gaussian vectors, then the limit X must also be Gaussian. Furthermore, if X_n ∼ N(m_n,C_n), then

(1.2.5)  |m_n − m|² → 0,  and  ‖C_n − C‖² → 0,

as n → ∞, where m and C are the mean and covariance matrix of the limiting Gaussian. The norm on vectors is Euclidean and that on matrices any of the usual. The proofs involve only (1.2.4) and the continuity theorem for convergence of random variables.
One immediate consequence of either (1.2.3) or (1.2.4) is that if A is any d×d matrix and X ∼ N_d(m,C), then

(1.2.6)  XA ∼ N(mA, A′CA).
⁴ A d×d matrix C is called non-negative definite (or positive semi-definite) if xCx′ ≥ 0 for all x ∈ R^d. A function C : T × T → R is called non-negative definite if the matrices (C(t_i,t_j))_{i,j=1}^n are non-negative definite for all 1 ≤ n < ∞ and all (t_1,…,t_n) ∈ Tⁿ.
⁵ Just in case you have forgotten what was in the Preface, here is a one-time reminder: The notation | · | denotes any of ‘absolute value’, ‘Euclidean norm’, ‘determinant’ or ‘Lebesgue measure’, depending on the argument, in a natural fashion. The notation ‖ · ‖ is used only for either the norm of complex numbers or for special norms, when it usually appears with a subscript.
⁶ That is, there exists a random vector X such that E{|X_n − X|²} → 0 as n → ∞.
A judicious choice of A then allows us to compute conditional distributions as well. If n < d, make the partitions

X = (X¹, X²) = ((X_1,…,X_n), (X_{n+1},…,X_d)),
m = (m¹, m²) = ((m_1,…,m_n), (m_{n+1},…,m_d)),

C = ( C_11  C_12 )
    ( C_21  C_22 ),

where C_11 is an n×n matrix. Then each X^i is N(m^i, C_ii) and the conditional distribution⁷ of X^i given X^j is also Gaussian, with mean vector

(1.2.7)  m_{i|j} = m^i + C_ij C_jj^{−1} (X^j − m^j)′

and covariance matrix

(1.2.8)  C_{i|j} = C_ii − C_ij C_jj^{−1} C_ji.
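These conditioning formulas can be checked against the familiar bivariate case: with unit variances and correlation ρ, conditioning X₁ on X₂ = x₂ gives mean m₁ + ρ(x₂ − m₂) and variance 1 − ρ². The numbers in the sketch below are arbitrary (an illustration, not from the text):

```python
import numpy as np

# Check of the conditional mean and covariance formulas in the bivariate
# case: with unit variances and correlation rho, conditioning X1 on X2 = x2
# should give mean m1 + rho*(x2 - m2) and variance 1 - rho^2.
rho = 0.6
m = np.array([1.0, -2.0])
C = np.array([[1.0, rho],
              [rho, 1.0]])
C11, C12 = C[:1, :1], C[:1, 1:]
C21, C22 = C[1:, :1], C[1:, 1:]

x2 = 0.5
cond_mean = m[:1] + C12 @ np.linalg.inv(C22) @ (np.array([x2]) - m[1:])
cond_cov = C11 - C12 @ np.linalg.inv(C22) @ C21

print(cond_mean)   # [1 + 0.6*(0.5 - (-2.0))] -> [2.5]
print(cond_cov)    # [[1 - 0.36]] -> [[0.64]]
```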
We can now define a real valued Gaussian random field to be a random field f on a parameter set T for which the (finite dimensional) distributions of (f_{t_1},…,f_{t_n}) are multivariate Gaussian for each 1 ≤ n < ∞ and each (t_1,…,t_n) ∈ Tⁿ. The functions m(t) = E{f(t)} and

C(s,t) = E{(f_s − m_s)(f_t − m_t)}

are called the mean and covariance functions of f. Multivariate⁸ Gaussian fields taking values in R^d are fields for which ⟨α,f_t′⟩ is a real valued Gaussian field for every α ∈ R^d.
In fact, one can also go in the other direction. Given any set T, a function m : T → R, and a non-negative definite function C : T × T → R there exists⁹ a Gaussian process on T with mean function m and covariance function C.
Putting all this together, we have the important principle that, for a Gaussian process, everything about it is determined by the mean and covariance functions. The fact that no real structure is required of the parameter
⁷ To prove this, take

A = ( 1_n  −C_12 C_22^{−1} )
    ( 0    1_{d−n}        )

and define Y = (Y¹,Y²) = AX. Check using (1.2.6) that Y¹ and Y² ≡ X² are independent and use this to obtain (1.2.7) and (1.2.8) for i = 1, j = 2.
⁸ Similarly, Gaussian fields taking values in a Banach space B are fields for which α(f_t) is a real valued Gaussian field for every α in the topological dual B∗ of B. The covariance function is then replaced by a family of operators C_st : B∗ → B, for which Cov(α(f_t), β(f_s)) = β(C_st α), for α, β ∈ B∗.
⁹ This is a consequence of the Kolmogorov existence theorem, which, at this level of generality, can be found in Dudley [27]. Such a process is a random variable in R^T and may have terrible properties, including lack of measurability in t. However, it will always exist.