METASTATES IN THE HOPFIELD MODEL IN THE REPLICA SYMMETRIC REGIME#

Anton Bovier¹
Weierstraß-Institut für Angewandte Analysis und Stochastik
Mohrenstrasse 39, D-10117 Berlin, Germany

Véronique Gayrard²
Centre de Physique Théorique - CNRS
Luminy, Case 907
F-13288 Marseille Cedex 9, France

Abstract: We study the finite dimensional marginals of the Gibbs measure in the Hopfield model at low temperature when the number of patterns, $M$, is proportional to the volume with a sufficiently small proportionality constant $\alpha>0$. It is shown that even when a single pattern is selected (by a magnetic field or by conditioning), the marginals do not converge almost surely, but only in law. The corresponding limiting law is constructed explicitly. We fit our result into the recently proposed language of "metastates", which we discuss at some length. As a byproduct, in a certain regime of the parameters $\alpha$ and $\beta$ (the inverse temperature), we also give a simple proof of Talagrand's [T1] recent result that the replica symmetric solution found by Amit, Gutfreund, and Sompolinsky [AGS] can be rigorously justified.

Keywords: Hopfield model, neural networks, metastates, replica symmetry, Brascamp-Lieb inequalities

AMS Subject Classification: 82B44, 60K35, 82C32

# Work partially supported by the Commission of the European Communities under contract CHRX-CT93-0411
¹ e-mail: [email protected]
² e-mail: [email protected]

1. Introduction

Strongly disordered systems such as spin glasses represent some of the most interesting and most difficult problems of statistical mechanics. Amongst the most remarkable achievements of theoretical physics in this field is the exact solution of some models of mean field type via the replica trick and Parisi's replica symmetry breaking scheme (for an exposition see [MPV]; the application to the Hopfield model [Ho] was carried out in [AGS]). The replica trick is a formal tool that allows one to eliminate the difficulty of studying disordered systems by integrating out the randomness, at the expense of having to perform an analytic continuation, to the value zero, of some function that is computable only on the positive integers¹. Mathematically, this procedure is highly mysterious and has so far resisted all attempts to put it on a solid basis. On the other hand, its apparent success is a clear sign that something in this method ought to be understood better. An apparently less mysterious approach that yields the same answer is the cavity method [MPV]. However, here too, the derivation of the solutions involves a large number of intricate and unproven assumptions that seem hard or impossible to justify in general.

¹ As a matter of fact, such an analytic continuation is not performed. What is done is much more subtle: the function at integer values is represented as some integral suitable for evaluation by a saddle point method. Instead of doing this, apparently irrelevant critical points are selected judiciously, and the ensuing wrong value of the function is then continued to the correct value at zero.

Nevertheless, there has been some distinct progress in understanding the approach of the cavity method, at least in simple cases where no breaking of the replica symmetry occurs. The first attempts in this direction were made by Pastur and Shcherbina [PS] in the Sherrington-Kirkpatrick model and by Pastur, Shcherbina and Tirozzi [PST] in the Hopfield model. Their results were conditional: they show that the replica symmetric solution holds under a certain unverified assumption, namely the vanishing of the so-called Edwards-Anderson parameter. A breakthrough was achieved in a recent paper by Talagrand [T1], where he proved the validity of the replica symmetric solution in an explicit domain of the model parameters of the Hopfield model. His approach is purely by induction over the volume (i.e.
the cavity method) and uses only some a priori estimates on the support properties of the distribution of the so-called overlap parameters, as first proven in [BGP1,BGP2] and in sharper form in [BG1].

Let us recall the definition of the Hopfield model and some basic notations. Let $\mathcal{S}_N \equiv \{-1,1\}^N$ denote the set of functions $\sigma : \{1,\dots,N\}\to\{-1,1\}$, and set $\mathcal{S}_\infty \equiv \{-1,1\}^{IN}$. We call $\sigma$ a spin configuration and denote by $\sigma_i$ the value of $\sigma$ at $i$. Let $(\Omega,\mathcal{F},IP)$ be an abstract probability space and let $\xi_i^\mu$, $i,\mu\in IN$, denote a family of independent identically distributed random variables on this space. For the purposes of this paper we will assume that $IP[\xi_i^\mu = \pm 1] = \frac{1}{2}$. We will write $\xi^\mu[\omega]$ for the $N$-dimensional random vector whose $i$-th component is given by $\xi_i^\mu[\omega]$ and call such a vector a 'pattern'. On the other hand, we use the notation $\xi_i[\omega]$ for the $M$-dimensional vector with the same components. When we write $\xi[\omega]$ without indices, we frequently will consider it as an $M\times N$ matrix, and we write $\xi^t[\omega]$ for the transpose of this matrix. Thus $\xi^t[\omega]\xi[\omega]$ is the $M\times M$ matrix whose elements are $\sum_{i=1}^N \xi_i^\mu[\omega]\xi_i^\nu[\omega]$. With this in mind we will use throughout the paper a vector notation, with $(\cdot,\cdot)$ standing for the scalar product in whatever space the arguments may lie; e.g. the expression $(y,\xi_i)$ stands for $\sum_{\mu=1}^M \xi_i^\mu y_\mu$, etc.²

² We will make the dependence of random quantities on the random parameter $\omega$ explicit by an added $[\omega]$ whenever we want to stress it. Otherwise, we will frequently drop the reference to $\omega$ to simplify the notation.

We define random maps $m_N^\mu[\omega] : \mathcal{S}_N \to [-1,1]$ through

$$ m_N^\mu[\omega](\sigma) \equiv \frac{1}{N}\sum_{i=1}^N \xi_i^\mu[\omega]\,\sigma_i \qquad (1.1) $$

Naturally, these maps 'compare' the configuration $\sigma$ globally to the random configuration $\xi^\mu[\omega]$. A Hamiltonian is now defined as the simplest negative function of these variables, namely

$$ H_N[\omega](\sigma) \equiv -\frac{N}{2}\sum_{\mu=1}^{M(N)} \big(m_N^\mu[\omega](\sigma)\big)^2 = -\frac{N}{2}\,\big\| m_N[\omega](\sigma)\big\|_2^2 \qquad (1.2) $$

where $M(N)$ is some, generally increasing, function that crucially influences the properties of the model. $\|\cdot\|_2$ denotes the $\ell_2$-norm in $IR^M$, and the vector $m_N[\omega](\sigma)$ is always understood to be $M(N)$-dimensional.

Through this Hamiltonian we define in a natural way finite volume Gibbs measures on $\mathcal{S}_N$ via

$$ \mu_{N,\beta}[\omega](\sigma) \equiv \frac{1}{Z_{N,\beta}[\omega]}\, e^{-\beta H_N[\omega](\sigma)} \qquad (1.3) $$

and the induced distribution of the overlap parameters

$$ \mathcal{Q}_{N,\beta}[\omega] \equiv \mu_{N,\beta}[\omega]\circ m_N[\omega]^{-1} \qquad (1.4) $$

The normalizing factor $Z_{N,\beta}[\omega]$, given by

$$ Z_{N,\beta}[\omega] \equiv 2^{-N}\sum_{\sigma\in\mathcal{S}_N} e^{-\beta H_N[\omega](\sigma)} \equiv IE_\sigma\, e^{-\beta H_N[\omega](\sigma)} \qquad (1.5) $$

is called the partition function. We are interested in the large $N$ behaviour of these measures. In our previous work we have been mostly concerned with the limiting induced measures.
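To make the objects in (1.1)-(1.5) concrete, here is a minimal numerical sketch, not part of the original paper: it draws a small set of patterns and evaluates the overlaps, the Hamiltonian, the partition function, and one Gibbs average by brute-force enumeration. The system size, number of patterns, and inverse temperature are arbitrary illustrative choices; for realistic $N$ one would of course sample from (1.3) by Monte Carlo rather than enumerate.

```python
# A small brute-force illustration of (1.1)-(1.5); not from the paper, and the
# system size, pattern number and temperature below are arbitrary choices.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, M, beta = 10, 2, 2.0

xi = 2 * rng.integers(0, 2, size=(M, N)) - 1      # patterns xi^mu_i in {-1,+1}

def overlap(sigma):
    """The M-dimensional overlap vector m_N(sigma) of (1.1)."""
    return xi @ sigma / N

def hamiltonian(sigma):
    """H_N(sigma) of (1.2)."""
    m = overlap(sigma)
    return -0.5 * N * np.dot(m, m)

# Partition function (1.5) and a Gibbs average under (1.3), by enumerating all
# 2^N configurations. Since the sign symmetry makes the mean of m^1 vanish, we
# average the norm of m_N(sigma) instead; at low temperature it should be
# roughly of the size of m*(beta), up to finite-size effects.
Z, mean_norm = 0.0, 0.0
for s in itertools.product([-1, 1], repeat=N):
    sigma = np.array(s)
    w = 2.0 ** (-N) * np.exp(-beta * hamiltonian(sigma))
    Z += w
    mean_norm += w * np.linalg.norm(overlap(sigma))

print("Z_{N,beta}                 =", Z)
print("Gibbs average of ||m_N||_2 =", mean_norm / Z)
```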
In this paper we return to the limiting behaviour of the Gibbs measures themselves, making use, however, of the information obtained on the asymptotic properties of the induced measures.

We pursue two objectives. Firstly, we give an alternative proof (whose outline was given in [BG2]) of Talagrand's result (with a possibly slightly different range of parameters) which, although equally based on the cavity method, makes more extensive use of the properties of the overlap distribution that were proven in [BG1]. This allows, in our opinion, some considerable simplifications. Secondly, we will elucidate some conceptual issues concerning the infinite volume Gibbs states in this model. Several delicacies in the question of convergence of finite volume Gibbs states (or local specifications) in highly disordered systems, and in particular spin glasses, were pointed out repeatedly by Newman and Stein over the last years [NS1,NS2]. But only during the last year did they propose the formalism of so-called "metastates" [NS3,NS4,N] that seems to provide the appropriate framework to discuss these issues. In particular, we will show that in the Hopfield model this formalism seems unavoidable for spelling out convergence results.

Let us formulate our main result in a slightly preliminary form (precise formulations require some more discussion and notation and will be given in Section 5). Denote by $m^*(\beta)$ the largest solution of the mean field equation $m=\tanh(\beta m)$ and by $e^\mu$ the $\mu$-th unit vector of the canonical basis of $IR^M$. For all $(\mu,s)\in\{1,\dots,M\}\times\{-1,1\}$ let $B_\rho^{(\mu,s)}\subset IR^M$ denote the ball of radius $\rho$ centered at $s\,m^* e^\mu$. For any pair of indices $(\mu,s)$ and any $\rho>0$ we define the conditional measures

$$ \mu^{(\mu,s)}_{N,\beta,\rho}[\omega](\mathcal{A}) \equiv \mu_{N,\beta}[\omega]\big(\mathcal{A}\,\big|\, B_\rho^{(\mu,s)}\big), \qquad \mathcal{A}\in\mathcal{B}\big(\{-1,1\}^N\big) \qquad (1.6) $$

The so-called "replica symmetric equations" of [AGS]³ are the following system of equations in the three unknowns $m_1$, $r$, and $q$:

$$ m_1 = \int d\mathcal{N}(g)\,\tanh\big(\beta(m_1+\sqrt{\alpha r}\,g)\big) $$
$$ q = \int d\mathcal{N}(g)\,\tanh^2\big(\beta(m_1+\sqrt{\alpha r}\,g)\big) \qquad (1.7) $$
$$ r = \frac{q}{(1-\beta+\beta q)^2} $$

where $\mathcal{N}$ denotes the standard Gaussian distribution.

³ We cite these equations, (3.3-5) in [AGS], only for the case $k=1$, where $k$ is the number of so-called "condensed patterns". One could presumably generalize our results to measures conditioned on balls around "mixed states", i.e. the metastable states with more than one "condensed pattern", but we have not worked out the details.
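For orientation, the fixed point of (1.7) can also be located numerically. The sketch below is our own illustration and not part of the paper; it uses Gauss-Hermite quadrature for the Gaussian integrals and plain fixed-point iteration, and the chosen values of $\alpha$ and $\beta$ as well as the iteration scheme are assumptions made only for this example.

```python
# Numerical sketch (illustrative, not from the paper): locate a solution of the
# replica symmetric equations (1.7) by fixed-point iteration, with Gauss-Hermite
# quadrature for the integrals against the standard Gaussian measure.
import numpy as np

def rs_fixed_point(alpha, beta, n_iter=2000, n_nodes=80):
    # Gauss-Hermite rule: int f(g) dN(g) ~ sum_k (w_k / sqrt(pi)) f(sqrt(2) x_k)
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    g = np.sqrt(2.0) * x
    w = w / np.sqrt(np.pi)

    m1, q, r = 1.0, 1.0, 1.0          # start near the 'condensed' solution
    for _ in range(n_iter):
        field = beta * (m1 + np.sqrt(alpha * r) * g)
        t = np.tanh(field)
        m1 = np.sum(w * t)
        q = np.sum(w * t * t)
        r = q / (1.0 - beta + beta * q) ** 2
    return m1, q, r

print(rs_fixed_point(alpha=0.01, beta=2.0))   # (m1, q, r) for one parameter choice
```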
With this notation we can state

Theorem 1.1: There exist finite positive constants $c$, $c_0$, $c_0'$ such that if $0\le\alpha\le c\,(m^*(\beta))^4$ and $0\le\alpha\le c_0'\beta^{-1}$, with $\lim_{N\uparrow\infty} M(N)/N = \alpha$, the following holds: Choose $\rho$ such that $c_0\sqrt{\alpha}/m^*(\beta) \le \rho \le m^*(\beta)/2$. Then, for any finite $I\subset IN$ and for any $s_I\in\{-1,1\}^I$,

$$ \mu^{(\mu,s)}_{N,\beta,\rho}\big(\{\sigma_I = s_I\}\big) \to \prod_{i\in I} \frac{e^{\beta s_i[m_1\xi_i^\mu + g_i\sqrt{\alpha r}]}}{2\cosh\big(\beta[m_1\xi_i^\mu + g_i\sqrt{\alpha r}]\big)} \qquad (1.8) $$

as $N\uparrow\infty$, where $m_1$ and $r$ solve (1.7) and the $g_i$, $i\in I$, are independent Gaussian random variables with mean zero and variance one that are independent of the random variables $\xi_i$, $i\in I$. The convergence is understood in law with respect to the distribution of the Gaussian variables $g_i$.

This theorem should be juxtaposed to our second result:

Theorem 1.2: On the same set of parameters as in Theorem 1.1, the following is true with probability one: For any finite $I\subset IN$ and for any $x\in IR^I$, there exist subsequences $N_k[\omega]\uparrow\infty$ such that for any $s_I\in\{-1,1\}^I$, if $\alpha>0$,

$$ \lim_{k\uparrow\infty} \mu^{(\mu,s)}_{N_k[\omega],\beta,\rho}[\omega]\big(\{\sigma_I = s_I\}\big) = \prod_{i\in I}\frac{e^{s_i x_i}}{2\cosh(x_i)} \qquad (1.9) $$

The above statements may look a little surprising and need clarification. This will be the main purpose of Section 2, where we give a rather detailed discussion of the problem of convergence and the notion of metastates, with the particular issues arising in disordered mean field models in view. We will also propose yet a different notion of a state (let us call it a "superstate") that tries to capture the asymptotic volume dependence of Gibbs states in the form of a continuous time measure-valued stochastic process. We also discuss the issue of "boundary conditions", or rather "external fields", and the construction of conditional Gibbs measures in this context. This will hopefully prepare the ground for the understanding of our results in the Hopfield case.

The following two sections collect technical preliminaries. Section 3 recalls some results on the overlap distribution from [BG1-3] that will be crucially needed later. Section 4 states and proves a version of the Brascamp-Lieb inequalities [BL] that is suitable for our situation. Section 5 contains our central results. Here we construct explicitly the finite dimensional marginals of the Gibbs measures in finite volume and study their behaviour in the infinite volume limit. The results will be stated in the language of metastates. In this section we assume the convergence of certain thermodynamic functions, which will be proven in Section 6. Modulo this, this section contains the precise statements and proofs of Theorems 1.1 and 1.2. In Section 6 we give a proof of the convergence of these quantities and relate them to the replica symmetric solution. This section is largely based on the ideas of [PST] and [T1] and is mainly added for the convenience of the reader.

Acknowledgements: We gratefully acknowledge helpful discussions on metastates with Ch. Newman and Ch. Külske.

2. Notions of convergence of random Gibbs measures

In this section we make some remarks on the appropriate picture for the study of limiting Gibbs measures for disordered systems, with particular regard to the situation in mean-field like systems. Although some of the observations we will make here arose naturally from the properties we discovered in the Hopfield model, our understanding has been greatly enhanced by the recent work of Newman and Stein [NS3,NS4,N] and their introduction of the concept of "metastates". We refer the reader to their papers for more detail and further applications. Some examples can also be found in [K]. Otherwise, we keep this section self-contained and geared to the situation we will describe in the Hopfield model, although part of the discussion is very general and not restricted to mean field situations.
For this reason we talk about finite volume measures indexed by finite sets $\Lambda$ rather than by the integer $N$.

Metastates. The basic objects of study are finite volume Gibbs measures, $\mu_{\Lambda,\beta}$ (which for convenience we will always consider as measures on the infinite product space $\mathcal{S}_\infty$). We denote by $(\mathcal{M}_1(\mathcal{S}_\infty),\mathcal{G})$ the measurable space of probability measures on $\mathcal{S}_\infty$, equipped with the sigma-algebra $\mathcal{G}$ generated by the open sets with respect to the weak topology on $\mathcal{M}_1(\mathcal{S}_\infty)$⁴. We will always regard Gibbs measures as random variables on the underlying probability space $(\Omega,\mathcal{F},IP)$ with values in the space $\mathcal{M}_1(\mathcal{S}_\infty)$, i.e. as measurable maps $\Omega\to\mathcal{M}_1(\mathcal{S}_\infty)$.

⁴ Note that a basis of open sets is given by sets of the form $N_{f_1,\dots,f_k;\epsilon}(\mu^0)\equiv\{\mu \mid \forall\,1\le i\le k:\ |\mu(f_i)-\mu^0(f_i)|<\epsilon\}$, where the $f_i$ are continuous functions on $\mathcal{S}_\infty$; indeed, it is enough to consider cylinder functions.

We are in principle interested in considering weak limits of these measures as $\Lambda\uparrow\infty$. There are essentially three things that may happen:

(1) Almost sure convergence: For $IP$-almost all $\omega$,

$$ \mu_\Lambda[\omega] \to \mu_\infty[\omega] \qquad (2.1) $$

where $\mu_\infty[\omega]$ may or may not depend on $\omega$ (in general it will).

(2) Convergence in law:

$$ \mu_\Lambda \overset{\mathcal{D}}{\to} \mu_\infty \qquad (2.2) $$

(3) Almost sure convergence along random subsequences: There exist (at least for almost all $\omega$) subsequences $\Lambda_i[\omega]\uparrow\infty$ such that

$$ \mu_{\Lambda_i[\omega]}[\omega] \to \mu_{\infty,\{\Lambda_i[\omega]\}}[\omega] \qquad (2.3) $$

In systems with compact single site state space, (3) always holds, and there are models with non-compact state space where it holds with the "almost sure" provision. However, this contains little information if the subsequences along which convergence holds are only known implicitly. In particular, it gives no information on how, for any given large $\Lambda$, the measure $\mu_\Lambda$ looks "approximately". In contrast, if (1) holds, we are in a very nice situation, since for any large enough $\Lambda$ and for (almost) any realization of the disorder, the measure $\mu_\Lambda[\omega]$ is well approximated by $\mu_\infty[\omega]$. Thus the situation would be essentially as in an ordered system (the "almost sure" excepted). It seems to us that the common feeling of most people working in the field of disordered systems was that this could be arranged by putting suitable boundary conditions or external fields, to "extract pure states". Newman and Stein [NS1] were, to our knowledge, the first to point to difficulties with this point of view. In fact, there is no reason why we should ever be, or be able to put ourselves, in a situation where (1) holds, and this possibility should be considered as perfectly exceptional.

With (3) uninteresting and (1) unlikely, we are left with (2). By compactness, (2) always holds at least along (non-random!) subsequences $\Lambda_n$, and even convergence without subsequences can be expected rather commonly. On the other hand, (2) gives us very reasonable information on our system, telling us what the chance is that our measure $\mu_\Lambda$ for large $\Lambda$ will look like some measure $\mu_\infty$. This is much more than what (3) tells us, and barring the case where (1) holds, it is all we may reasonably expect to know. We should thus investigate the case (2) more closely.
As was in fact first proposed by Aizenman and Wehr [AW], it is most natural to consider an object $K_\Lambda$ defined as a measure on the product space $\Omega\otimes\mathcal{M}_1(\mathcal{S}_\infty)$ (equipped with the product topology and the weak topology, respectively), such that its marginal distribution on $\Omega$ is $IP$, while the conditional measure, $\kappa_\Lambda(\cdot)[\omega]$, on $\mathcal{M}_1(\mathcal{S}_\infty)$ given $\mathcal{F}$⁵ is the Dirac measure on $\mu_\Lambda[\omega]$; the marginal on $\mathcal{M}_1(\mathcal{S}_\infty)$ is then of course the law of $\mu_\Lambda$. The advantage of this construction over simply regarding the law of $\mu_\Lambda$ lies in the fact that we can in this way extract more information by conditioning, as we shall explain. Note that by compactness $K_\Lambda$ converges at least along (non-random!) subsequences, and we may assume that it actually converges to some measure $K$. Conditioning this measure on $\mathcal{F}$, we obtain a random measure $\kappa$ on $\mathcal{M}_1(\mathcal{S}_\infty)$ (the regular conditional distribution of $K$ on $\mathcal{G}$ given $\mathcal{F}$; see e.g. [Ka]). In a slightly abusive, but rather obvious notation: $K(\cdot\,|\,\mathcal{F})[\omega] = \kappa(\cdot)[\omega]\otimes\delta_\omega(\cdot)$.

⁵ We write $\mathcal{F}$ as shorthand for $\mathcal{M}_1(\mathcal{S}_\infty)\otimes\mathcal{F}$ whenever appropriate.

Now the case (1) above corresponds to the situation where the conditional probability on $\mathcal{G}$ given $\mathcal{F}$ is degenerate, i.e.

$$ \kappa(\cdot)[\omega] = \delta_{\mu_\infty[\omega]}(\cdot), \quad \text{a.s.} \qquad (2.4) $$

Thus we see that in general $\kappa(\cdot)[\omega]$ is a nontrivial measure on the space of infinite volume Gibbs measures even for fixed $\omega$; this latter object is called the (Aizenman-Wehr) metastate⁶. What happens is that the asymptotic properties of the Gibbs measures as the volume tends to infinity depend in an intrinsic way on the tail sigma-field of the disorder variables, and even after all random variables are fixed, some "new" randomness appears that allows only probabilistic statements on the asymptotic Gibbs state.

⁶ It may be interesting to recall the reasons that led Aizenman and Wehr to this construction. In their analysis of the effect of quenched disorder on phase transitions they required the existence of "translation-covariant" states. Such objects could be constructed as weak limits of finite volume states with e.g. periodic or translation invariant boundary conditions, provided the corresponding sequences converge almost surely (and not via subsequences with possibly different limits). They noted that in a general disordered system this may not be true. The metastate provided a way out of this difficulty.

A toy example: It may be useful to illustrate the passage from convergence in law to the Aizenman-Wehr metastate in a more familiar context, namely the ordinary central limit theorem. Let $(\Omega,\mathcal{F},IP)$ be a probability space, and let $\{X_i\}_{i\in IN}$ be a family of i.i.d. centered random variables with variance one; let $\mathcal{F}_n$ be the sigma-algebra generated by $X_1,\dots,X_n$ and let $\mathcal{F}\equiv\lim_{n\uparrow\infty}\mathcal{F}_n$. Define the real valued random variable $G_n \equiv \frac{1}{\sqrt n}\sum_{i=1}^n X_i$. We may define the joint law $K_n$ of $G_n$ and the $X_i$ as a probability measure on $IR\otimes\Omega$. Clearly, this measure converges to some measure $K$ whose marginal on $IR$ will be the standard normal distribution. However, we can say more, namely

Toy-Lemma 2.1: In the example described above,

$$ \kappa(\cdot)[\omega] = \mathcal{N}(0,1), \quad IP\text{-a.s.} \qquad (2.5) $$

Proof: We need to understand what (2.5) means. Let $f$ be a continuous function on $IR$. We claim that for almost all $\omega$,

$$ \int f(x)\,\kappa(dx)[\omega] = \int f(x)\,\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx \qquad (2.6) $$

Define the martingale $h_n \equiv \int f(x)\,K(dx,d\omega\,|\,\mathcal{F}_n)$. We may write

$$ h_n = \lim_{N\uparrow\infty} IE_{X_{n+1}}\cdots IE_{X_N}\, f\Big(\tfrac{1}{\sqrt N}\textstyle\sum_{i=1}^N X_i\Big) $$
$$ \phantom{h_n} = \lim_{N\uparrow\infty} IE_{X_{n+1}}\cdots IE_{X_N}\, f\Big(\tfrac{1}{\sqrt{N-n}}\textstyle\sum_{i=n+1}^N X_i\Big), \quad \text{a.s.} \qquad (2.7) $$
$$ \phantom{h_n} = \int f(x)\,\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx $$

where we used that, for fixed $n$, $\frac{1}{\sqrt N}\sum_{i=1}^n X_i$ converges to zero as $N\uparrow\infty$ almost surely. Thus, for any continuous $f$, $h_n$ is almost surely constant, while $\lim_{n\uparrow\infty} h_n = \int f(x)\,K(dx,d\omega\,|\,\mathcal{F})$ by the martingale convergence theorem. This proves the lemma. ♦
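A short simulation, again ours and purely illustrative, makes (2.5) tangible: we freeze one realization of $X_1,\dots,X_n$, resample the remaining variables many times, and observe that the conditional law of $G_N$ is close to standard normal whatever the frozen prefix was. All sample sizes below are arbitrary choices.

```python
# Illustrative simulation of Toy-Lemma 2.1 (not from the paper; n, N and the
# number of resamples are arbitrary). Freeze a realization of X_1,...,X_n and
# resample the remaining variables: the conditional law of G_N is approximately
# N(0,1), no matter which prefix was frozen, as claimed in (2.5).
import numpy as np

rng = np.random.default_rng(1)
n, N, n_samples = 50, 100_000, 1_000

prefix = 2.0 * rng.integers(0, 2, size=n) - 1.0      # the frozen X_1,...,X_n
s_prefix = prefix.sum()

samples = np.empty(n_samples)
for k in range(n_samples):
    tail = 2.0 * rng.integers(0, 2, size=N - n) - 1.0   # fresh X_{n+1},...,X_N
    samples[k] = (s_prefix + tail.sum()) / np.sqrt(N)

# The frozen prefix only shifts G_N by s_prefix / sqrt(N), which vanishes as N grows.
print("conditional mean     ~", samples.mean())   # close to 0
print("conditional variance ~", samples.var())    # close to 1
```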
The CLT example may inspire the question whether one might not be able to retain more information on the convergence of the random Gibbs state than is kept in the Aizenman-Wehr metastate. The metastate tells us about the probability distribution of the limiting measure, but we have thrown out all information on how, for a given $\omega$, the finite volume measures behave as the volume increases.

Newman and Stein [NS3,NS4] have introduced a possibly more profound concept, the empirical metastate, which captures more precisely the asymptotic volume dependence of the Gibbs states in the infinite volume limit. We will briefly discuss this object and elucidate its meaning in the above CLT context. Let $\Lambda_n$ be an increasing and absorbing sequence of finite volumes. Define the random empirical measures $\kappa_N^{em}(\cdot)[\omega]$ on $\mathcal{M}_1(\mathcal{S}_\infty)$ by

$$ \kappa_N^{em}(\cdot)[\omega] \equiv \frac{1}{N}\sum_{n=1}^N \delta_{\mu_{\Lambda_n}[\omega]} \qquad (2.8) $$

In [NS4] it was proven that for sufficiently sparse sequences $\Lambda_n$ and subsequences $N_i$, it is true that almost surely

$$ \lim_{i\uparrow\infty} \kappa_{N_i}^{em}(\cdot)[\omega] = \kappa(\cdot)[\omega] \qquad (2.9) $$

Newman and Stein conjectured that in many situations the use of sparse subsequences would not be necessary to achieve the above convergence. However, Külske [K] has exhibited some simple mean field examples where almost sure convergence only holds for very sparse (exponentially spaced) subsequences. He also showed that for more slowly growing sequences convergence in law can be proven in these cases.

Toy example revisited: All this is easily understood in our example. We set $G_n \equiv \frac{1}{\sqrt n}\sum_{i=1}^n X_i$. Then the empirical metastate corresponds to

$$ \kappa_N^{em}(\cdot)[\omega] \equiv \frac{1}{N}\sum_{n=1}^N \delta_{G_n[\omega]} \qquad (2.10) $$

We will prove that the following lemma holds:

Toy-Lemma 2.2: Let $G_n$ and $\kappa_N^{em}(\cdot)[\omega]$ be defined as above. Let $B_t$, $t\in[0,1]$, denote a standard Brownian motion. Then

(i) The random measures $\kappa_N^{em}$ converge in law to the measure $\kappa^{em} = \int_0^1 dt\,\delta_{t^{-1/2}B_t}$.

(ii)

$$ IE\big[\kappa^{em}(\cdot)\,\big|\,\mathcal{F}\big] = \mathcal{N}(0,1) \qquad (2.11) $$

Proof: Our main objective is to prove (i). We will see quite clearly that this result relates to Lemma 2.1 as the invariance principle relates to the CLT, and indeed, its proof is essentially an immediate consequence of Donsker's theorem. Donsker's theorem (see [HH] for a formulation in more generality than needed in this chapter) asserts the following: Let $\eta_n(t)$ denote the continuous function on $[0,1]$ that for $t=k/n$ is given by

$$ \eta_n(k/n) \equiv \frac{1}{\sqrt n}\sum_{i=1}^k X_i \qquad (2.12) $$

and that interpolates linearly between these values for all other points $t$. Then $\eta_n(t)$ converges in distribution to standard Brownian motion, in the sense that for any continuous functional $F : C([0,1])\to IR$ it is true that $F(\eta_n)$ converges in law to $F(B)$. From here the proof of (i) is obvious.
We have to prove that, for any bounded continuous function $f$,

$$ \frac{1}{N}\sum_{n=1}^N \delta_{G_n[\omega]}(f) \equiv \frac{1}{N}\sum_{n=1}^N f\Big(\eta_N(n/N)\big/\sqrt{n/N}\Big) \to \int_0^1 dt\, f\big(B_t/\sqrt t\big) \equiv \int_0^1 dt\,\delta_{B_t/\sqrt t}(f) \qquad (2.13) $$

To see this, simply define the continuous functionals $F$ and $F_N$ by

$$ F(\eta) \equiv \int_0^1 dt\, f\big(\eta(t)/\sqrt t\big) \qquad (2.14) $$

and

$$ F_N(\eta) \equiv \frac{1}{N}\sum_{n=1}^N f\Big(\eta(n/N)\big/\sqrt{n/N}\Big) \qquad (2.15) $$

We have to show that $F(B) - F_N(\eta_N)$ converges to zero in distribution. But

$$ F(B) - F_N(\eta_N) = F(B) - F(\eta_N) + F(\eta_N) - F_N(\eta_N) \qquad (2.16) $$

By the invariance principle, $F(B)-F(\eta_N)$ converges to zero in distribution, while $F(\eta_N)-F_N(\eta_N)$ converges to zero since $F_N$ is the Riemann sum approximation to $F$.

To see that (ii) holds, note first that, as in the CLT, the Brownian motion $B_t$ is measurable with respect to the tail sigma-algebra of the $X_i$. Thus

$$ IE\big[\kappa^{em}\,\big|\,\mathcal{F}\big] = \mathcal{N}(0,1) \qquad (2.17) $$

♦

Remark: It is easily seen that for sufficiently sparse subsequences $n_i$ (e.g. $n_i = i!$),

$$ \frac{1}{N}\sum_{i=1}^N \delta_{G_{n_i}} \to \mathcal{N}(0,1), \quad \text{a.s.} \qquad (2.18) $$
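As a closing illustration (ours, not part of the paper), the following simulation contrasts the two statements of Toy-Lemma 2.2: along a single realization, the empirical average $(1/N)\sum_n f(G_n)$ does not settle down to the $\mathcal{N}(0,1)$ average of $f$ but keeps a realization-dependent value, whose law is that of $\int_0^1 f(B_t/\sqrt t)\,dt$, while averaging over realizations reproduces the Gaussian value, in the spirit of (2.11) and (2.18). The test function, sample sizes and number of realizations are arbitrary choices.

```python
# Illustrative simulation of Toy-Lemma 2.2 (not from the paper). For f = cos,
# the N(0,1) average of f equals exp(-1/2); single realizations of the
# empirical-metastate average fluctuate around it, but the mean over many
# realizations comes close to it.
import numpy as np

rng = np.random.default_rng(3)
N, n_real = 100_000, 200
f = np.cos                                   # a bounded continuous test function

def empirical_average():
    X = 2.0 * rng.integers(0, 2, size=N) - 1.0          # i.i.d. +-1 steps
    G = np.cumsum(X) / np.sqrt(np.arange(1, N + 1))     # G_n = S_n / sqrt(n)
    return f(G).mean()                                   # (1/N) sum_n f(G_n)

vals = np.array([empirical_average() for _ in range(n_real)])

print("five single realizations:", np.round(vals[:5], 3))   # visibly different values
print("std across realizations :", vals.std())               # clearly non-vanishing
print("mean across realizations:", vals.mean())              # ~ exp(-1/2)
print("N(0,1) average of cos   :", np.exp(-0.5))
```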