A PENALISED MODEL REPRODUCING THE MOD-POISSON FLUCTUATIONS IN THE SATHÉ-SELBERG THEOREM

YACINE BARHOUMI-ANDRÉANI

Abstract. We construct a probabilistic model for the number of divisors of a random uniform integer that converges in the mod-Poisson sense to the same limiting function as its original counterpart, the one arising in the Sathé-Selberg theorem. This construction involves a conditioning and gives an alternative perspective to the usual paradigm of "hybrid product" models developed by Gonek, Hughes and Keating in the case of the Riemann Zeta function.

1. Introduction

The Erdős-Kac theorem in probabilistic number theory concerns the Gaussian fluctuations of the number of prime divisors of a random integer: if $\mathcal P$ denotes the set of prime numbers, let $\omega(N)$ be the number of prime divisors of $N \in \mathbb N$ defined by
$$
\omega(N) := \sum_{p \in \mathcal P} \mathbf 1_{\{p \mid N\}}
$$
and let $U_n$ be a random variable uniformly distributed in $\{1, \dots, n\}$. The Erdős-Kac theorem writes (see [4, 5])
$$
\sup_{x \in \mathbb R} \left| \mathbb P\!\left( \frac{\omega(U_n) - \log\log n}{\sqrt{\log\log n}} \leqslant x \right) - \int_{-\infty}^{x} e^{-u^2/2} \,\frac{du}{\sqrt{2\pi}} \right| \xrightarrow[n \to +\infty]{} 0 \tag{1}
$$
The key understanding of this theorem is the following: the random variables
$$
B_p^{(n)} := \mathbf 1_{\{p \mid U_n\}}
$$
are $\{0,1\}$-Bernoulli random variables that are weakly correlated, and their approximation by a sequence of independent random variables is accurate at the level of this Central Limit Theorem (CLT). Concretely, one can perform the approximation
$$
\omega(U_n) = \sum_{p \in \mathcal P} \mathbf 1_{\{p \mid U_n\}} = \sum_{p \in \mathcal P,\, p \leqslant n} B_p^{(n)} \;\underset{n \to +\infty}{\approx}\; \sum_{p \in \mathcal P,\, p \leqslant n} B_p^{(\infty)}
$$
the $B_p^{(\infty)}$'s being independent Bernoulli random variables such that
$$
\mathbb P\big(B_p^{(\infty)} = 1\big) = \frac{1}{p} = 1 - \mathbb P\big(B_p^{(\infty)} = 0\big)
$$
To measure the accuracy of this last approximation, we introduce the independent model
$$
\Omega_n := \sum_{p \in \mathcal P,\, p \leqslant n} B_p^{(\infty)}
$$

Date: January 13, 2017.
2010 Mathematics Subject Classification. 60E10, 60E05, 60F05, 60G50, 60F99, 11K99, 11K65.
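The quantities above are easy to experiment with numerically. The following sketch (an illustration with arbitrary sample sizes, not part of the paper) computes $\omega(N)$ by trial division and compares the empirical mean of $\omega(U_n)$ with the mean $\sum_{p \leqslant n} 1/p$ of the independent model $\Omega_n$; both should be close to $\log\log n + O(1)$.

```python
import random
from math import log

def omega(n: int) -> int:
    """Number of distinct prime divisors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def primes_up_to(n: int) -> list:
    """All primes <= n (sieve of Eratosthenes)."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, n + 1, p))
    return [p for p in range(2, n + 1) if sieve[p]]

random.seed(0)
n = 10 ** 6
# empirical mean of omega(U_n) over a uniform sample
sample = [omega(random.randint(1, n)) for _ in range(2000)]
mean_omega = sum(sample) / len(sample)
# mean of the independent model: E[Omega_n] = sum_{p <= n} 1/p ~ loglog n + O(1)
mean_indep = sum(1.0 / p for p in primes_up_to(n))
print(mean_omega, mean_indep, log(log(n)))
```

The two empirical means agree to within a few percent, while both sit a bounded distance above $\log\log n$, consistent with (2) below.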
A model of a random or a deterministic sequence is a random sequence that can be substituted for its original in a prescribed framework while still capturing its main properties, for instance a particular type of convergence. At the order of renormalisation of the CLT given by (1), the independent model is accurate since one can write
$$
\frac{\omega(U_n) - \log\log n}{\sqrt{\log\log n}} \approx \frac{\Omega_n - \sum_{p \in \mathcal P,\, p \leqslant n} \frac{1}{p}}{\sqrt{\sum_{p \in \mathcal P,\, p \leqslant n} \frac{1}{p}}} \xrightarrow[n \to +\infty]{\mathcal L} \mathcal N(0,1)
$$
Here, we have used the well-known estimate for the prime harmonic sum
$$
H_n^{(\mathcal P)} := \sum_{p \in \mathcal P,\, p \leqslant n} \frac{1}{p} = \log\log n + O(1) \tag{2}
$$
This model is interesting to understand the Erdős-Kac CLT, but it hides a certain amount of information since at the second order of renormalisation the dependency of the $B_p^{(n)}$'s re-appears: one has the following result due to Selberg [17] improving a result of Sathé [18]
$$
\mathbb E\big(z^{\omega(U_n)}\big) = e^{(z-1)(\log\log n + \kappa)}\, \Phi_\omega(z) \left(1 + O\!\left((\log n)^{\operatorname{Re}(z) - 2}\right)\right) \tag{3}
$$
where, for $R > 0$, the $O$ is uniform for $|z| \leqslant R$, where $\kappa$ is an absolute positive constant and
$$
\Phi_\omega(z) := \prod_{k \in \mathbb N^*} \left(1 + \frac{z-1}{k}\right) e^{-\frac{z-1}{k}} \prod_{p \in \mathcal P} \left(1 + \frac{z-1}{p}\right) e^{-\frac{z-1}{p}} \tag{4}
$$
But one has
$$
\mathbb E\big(z^{\Omega_n}\big) = e^{(z-1)(\log\log n + \kappa')}\, \Phi_\Omega(z) \left(1 + O\!\left((\log n)^{\operatorname{Re}(z) - 2}\right)\right) \tag{5}
$$
with $\kappa'$ another absolute constant and
$$
\Phi_\Omega(z) := \prod_{p \in \mathcal P} \left(1 + \frac{z-1}{p}\right) e^{-\frac{z-1}{p}}
$$
leading to a corrective factor $\Phi_C(z) := \prod_{k \in \mathbb N^*} \big(1 + \frac{z-1}{k}\big) e^{-\frac{z-1}{k}}$ such that $\Phi_\omega(z) = \Phi_\Omega(z)\, \Phi_C(z)$. This factor is easily seen to be the limiting function of the random variable
$$
C_n := \sum_{k=1}^{n} B_k^{(\infty)}
$$
which is equal in distribution to the number of cycles $C(\sigma_n)$ of a random uniform permutation $\sigma_n \in \mathfrak S_n$ (see [1, 12]), hence the name.

Let $(X_n)_n$ be a sequence of random variables and let $P_{\gamma_n}$ be a Poisson-distributed random variable of parameter $\gamma_n \in \mathbb R_+$.
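The distributional identity between $C_n$ and the cycle count $C(\sigma_n)$ can be probed by simulation. The sketch below (an illustration with arbitrary sample sizes, not part of the paper) compares the empirical mean of the number of cycles of a uniform permutation of $\mathfrak S_{50}$ with that of a sum of independent Bernoulli$(1/k)$ variables; both target the harmonic number $H_{50}$.

```python
import random

def cycle_count(perm):
    """Number of cycles of a permutation of {0, ..., n-1} given as a list."""
    seen, cycles = [False] * len(perm), 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cycles

random.seed(1)
n, trials = 50, 4000
perm_tot = bern_tot = 0
for _ in range(trials):
    p = list(range(n))
    random.shuffle(p)                 # uniform permutation of S_n
    perm_tot += cycle_count(p)
    # sum of independent Bernoulli(1/k), k = 1..n
    bern_tot += sum(random.random() < 1.0 / k for k in range(1, n + 1))

H_n = sum(1.0 / k for k in range(1, n + 1))
print(perm_tot / trials, bern_tot / trials, H_n)
```

Both empirical means land near $H_{50} \approx 4.5$; a finer comparison of the full distributions (Feller coupling) goes beyond this sketch.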
When there exists a continuous function $\Phi : A \subseteq \mathbb C \to \mathbb C$ satisfying some technical conditions (see Definition 2.1) such that the following convergence holds locally uniformly in $z \in A$
$$
\frac{\mathbb E\big(z^{X_n}\big)}{\mathbb E\big(z^{P_{\gamma_n}}\big)} \xrightarrow[n \to +\infty]{} \Phi(z)
$$
one says that $(X_n, \gamma_n)_n$ converges in the mod-Poisson sense to $\Phi$.

This particular type of convergence was introduced in [12] following a similar development in the Gaussian setting in [9]. It is unusual in probability theory, where the Fourier-Laplace transform is not often renormalised; it implies the usual CLT by a change of renormalisation. This is the mode of convergence underlying equations (3) and (5) since
$$
\mathbb E\big(z^{P_{\gamma_n}}\big) = e^{\gamma_n(z-1)}
$$
A natural question arises from the last computations:

Question 1.1. How to refine the independent model $\Omega_n$ to get a model that would reproduce the mod-Poisson fluctuations, i.e. a model that would converge in the mod-Poisson sense to the function $\Phi_\omega$?

The creation of heuristic probabilistic models aiming at understanding the sequence of prime numbers originates in the work of Cramér [3] and has recently seen some spectacular developments with the work [13] that makes precise Cramér's original idea that prime numbers behave "at random". The approach in this article is conceptually identical to the work of Hardy and Littlewood (see [8] or [19]) that refines the coarse Cramér model to incorporate effects likely to explain the distribution of twin primes. This refinement is done by biasing the probabilities of the Cramér model; an enlightening description of the biasing procedure is given in [19]. The analogy nevertheless stops here: the Cramér model consists in heuristically replacing the sequence of primes by a random sequence and arguing that they behave in a "similar way"; as [19] remarks, such an approach "must be taken with a liberal dose of salt".
Here, no such heuristic replacement is carried out (for a comparison of unrelated probabilistic models in number theory, see [20]). The problem tackled here is to construct a probabilistic approximation of a "true" random variable, $\omega(U_n)$, but the approximation has to be understood in a peculiar probabilistic sense (the mod-Poisson one in place of, say, a total variation approximation), which motivates the terminology of model as defined previously.

Question 1.1 is motivated by a similar development for the random variable $\zeta(1/2 + itU)$ where $U$ is a random variable uniformly distributed on the interval $[0,1]$, $t > 0$ and $\zeta$ is the Riemann Zeta function. This random variable satisfies a CLT due to Selberg (see e.g. [10]) in the same vein as the Erdős-Kac one for $\omega(U_n)$, namely
$$
\sup_{x \in \mathbb R} \left| \mathbb P\!\left( \frac{\log|\zeta(1/2 + itU)|}{\sqrt{\frac{1}{2}\log\log t}} \leqslant x \right) - \int_{-\infty}^{x} e^{-u^2/2} \,\frac{du}{\sqrt{2\pi}} \right| \xrightarrow[t \to +\infty]{} 0
$$
The question of computing the limiting function for the "mod-Gaussian renormalisation" given for all $\lambda \in i\mathbb R$ by
$$
\frac{\mathbb E\big(e^{\lambda \log|\zeta(1/2 + itU)|}\big)}{e^{\frac{\lambda^2}{2} \times \frac{1}{2}\log\log t}} \xrightarrow[t \to +\infty]{} \Phi_\zeta(\lambda) = \Phi_{\mathrm{Matrix}}(\lambda)\, \Phi_{\mathrm{Arithmetic}}(\lambda)
$$
is the celebrated Keating-Snaith moments conjecture (see [11] for the definitions).

In order to understand this last convergence, the authors of [7] construct a "hybrid product" model for $\log|\zeta(1/2 + itU)|$ that converges in the mod-* sense to the limiting function $\Phi_\zeta$. Answering Question 1.1 for the "toy model" given by $\omega(U_n)$ is thus of importance, for it may give a hint to understand $\log|\zeta(1/2 + itU)|$.

We refer to [7] for the exact details of the $\zeta$ model; instead of describing it, let us find its equivalent for $\omega(U_n)$. Since the limiting function occurring in (3) is a product $\Phi_\omega = \Phi_\Omega \Phi_C$, the idea is to think of $\omega(U_n)$ as being approximately an independent sum of random variables created by means of Bernoulli random variables, i.e.
$$
\omega(U_n) \approx \widetilde\omega_n := \sum_{k=1}^{A} B_k + \sum_{k=1}^{A'} B'_k \tag{6}
$$
with $\mathbb P(B_k = 1) = \frac{1}{p_k}$, $\mathbb P(B'_k = 1) = \frac{1}{k}$ and where $A, A'$ are chosen so that the mod-Poisson speed of convergence $\gamma_n = \log\log n + \kappa$ of $\omega(U_n)$ matches the speed of convergence of $\widetilde\omega_n$. We have set $\mathcal P := \{p_k,\, k \geqslant 1\}$. As $\gamma_n$ is asymptotically the mean of $\omega(U_n)$, using the classical relation $\sum_{1 \leqslant k \leqslant n} 1/k = \log n + O(1)$ one finds
$$
\mathbb E(\widetilde\omega_n) = \sum_{k=1}^{A} \frac{1}{p_k} + \sum_{k=1}^{A'} \frac{1}{k} = \log\log A + \log(A') + O(1)
$$
which amounts to $A' \log A = O(\log n)$, the constant in the $O$ being explicitly known.

This intuitive model, despite its degree of freedom and its artificial character, has the advantage of being an acceptable mod-Poisson model for $\omega(U_n)$ since it converges in the mod-Poisson sense to $\Phi_\omega = \Phi_\Omega \Phi_C$. Nevertheless, one could ask for another reason why the limiting correction to the independence takes the form of an additive independent term, and why this correction is again constructed by means of independent random variables. One can argue that a natural modification of the initial model is more likely to be understood by a biasing à la Hardy-Littlewood [8, 19] instead of a summation paradigm. This is the goal of this article.

More precisely, we will answer Question 1.1 by conditioning a random proportion of primes to be divisors with probability one in a slight modification of $\Omega_n$, i.e. by modifying the probabilities of the Bernoulli sum in the same way Hardy and Littlewood modify the probabilities of the Cramér Bernoulli variables, namely

Theorem. Set $\gamma_n := \log\log n$. For $\theta > 0$, let $B_k(\theta)$ be the Bernoulli random variable given by
$$
\mathbb P(B_k(\theta) = 1) = \frac{\theta}{\theta + k - 1} = 1 - \mathbb P(B_k(\theta) = 0)
$$
There exist a real sequence $(v_n)_n$, a random integer $C'_n$ and a sequence $(I_\ell)_\ell$ of i.i.d.
random integers independent of $(B_k(v_n))_k$ and $C'_n$ (all quantities explicitly defined in Theorem 3.9) such that
$$
\Omega''_n \overset{\mathcal L}{:=} \left( \sum_{k} B_{p_k}(v_n) \,\middle|\, B_{p_{I_1}}(v_n) = \dots = B_{p_{I_{C'_n}}}(v_n) = 1 \right)
$$
satisfies
$$
\frac{\mathbb E\big(x^{\Omega''_n}\big)}{\mathbb E\big(x^{P_{\gamma_n}}\big)} \xrightarrow[n \to +\infty]{} \Phi_\omega(x)
$$
The explicit description of all involved parameters is the content of Theorem 3.9. It will be proven using a probabilistic interpretation of mod-Poisson convergence developed in [2], which interprets it as a change of probability. In the context of discrete random variables, conditioning and biasing can be understood in the same framework, which deepens the analogy with the Hardy-Littlewood approach.

Notations. We gather here some notations used throughout the paper. The set $\{1, 2, \dots, n\}$ will be denoted by $[\![1, n]\!]$. The set of prime numbers will be denoted by $\mathcal P := \{p_k,\, k \geqslant 1\}$.

The distribution of a real random variable $X$ in the probability space endowed with the measure $\mathbb P$ will be denoted by $\mathbb P_X$: if $A$ is a measurable set, $\mathbb P_X(A) := \mathbb P(X \in A)$.

If $X$ and $Y$ are two random variables having the same distribution, that is $\mathbb P_X = \mathbb P_Y$, we will write $X \overset{\mathcal L}{=} Y$ or equivalently $X \sim \mathbb P_Y$. We will denote by $\mathcal P(\gamma)$ the Poisson distribution of parameter $\gamma > 0$, by $\mathcal N(0,1)$ the standard Gaussian distribution and by $\mathcal U(A)$ the uniform distribution on the set $A$.

For $f \in L^1(\mathbb P_X)$, $f \geqslant 0$, the penalisation or bias of $\mathbb P_X$ by $f$ is the probability measure $\mathbb P_Y$ denoted by
$$
\mathbb P_Y := \frac{f(X)}{\mathbb E(f(X))} \bullet \mathbb P_X
$$
This definition is equivalent to the following: for all $g \in L^\infty(\mathbb P_X)$,
$$
\mathbb E(g(Y)) = \frac{\mathbb E(f(X)\, g(X))}{\mathbb E(f(X))}
$$
Note that in a discrete setting, conditioning amounts to taking $f = \mathbf 1_A$ for a suitable set $A$.

A partition of an integer $N$ is a sequence of integers $\lambda = (\lambda_1, \dots, \lambda_k)$ where $\lambda_1 \geqslant \lambda_2 \geqslant \dots$ and $\sum_{i=1}^{k} \lambda_i = N$.
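The remark that, for discrete laws, conditioning is the special case $f = \mathbf 1_A$ of penalisation can be made concrete with a toy computation (a hypothetical example in exact arithmetic, not from the paper):

```python
from fractions import Fraction

def penalise(law, f):
    """Bias a discrete law by a nonnegative weight f: new mass at x is f(x)P(x)/E[f(X)]."""
    z = sum(f(x) * p for x, p in law.items())  # E[f(X)]
    return {x: f(x) * p / z for x, p in law.items() if f(x) * p != 0}

die = {k: Fraction(1, 6) for k in range(1, 7)}  # fair six-sided die

# penalising by the indicator of the even faces...
even = penalise(die, lambda x: 1 if x % 2 == 0 else 0)
# ...is exactly conditioning on an even outcome: uniform on {2, 4, 6}
assert even == {2: Fraction(1, 3), 4: Fraction(1, 3), 6: Fraction(1, 3)}

# a size-bias-type weight f(x) = x tilts the law towards large values
tilted = penalise(die, lambda x: x)
assert tilted[6] == Fraction(6, 21)
```

The same `penalise` helper covers the exponential bias $f(x) = c^x$ and the size bias $f(x) = x$ used later in Section 3.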
We define the length of such a partition by $\ell(\lambda) := k$. The paintbox process (see [14]) is a random partition $\lambda = (\lambda_1, \dots, \lambda_k)$ constructed in the following way: let $(I_i)_i$ be i.i.d. integer-valued random variables and define the (random) equivalence relation by
$$
k \sim r \iff I_k = I_r \tag{7}
$$
The equivalence classes of this relation define a random partition $\lambda$. In the case where $I_1 \sim \mathcal U([\![1, N]\!])$, this random partition is equal in law to the cycle structure of a random uniform permutation $\sigma \in \mathfrak S_N$ and in particular, the total number of cycles of $\sigma$ satisfies $C(\sigma) = \ell(\lambda)$.

2. Reminder on mod-Poisson convergence

2.1. Definition and examples. Let $P_\gamma \sim \mathcal P(\gamma)$ with $\gamma > 0$. Recall that
$$
\mathbb P(P_\gamma = k) = e^{-\gamma}\, \frac{\gamma^k}{k!}
$$
which is a statement equivalent to $\mathbb E\big(e^{iuP_\gamma}\big) = \exp\big(\gamma(e^{iu} - 1)\big)$ for all $u \in \mathbb R$. We define mod-Poisson convergence in the Laplace-Fourier setting by the following

Definition 2.1. Let $(Z_n)_n$ be a sequence of positive random variables and $(\gamma_n)_n$ be a sequence of strictly positive real numbers. $(Z_n)_n$ is said to converge in the mod-Poisson sense at speed $(\gamma_n)_n$ if for all $z \in \mathbb C$, the following convergence holds locally uniformly in $z \in \mathbb C$
$$
\frac{\mathbb E\big(z^{Z_n}\big)}{\mathbb E\big(z^{P_{\gamma_n}}\big)} \xrightarrow[n \to +\infty]{} \Phi(z)
$$
where $\Phi : \mathbb C \to \mathbb C$ is a continuous function satisfying $\Phi(1) = 1$, $\Phi(\bar z) = \overline{\Phi(z)}$ and with $P_{\gamma_n} \sim \mathcal P(\gamma_n)$.

When such a convergence holds, we write it as
$$
(Z_n, \gamma_n) \xrightarrow[n \to +\infty]{\text{mod-}\mathcal P} \Phi
$$
Remark 2.2. The limiting function $\Phi$ is not unique, since it is defined up to multiplication by an exponential (see [9]).

Remark 2.3. The original definition used in [12] is in the Fourier setting, i.e. for $|z| = 1$. The advantage of this definition is that the Fourier transform of a random variable always exists. Other restrictions of the domain of convergence are possible. For instance, [6] uses $\{z \in \mathbb C : -c < \operatorname{Re}(z) < c\}$ for a certain $c > 0$.
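The paintbox construction (7) is straightforward to code: the partition is read off from the multiplicities of the i.i.d. labels. Below is a minimal sketch (illustrative only; the uniform label distribution is just one choice):

```python
import random
from collections import Counter

def paintbox(labels):
    """Partition induced by the equivalence relation i ~ j iff labels[i] == labels[j]."""
    # class sizes in decreasing order: the partition lambda
    return sorted(Counter(labels).values(), reverse=True)

random.seed(2)
N = 12
labels = [random.randint(1, N) for _ in range(N)]  # I_1, ..., I_N i.i.d.
lam = paintbox(labels)

# lam is a partition of N: decreasing parts summing to N, of length ell(lam)
assert sum(lam) == N
assert all(a >= b for a, b in zip(lam, lam[1:]))
ell = len(lam)
```

Here `ell` plays the role of $\ell(\lambda)$, the number of equivalence classes.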
In the Laplace case, one has, locally uniformly in $x \in \mathbb R_+$,
$$
\frac{\mathbb E\big(x^{Z_n}\big)}{\mathbb E\big(x^{P_{\gamma_n}}\big)} \xrightarrow[n \to +\infty]{} \Phi(x)
$$
and in particular, $\Phi(x) \geqslant 0$ for all $x \in \mathbb R_+$.

From now on, we restrict ourselves to the Laplace setting. Mod-Poisson convergence will always mean "in the Laplace setting" unless specified. In particular, the limiting mod-Poisson function $\Phi$ will be defined on $\mathbb R_+$ and a quantity such as $\|\Phi\|_\infty$ will be understood as $\sup_{x \in \mathbb R_+} |\Phi(x)|$.

The following example is fundamental to understand mod-Poisson convergence:

Example 2.4. Let $(B_k)_k$ be a sequence of independent Bernoulli random variables such that
$$
p_k := \mathbb P(B_k = 1) = 1 - \mathbb P(B_k = 0)
$$
where $(p_k)_k$ is a sequence of real numbers satisfying the conditions
$$
\text{(i)} \quad \sum_{k \geqslant 1} p_k = +\infty \qquad\qquad \text{(ii)} \quad \sum_{k \geqslant 1} p_k^2 < +\infty
$$
Let $Z_n := \sum_{k=1}^{n} B_k$ and $\gamma_n := \sum_{k=1}^{n} p_k$. Then,
$$
(Z_n, \gamma_n) \xrightarrow[n \to +\infty]{\text{mod-}\mathcal P} \Phi
$$
where
$$
\Phi(x) = \prod_{k \geqslant 1} (1 + p_k(x-1))\, e^{-p_k(x-1)} \tag{8}
$$
Indeed, setting $P_{\gamma_n} \sim \mathcal P(\gamma_n)$ one has, locally uniformly in $x$,
$$
\frac{\mathbb E\big(x^{Z_n}\big)}{\mathbb E\big(x^{P_{\gamma_n}}\big)} = \frac{\prod_{k=1}^{n} \mathbb E\big(x^{B_k}\big)}{e^{\gamma_n(x-1)}} = \prod_{k=1}^{n} (1 + p_k(x-1))\, e^{-p_k(x-1)} \xrightarrow[n \to +\infty]{} \prod_{k \geqslant 1} (1 + p_k(x-1))\, e^{-p_k(x-1)}
$$
since $\sum_k p_k^2 < \infty$ and $(1 + p_k(x-1))\, e^{-p_k(x-1)} = \exp\big(-p_k^2(x-1)^2/2 + o(p_k^2)\big)$.

One can see that equation (5) is a particular case of this last result with $p_k = 1/p_k$ where $\mathcal P := \{p_k,\, k \geqslant 1\}$. As pointed out in the introduction, this is also the case of equation (3) using the "hybrid sum" $\widetilde\omega_n$ of (6) that incorporates the corrective term
$$
\Phi_C(x) := \prod_{k \geqslant 1} \left(1 + \frac{x-1}{k}\right) e^{-\frac{x-1}{k}}
$$
This term is the limiting mod-Poisson function of the random variable $C_n := \sum_{k=1}^{n} B_k$ where $(B_k)_k$ is the last sequence of Bernoulli random variables with $p_k = 1/k$, and with speed
$$
H_n := \sum_{k=1}^{n} \frac{1}{k} = \log n + \gamma + o(1)
$$
where $\gamma$ is the Euler-Mascheroni constant.

Remark 2.5.
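Example 2.4 can be checked numerically in the harmonic case $p_k = 1/k$, where $\Phi = \Phi_C$. The sketch below (illustrative; the truncation level is arbitrary) evaluates the finite product $\prod_{k \leqslant n} (1 + p_k(x-1)) e^{-p_k(x-1)}$ at $x = 2$ and compares it with the closed value $\Phi_C(2) = e^{-\gamma}$, which follows from the Weierstrass product $1/\Gamma(z) = e^{\gamma(z-1)} \Phi_C(z)$ recalled in Remark 2.5.

```python
from math import exp

def mod_poisson_ratio(x, n, p):
    """E[x^{Z_n}] / E[x^{P_{gamma_n}}] for a sum of independent Bernoulli(p(k))."""
    r = 1.0
    for k in range(1, n + 1):
        pk = p(k)
        r *= (1.0 + pk * (x - 1.0)) * exp(-pk * (x - 1.0))
    return r

x = 2.0
r = mod_poisson_ratio(x, 10 ** 5, lambda k: 1.0 / k)

# For p_k = 1/k the limit is Phi_C(2) = e^{-gamma} (Euler-Mascheroni constant)
EULER_GAMMA = 0.5772156649015329
print(r, exp(-EULER_GAMMA))
```

At $n = 10^5$ the finite product already agrees with $e^{-\gamma} \approx 0.5615$ to about $10^{-5}$, reflecting the $\exp(-p_k^2(x-1)^2/2 + o(p_k^2))$ tail estimate.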
Such a random variable has also the distribution of the total number of cycles of a random permutation selected according to the uniform distribution $\mathbb P_n$ defined by $\mathbb P_n(\sigma) = 1/n!$ for all $\sigma \in \mathfrak S_n$ (see e.g. [1]). Using the formula (see e.g. [23, 12.11])
$$
\frac{1}{\Gamma(z)} = e^{(z-1)\gamma} \prod_{k \geqslant 1} \left(1 + \frac{z-1}{k}\right) e^{-\frac{z-1}{k}}
$$
$\Phi_C(z)$ can be replaced by $1/\Gamma(z)$ when the speed $H_n$ is replaced by $H_n - \gamma$.

2.2. A probabilistic interpretation of mod-Poisson convergence. We recall the following theorem from [2]:

Theorem 2.6. Let $\Phi$ be a bounded function on $\mathbb R_+$ and $\gamma > 0$. Define the distribution $\mathcal Q(\Phi, \gamma)$ by
$$
Q_\gamma \sim \mathcal Q(\Phi, \gamma) \iff \mathbb P_{Q_\gamma} := \frac{\Phi\big(\frac{P_\gamma}{\gamma}\big)}{\mathbb E\big(\Phi\big(\frac{P_\gamma}{\gamma}\big)\big)} \bullet \mathbb P_{P_\gamma}
$$
where $P_\gamma \sim \mathcal P(\gamma)$.

Then, if $Q_{\gamma_n}(\Phi) \sim \mathcal Q(\Phi, \gamma_n)$, we have
$$
(Q_{\gamma_n}(\Phi), \gamma_n) \xrightarrow[n \to +\infty]{\text{mod-}\mathcal P} \Phi
$$
For the reader's convenience, we recall the proof of this result.

Proof. Recall the change of probability, for $x, \gamma > 0$,
$$
\frac{x^{P_\gamma}}{\mathbb E\big(x^{P_\gamma}\big)} \bullet \mathbb P_{P_\gamma} = \mathbb P_{P_{x\gamma}} \tag{9}
$$
easily seen by writing, for all $\theta \in \mathbb R$,
$$
\frac{\mathbb E\big(x^{P_\gamma} e^{i\theta P_\gamma}\big)}{\mathbb E\big(x^{P_\gamma}\big)} = \frac{e^{\gamma(x e^{i\theta} - 1)}}{e^{\gamma(x-1)}} = e^{\gamma x(e^{i\theta} - 1)} = \mathbb E\big(e^{i\theta P_{x\gamma}}\big)
$$
Then, we have
$$
\frac{\mathbb E\big(x^{Q_{\gamma_n}(\Phi)}\big)}{\mathbb E\big(x^{P_{\gamma_n}}\big)} = \frac{\mathbb E\big(\Phi\big(\frac{P_{\gamma_n}}{\gamma_n}\big)\, x^{P_{\gamma_n}}\big)}{\mathbb E\big(x^{P_{\gamma_n}}\big)\, \mathbb E\big(\Phi\big(\frac{P_{\gamma_n}}{\gamma_n}\big)\big)} = \frac{\mathbb E\Big(\frac{x^{P_{\gamma_n}}}{\mathbb E(x^{P_{\gamma_n}})}\, \Phi\big(\frac{P_{\gamma_n}}{\gamma_n}\big)\Big)}{\mathbb E\big(\Phi\big(\frac{P_{\gamma_n}}{\gamma_n}\big)\big)} = \frac{\mathbb E\big(\Phi\big(\frac{P_{x\gamma_n}}{\gamma_n}\big)\big)}{\mathbb E\big(\Phi\big(\frac{P_{\gamma_n}}{\gamma_n}\big)\big)}
$$
By continuity and boundedness of $\Phi$, and using dominated convergence and the fact that
$$
\frac{P_{x\gamma_n}}{\gamma_n} \xrightarrow[n \to +\infty]{\mathcal L} x
$$
one gets, locally uniformly in $x \in \mathbb R_+$ (and in particular for $x = 1$),
$$
\mathbb E\left(\Phi\left(\frac{P_{x\gamma_n}}{\gamma_n}\right)\right) \xrightarrow[n \to +\infty]{} \Phi(x)
$$
As $\Phi(1) = 1$, one finally gets the result. □

Example 2.7. Continuing Example 2.4, we see that in the case of a function given by (8), i.e.
$$
\Phi(x) = \prod_{k \geqslant 1} (1 + p_k(x-1))\, e^{-p_k(x-1)}
$$
one has, for all $x \in \mathbb R_+$,
$$
0 \leqslant \Phi(x) \leqslant 1
$$
and the last theorem applies.
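The change of probability (9), namely that an exponential bias of a Poisson($\gamma$) variable is again Poisson with parameter $x\gamma$, can be verified directly on the probability mass functions (a numerical sanity check, not part of the paper):

```python
from math import exp, factorial

def poisson_pmf(g, k):
    return exp(-g) * g ** k / factorial(k)

g, x, K = 3.0, 1.5, 80  # truncation K is large enough for both tails

weights = [x ** k * poisson_pmf(g, k) for k in range(K)]
Z = sum(weights)                      # = E[x^{P_gamma}] = e^{gamma(x-1)}
biased = [w / Z for w in weights]     # pmf of the x-biased Poisson(gamma)
target = [poisson_pmf(x * g, k) for k in range(K)]

err = max(abs(a - b) for a, b in zip(biased, target))
print(Z, exp(g * (x - 1.0)), err)
```

The identity is in fact exact term by term, since $x^k e^{-\gamma}\gamma^k/k!$ renormalised by $e^{\gamma(x-1)}$ equals $e^{-x\gamma}(x\gamma)^k/k!$; the numerical error is at the level of floating-point rounding.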
The positivity of $\Phi$ on $\mathbb R_+$ is obvious, and as $1 + y \leqslant e^y$ for all $y \in \mathbb R$, setting $y = p_k(x-1)$ one has $(1 + p_k(x-1))\, e^{-p_k(x-1)} \leqslant 1$, which gives the upper bound.

A probabilistic interpretation of mod-Poisson convergence follows from this last theorem: if $(Z_n)_n$ is a sequence of random variables converging in the mod-Poisson sense at speed $(\gamma_n)_n$ to $\Phi$, one may think of the distribution of $Z_n$ as close to the distribution of $Q_{\gamma_n}(\Phi)$. The limiting function $\Phi$, once correctly scaled, would thus be a particular correction to the Poisson distribution that would allow a refined speed of convergence in the CLT, and mod-Poisson convergence could thus be understood as a certain second-order convergence in distribution. This is the case in the mod-Gaussian setting (see [2]), but also in the mod-Poisson setting since [21, II.6 (20)] gives
$$
\mathbb P(\omega(U_n) = k) = \mathbb P(P_{\log\log n} = k) \left( \Phi_\omega\!\left(\frac{k}{\log\log n}\right) + O\!\left(\frac{k}{(\log\log n)^2}\right) \right)
$$
uniformly in $n \geqslant 3$ and $k \in [\![1, (2-\delta)\log\log n]\!]$ for all $\delta > 0$.
Moreover, using 3.7, one has $\|\Phi'_\omega\|_\infty < \infty$, which implies, using the Gaussian CLT for $P_\gamma$,
$$
\mathbb E\left(\left|\Phi_\omega\!\left(\frac{P_\gamma}{\gamma}\right) - 1\right|\right) = \mathbb E\left(\left|\Phi_\omega\!\left(\frac{P_\gamma}{\gamma}\right) - \Phi_\omega(1)\right|\right) \leqslant \big\|\Phi'_\omega\big\|_\infty\, \mathbb E\left(\left|\frac{P_\gamma}{\gamma} - 1\right|\right) = O\!\left(\frac{1}{\sqrt\gamma}\right)
$$
This last result can hence be transformed into
$$
\mathbb P(\omega(U_n) = k) = \mathbb P(P_{\log\log n} = k) \left( \frac{\Phi_\omega\big(\frac{k}{\log\log n}\big)}{\mathbb E\big(\Phi_\omega\big(\frac{P_{\log\log n}}{\log\log n}\big)\big)} + O\!\left(\frac{1}{\sqrt{\log\log n}}\right) + O\!\left(\frac{k}{(\log\log n)^2}\right) \right)
$$
that is
$$
\mathbb P(\omega(U_n) = k) = \mathbb P(Q_{\log\log n}(\Phi_\omega) = k) + O\!\left(\frac{(\log\log n)^{k - 1/2}}{k!\, \log n}\right)
$$

3. A model that converges in the mod-Poisson sense

In order to construct a probabilistic model for $\omega(U_n)$ that converges in the mod-Poisson sense to $\Phi_\omega$, we recall some classical probabilistic biases.

3.1. Classical biases and changes of probability. A fundamental operation in probability theory is the change of probability by means of a weight on the initial probability measure. This weight is called a bias or a penalisation, and we will use both terminologies interchangeably.

Definition 3.1 (Bias/penalisation of measure). Let $X$ be a real random variable in the probability space endowed with the measure $\mathbb P$ and denote by $\mathbb P_X$ its law. For $f \in L^1(\mathbb P_X)$, $f \geqslant 0$, the penalisation (or bias) of $\mathbb P_X$ by $f$ is the probability measure $\mathbb P_Y$ denoted by
$$
\mathbb P_Y := \frac{f(X)}{\mathbb E(f(X))} \bullet \mathbb P_X
$$
Classical biases in probability theory allow one to understand the "pathwise transformations" induced by such a change of measure.

Example 3.2. The most classical change of probability concerns the passage from $\mathcal N(0,1)$ to $\mathcal N(\mu,1) \overset{\mathcal L}{=} \mu + \mathcal N(0,1)$.
Indeed, if $X \sim \mathcal N(0,1)$, one easily checks that
$$
\mathbb P_{X + \mu} = \frac{e^{\mu X}}{\mathbb E(e^{\mu X})} \bullet \mathbb P_X = e^{\mu X - \mu^2/2} \bullet \mathbb P_X
$$
Hence, in the Gaussian setting, an exponential bias is equivalent to a translation of the canonical evaluation. Note that the Poisson counterpart of this exponential bias was given in equation (9).

A classical transform in probability theory is made using $f : x \mapsto x$ when the random variable is positive.

Definition 3.3 (Size-bias transform). Let $X \geqslant 0$ be a random variable with expectation $\mu := \mathbb E(X) < \infty$. A random variable $X^{(s)}$ is said to be a size-bias transform of $X$ if, for all real functions $f$ such that $\mathbb E(|X f(X)|) < \infty$,
$$
\mathbb E(X f(X)) = \mu\, \mathbb E\big(f\big(X^{(s)}\big)\big)
$$
An equivalent definition is thus
$$
\mathbb P_{X^{(s)}} := \frac{X}{\mathbb E(X)} \bullet \mathbb P_X \tag{10}
$$
Example 3.4. A classical change of measure for a random walk is given by its size-bias coupling, i.e. given $(X_k)_k$ a sequence of i.i.d. positive random variables of expectation $\mathbb E(X_k) := 1$ defined on the same probability space, the random walk $(S_n)_n$ of increments $(X_k)_k$ is given by
$$
S_n := \sum_{k=1}^{n} X_k
$$
The size-bias transform of $S_n$ is the random variable $S_n^{(s)}$ whose law is given by
$$
\mathbb P_{S_n^{(s)}} := \frac{S_n}{n} \bullet \mathbb P_{S_n}
$$
A pathwise construction of such a random variable is implied by the following

Lemma 3.5 (Size-bias coupling of an independent sum). Let $(Y_k)_k$ be a sequence of independent positive integrable random variables, independent of $(X_k)_k$ and having the same distribution as $(X_k)_k$, and let $I \in [\![1, n]\!]$ be a random index independent of $(X_k)_k$ and $(Y_k)_k$ of law given by
$$
\mathbb P(I = k) = \frac{\mathbb E(X_k)}{\sum_{\ell=1}^{n} \mathbb E(X_\ell)}
$$
Then,
$$
S_n^{(s)} \overset{\mathcal L}{=} S_n - X_I + Y_I^{(s)}
$$
and in particular, if $(Y_k)_k$ is defined on the same probability space as $(X_k)_k$, one has a natural coupling $\big(S_n^{(s)}, S_n\big)$.

For the reader's convenience, we recall the proof of this lemma (see also e.g. [1, pp. 78-79]).

Proof. Let $f$ be a bounded measurable function and $S_n^{\langle -k \rangle} := \sum_{\ell \neq k} X_\ell$.
Then, by independence,
$$
\mathbb E\big(f\big(S_n^{(s)}\big)\big) := \frac{1}{\mathbb E(S_n)}\, \mathbb E(S_n f(S_n)) = \frac{1}{\mathbb E(S_n)} \sum_{k=1}^{n} \mathbb E(X_k f(S_n)) = \frac{1}{\mathbb E(S_n)} \sum_{k=1}^{n} \mathbb E\big(X_k f\big(S_n^{\langle -k \rangle} + X_k\big)\big)
$$
$$
= \frac{1}{\mathbb E(S_n)} \sum_{k=1}^{n} \mathbb E(X_k)\, \mathbb E\big(f\big(S_n^{\langle -k \rangle} + Y_k^{(s)}\big)\big) = \mathbb E\big(f\big(S_n^{\langle -I \rangle} + Y_I^{(s)}\big)\big) = \mathbb E\big(f\big(S_n - X_I + Y_I^{(s)}\big)\big) \qquad \square
$$

A last type of useful bias concerns the exponential bias of a sum of independent terms, and in particular Bernoulli random variables.

Lemma 3.6 (Exponential bias of an independent sum). Let $(Y_k)_k$ be a sequence of independent random variables. Define, for a certain $x > 0$,
$$
S_n := \sum_{k=1}^{n} Y_k, \qquad \mathbb P_{S_n^{(x)}} = \frac{x^{S_n}}{\mathbb E(x^{S_n})} \bullet \mathbb P_{S_n}
$$
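Lemma 3.5 lends itself to a direct Monte Carlo check in the Bernoulli case, where the size-bias transform of a Bernoulli variable is the constant $1$, so the coupling reduces to $S_n - X_I + 1$. The sketch below (an illustration with arbitrary parameters, not from the paper) compares the coupling's empirical law with the exact size-biased pmf $k\,\mathbb P(S_n = k)/\mathbb E(S_n)$:

```python
import random

random.seed(3)
ps = [0.5, 0.3, 0.2, 0.4]  # hypothetical Bernoulli parameters p_k
mu = sum(ps)               # E[S_n]

# exact pmf of S_n = sum of independent Bernoulli(p_k), by convolution
pmf = [1.0]
for p in ps:
    new = [0.0] * (len(pmf) + 1)
    for k, q in enumerate(pmf):
        new[k] += q * (1.0 - p)
        new[k + 1] += q * p
    pmf = new

# exact size-biased pmf: P(S^(s) = k) = k P(S_n = k) / E[S_n]
size_biased = [k * pmf[k] / mu for k in range(len(pmf))]

# Lemma 3.5 coupling: S^(s) = S_n - X_I + Y_I^(s), with Y^(s) = 1 for Bernoulli
trials = 200_000
counts = [0] * len(pmf)
for _ in range(trials):
    xs = [1 if random.random() < p else 0 for p in ps]
    i = random.choices(range(len(ps)), weights=ps)[0]  # P(I = k) prop. to p_k
    counts[sum(xs) - xs[i] + 1] += 1

emp = [c / trials for c in counts]
err = max(abs(a - b) for a, b in zip(emp, size_biased))
print(err)
```

Note that the coupled variable never takes the value $0$, in agreement with the size-biased pmf, which assigns it mass $k \cdot \mathbb P(S_n = 0)/\mu = 0$.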
