The Annals of Applied Probability
2008, Vol. 18, No. 1, 259–287
DOI: 10.1214/07-AAP455
© Institute of Mathematical Statistics, 2008
arXiv:0801.3353v1 [math.PR] 22 Jan 2008

EVOLUTIONARILY STABLE STRATEGIES OF RANDOM GAMES, AND THE VERTICES OF RANDOM POLYGONS^1

By Sergiu Hart, Yosef Rinott and Benjamin Weiss
Hebrew University of Jerusalem

An evolutionarily stable strategy (ESS) is an equilibrium strategy that is immune to invasions by rare alternative ("mutant") strategies. Unlike Nash equilibria, ESS do not always exist in finite games. In this paper we address the question of what happens when the size of the game increases: does an ESS exist for "almost every large" game? Letting the entries in the n × n game matrix be independently randomly chosen according to a distribution F, we study the number of ESS with support of size 2. In particular, we show that, as n → ∞, the probability of having such an ESS: (i) converges to 1 for distributions F with "exponential and faster decreasing tails" (e.g., uniform, normal, exponential); and (ii) converges to 1 − 1/√e for distributions F with "slower than exponential decreasing tails" (e.g., lognormal, Pareto, Cauchy).

Our results also imply that the expected number of vertices of the convex hull of n random points in the plane converges to infinity for the distributions in (i), and to 4 for the distributions in (ii).

Received February 2007; revised June 2007.
^1 Supported in part by grants of the Israel Science Foundation and by the Institute for Advanced Studies at the Hebrew University of Jerusalem.
AMS 2000 subject classifications. Primary 91A22, 60D05; secondary 60F99, 52A22.
Key words and phrases. Evolutionarily stable strategy, ESS, random game, random polytope, convex hull of random points, Nash equilibrium, Poisson approximation, Chen–Stein method, heavy-tailed distribution, subexponential distribution, threshold phenomenon.
This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Applied Probability, 2008, Vol. 18, No. 1, 259–287. This reprint differs from the original in pagination and typographic detail.

1. Introduction. The concept of evolutionarily stable strategy (ESS for short), introduced by Maynard Smith and Price [12], refers to a strategy that, when played by the whole population, is immune to invasions by rare alternative ("mutant") strategies (see Section 2.1 for precise definitions). Formally, an ESS corresponds to a symmetric Nash equilibrium that satisfies an additional stability requirement. Every (symmetric) finite game has a (symmetric) Nash equilibrium. But the same is not true for ESS: there are games with finitely many pure strategies that have no ESS. Moreover, the nonexistence of ESS is not an "isolated" phenomenon: it holds for open sets of games.^2

This leads us to the question of what happens when the number of strategies is large: does an ESS exist for "almost every large game"? Specifically, assuming that the payoffs in the game are randomly chosen (they are independent and identically distributed random variables), what is the probability that an ESS exists, and what is the limit of this probability as the size of the game increases?

For pure ESS, the answer to this question is simple: the probability that a pure ESS exists is 1 − (1 − 1/n)^n, which converges to 1 − 1/e ≃ 63% as n → ∞, where n is the number of strategies. What about mixed ESS? Here we study mixed ESS with support of size 2 (called "two-point ESS") and find out that, unlike pure ESS, the answer depends on the underlying distribution F from which the payoffs are drawn.

By way of illustration, consider the family of cumulative distribution functions F_α(x) = 1 − e^{−x^α} for all x ≥ 0, where α > 0. Our result is:

• When α ≥ 1 the probability that there is a two-point ESS converges to 1 as^3 n → ∞.
• When α < 1 the probability that there is a two-point ESS converges to 1 − 1/√e ≃ 39% as^4 n → ∞.
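The pure-ESS probability is simple enough to verify numerically. The following sketch (an illustration in Python/NumPy, not part of the paper) uses the characterization, made precise in Lemma 3(i) below, that i is a pure ESS exactly when the diagonal entry R(i,i) is the largest entry in column i:

```python
import numpy as np

def pure_ess_exists(R):
    # i is a pure ESS iff R(i,i) beats every alternative reply, i.e.
    # R(i,i) > R(j,i) for all j != i: the diagonal entry is the
    # (a.s. strict, for continuous F) maximum of column i.
    n = R.shape[0]
    return any(R[i, i] == R[:, i].max() for i in range(n))

rng = np.random.default_rng(0)
n, trials = 50, 2000
freq = sum(pure_ess_exists(rng.standard_normal((n, n)))
           for _ in range(trials)) / trials
exact = 1 - (1 - 1 / n) ** n        # P(a pure ESS exists), any continuous F
print(freq, exact, 1 - 1 / np.e)    # both close to 1 - 1/e ~ 0.632
```

Because the n columns are independent and the diagonal entry wins its column with probability 1/n, the number of pure ESS is Binomial(n, 1/n) (this is Proposition 4 below), which gives the 1 − (1 − 1/n)^n formula.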
Moreover, we show that the distribution of the number of two-point ESS converges to a Poisson distribution, with a parameter converging to infinity when α ≥ 1, and with a parameter of 1/2 when α < 1.

This threshold phenomenon is not restricted to the class F_α. We identify two classes of distributions. The first is a class of "light-tailed" distributions with tail probabilities 1 − F(x) that decrease exponentially as x → ∞ (i.e., exponential distributions) or faster (e.g., normal distributions, uniform distributions on bounded intervals, logistic distributions); they all lead to the same result as F_α for α ≥ 1. The second is a class of "heavy-tailed" distributions with tail probabilities that decrease slower than exponentially as x → ∞ (including, in particular, the following distributions: Pareto, Cauchy, lognormal, stable with parameter less than 2), which all behave like F_α for α < 1. We refer to these two classes, respectively, as EF for "Exponential and Faster decreasing tails," and SE for "Slower than Exponential decreasing tails" (see Sections 4 and 5 for precise definitions).

^2 For instance, the "rock-scissors-paper" game of Example 9.2.1 in van Damme [15], and all its small enough perturbations, have no ESS.
^3 So a fortiori the probability that an ESS exists converges to 1 in this case.
^4 We also show in this case that the probability that there is either a pure or a two-point ESS converges to 1 − e^{−3/2} ≃ 78%.

An interesting consequence of our results concerns the classic problem of the number of vertices of the convex hull of a collection of random points in the plane, originally studied by Rényi and Sulanke [13]; see Section 3. Taking symmetric versions of the distributions^5 F_α, and assuming that the 2n coordinates of the n points in the plane are independent and F_α-distributed, we have:

• When α ≥ 1 the expected number of vertices of the convex hull of n random points in the plane converges to infinity as n → ∞.
• When α < 1 the expected number of vertices of the convex hull of n random points in the plane converges to 4 as n → ∞.

^5 That is, F_α(x) = (1/2)e^{−|x|^α} for x ≤ 0 and F_α(x) = 1 − (1/2)e^{−x^α} for x ≥ 0 [a distribution F is symmetric if F(−x) = 1 − F(x) for all x].

In addition, in the second case α < 1, the number of vertices converges in probability to 4; thus, the convex hull is a quadrilateral with probability converging to 1. Here again, the results hold for the general classes EF and SE, respectively.

The paper is organized as follows. The two classes of distributions are defined in Sections 4.1 and 5.1, respectively. Our main results for ESS are stated in Theorems 1 and 2 in Section 2.2 (see also Theorem 17 in Section 4.2 and Theorem 33 in Section 5.3), and, for the number of vertices, in Theorem 10 in Section 3. Section 2 presents the model (ESS and random games) together with some preliminary results. Section 3 deals with the number of vertices of random polygons. The detailed analysis is provided in Sections 4 and 5, and we conclude with a discussion in Section 6.

2. Preliminaries.

2.1. Evolutionarily stable strategies. The setup is that of a symmetric two-person game, with the payoffs given by the n × n matrix R = (R(i,j))_{i,j=1,...,n}. The interpretation is that a meeting between two players, the first playing the pure strategy i and the second playing the pure strategy j (where 1 ≤ i,j ≤ n), yields a payoff of R(i,j) to the first, and R(j,i) to the second (these payoffs may be viewed as a measure of "fitness" or "reproductive success").^6 A mixed strategy p is a probability vector on the set of pure strategies, that is, p = (p_1,...,p_n) ∈ ∆(n) := {x ∈ R^n_+ : Σ_{i=1}^n x_i = 1}; the payoff function R is bilinearly extended to pairs of mixed strategies: R(p,q) := Σ_{i=1}^n Σ_{j=1}^n p_i q_j R(i,j).

A mixed strategy p ∈ ∆(n) is an evolutionarily stable strategy (ESS) for the matrix R if it satisfies the following conditions (Maynard Smith and Price [12]):
^6 Thus the payoff matrix of the first player is R, and that of the second player is R^⊤, the transpose of R.

[ESS1] R(p,p) ≥ R(q,p) for all q ∈ ∆(n).
[ESS2] If q ≠ p satisfies R(q,p) = R(p,p), then R(q,q) < R(p,q).

This definition is equivalent to the requirement that for every q ≠ p there exists an "invasion barrier" b(q) > 0 such that R(p, (1−ε)p + εq) > R(q, (1−ε)p + εq) for all ε ∈ (0, b(q)). The interpretation of this inequality is that any small enough proportion ε [i.e., less than b(q)] of q-mutants cannot successfully invade a p-population, since the mutants' (average) payoff is strictly less than that of the existing population.

An ESS p is called an ℓ-point ESS if the support supp(p) = {i : p_i > 0} of p is of size ℓ. In particular, when ℓ = 1 we have a pure ESS. In the biological setup, ℓ = 1 corresponds to "monomorphism," and ℓ > 1 to "ℓ-allele polymorphism." Let S_ℓ^(n) ≡ S_ℓ^(n)(R) be the number of ℓ-point ESS for the matrix R.

2.2. ESS of random games. Let F be a cumulative distribution function on R. We will assume throughout this paper that F is continuous with a support (a,b) that is either finite or infinite (i.e., −∞ ≤ a < b ≤ ∞). For every integer n ≥ 1, let R ≡ R^(n) be an n × n matrix whose n² elements are independent F-distributed random variables; the number of ℓ-point ESS of R^(n) is now a random variable S_ℓ^(n).

We use the following notation: E for expectation; L(Z) for the distribution of the random variable Z; Poisson(λ) for the Poisson distribution with parameter λ [i.e., L(Z) = Poisson(λ) if P(Z = k) = e^{−λ} λ^k / k! for all integers k ≥ 0]; and the convergence of distributions is with respect to the variation norm [i.e., the l_1-norm on measures: ‖L(Z_1) − L(Z_2)‖ = Σ_k |P(Z_1 = k) − P(Z_2 = k)|]. The two classes of distributions, namely, the "exponential and faster decreasing tails" class EF and the "slower than exponential decreasing tails" class SE, will be formally defined in Sections 4.1 and 5.1, respectively.

We now state our main results on S_2^(n), the number of two-point ESS:

Theorem 1. If F ∈ EF, then, as n → ∞:
(i) µ_n := E(S_2^(n)) → ∞;
(ii) ‖L(S_2^(n)) − Poisson(µ_n)‖ → 0; and
(iii) P(there is a two-point ESS) → 1.

Theorem 2. If F ∈ SE, then, as n → ∞:
(i) µ_n := E(S_2^(n)) → 1/2;
(ii) ‖L(S_2^(n)) − Poisson(1/2)‖ → 0; and
(iii) P(there is a two-point ESS) → 1 − e^{−1/2} ≃ 0.39.

For the convergence to Poisson distributions (ii) we will use a result of the so-called "Chen–Stein method" that requires estimating only the first two moments (see Section 2.5); surprisingly, our proofs in the two cases are different. As for (iii), they are immediate from (ii). The two theorems are proved in Sections 4 and 5, respectively. Note that, for distributions in EF, Theorem 1(iii) implies that the probability that there is an ESS converges to 1 [see Section 6(c)].

Returning to the definition of ESS in Section 2.1, condition [ESS1] says that p is a best reply to itself, that is, (p,p) is a Nash equilibrium. By the bilinearity of R, it is equivalent to: R(i,p) = R(p,p) for all i ∈ supp(p), and R(j,p) ≤ R(p,p) for all j ∉ supp(p). Since F is a continuous distribution, it follows that, with probability 1, the inequalities are strict, that is, R(j,p) < R(p,p) for all j ∉ supp(p) [the jth row is independent of the rows in supp(p)]. Therefore, there are no best replies to p outside the support of^7 p, that is, R(q,p) = R(p,p) if and only if supp(q) ⊂ supp(p). Thus condition [ESS2] applies only to such q, and we obtain (see Haigh [10]):

^7 So (p,p) is a quasi-strict Nash equilibrium.

Lemma 3. For a random matrix R, the following hold a.s.:
(i) i is a pure ESS if and only if R(i,i) > R(j,i) for all j ≠ i.
(ii) There is a two-point ESS with support {i,j} if and only if there exist p_i, p_j > 0 such that p_i R(i,i) + p_j R(i,j) = p_i R(j,i) + p_j R(j,j) > p_i R(k,i) + p_j R(k,j) for all k ≠ i,j, and R(i,i) < R(j,i) and R(j,j) < R(i,j).

The following is immediate from (i) (see Haigh [10]):

Proposition 4. S_1^(n), the number of pure ESS, is a Binomial(n, 1/n) random variable, and thus L(S_1^(n)) → Poisson(1) as n → ∞.

Proof. S_1^(n) = Σ_{i=1}^n C_i, where C_i is the indicator that i is a pure ESS, that is, R(i,i) > R(j,i) for all j ≠ i, and so P(C_i = 1) = 1/n. □

For two-point ESS, we can express their number S_2^(n) as a sum of n(n−1)/2 identically distributed indicators,

S_2^(n) = Σ_{1 ≤ i < j ≤ n} D_ij,

where D_ij ≡ D_ij^(n) is the indicator that columns i,j provide a two-point ESS.^8 To study the asymptotic behavior of S_2^(n), we will need to evaluate the first two moments (see Section 2.5), namely, P(D_ij = 1) = P(D_12 = 1) and P(D_ij = D_ij′ = 1) = P(D_12 = D_13 = 1) (when {i,j} and {i′,j′} are disjoint, D_ij and D_i′j′ are independent, since D_ij is a function of the entries in columns i and j only).

^8 Lemma 3(ii) implies that, a.s., for each i ≠ j there can be at most one ESS with support {i,j} (in fact, condition [ESS2] implies that the supports of two distinct ESS p and p′ can never be comparable, i.e., neither supp(p) ⊂ supp(p′) nor supp(p) ⊃ supp(p′) can hold).

2.3. First moment. The event that D_12 = 1 depends only on the entries in the first two columns of the matrix R, which we will denote X_i = R(i,1) and Y_i = R(i,2). Thus X_1,...,X_n, Y_1,...,Y_n are 2n independent F-distributed random variables. For each i, let P_i := (X_i, Y_i) be the corresponding point in R². The two points P_1 and P_2 are almost surely distinct, and thus determine a line Ax + By = C through them, where^9
1 2 2 1 2 1 1 2 − − − Finally, we denote by Γ Γ(n) the event that there is a two-point ESS with ≡ support 1,2 , that is, D =1; recalling Lemma 3(ii), we have 12 { } Γ Γ(n):= X <X ,Y >Y ,AX +BY <C for all k=3,...,n . 1 2 1 2 k k ≡ { } Let µ :=E(S(n)) denote the expected number of two-point ESS. Then n 2 n (2) µ = P(Γ(n)). n (cid:18)2(cid:19) We now define an auxiliary random variable U U(n), a function of P 1 ≡ and P , as follows: 2 P(AX +BY >C P ,P ), if X <X and Y >Y , (3) U := 3 3 | 1 2 1 2 1 2 (cid:26)1, otherwise, where A,B and C are determined as above (1) by P and P . Thus U is the 1 2 probability that an independentpoint lies above the line through P and P 1 2 when X <X and Y >Y . Let F be the cumulative distribution function 1 2 1 2 U of U [note that F (1−)=P(X <X ,Y >Y )=1/4]. We have U 1 2 1 2 Lemma 5. 1 P(Γ)= (1 u)n−2dF (u). U Z − 0 Proof. ImmediatesinceU isdeterminedbyP andP ,andforallk 3 1 2 the points P are independentof U and P(AX +BY >C P ,P )=U (≥the k k k 1 2 | atom at u=1 does not matter since the integrand vanishes there). (cid:3) and p′ can never be comparable, i.e., neither supp(p) supp(p′) nor supp(p) supp(p′) ⊂ ⊃ can hold). 9A,B and C are thusrandom variables that are functions of P and P . 1 2 EVOLUTIONARILYSTABLESTRATEGIES 7 Corollary 6. 1 P(D =1)=P(Γ)=(n 2) (1 u)n−3F (u)du. 12 U − Z − 0 Proof. Integrate by parts: 1 (1 u)n−2dF (u)=[(1 u)n−2F (u)]1 Z − U − U 0 0 1 +(n 2) (1 u)n−3F (u)du, U − Z − 0 and note that the first term vanishes. (cid:3) 2.4. Second moment. Toevaluate P(D =D =1), weneedtheentries 12 13 in the third column of the matrix R as well. Let Z =R(i,3) be n random i variables that are F-distributed, with all the X ,Y ,Z independent. Let Γ′ i i i be the event that D =1 (we will use ′ for the XZ-problem), that is, 13 Γ′:= X <X ,Z >Z ,A′X +B′Z <C′ for all k=1,3 , 1 3 1 3 k k { 6 } where A′,B′ and C′ are determined by P′ = (X ,Z ) and P′ = (X ,Z ) 1 1 1 3 3 3 [cf. (1)]. 
Let U′ be the corresponding random variable: U′ := P(A′X_2 + B′Z_2 > C′ | P′_1, P′_3) if X_1 < X_3 and Z_1 > Z_3, and U′ := 1 otherwise; put W := max{U, U′}, with cumulative distribution function F_W.

Proposition 7.
P(D_12 = D_13 = 1) = P(Γ ∩ Γ′) ≤ (n − 3) ∫_0^1 (1 − u)^{n−4} F_W(u) du.

Proof. For each k ≥ 4 we have
P(AX_k + BY_k < C, A′X_k + B′Z_k < C′ | P_1, P_2, P′_1, P′_3)
≤ min{P(AX_k + BY_k < C | P_1, P_2), P(A′X_k + B′Z_k < C′ | P′_1, P′_3)}
= min{1 − U, 1 − U′} = 1 − max{U, U′} = 1 − W.
Therefore
P(Γ ∩ Γ′) ≤ ∫_0^1 (1 − u)^{n−3} dF_W(u).
As in Corollary 6, integrating by parts yields the result. □

2.5. Poisson approximation. The "Chen–Stein method" yields Poisson approximations for sums of Bernoulli random variables whose dependence is not too large. We will use the following formulation due to Arratia, Goldstein and Gordon [1]:

Theorem 8. Let I be an arbitrary index set. For each α ∈ I, let Z_α be a Bernoulli random variable with P(Z_α = 1) = 1 − P(Z_α = 0) = p_α > 0, and let B_α ⊂ I be the "neighborhood of dependence" for α; that is, α ∈ B_α and Z_α is independent of Z_β for all β ∉ B_α. Put

Z := Σ_{α∈I} Z_α,
λ := Σ_{α∈I} E(Z_α) = Σ_{α∈I} p_α,
b_1 := Σ_{α∈I} Σ_{β∈B_α} E(Z_α)E(Z_β) = Σ_{α∈I} Σ_{β∈B_α} p_α p_β,
b_2 := Σ_{α∈I} Σ_{β∈B_α\{α}} E(Z_α Z_β).

Then
‖L(Z) − Poisson(λ)‖ ≤ 2(b_1 + b_2) (1 − e^{−λ})/λ ≤ 2(b_1 + b_2).

Proof. Theorem 1 in Arratia, Goldstein and Gordon [1], with no "near-independence" (i.e., b′_3 = b_3 = 0). □

2.6. Notation. We use the following standard notation, all as n → ∞: g(n) ∼ h(n) for lim_n g(n)/h(n) = 1; g(n) ≲ h(n) for limsup_n g(n)/h(n) ≤ 1; g(n) ≈ h(n) for 0 < lim_n g(n)/h(n) < ∞; g(n) = O(h(n)) for limsup_n g(n)/h(n) < ∞; and g(n) = o(h(n)) for lim_n g(n)/h(n) = 0. Also, log is the natural logarithm log_e throughout.

3. The convex hull of n random points in the plane. Interestingly, the expectation µ_n of S_2^(n) is related to the number of vertices, or edges, of the convex hull K of the n random points in the plane P_1, P_2, ..., P_n (the connection does not, however, extend beyond the first moments).
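In the independent case of Theorem 8, taking B_α = {α} gives b_2 = 0 and b_1 = Σ_α p_α², so for a sum of independent Bernoulli(p) variables the variation-norm distance to Poisson(λ) is at most 2(Σ_α p_α²)(1 − e^{−λ})/λ, a Le Cam-type bound. A small numerical sketch of this special case (Python with SciPy assumed available; my own illustration, not code from the paper):

```python
import numpy as np
from scipy.stats import binom, poisson

def chen_stein_l1_bound(p_vec):
    # Theorem 8 with B_alpha = {alpha} (independent summands): b2 = 0 and
    # b1 = sum(p_alpha^2), so ||L(Z) - Poisson(lam)|| <= 2*b1*(1-e^-lam)/lam.
    lam = p_vec.sum()
    b1 = (p_vec ** 2).sum()
    return 2 * b1 * (1 - np.exp(-lam)) / lam

m, p = 50, 0.05                      # Z ~ Binomial(50, 0.05), lam = 2.5
ks = np.arange(0, 200)               # support truncated where pmf is negligible
l1 = np.abs(binom.pmf(ks, m, p) - poisson.pmf(ks, m * p)).sum()
bound = chen_stein_l1_bound(np.full(m, p))
print(l1, bound)                     # the l1 distance sits below the bound
```

Here the bound is 2 · 50 · (0.05)² · (1 − e^{−2.5})/2.5 ≈ 0.092, and the actual l_1 distance between Binomial(50, 0.05) and Poisson(2.5) is smaller still.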
Denote that number by V ≡ V^(n), and let V_0 be the number of edges of K whose outward normal is positive.^10 The distribution F is called symmetric if F(−x) = 1 − F(x) for all x [or, more generally, if there exists x_0 such that F(x_0 − x) = 1 − F(x_0 + x) for all x].

^10 The "outward normal" to an edge of K is perpendicular to the edge and points away from the interior of K.

Proposition 9.
2µ_n = E(V_0) ≥ P(V_0 > 0) = 1 − 1/n.
Moreover, if F is symmetric, then 8µ_n = E(V).

Proof. Let E_ij be the indicator that the line segment P_i P_j is an edge of K with positive outward normal; then V_0 = Σ_{i<j} E_ij. Clearly, P(E_ij = 1) = P(E_12 = 1) = 2P(Γ) (if the additional condition X_1 < X_2, Y_1 > Y_2 in Γ is not satisfied, interchange P_1 and P_2; this yields the factor 2), and so E(V_0) = (n(n−1)/2) · 2P(Γ) = 2µ_n.

Now V_0 = 0 if and only if there is a point P_i that is maximal in both the X- and the Y-direction, that is, X_i = max_j X_j and also Y_i = max_j Y_j. The probability of this event is 1/n (letting i be the index where X_i = max_j X_j, the probability that Y_i = max_j Y_j is 1/n, since the Y's are independent of the X's). Therefore,
E(V_0) ≥ P(V_0 ≥ 1) = 1 − 1/n.
If F is symmetric, the same holds for outward normals in each of the four quadrants, and so E(V) = 4E(V_0). □

Our main result for the number of vertices V^(n) is:

Theorem 10. Let F be a symmetric distribution. Then, as n → ∞:
(i) if F ∈ EF, then E(V^(n)) → ∞; and
(ii) if F ∈ SE, then E(V^(n)) → 4 and P(V^(n) = 4) → 1.

Proof. Combine Proposition 9 above with results that will be obtained in the next two sections: Proposition 12 for (i) and Corollary 20 for (ii). □
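Theorem 10 is easy to observe in simulation. The sketch below (illustrative Python; SciPy's ConvexHull and the particular seed are my choices, not the paper's) samples coordinates from the symmetric version of F_α defined in footnote 5 (|X| is standard Weibull with shape α, the sign a fair coin) and counts hull vertices for one α in each class:

```python
import numpy as np
from scipy.spatial import ConvexHull

def symmetric_f_alpha(rng, alpha, size):
    # Symmetric version of F_alpha (footnote 5): P(|X| > x) = exp(-x^alpha),
    # i.e. |X| is standard Weibull with shape alpha, with a random sign.
    return rng.choice([-1.0, 1.0], size=size) * rng.weibull(alpha, size=size)

rng = np.random.default_rng(7)
n = 10_000
for alpha in (2.0, 0.5):   # alpha >= 1 is in EF, alpha < 1 is in SE
    pts = symmetric_f_alpha(rng, alpha, (n, 2))
    v = len(ConvexHull(pts).vertices)
    print(alpha, v)        # many vertices for alpha = 2; close to 4 for alpha = 0.5
```

Already at n = 10,000 the heavy-tailed sample typically yields a hull that is a quadrilateral or nearly so, while the light-tailed sample yields many more vertices, in line with Theorem 10 and Figure 1.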
Some intuition for the interesting result (ii) for "heavy-tailed" distributions is provided immediately after the proof of Theorem 19 in Section 5.2.^11 Figures^12 1 and 2 show, for each one of five different distributions, n = 10,000 random points together with their convex hull and the resulting number of vertices V^(n).

^11 Fisher [8] shows that for certain distributions (including the Weibull distributions with parameter 0 < α < 1) the limit shape of the normalized convex hull is {(x,y) ∈ R² : |x| + |y| ≤ 1}, which is the convex hull of four points. However, this does not imply that the number of vertices V^(n) converges to 4, since there may be many vertices close to each one of these four points (as is the case for the uniform distribution, where the limit shape is the unit square, and V^(n) → ∞).
^12 Generated by maple.

Fig. 1. The number of vertices V of the convex hull of n random points drawn from three distributions in EF. (a) Uniform distribution: n = 10,000, V = 29. (b) Normal distribution: n = 10,000, V = 16. (c) Exponential distribution: n = 10,000, V = 9.

In the context of random points drawn from radially symmetric distributions (rather than independent coordinates), Carnal [4] has shown that E(V^(n)) converges to a constant ≥ 4 for a certain class of heavy-tailed distributions (with the constant depending on the distribution).

We conclude this section with a lemma that is useful when comparing distributions (see its use in the next section).
