Proximal point algorithm, Douglas–Rachford algorithm and alternating projections: a case study

Heinz H. Bauschke,* Minh N. Dao,† Dominikus Noll‡ and Hung M. Phan§

January 26, 2015

arXiv:1501.06603v1 [math.OC] 26 Jan 2015

Abstract

Many iterative methods for solving optimization or feasibility problems have been invented, and often convergence of the iterates to some solution is proven. Under favourable conditions, one might have additional bounds on the distance of the iterate to the solution, leading thus to worst case estimates, i.e., how fast the algorithm must converge.

Exact convergence estimates are typically hard to come by. In this paper, we consider the complementary problem of finding best case estimates, i.e., how slow the algorithm has to converge, and we also study exact asymptotic rates of convergence. Our investigation focuses on convex feasibility in the Euclidean plane, where one set is the real axis while the other is the epigraph of a convex function. This case study allows us to obtain various convergence rate results. We focus on the popular method of alternating projections and the Douglas–Rachford algorithm. These methods are connected to the proximal point algorithm which is also discussed. Our findings suggest that the Douglas–Rachford algorithm outperforms the method of alternating projections in the absence of constraint qualifications. Various examples illustrate the theory.

2010 Mathematics Subject Classification: Primary 65K05; Secondary 65K10, 90C25.

Keywords: alternating projections, convex feasibility problem, convex set, Douglas–Rachford algorithm, projection, proximal mapping, proximal point algorithm, proximity operator.

* Mathematics, University of British Columbia, Kelowna, B.C. V1V 1V7, Canada. E-mail: [email protected].
† Department of Mathematics and Informatics, Hanoi National University of Education, 136 Xuan Thuy, Hanoi, Vietnam, and Mathematics, University of British Columbia, Kelowna, B.C. V1V 1V7, Canada. E-mail: [email protected].
‡ Institut de Mathématiques, Université de Toulouse, 118 route de Narbonne, 31062 Toulouse, France. E-mail: [email protected].
§ Department of Mathematical Sciences, University of Massachusetts Lowell, 265 Riverside St., Olney Hall 428, Lowell, MA 01854, USA. E-mail: hung [email protected].

1 Introduction

Three algorithms

Let $X$ be a Euclidean space, with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and let $f\colon X\to\left]-\infty,+\infty\right]$ be convex, lower semicontinuous, and proper. A classical method for finding a minimizer of $f$ is the proximal point algorithm (PPA). It requires using the proximal point mapping (or proximity operator) which was pioneered by Moreau [13]:

Fact 1.1 (proximal mapping) For every $x\in X$, there exists a unique point $p=P_f(x)\in X$ such that $\min_{y\in X}\bigl(f(y)+\tfrac{1}{2}\|x-y\|^2\bigr)=f(p)+\tfrac{1}{2}\|x-p\|^2$. The induced operator $P_f\colon X\to X$ is firmly nonexpansive¹, i.e., $(\forall x\in X)(\forall y\in X)$ $\|P_f(x)-P_f(y)\|^2+\|(\mathrm{Id}-P_f)x-(\mathrm{Id}-P_f)y\|^2\le\|x-y\|^2$.

¹ Note that if $f=\iota_C$ is the indicator function of a nonempty closed convex subset $C$ of $X$, then $P_f=P_C$, where $P_C$ is the nearest point mapping or projector of $C$; the corresponding reflector is $R_C=2P_C-\mathrm{Id}$.

The proximal point algorithm was proposed by Martinet [12] and further studied by Rockafellar [16]. Nowadays numerous extensions exist; however, here we focus only on the most basic instance of PPA:

Fact 1.2 (proximal point algorithm (PPA)) Let $f\colon X\to\left]-\infty,+\infty\right]$ be convex, lower semicontinuous, and proper. Suppose that $Z$, the set of minimizers of $f$, is nonempty, and let $x_0\in X$. Then the sequence generated by

(1)  $(\forall n\in\mathbb{N})\quad x_{n+1}=P_f(x_n)$

converges to a point in $Z$ and it satisfies

(2)  $(\forall z\in Z)(\forall n\in\mathbb{N})\quad \|x_{n+1}-z\|^2+\|x_n-x_{n+1}\|^2\le\|x_n-z\|^2$.
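Facts 1.1 and 1.2 are easy to experiment with numerically. The following minimal sketch is not from the paper: it evaluates the proximal mapping of Fact 1.1 by one-dimensional numerical minimization and runs the basic iteration (1); the test function $f(x)=\tfrac{1}{p}|x|^p$ and the use of scipy.optimize.minimize_scalar are illustrative choices of ours.

```python
# Minimal numerical sketch of Facts 1.1 and 1.2 (illustration only, not from the paper).
# The proximal mapping P_f is evaluated by 1-D numerical minimization and the PPA
# iteration x_{n+1} = P_f(x_n) of (1) is applied to f(x) = |x|^p / p (an assumed test case).
from scipy.optimize import minimize_scalar

def prox(f, x, width=10.0):
    """P_f(x) = argmin_y f(y) + 0.5*(x - y)**2, computed numerically."""
    res = minimize_scalar(lambda y: f(y) + 0.5 * (x - y) ** 2,
                          bounds=(x - width, x + width), method="bounded")
    return res.x

def ppa(f, x0, n_iter=30):
    """Basic proximal point algorithm (1)."""
    xs = [x0]
    for _ in range(n_iter):
        xs.append(prox(f, xs[-1]))
    return xs

if __name__ == "__main__":
    p = 4.0
    f = lambda x: abs(x) ** p / p
    for n, x in enumerate(ppa(f, x0=1.0)):
        print(n, x)
```

The printed iterates decrease monotonically towards the minimizer $0$, consistent with (2) applied to $z=0$.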
An ostensibly quite different type of optimization problem is, for two given closed convex nonempty subsets $A$ and $B$ of $X$, to find a point in $A\cap B\neq\varnothing$. Let us present two fundamental algorithms for solving this convex feasibility problem. The first method was proposed by Bregman [8].

Fact 1.3 (method of alternating projections (MAP)) Let $a_0\in A$ and set

(3)  $(\forall n\in\mathbb{N})\quad a_{n+1}=P_A P_B(a_n)$.

Then $(a_n)_{n\in\mathbb{N}}$ converges to a point $a_\infty\in C=A\cap B$. Moreover,

(4)  $(\forall c\in C)(\forall n\in\mathbb{N})\quad \|a_{n+1}-c\|^2+\|a_{n+1}-P_B a_n\|^2+\|P_B a_n-a_n\|^2\le\|a_n-c\|^2$.

The second method is the celebrated Douglas–Rachford algorithm. The next result can be deduced by combining [11] and [4].

Fact 1.4 (Douglas–Rachford algorithm (DRA)) Set $T=\mathrm{Id}-P_A+P_B R_A$, let $z_0\in X$, and set

(5)  $(\forall n\in\mathbb{N})\quad a_n=P_A z_n$ and $z_{n+1}=Tz_n$.

Then² $(z_n)_{n\in\mathbb{N}}$ converges to some point $z_\infty\in\operatorname{Fix}T=(A\cap B)+N_{A-B}(0)$, and $(a_n)_{n\in\mathbb{N}}$ converges to $P_A z_\infty\in A\cap B$.

² Here $\operatorname{Fix}T=\{x\in X\mid x=Tx\}$ is the set of fixed points of $T$, and $N_{A-B}(0)$ stands for the normal cone of the set $A-B=\{a-b\mid a\in A,\ b\in B\}$ at $0$.

Again, there are numerous refinements and adaptations of MAP and DRA; however, it is not our goal here to survey the most general results possible³ but rather to focus on the speed of convergence. We will make this precise in the next subsection.

³ See, e.g., [3] for various more general variants of PPA, MAP, and DRA.

Goal and contributions

Most rate-of-convergence results for PPA, MAP, and DRA take the following form: if some additional condition is satisfied, then the convergence of the sequence is at least as good as some form of "fast" convergence (linear, superlinear, quadratic, etc.). This can be interpreted as a worst case analysis. In the generality considered here⁴, we are not aware of results that approach this problem from the other side, i.e., that address the question: under which conditions is the convergence no better than some form of "slow" convergence? This concerns the best case analysis. Ideally, one would like an exact asymptotic rate of convergence in the sense of (14) below.

⁴ Some results are known for MAP when the sets are linear subspaces; however, the slow (sublinear) convergence can only be observed in infinite-dimensional Hilbert space; see [9] and references therein.

While we do not completely answer these questions, we do set out to tackle them by providing a case study when $X=\mathbb{R}^2$ is the Euclidean plane, the set $A=\mathbb{R}\times\{0\}$ is the real axis, and the set $B$ is the epigraph of a proper lower semicontinuous convex function $f$. We will see that in this case MAP and DRA have connections to the PPA applied to $f$. We focus in particular on the case not covered by conditions guaranteeing linear convergence of MAP or DRA⁵. We originally expected the behaviour of MAP and DRA in cases of "bad geometry" to be similar⁶. It came as a surprise to us that this appears not to be the case. In fact, the examples we provide below suggest that DRA performs significantly better than MAP.

⁵ Indeed, the most common sufficient condition for linear convergence in either case is $\operatorname{ri}(A)\cap\operatorname{ri}(B)\neq\varnothing$; see [5, Theorem 3.21] for MAP and [14] or [6, Theorem 8.5(i)] for DRA.
⁶ This expectation was founded on the similar behaviour of MAP and DRA for two subspaces; see [2].

Concretely, suppose that $B$ is the epigraph of the function $f(x)=\tfrac{1}{p}|x|^p$, where $1<p<+\infty$. Since $A=\mathbb{R}\times\{0\}$, we have that $A\cap B=\{(0,0)\}$, and since $f'(0)=0$, the "angle" between $A$ and $B$ at the intersection is $0$. As expected, MAP converges sublinearly (even logarithmically) to $0$. However, DRA converges faster in all cases: superlinearly (when $1<p<2$), linearly (when $p=2$) or logarithmically (when $2<p<+\infty$). This example is deduced from general results we obtain on exact rates of convergence for PPA, MAP and DRA.
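To make the iterations (3) and (5) concrete in exactly this planar setting, here is a small numerical sketch, not from the paper. The projector onto $A=\mathbb{R}\times\{0\}$ is exact, while the projector onto $B=\operatorname{epi}f$ is computed with a generic constrained solver; the helper names and the use of scipy.optimize.minimize (SLSQP) are illustrative assumptions rather than the constructions developed later in the paper.

```python
# Minimal numerical sketch of MAP (3) and DRA (5) in the planar case study
# A = R x {0}, B = epi f with f(x) = |x|^p / p (illustration only, not the paper's code).
# The epigraph projector is computed by a generic solver (SLSQP); this is an assumption.
import numpy as np
from scipy.optimize import minimize

p = 4.0
f = lambda x: abs(x) ** p / p

def P_A(w):                       # projector onto the real axis A = R x {0}
    return np.array([w[0], 0.0])

def R_A(w):                       # reflector R_A = 2 P_A - Id
    return 2.0 * P_A(w) - w

def P_B(w):                       # projector onto B = epi f, computed numerically
    res = minimize(lambda u: np.sum((u - w) ** 2),
                   x0=np.array([w[0], f(w[0])]),
                   constraints=[{"type": "ineq", "fun": lambda u: u[1] - f(u[0])}],
                   method="SLSQP")
    return res.x

def map_iterates(a0, n_iter=50):  # method of alternating projections (3)
    a = np.asarray(a0, dtype=float)
    out = [a]
    for _ in range(n_iter):
        a = P_A(P_B(a))
        out.append(a)
    return out

def dra_iterates(z0, n_iter=50):  # Douglas-Rachford (5): z_{n+1} = z_n - P_A z_n + P_B R_A z_n
    z = np.asarray(z0, dtype=float)
    shadows = []
    for _ in range(n_iter):
        shadows.append(P_A(z))    # a_n = P_A z_n converges to a point of A ∩ B
        z = z - P_A(z) + P_B(R_A(z))
    return shadows

if __name__ == "__main__":
    print("MAP:", [round(a[0], 6) for a in map_iterates([1.0, 0.0])[::10]])
    print("DRA:", [round(a[0], 6) for a in dra_iterates([1.0, 0.0])[::10]])
```

The MAP iterates and the DRA shadow sequence $a_n=P_A z_n$ produced this way can then be compared against the rates established in Sections 4 and 5.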
Organization

The paper is organized as follows. In Section 2, we provide various auxiliary results on the convergence of real sequences. These will make the subsequent analysis of PPA, MAP, and DRA more structured. Section 3 focuses on the PPA. After reviewing results on finite, superlinear, and linear convergence, we exhibit a case where the asymptotic rate is only logarithmic. We then turn to MAP in Section 4 and provide results on the asymptotic convergence. We also draw the connection between MAP and PPA and point out that a result of Güler is sharp. In Section 5, we deal with DRA, draw again a connection to PPA and present asymptotic convergence results. The notation we employ is fairly standard and follows, e.g., [15] and [3].

2 Auxiliary results

In this section we collect various results that facilitate the subsequent analysis of PPA, MAP and DRA. We begin with the following useful result which appears to be part of the folklore⁷.

⁷ Since we were able to locate only an online reference, we include a proof in Appendix A.

Fact 2.1 (generalized Stolz–Cesàro theorem) Let $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ be sequences in $\mathbb{R}$ such that $(b_n)_{n\in\mathbb{N}}$ is unbounded and either strictly monotone increasing or strictly monotone decreasing. Then

(6)  $\liminf_{n\to\infty}\dfrac{a_{n+1}-a_n}{b_{n+1}-b_n}\le\liminf_{n\to\infty}\dfrac{a_n}{b_n}\le\limsup_{n\to\infty}\dfrac{a_n}{b_n}\le\limsup_{n\to\infty}\dfrac{a_{n+1}-a_n}{b_{n+1}-b_n}$,

where the limits may lie in $[-\infty,+\infty]$.

Setting $(b_n)_{n\in\mathbb{N}}=(n)_{n\in\mathbb{N}}$ in Fact 2.1, we obtain the following:

Corollary 2.2 The following inequalities hold for an arbitrary sequence $(x_n)_{n\in\mathbb{N}}$ in $\mathbb{R}$:

(7)  $\liminf_{n\to\infty}(x_{n+1}-x_n)\le\liminf_{n\to\infty}\dfrac{x_n}{n}\le\limsup_{n\to\infty}\dfrac{x_n}{n}\le\limsup_{n\to\infty}(x_{n+1}-x_n)$.

For the remainder of this section, we assume that

(8)  $g\colon\mathbb{R}_{++}\to\mathbb{R}_{++}$ is increasing and $H$ is an antiderivative of $-1/g$.

Example 2.3 ($x^q$) Let $g(x)=x^q$ on $\mathbb{R}_{++}$, where $1\le q<\infty$. If $q>1$, then $-1/g(x)=-x^{-q}$ and we can choose $H(x)=x^{1-q}/(q-1)$, which has the inverse $H^{-1}(x)=1/((q-1)x)^{1/(q-1)}$. If $q=1$, then we can choose $H(x)=-\ln(x)$, which has the inverse $H^{-1}(x)=\exp(-x)$.

Proposition 2.4 Let $(\beta_n)_{n\in\mathbb{N}}$ and $(\delta_n)_{n\in\mathbb{N}}$ be sequences in $\mathbb{R}_{++}$, and suppose that

(9)  $(\forall n\in\mathbb{N})\quad \beta_{n+1}=\beta_n-\delta_n g(\beta_n)$.

Then the following hold:

(i) $(\forall n\in\mathbb{N})\quad \delta_n\le H(\beta_{n+1})-H(\beta_n)\le\dfrac{\beta_n-\beta_{n+1}}{g(\beta_{n+1})}=\delta_n\dfrac{g(\beta_n)}{g(\beta_{n+1})}$.

(ii) $\liminf_{n\to\infty}\delta_n\le\liminf_{n\to\infty}\dfrac{H(\beta_n)}{n}\le\limsup_{n\to\infty}\dfrac{H(\beta_n)}{n}\le\limsup_{n\to\infty}\delta_n\dfrac{g(\beta_n)}{g(\beta_{n+1})}$.

(iii) If $(\delta_n)_{n\in\mathbb{N}}$ is convergent, say $\delta_n\to\delta_\infty$, and $g(\beta_n)/g(\beta_{n+1})\to 1$, then $H(\beta_n)/n\to\delta_\infty$.

Proof. For every $n\in\mathbb{N}$, we have

(10a)  $\delta_n=\dfrac{\beta_n-\beta_{n+1}}{g(\beta_n)}\le\displaystyle\int_{\beta_{n+1}}^{\beta_n}\dfrac{dx}{g(x)}=H(\beta_{n+1})-H(\beta_n)$

(10b)  $\le\dfrac{\beta_n-\beta_{n+1}}{g(\beta_{n+1})}=\delta_n\dfrac{g(\beta_n)}{g(\beta_{n+1})}$.

Hence (i) holds. Combining with (7), we obtain (ii). Finally, (iii) follows from (ii). ∎

Corollary 2.5 Let $(x_n)_{n\in\mathbb{N}}$ and $(\delta_n)_{n\in\mathbb{N}}$ be sequences in $\mathbb{R}_{++}$ such that

(11)  $(\forall n\in\mathbb{N})\quad x_n=x_{n+1}+\delta_n g(x_{n+1})$.

Then the following hold:

(i) $(\forall n\in\mathbb{N})\quad \delta_n\dfrac{g(x_{n+1})}{g(x_n)}\le H(x_{n+1})-H(x_n)\le\delta_n$.

(ii) $\liminf_{n\to\infty}\delta_n\dfrac{g(x_{n+1})}{g(x_n)}\le\liminf_{n\to\infty}\dfrac{H(x_n)}{n}\le\limsup_{n\to\infty}\dfrac{H(x_n)}{n}\le\limsup_{n\to\infty}\delta_n$.

(iii) If $(\delta_n)_{n\in\mathbb{N}}$ is convergent, say $\delta_n\to\delta_\infty$, and $g(x_n)/g(x_{n+1})\to 1$, then $H(x_n)/n\to\delta_\infty$.

Proof. Indeed, set $(\forall n\in\mathbb{N})$ $\varepsilon_n=\delta_n g(x_{n+1})/g(x_n)$ and rewrite the update as

(12)  $x_{n+1}=x_n-\delta_n\dfrac{g(x_{n+1})}{g(x_n)}\,g(x_n)=x_n-\varepsilon_n g(x_n)$.

Now apply Proposition 2.4. ∎
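Proposition 2.4(iii) (equivalently, Corollary 2.5(iii)) lends itself to a quick numerical check. In the sketch below, which is not from the paper and whose parameter values are arbitrary illustrative choices, $g(x)=x^q$ as in Example 2.3, $\beta_{n+1}=\beta_n-\delta_n g(\beta_n)$ with $\delta_n\to\delta_\infty$, and the printed quantity $H(\beta_n)/n$ approaches $\delta_\infty$.

```python
# Numerical check of Proposition 2.4(iii) with g(x) = x^q from Example 2.3
# (illustration only; q, delta_n and beta_0 are arbitrary choices).
q = 3.0
delta_inf = 0.5
H = lambda x: x ** (1.0 - q) / (q - 1.0)            # antiderivative of -1/g for q > 1

beta = 1.0
for n in range(1, 200_001):
    delta_n = delta_inf * (1.0 + 1.0 / (n + 1.0))   # delta_n -> delta_inf
    beta -= delta_n * beta ** q                     # beta_{n+1} = beta_n - delta_n g(beta_n)
    if n % 50_000 == 0:
        print(n, H(beta) / n)                       # tends to delta_inf = 0.5
```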
Definition 2.6 (types of convergence) Let $(\alpha_n)_{n\in\mathbb{N}}$ be a sequence in $\mathbb{R}_{++}$ such that $\alpha_n\to 0$, and suppose there exists $1\le q<+\infty$ such that

(13)  $\dfrac{\alpha_{n+1}}{\alpha_n^q}\to c\in\mathbb{R}_+$.

Then the convergence of $(\alpha_n)_{n\in\mathbb{N}}$ to $0$ is:

(i) with order $q$ if $q>1$ and $c>0$;

(ii) superlinear if $q=1$ and $c=0$;

(iii) linear if $q=1$ and $0<c<1$;

(iv) sublinear if $q=1$ and $c=1$;

(v) logarithmic if it is sublinear and $|\alpha_{n+1}-\alpha_{n+2}|/|\alpha_n-\alpha_{n+1}|\to 1$.

If $(\beta_n)_{n\in\mathbb{N}}$ is also a sequence in $\mathbb{R}_{++}$, it is convenient to define

(14)  $\alpha_n\sim\beta_n\ \Leftrightarrow\ \lim_{n\to\infty}\dfrac{\alpha_n}{\beta_n}\in\mathbb{R}_{++}$.

The following example exhibits a case where we obtain a simple exact asymptotic rate of convergence.

Example 2.7 Let $(x_n)_{n\in\mathbb{N}}$ and $(\delta_n)_{n\in\mathbb{N}}$ be sequences in $\mathbb{R}_{++}$, and let $1<q<\infty$. Suppose that

(15)  $\delta_n\to\delta_\infty\in\mathbb{R}_{++}$, $\dfrac{x_n}{x_{n+1}}\to 1$, and $(\forall n\in\mathbb{N})\quad x_n=x_{n+1}+\delta_n x_{n+1}^q$.

Then $x_n\to 0$ logarithmically,

(16)  $\dfrac{x_n}{(1/n)^{1/(q-1)}}\to\dfrac{1}{((q-1)\delta_\infty)^{1/(q-1)}}$ and $x_n\sim\Bigl(\dfrac{1}{n}\Bigr)^{1/(q-1)}$.

Proof. Suppose that $g(x)=x^q$ and note that $g(x_{n+1})/g(x_n)=(x_{n+1}/x_n)^q\to 1^q=1$. This implies that $x_n\to 0$ logarithmically. Finally, (16) follows from Example 2.3, Corollary 2.5, and (14). ∎

We conclude this section with some one-sided versions which are useful for obtaining information about how fast or slow a sequence must converge.

Corollary 2.8 Let $(\beta_n)_{n\in\mathbb{N}}$ and $(\rho_n)_{n\in\mathbb{N}}$ be sequences in $\mathbb{R}_{++}$, and suppose that

(17)  $(\forall n\in\mathbb{N})\quad \beta_{n+1}\le\beta_n-\rho_n g(\beta_n)$ and $\rho=\lim_{n\to\infty}\rho_n\in\mathbb{R}_{++}$.

Then

(18)  $\bigl(\forall\varepsilon\in\left]0,\rho\right[\bigr)(\exists m\in\mathbb{N})(\forall n\ge m)\quad \beta_n\le H^{-1}\bigl(n(\rho-\varepsilon)\bigr)$.

Proof. Observe that

(19)  $(\forall n\in\mathbb{N})\quad \beta_{n+1}=\beta_n-\delta_n g(\beta_n)$, where $\delta_n=\dfrac{\beta_n-\beta_{n+1}}{g(\beta_n)}\ge\rho_n$.

Hence, by Proposition 2.4, $\rho\le\liminf_{n\to\infty}H(\beta_n)/n$. Let $\varepsilon\in\left]0,\rho\right[$. Then there exists $m\in\mathbb{N}$ such that $(\forall n\ge m)$ $\rho-\varepsilon\le H(\beta_n)/n\ \Leftrightarrow\ H^{-1}(n(\rho-\varepsilon))\ge\beta_n$. ∎

Example 2.9 Let $(\beta_n)_{n\in\mathbb{N}}$ and $(\rho_n)_{n\in\mathbb{N}}$ be sequences in $\mathbb{R}_{++}$, let $1\le q<\infty$, and suppose that

(20)  $(\forall n\in\mathbb{N})\quad \beta_{n+1}\le\beta_n-\rho_n\beta_n^q$ and $\rho=\lim_{n\to\infty}\rho_n\in\mathbb{R}_{++}$.

Let $0<\varepsilon<\rho$. Then there exists $m\in\mathbb{N}$ such that the following hold:

(i) If $q>1$, then $(\forall n\ge m)$ $\beta_n\le\dfrac{1}{\bigl((q-1)n(\rho-\varepsilon)\bigr)^{1/(q-1)}}=O\bigl(1/n^{1/(q-1)}\bigr)$.

(ii) If $q=1$, then $(\forall n\ge m)$ $\beta_n\le\gamma^n$, where $\gamma=\exp(\varepsilon-\rho)\in\left]0,1\right[$.

Consequently, the convergence of $(\beta_n)_{n\in\mathbb{N}}$ to $0$ is at least sublinear if $q>1$ and at least linear if $q=1$.

Proof. Combine Example 2.3 with Corollary 2.8. ∎

Remark 2.10 Example 2.9(i) can also be deduced from [7, Lemma 4.1]; see also [1].

Corollary 2.11 Let $(\beta_n)_{n\in\mathbb{N}}$ and $(\rho_n)_{n\in\mathbb{N}}$ be sequences in $\mathbb{R}_{++}$, and suppose that

(21)  $(\forall n\in\mathbb{N})\quad \beta_n\ge\beta_{n+1}\ge\beta_n-\rho_n g(\beta_n)$ and $\rho=\lim_{n\to\infty}\rho_n\dfrac{g(\beta_n)}{g(\beta_{n+1})}\in\mathbb{R}_+$.

Then

(22)  $(\forall\varepsilon\in\mathbb{R}_{++})(\exists m\in\mathbb{N})(\forall n\ge m)\quad \beta_n\ge H^{-1}\bigl(n(\rho+\varepsilon)\bigr)$.

Proof. Observe that

(23)  $(\forall n\in\mathbb{N})\quad \beta_{n+1}=\beta_n-\delta_n g(\beta_n)$, where $\delta_n=\dfrac{\beta_n-\beta_{n+1}}{g(\beta_n)}\le\rho_n$.

Hence, by Proposition 2.4, $\limsup_{n\to\infty}H(\beta_n)/n\le\rho$. Let $\varepsilon\in\mathbb{R}_{++}$. Then there exists $m\in\mathbb{N}$ such that $(\forall n\ge m)$ $\rho+\varepsilon\ge H(\beta_n)/n\ \Leftrightarrow\ H^{-1}(n(\rho+\varepsilon))\le\beta_n$. ∎

Example 2.12 Let $(\beta_n)_{n\in\mathbb{N}}$ and $(\rho_n)_{n\in\mathbb{N}}$ be sequences in $\mathbb{R}_{++}$, let $1\le q<\infty$, and suppose that

(24)  $(\forall n\in\mathbb{N})\quad \beta_n\ge\beta_{n+1}\ge\beta_n-\rho_n\beta_n^q$ and $\rho=\lim_{n\to\infty}\rho_n\dfrac{\beta_n^q}{\beta_{n+1}^q}\in\mathbb{R}_+$.

Let $\varepsilon\in\mathbb{R}_{++}$. Then there exists $m\in\mathbb{N}$ such that the following hold:

(i) If $q>1$, then $(\forall n\ge m)$ $\beta_n\ge\dfrac{1}{\bigl((q-1)n(\rho+\varepsilon)\bigr)^{1/(q-1)}}$.

(ii) If $q=1$, then $(\forall n\ge m)$ $\beta_n\ge\gamma^n$, where $\gamma=\exp(-\rho-\varepsilon)\in\left]0,1\right[$.

Consequently, the convergence of $(\beta_n)_{n\in\mathbb{N}}$ to $0$ is at best sublinear if $q>1$ and at best linear if $q=1$.

Proof. Combine Example 2.3 with Corollary 2.11. ∎
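The asymptotic rate in Example 2.7, which the one-sided bounds of Examples 2.9 and 2.12 bracket, can also be observed numerically. The sketch below is not from the paper; the parameters and the use of scipy.optimize.brentq are illustrative choices. It resolves the implicit recursion $x_n=x_{n+1}+\delta_n x_{n+1}^q$ at each step and compares $x_n\,n^{1/(q-1)}$ with the limit predicted by (16).

```python
# Numerical illustration of Example 2.7 (illustration only; parameters are arbitrary).
# The implicit update x_n = x_{n+1} + delta * x_{n+1}^q is solved for x_{n+1} with a
# bracketing root finder, and x_n * n^{1/(q-1)} is compared with the limit from (16).
from scipy.optimize import brentq

q, delta = 3.0, 1.0                     # constant delta_n = delta_inf
x = 1.0
predicted = (1.0 / ((q - 1.0) * delta)) ** (1.0 / (q - 1.0))
for n in range(1, 100_001):
    x_prev = x
    # y + delta*y^q - x_prev changes sign on (0, x_prev), so brentq finds x_{n+1}
    x = brentq(lambda y: y + delta * y ** q - x_prev, 0.0, x_prev)
    if n % 25_000 == 0:
        print(n, x * n ** (1.0 / (q - 1.0)), predicted)
```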
3 Proximal point algorithm (PPA)

This section focuses on the proximal point algorithm. We assume that

(25)  $f\colon\mathbb{R}\to\left]-\infty,+\infty\right]$ is convex, lower semicontinuous, proper,

with

(26)  $f(0)=0$ and $f(x)>0$ when $x\neq 0$.

Given $x_0\in\mathbb{R}$, we will study the basic proximal point iteration

(27)  $(\forall n\in\mathbb{N})\quad x_{n+1}=P_f(x_n)$.

Note that if $x>0$ and $y<0$, then $f(y)+\tfrac{1}{2}|x-y|^2>f(0)+\tfrac{1}{2}|x-0|^2\ge f(P_f x)+\tfrac{1}{2}|x-P_f x|^2$. Hence the behaviour of $f|_{\mathbb{R}_{--}}$ is irrelevant for the determination of $P_f|_{\mathbb{R}_{++}}$ (and an analogous statement holds for the determination of $P_f|_{\mathbb{R}_{--}}$)! For this reason, we restrict our attention to the case when

(28)  $x_0\in\mathbb{R}_{++}$

is the starting point of the proximal point algorithm. The general theory (Fact 1.2) then yields

(29)  $x_0\ge x_1\ge\cdots\ge x_n\downarrow 0$.

In this section, it will be convenient to additionally assume that

(30)  $f$ is an even function;

although, as mentioned, the behaviour of $f|_{\mathbb{R}_{--}}$ is actually irrelevant because $x_0\in\mathbb{R}_{++}$. Combining the assumption that $0$ is the unique minimizer of $f$ with [15, Theorem 24.1], we learn that

(31)  $0\in\partial f(0)=[f'_-(0),f'_+(0)]\cap\mathbb{R}=[-f'_+(0),f'_+(0)]\cap\mathbb{R}$.

We start our exploration by discussing convergence in finitely many steps.

Proposition 3.1 (finite convergence) We have $x_n\to 0$ in finitely many steps, regardless of the starting point $x_0\in\mathbb{R}_{++}$, if and only if

(32)  $0<f'_+(0)$,

in which case $P_f x_n=0\ \Leftrightarrow\ x_n\le f'_+(0)$.

Proof. Let $x>0$. Then $P_f x=0\ \Leftrightarrow\ x\in 0+\partial f(0)\ \Leftrightarrow\ x\le f'_+(0)$ by (31).

Suppose first that $f'_+(0)>0$. Then, by (31), $0\in\operatorname{int}\partial f(0)$ and, using (29), there exists $n\in\mathbb{N}$ such that $x_n\le f'_+(0)$. It follows that $x_{n+1}=x_{n+2}=\cdots=0$. (Alternatively, this follows from a much more general result of Rockafellar; see [16, Theorem 3] and also Remark 3.4 below.)

Now assume that there exists $n\in\mathbb{N}$ such that $P_f x_n=0$ and $x_n>0$. By the above, $x_n\le f'_+(0)$ and thus $f'_+(0)>0$. ∎

An extreme case occurs when $f'_+(0)=+\infty$ in Proposition 3.1:

Example 3.2 ($\iota_{\{0\}}$ and the projector) Suppose that $f=\iota_{\{0\}}$. Then $P_f=P_{\{0\}}$ and $(\forall n\ge 1)$ $x_n=0$.

Example 3.3 ($|x|$ and the thresholder) Suppose that $f=|\cdot|$, in which case $\partial f(0)=[-1,1]$ and $f'_+(0)=1$. Proposition 3.1 guarantees finite convergence of the PPA. Indeed, either a direct argument or [3, Example 14.5] yields

(33)  $P_f\colon x\mapsto\begin{cases} x-\dfrac{x}{|x|}, & \text{if } |x|>1;\\[1mm] 0, & \text{otherwise.}\end{cases}$

Consequently, $x_n=0$ if and only if $n\ge\lceil x_0\rceil$.
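Example 3.3 is easy to verify directly. The following sketch, which is not from the paper, implements the thresholder (33) and confirms the finite termination index $\lceil x_0\rceil$:

```python
# Direct check of Example 3.3 (illustration only): for f = |.|, the proximal map (33)
# thresholds at 1, and the PPA started at x_0 > 0 reaches 0 after exactly ceil(x_0) steps.
import math

def prox_abs(x):
    return x - x / abs(x) if abs(x) > 1 else 0.0   # formula (33)

x0 = 3.7
x, n = x0, 0
while x != 0.0:
    x = prox_abs(x)
    n += 1
print(n, math.ceil(x0))   # both equal 4
```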
Remark 3.4 In [16, Theorem 3], Rockafellar provided a very general sufficient condition for finite convergence of the PPA (which actually works for finding zeros of a maximally monotone operator defined on a Hilbert space). In our present setting, his condition is

(34)  $0\in\operatorname{int}\partial f(0)$.

By Proposition 3.1, this condition is also necessary for finite convergence.

Thus, we assume from now on that $f'_+(0)=0$, or equivalently (since $f$ is even and by (31)), that

(35)  $f'(0)=0$,

in which case finite convergence fails and thus

(36)  $x_0>x_1>\cdots>x_n\downarrow 0$.

We now have the following sufficient condition for linear convergence. The proof is a refinement of the ideas of Rockafellar in [16].

Proposition 3.5 (sufficient condition for linear convergence) Suppose that

(37)  $\lambda=\lim_{x\downarrow 0}\dfrac{f(x)}{x^2}\in\left]0,+\infty\right]$.

Then the following hold:

(i) If $\lambda<+\infty$, then there exists $\alpha_0\in\bigl[\tfrac{1}{2\lambda},\tfrac{1}{\lambda}\bigr]$ such that

(38)  $(\forall\varepsilon>0)(\exists m\in\mathbb{N})(\forall n\ge m)\quad |x_{n+1}|\le\dfrac{\alpha_0}{\sqrt{1+\alpha_0^2(1+2\lambda-2\varepsilon)}}\,|x_n|$.

(ii) If $\lambda=+\infty$, then

(39)  $(\forall\alpha>0)(\forall\varepsilon>0)(\exists m\in\mathbb{N})(\forall n\ge m)\quad |x_{n+1}|\le\dfrac{\alpha}{\sqrt{1+\alpha^2(1+2\lambda-\varepsilon)}}\,|x_n|$.

Proof. By [16, Remark 4 and Proposition 7], there exists $\alpha_0\in\bigl[\tfrac{1}{2\lambda},\tfrac{1}{\lambda}\bigr]$ such that $(\partial f)^{-1}$ is Lipschitz continuous at $0$ with every modulus $\alpha>\alpha_0$. Let $\alpha>\alpha_0$. Then there exists $\tau>0$ such that

(40)  $(\forall |x|<\tau)(\forall z\in(\partial f)^{-1}(x))\quad |z|\le\alpha|x|$.

Since $x_n\to 0$ by [16, Theorem 2] (or (36)), there exists $m\in\mathbb{N}$ such that $(\forall n\ge m)$ $|x_n-x_{n+1}|\le\tau$. Let $n\ge m$. Noticing that $x_n\in(\mathrm{Id}+\partial f)(x_{n+1})$, we have

(41)  $x_{n+1}\in(\partial f)^{-1}(x_n-x_{n+1})$.

It follows by (40) that

(42)  $|x_{n+1}|\le\alpha|x_n-x_{n+1}|$.

Since $x_n-x_{n+1}\in\partial f(x_{n+1})$, we have

(43)  $\langle x_n-x_{n+1},x_{n+1}\rangle=\langle x_n-x_{n+1},x_{n+1}-0\rangle\ge f(x_{n+1})-f(0)=f(x_{n+1})$.

Now for every $\varepsilon>0$, employing (37) and increasing $m$ if necessary, we can and do assume that

(44)  $(\forall n\ge m)\quad \langle x_n-x_{n+1},x_{n+1}\rangle\ge\Bigl(\lambda-\dfrac{\varepsilon}{2}\Bigr)|x_{n+1}|^2$.

Let $n\ge m$. Combining (42) and (44), we obtain

(45a)  $|x_n|^2=|x_{n+1}|^2+|x_n-x_{n+1}|^2+2\langle x_n-x_{n+1},x_{n+1}\rangle$

(45b)  $\ge |x_{n+1}|^2+\dfrac{1}{\alpha^2}|x_{n+1}|^2+(2\lambda-\varepsilon)|x_{n+1}|^2$

(45c)  $=\dfrac{1+\alpha^2(1+2\lambda-\varepsilon)}{\alpha^2}\,|x_{n+1}|^2$.

This gives

(46)  $|x_{n+1}|\le\dfrac{\alpha}{\sqrt{1+\alpha^2(1+2\lambda-\varepsilon)}}\,|x_n|$

and hence (39) holds. Now assume that $\lambda<+\infty$ so that $\alpha_0>0$. Since $\alpha\mapsto\dfrac{\alpha}{\sqrt{1+\alpha^2(1+2\lambda-\varepsilon)}}=\dfrac{1}{\sqrt{\tfrac{1}{\alpha^2}+(1+2\lambda-\varepsilon)}}$ is strictly increasing on $\mathbb{R}_+$, we note that the choice $\alpha=\alpha_0/\sqrt{1-\varepsilon\alpha_0^2}>\alpha_0$ yields (38). ∎
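As a simple sanity check on Proposition 3.5(i), not taken from the paper and relying on several simplifications of ours: for $f(x)=\lambda x^2$ one has $\lambda=\lim_{x\downarrow 0}f(x)/x^2$, the proximal map is $x\mapsto x/(1+2\lambda)$ in closed form, and $(\partial f)^{-1}(x)=x/(2\lambda)$ is Lipschitz with modulus $\alpha_0=1/(2\lambda)$; the observed contraction factor $1/(1+2\lambda)$ then stays below the bound of (38) taken with this $\alpha_0$ and $\varepsilon\to 0$.

```python
# Sanity check of Proposition 3.5(i) for f(x) = lam * x**2 (illustration only; the
# closed-form prox x/(1+2*lam) and alpha0 = 1/(2*lam) are our own simplifications).
import math

for lam in (0.1, 1.0, 10.0):
    x, ratios = 1.0, []
    for _ in range(20):
        x_next = x / (1.0 + 2.0 * lam)     # closed-form proximal step for lam*x^2
        ratios.append(x_next / x)
        x = x_next
    alpha0 = 1.0 / (2.0 * lam)
    bound = alpha0 / math.sqrt(1.0 + alpha0 ** 2 * (1.0 + 2.0 * lam))  # (38) with eps -> 0
    print(lam, max(ratios), "<=", bound)
```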
