AUTOMATIC SMOOTHNESS DETECTION OF THE RESOLVENT KRYLOV SUBSPACE METHOD FOR THE APPROXIMATION OF $C_0$-SEMIGROUPS

VOLKER GRIMM AND TANJA GÖCKLER∗

arXiv:1701.08046v1 [math.NA] 27 Jan 2017

Abstract. The resolvent Krylov subspace method builds approximations to operator functions $f(A)$ times a vector $v$. For the semigroup and related operator functions, this method is proved to possess the favorable property that the convergence is automatically faster when the vector $v$ is smoother. The user of the method does not need to know the presented theory, and alterations of the method are not necessary in order to adapt to the (possibly unknown) smoothness of $v$. The findings are illustrated by numerical experiments.

Key words. Operator functions, resolvent Krylov subspace method, rational Krylov subspace method, semigroup, $\varphi$-functions, rational approximation.

AMS subject classifications. (2010) 65F60, 65M15, 65M22, 65J08.

1. Introduction. Let $X$ be some Banach space with norm $\|\cdot\|$. For $t \ge 0$, we consider a $C_0$-semigroup $e^{tA}$, which is generated by $A$, applied to some initial data $v \in X$, or more exactly,

  $u(t) = e^{tA}v, \quad v \in X, \quad t \ge 0.$  (1.1)

Due to a standard rescaling argument (cf. Section 2.2 on page 60 in [8]), it suffices to study bounded semigroups, that is, semigroups satisfying $\|e^{tA}\| \le N$ for all $t \ge 0$. The object of interest (1.1) is just the (mild) solution of the abstract linear evolution equation

  $u'(t) = Au(t), \quad u(0) = v, \quad t \in [0,\infty),$  (1.2)

whose effective approximation is important in many applications, especially for the numerical solution of semilinear evolution equations by either splitting methods (e.g. [25,35]) or exponential integrators (e.g. [20]). In order to approximate the solution (1.1) of the abstract evolution equation in an efficient and reliable way, one has to use a method which leads to an error reduction that is independent of the norm of the matrix representing the discretized operator $A$ (see [18]). Such error bounds can therefore be designated as grid-independent, since the refinement of the grid in
space does not deteriorate the convergence in time (cf. [10,11]).
In the case of a matrix or a bounded operator $A$, the basic importance of an efficient approximation to $e^{tA}$ and possible methods for this problem are well reflected in "Nineteen dubious ways to compute the exponential of a matrix" [27] by Moler and Van Loan. The subsequent finding that the standard Krylov subspace approximation can be used for the approximation of the matrix exponential times a vector, $e^{tA}v$, led to an updated version, "Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later", with the Krylov subspace method as twentieth method (see [28]). Recently, it has become more and more apparent that rational Krylov subspace methods constitute a promising twenty-first possibility that is even suitable for matrices with a large norm or unbounded operators. The use of rational Krylov subspaces for the approximation of matrix/operator functions $f(A)$ times $v$ has been studied and promoted, e.g., in [2–5,7,9,11,12,15,16,21,23,24,29–33,36].
In this paper, we will study the approximation of $e^{tA}v$ and products of related operator functions, the so-called $\varphi$-functions, times $v$ in the resolvent Krylov subspace spanned by $(\gamma - A)^{-1}$ and $v$. An efficient approximation of these operator functions is of major importance, particularly in the context of exponential integrators. Our error analysis provides sublinear error bounds for unbounded operators $A$ that translate to error bounds independent of the norm of the discretized operator. That is, the error bounds prove a grid-independent convergence for the discretized problem. Moreover, it turns out that the error reduction correlates with the smoothness of the initial value $v$. A favorable property is that the resolvent Krylov subspace method detects the

∗Karlsruhe Institute of Technology (KIT), Institut für Angewandte und Numerische Mathematik, D-76128 Karlsruhe, Germany, volker.grimm | [email protected]
smoothness of the initial vector by itself and converges the faster the smoother $v$ is. All of this happens automatically; the user of the method does not even need to know the precise smoothness of the initial value.
After this introduction and a motivation in Section 2, we briefly review a functional calculus in Section 3. In Section 4, we prove that any function of the presented functional calculus times a vector can be approximated in the resolvent Krylov subspace spanned by the resolvent and this vector. For the proof of our main results, some smoothing operators are introduced in Section 5. In Section 6, the approximation of the semigroup in Banach spaces is considered. Our main theorems can be found in Section 7, where the effect of the smoothness of the vector on the convergence rate for the approximation of the semigroup and related functions in Hilbert spaces is studied. Some numerical illustrations of our results are given in Section 8, followed by a conclusion.

2. Motivation. For a first illustration of this nice feature of the resolvent Krylov subspace method just mentioned above, we consider the one-dimensional Schrödinger equation on $L^2(0,2\pi)$,

  $u'(t) = i\,\frac{\partial^2}{\partial x^2}\,u(t), \quad u(0) = u_0, \quad x \in (0,2\pi), \quad t \ge 0.$  (2.1)

With $A = \frac{\partial^2}{\partial x^2}$, we obtain the abstract equation $u'(t) = iAu(t)$, where the domain of $A$ is the Sobolev space $H^2_\pi(0,2\pi)$ containing all $2\pi$-periodic functions that admit a second-order weak derivative. We now discretize (2.1) by a pseudospectral method. Therefore, we approximate the unknown solution $u$ by a finite linear combination of the basis functions $\phi_k(x) = e^{ikx}$, that is, $u(t) \approx \sum_{k=-N/2}^{N/2-1} \psi_k(t)\phi_k$ with $N$ even, and search for coefficients $\psi_k(t)$ such that

  $\sum_{k=-N/2}^{N/2-1} \psi_k'(t)\,\phi_k = \sum_{k=-N/2}^{N/2-1} (-ik^2)\,\psi_k(t)\,\phi_k.$
This ansatz is equivalent to

  $\Psi'(t) = iA_N\Psi(t), \quad \Psi(0) = \Psi_0$  (2.2)

with solution $\Psi(\tau) = e^{i\tau A_N}\Psi_0$, where the vector $\Psi(t) \in \mathbb{C}^N$ contains the Fourier coefficients $\psi_k(t)$ for $k = -N/2, \ldots, N/2-1$ and the matrix $A_N \in \mathbb{R}^{N\times N}$ is a diagonal matrix with entries $-k^2$ for $k = -N/2, -N/2+1, \ldots, N/2-1$. The discretized initial vector $\Psi_0 = (\psi_k(0))_k$ is given by

  $\psi_k(0) = \frac{1}{2\pi}\int_0^{2\pi} u_0(x)\,e^{-ikx}\,dx, \quad k = -N/2,\ldots,N/2-1.$

These coefficients $\psi_k(0)$ can be approximated by a discrete Fourier transform of the discretized function $u_0$. Here, we use the initial data

  $u_0^q(x) = \begin{cases} \left(\tfrac{2}{\pi}\right)^{4q}(x-\pi)^{2q}\,x^{2q}, & x \in (0,\pi], \\ \left(\tfrac{2}{\pi}\right)^{4q}(x-\pi)^{2q}\,(x-2\pi)^{2q}, & x \in (\pi,2\pi]. \end{cases}$

Differentiating this function $2q+1$ times, $\frac{d^{2q+1}}{dx^{2q+1}} u_0^q$ becomes discontinuous at $x = \pi$ and at $x = 2\pi$, if $u_0^q$ is considered as a $2\pi$-periodic function. So, we have $u_0^q \in D(A^q)$ but $u_0^q \notin D(A^{q+1})$. By $\Psi_0^q \in \mathbb{C}^N$, we denote the corresponding spectral discretizations of the initial value. The solution $e^{i\tau A_N}\Psi_0^q$ of the discretized initial value problem at time $\tau > 0$ is now approximated in the rational Krylov subspace $K_n((\gamma - i\tau A_N)^{-1}, \Psi_0^q)$, where $\gamma = 1$.
In Figure 2.1, the error of the rational Krylov subspace approximation is plotted against the dimension of the Krylov subspace (blue solid lines) for $N = 131072$, $\tau = 0.02$, and smoothness indices $q = 2,4,6,8$. We can observe that $e^{i\tau A_N}\Psi_0^q$ is approximated the better, the smoother the continuous initial value $u_0^q$ is, or more exactly, the higher the number $q$ with $u_0^q \in D(A^q)$ is. Furthermore, we applied for comparison the implicit Euler method to the discretized problem and added the obtained error curves to Figure 2.1 (red dashed lines).
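At a reduced scale, this experiment can be sketched in a few lines. The following is an illustrative sketch only, not the authors' code: it assumes NumPy, uses a much smaller $N$ and a generic smooth initial vector in place of $\Psi_0^q$, and builds an orthonormal basis of $K_n((\gamma - i\tau A_N)^{-1}, \Psi_0)$ by a simple rational Arnoldi loop, approximating $e^{i\tau A_N}\Psi_0$ by orthogonal projection.

```python
import numpy as np

def rational_krylov_exp(d, v, tau, n, gamma=1.0):
    """Approximate exp(i*tau*diag(d)) @ v by projection onto the
    rational Krylov subspace K_n((gamma - i*tau*A)^{-1}, v), A = diag(d)."""
    b = 1j * tau * d                       # diagonal of B = i*tau*A_N
    V = np.zeros((len(v), n), dtype=complex)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, n):
        w = V[:, j - 1] / (gamma - b)      # resolvent solve (diagonal here)
        for i in range(j):                 # modified Gram-Schmidt
            w -= (V[:, i].conj() @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    H = V.conj().T @ (b[:, None] * V)      # projected B (skew-Hermitian)
    lam, U = np.linalg.eigh(-1j * H)       # -i*H is Hermitian
    expH = U @ np.diag(np.exp(1j * lam)) @ U.conj().T
    return V @ (expH @ (V.conj().T @ v))

# Scaled-down analogue of the experiment: diagonal A_N with entries -k^2.
N, tau = 64, 0.02
k = np.arange(-N // 2, N // 2)
d = -(k.astype(float))**2
v = 1.0 / (1.0 + np.abs(k))**6             # smooth data: fast coefficient decay
exact = np.exp(1j * tau * d) * v
errs = [np.linalg.norm(exact - rational_krylov_exp(d, v, tau, n))
        for n in (4, 12)]
# the error drops markedly as the subspace dimension n grows
```

Since $A_N$ is diagonal, each resolvent application is a componentwise division; for a discretized differential operator it would instead be a sparse linear solve, which is where the cost of the method lies.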
Fig. 2.1. Plot of the error versus dimension of the Krylov subspace $K_n((1 - i\tau A_N)^{-1}, \Psi_0^q)$ (blue solid lines), and of the implicit Euler method (red dashed lines) for $N = 131072$, $\tau = 0.02$ and initial vectors $\Psi_0^q$ resulting from the discretization of $u_0^q \in D(A^q)$ for $q = 2,4,6,8$ (circle-, triangle-, square-, cross-marked line).

In order to introduce the resolvent Krylov subspace and to get a first idea why this subspace might be a good choice for the approximation of the operator/matrix exponential, we consider the implicit Euler scheme, which is, besides the explicit Euler scheme, a standard method to approximate the matrix exponential times a vector. The two methods are based on the relations

  explicit Euler: $\lim_{n\to\infty} \bigl(I + \tfrac{\tau}{n}A\bigr)^{n} v = e^{\tau A}v,$
  implicit Euler: $\lim_{n\to\infty} \bigl(I - \tfrac{\tau}{n}A\bigr)^{-n} v = e^{\tau A}v.$

For matrices with a large norm, the explicit Euler method does not work efficiently. The discretized Schrödinger equation is a stiff system of ordinary differential equations. If we increase the number $N$ of basis functions, the norm of the discretization matrix $A_N$ grows. For the discretized problem considered here, the explicit Euler method is therefore not suitable. The situation is even worse for the continuous equation, since the explicit Euler scheme cannot be used unless the initial data is very smooth and lies in $D(A^\infty) = \cap_{n=1}^\infty D(A^n)$. The implicit Euler method, however, provides an approximation to the semigroup for all initial vectors $v$ in the associated Banach space $X$. The resolvent $(I - \tfrac{\tau}{n}A)^{-1}$ maps $X$ to $D(A)$ and can thus be seen as a smoothing operator. While the explicit Euler method cannot be applied for initial values $v \notin D(A^\infty)$, the implicit Euler method can be proven to possess the convergence rates (see [6])

  $\Bigl\|e^{\tau A}v - \bigl(1 - \tfrac{\tau}{n}A\bigr)^{-n}v\Bigr\| \le \begin{cases} C\,\dfrac{\tau}{\sqrt{n}}\,\|Av\|, & v \in D(A), \\[1ex] C\,\dfrac{\tau^2}{n}\,\|A^2v\|, & v \in D(A^2). \end{cases}$

For even smoother data, the implicit Euler method does not converge faster. In order to improve both methods, we briefly review the basic idea of Krylov subspace methods.
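These rates can be observed numerically on a diagonal toy problem. The sketch below is an illustration only (assuming NumPy; the diagonal matrix and the initial vector are our own illustrative choices, not taken from the paper): for fixed $\tau$, the implicit Euler error shrinks only at a modest algebraic rate as the number of steps $n$ grows.

```python
import numpy as np

# Implicit Euler approximation of e^{tau*B}v with B = iA, A = diag(-k^2):
# (I - tau*B/n)^{-n} v, applied entrywise since B is diagonal.
k = np.arange(-32, 32)
b = 1j * (-(k.astype(float))**2)            # diagonal of B = iA
tau = 0.1
v = 1.0 / (1.0 + k.astype(float)**2)**2     # coefficients decay like k^{-4}
exact = np.exp(tau * b) * v

def implicit_euler(n):
    return (1.0 - tau * b / n)**(-n) * v

errs = {n: np.linalg.norm(exact - implicit_euler(n)) for n in (8, 32, 128)}
# the error decreases as n grows, but only slowly, consistent with the
# algebraic O(n^{-1/2}) / O(n^{-1}) bounds quoted above
```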
Consider $n-1$ steps of the explicit Euler method, which can be seen as a product of a polynomial in $A$ and $v$, that is,

  $e^{A}v \approx \bigl(1 + \tfrac{\tau}{n-1}A\bigr)^{n-1}v = a_0 v + a_1 Av + \cdots + a_{n-1}A^{n-1}v = p(A)v, \quad p \in \mathcal{P}_{n-1},$

where $\mathcal{P}_{n-1}$ is the space of polynomials of maximum degree $n-1$. Instead of using the fixed polynomial approximation given by the explicit Euler scheme, it might be better to search for a best approximation in the polynomial Krylov subspace

  $K_n(A,v) = \mathrm{span}\{v, Av, A^2v, \ldots, A^{n-1}v\}.$

Using this approximation space for the numerical solution of the discretized Schrödinger equation, it turns out that the approximation improves with respect to stability, but a substantial error reduction would just begin after nearly $\|\tau A_N\|$ iteration steps (see [19]), where $\|\tau A_N\|$ becomes large for fine space discretizations. Analogous to the explicit Euler method, the standard polynomial Krylov subspace method is not suitable for a grid-independent approximation of the Schrödinger equation.
Instead of applying the implicit Euler method, one can try to find a better approximation of the type

  $e^{A}v \approx a_0 v + a_1(\gamma - A)^{-1}v + \cdots + a_{n-1}(\gamma - A)^{-(n-1)}v = r(A)v, \quad r \in \frac{\mathcal{P}_{n-1}}{(\gamma - \cdot)^{n-1}},$

that means, to search for a best approximation in the rational Krylov subspace

  $K_n((\gamma - A)^{-1}, v) = \mathrm{span}\{v, (\gamma - A)^{-1}v, (\gamma - A)^{-2}v, \ldots, (\gamma - A)^{-n+1}v\}, \quad \gamma > 0.$

This so-called resolvent Krylov subspace has been proposed by Ruhe in [34] for eigenvalue computations and is by now a standard technique for this purpose (cf. [1]). We will use the resolvent Krylov subspace as approximation space for the approximation of $e^{A}v$ and related operator functions in the following. Analogous to the implicit Euler method, the approximation based on the resolvent Krylov subspace will be grid-independent, but it improves on the implicit Euler method with respect to the convergence rate, depending on the smoothness of the vector $v$, as illustrated above (cf. Figure 2.1).

3. Preliminaries. We briefly review a functional calculus that has been formerly used in [11] and [14].
The Lebesgue space of complex-valued integrable functions defined on $\mathbb{R}$ is denoted by $L^1(\mathbb{R})$ with norm $\|\cdot\|_1$. By $C(\mathbb{R})$, we designate the space of continuous functions $f: \mathbb{R} \to \mathbb{C}$. Moreover, let

  $\mathcal{M}_+ = \bigl\{f \in C(\mathbb{R}) \mid \mathcal{F}f \in L^1(\mathbb{R}) \text{ and } \mathrm{supp}(\mathcal{F}f) \subseteq [0,\infty)\bigr\},$  (3.1)

where $\mathcal{F}f$ is the Fourier transform of $f$ given as

  $\mathcal{F}f(s) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-ixs}f(x)\,dx \quad \text{for } f \in L^1(\mathbb{R}).$

For $f \notin L^1(\mathbb{R})$, the Fourier transform is understood in the sense of distributions. For each function holomorphic in the left half-plane, we denote by $f_{(0)}: \mathbb{R} \to \mathbb{C}$ the restriction of $f$ to $\mathrm{Re}\,z = 0$, so that $f_{(0)}(\xi) = f(i\xi)$, $\xi \in \mathbb{R}$, and we define the algebra

  $\widetilde{\mathcal{M}} := \bigl\{f \text{ holomorphic and bounded for } \mathrm{Re}\,z \le 0 \mid f_{(0)} \in \mathcal{M}_+\bigr\}.$

Let $A$ generate a bounded strongly continuous semigroup with $\|e^{\tau A}\| \le N$ on some Banach space $X$. For functions $f \in \widetilde{\mathcal{M}}$, we introduce a functional calculus via

  $f(A) = \int_0^\infty e^{sA}\,\mathcal{F}f_{(0)}(s)\,ds.$  (3.2)

This defines a bounded linear operator $f(A)$ satisfying $\|f(A)\| \le N\,\|\mathcal{F}f_{(0)}\|_1$. Until we know that the functional calculus is consistent with standard operator functions such as the resolvent and the semigroup, we write $(f(z))(A)$, when the definition of the operator function is according to the new calculus (3.2), instead of simply $f(A)$. For $f(z) = (z_0 - z)^{-k}$ with $\mathrm{Re}\,z_0 > 0$ and $k \ge 1$, we have by elementary semigroup theory (cf. Corollary 1.11, pp. 56–57 in [8]),

  $\left(\frac{1}{(z_0 - z)^k}\right)(A) = \int_0^\infty e^{sA}e^{-sz_0}\cdot\frac{s^{k-1}}{(k-1)!}\,ds = (z_0 - A)^{-k},$

that is, the definition via (3.2) coincides with the definition in terms of the resolvent. Analogously, all rational functions with a smaller degree of the numerator than the denominator and poles in the right complex half-plane are included in our functional calculus so far. We will need another extension in order to include the semigroup, i.e., we want that the generator $A$ inserted in the exponential function $e^{tz}$, $t \ge 0$, coincides with the semigroup.
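In the scalar case, where $e^{sA}$ reduces to $e^{sa}$ with $\mathrm{Re}\,a \le 0$, the resolvent-power identity above can be sanity-checked by quadrature. An illustrative check (assuming NumPy; the values of $a$, $z_0$, $k$ are arbitrary test choices):

```python
import math
import numpy as np

# Check ((z0 - z)^{-k})(A) = (z0 - A)^{-k} for a scalar a with Re(a) <= 0:
# evaluate the functional-calculus integral by the trapezoidal rule.
a, z0, k = -0.3 + 2.0j, 1.5, 3
s = np.linspace(0.0, 60.0, 120001)           # integrand decays like e^{-1.8 s}
integrand = np.exp(s * (a - z0)) * s**(k - 1) / math.factorial(k - 1)
h = s[1] - s[0]
lhs = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
rhs = (z0 - a)**(-k)
# lhs and rhs agree up to quadrature error
```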
Let

  $\mathcal{M}_0 := \Bigl\{f \text{ holomorphic for } \mathrm{Re}\,z \le 0 \,\Big|\, \exists\, n \in \mathbb{N}_0: \frac{f(z)}{(1-z)^n} \in \widetilde{\mathcal{M}}\Bigr\}.$

For $f \in \mathcal{M}_0$, we set

  $f(A) := (1-A)^n \left(\frac{f(z)}{(1-z)^n}\right)(A),$

where $n$ is such that $\frac{f(z)}{(1-z)^n} \in \widetilde{\mathcal{M}}$. Note that the definition does not depend on the choice of $n$ and that the definition results in a closed operator on $X$. Finally, we define the set

  $\widetilde{\mathcal{M}} \subseteq \mathcal{M} := \{f \in \mathcal{M}_0 \mid f(A): X \to X \text{ is bounded}\},$

which is sufficient for our purposes. The following lemma can be found as Proposition 1.12 in [17].
Lemma 3.1. The mapping $f \to f(A)$ via (3.2) is a homomorphism of $\mathcal{M}$ into the algebra of bounded linear operators on $X$.
We can check that the semigroup is now included in the extended functional calculus.
Lemma 3.2. For $\tau \ge 0$, we have $(e^{\tau z})(A) = e^{\tau A}$.
Proof. For $n = 1$, one can verify that $\frac{e^{\tau z}}{1-z} \in \widetilde{\mathcal{M}}$. Hence, we have by (3.2) that

  $\left(\frac{e^{\tau z}}{1-z}\right)(A) = \int_0^\infty e^{sA}\,\mathbf{1}_{[\tau,\infty)}(s)\,e^{\tau-s}\,ds = \int_\tau^\infty e^{sA}e^{\tau-s}\,ds = \int_0^\infty e^{(s+\tau)A}e^{-s}\,ds = e^{\tau A}\int_0^\infty e^{sA}e^{-s}\,ds = e^{\tau A}(1-A)^{-1}.$

Finally, we conclude

  $(1-A)\left(\frac{e^{\tau z}}{1-z}\right)(A) = (1-A)\,e^{\tau A}(1-A)^{-1} = e^{\tau A},$

which proves the assertion.
For all functions relevant to our discussion, the functional calculus (3.2) coincides with the definitions in semigroup theory. From now on, we therefore do not use different notations and simply write $f(A)$ for a function $f$ of an operator $A$ with respect to (3.2). We will also need the following lemma of Brenner and Thomée (cf. Lemma 4 in [6]), whose proof extends to our case.
Lemma 3.3. For $f, g \in \mathcal{M}$ with $f(z) = z^l g(z)$ for some $l > 0$ and $\mathrm{Re}\,z \le 0$, we have $f(A)v = g(A)A^l v$ for $v \in D(A^l)$.

4. Approximation in the resolvent Krylov subspace. Here and in the following, we always consider bounded semigroups with generator $A$ on some Banach space $X$ which satisfy $\|e^{tA}\| \le N$. For bounded semigroups, it is well known that the right complex half-plane belongs to the resolvent set of the generator $A$ (e.g. Theorem 1.10 on page 55 in [8]), which guarantees that the resolvent $(\gamma - A)^{-1}$ exists for all $\gamma > 0$.
We are interested in the approximation of operator functions, especially the semigroup, times a vector $v \in X$ in the resolvent Krylov space

  $K_n((\gamma - A)^{-1}, v) := \mathrm{span}\{v, (\gamma - A)^{-1}v, (\gamma - A)^{-2}v, \ldots, (\gamma - A)^{-n+1}v\}, \quad \gamma > 0.$  (4.1)

For $n = 1,2,3,\ldots$, these spaces form a nested sequence of subspaces. If there exists an index $n_0$ for which $K_{n_0}((\gamma - A)^{-1}, v)$ is invariant under $(\gamma - A)^{-1}$, we have $K_{n_0}((\gamma - A)^{-1}, v) = K_k((\gamma - A)^{-1}, v)$ for all $k \ge n_0$. For a Banach space $X$ of finite dimension, this always happens, at the latest when $n$ reaches the dimension of $X$. For a Banach space of infinite dimension, this might happen or it might not. In most cases, the spaces build an infinite sequence of nested spaces that are different. We therefore first discuss the natural question whether all functions of our functional calculus can be approximated to an arbitrary precision in the space (4.1) when $n$ tends to infinity. For this purpose, we define the maximal resolvent Krylov subspace.
Definition 4.1. The maximal resolvent Krylov subspace for a given vector $v \in X$ and a fixed $\gamma > 0$ is given as the space

  $K_\infty((\gamma - A)^{-1}, v) := \mathrm{span}\{v, (\gamma - A)^{-1}v, (\gamma - A)^{-2}v, \ldots\}.$  (4.2)

We also need the closure of this space, which we designate by $\overline{K_\infty((\gamma - A)^{-1}, v)} \subseteq X$.
The following theorem states that all functions that are defined for $A$ via (3.2) times $v$ are in the closure of the maximal resolvent Krylov subspace (4.2), that is, $f(A)v$ can be approximated in the Krylov subspace (4.2) to any desired precision. Since the span designates all finite linear combinations, this also means that all functions in our functional calculus can be approximated in the space (4.1) to any arbitrary precision, if we let $n$ go to infinity.
Theorem 4.2. For all $v \in X$ and all functions $f \in \mathcal{M}$, we have $f(A)v \in \overline{K_\infty((\gamma - A)^{-1}, v)}$.
Proof. If we define

  $Y := \overline{K_\infty((\gamma - A)^{-1}, v)},$

then $Y$ is an invariant subspace of $(\mu - A)^{-1}$ for all $\mu > 0$ (cf. proof of Theorem 4.6.1 in [26]). Hence, we have $(\mu - A)^{-1}y \in Y$ for all $y \in Y$, $\mu > 0$. Theorem 4.6.1 in [26] now states that $Y$ is an invariant subspace of our semigroup $e^{tA}$, $t \ge 0$, and of $A$.
Furthermore, the restriction $e^{tA}|_Y$ of the semigroup $e^{tA}$ to $Y$ is again a semigroup with generator $A|_Y$, and $A|_Y\, y = Ay$ for all $y \in D(A)\cap Y =: D(A|_Y)$. For $f \in \widetilde{\mathcal{M}} \subseteq \mathcal{M}$, we thus find

  $f(A)y = \int_0^\infty e^{sA}y\,\mathcal{F}f_{(0)}(s)\,ds = \int_0^\infty e^{sA|_Y}y\,\mathcal{F}f_{(0)}(s)\,ds = f(A|_Y)y \in Y \quad \text{for all } y \in Y.$

Since $v \in Y$, we obtain $f(A)v \in Y$. Now we proceed with the case of functions belonging to $\mathcal{M}$. By the definition of $\mathcal{M}$, we have for $f \in \mathcal{M}$ and all $y \in Y$ that

  $f(A)y = (1-A)^n\left(\frac{f(z)}{(1-z)^n}\right)(A)\,y = (1-A)^n g(A)y \quad \text{with } g(z) = \frac{f(z)}{(1-z)^n},\ g \in \widetilde{\mathcal{M}},$

where $n$ has been chosen appropriately. Because of $Y$ being $A$-invariant, we obtain

  $(\gamma - A)^l y = (\gamma - A|_Y)^l y \quad \text{for all } y \in D\bigl((A|_Y)^l\bigr),\ l \in \mathbb{N}.$

By the first part of the proof, since $g \in \widetilde{\mathcal{M}}$, we can conclude for $y \in Y$ that

  $f(A)y = (1-A)^n g(A)y = (1-A)^n g(A|_Y)y = (1-A|_Y)^n g(A|_Y)y = f(A|_Y)y \in Y.$

Again, due to $v \in Y$, the statement $f(A)v \in Y$ follows.
In the case that an index $n_0$ exists for which the resolvent Krylov subspace is invariant, we obtain from Theorem 4.2 that

  $f(A)v \in K_{n_0}((\gamma - A)^{-1}, v) = \overline{K_\infty((\gamma - A)^{-1}, v)}.$

Thus, $f(A)v$ can be represented exactly in the finite-dimensional space (4.1) with index $n = n_0$.
We also study subspaces of $X$ of the type

  $K_\infty((\gamma - A)^{-1}, (\gamma - A)^{-q}v), \quad q = 1,2,3,\ldots$  (4.3)

with a smoothed initial vector. These spaces are usually different. For example, if $v \in X\setminus D(A)$ holds true, then $v$ is in the space (4.2), but $v$ is not in any of the spaces (4.3), which are all subsets of $D(A)$. An intriguing fact is that the closures of the spaces (4.3) are identical and coincide with the closure of (4.2). Hence, from a numerical analyst's point of view, if $w \in X$ can be approximated to an arbitrary precision in any of the spaces (4.2) or (4.3), then $w$ can be approximated in all spaces to an arbitrary precision.
Lemma 4.3. For every $q = 1,2,3,\ldots$ and $v \in X$, we have

  $\overline{K_\infty((\gamma - A)^{-1}, (\gamma - A)^{-q}v)} = \overline{K_\infty((\gamma - A)^{-1}, v)}.$

Proof. If we set

  $Y := \overline{K_\infty((\gamma - A)^{-1}, (\gamma - A)^{-1}v)},$  (4.4)

then we have, analogously to the previous proof, that $(\mu - A)^{-1}y \in Y$ for all $y \in Y$, $\mu > 0$.
Obviously, $(\gamma - A)^{-1}v \in Y$ and hence $(\gamma - \mu)(\mu - A)^{-1}(\gamma - A)^{-1}v \in Y$. By the resolvent equation (cf. (1.2) on page 239 in [8]), it follows that

  $(\mu - A)^{-1}v = (\gamma - A)^{-1}v + (\gamma - \mu)(\mu - A)^{-1}(\gamma - A)^{-1}v \in Y.$

Since $\mu > 0$ has been arbitrarily chosen, we have $\mu(\mu - A)^{-1}v \in Y$ for all $\mu > 0$. Due to the well-known fact that

  $\lim_{\mu\to\infty} \mu(\mu - A)^{-1}v = v$

(e.g. Lemma 3.4, p. 73 in [8]), we find $v \in Y$, since $\mu(\mu - A)^{-1}v \in Y$ and $Y$ is closed. This immediately shows our assertion for $q = 1$. The statement for $q > 1$ now follows by induction.

5. Smoothing operators with range in the resolvent Krylov subspace. We study in this section how well an initial vector $v \in D(A^q)$ can be approximated in different resolvent Krylov subspaces. These bounds will be necessary to prove our main theorems. The constants occurring in the following bounds and estimates will always be generic constants denoted by $C(\mathrm{param}_1, \mathrm{param}_2, \ldots)$, where the terms in brackets indicate the parameters on which $C$ depends.
The first lemma provides an error estimate for the approximation of $v \in D(A^q)$ in the special resolvent Krylov subspace $K_q((\sqrt{n} - A)^{-1}, (\sqrt{n} - A)^{-q}v)$.
Lemma 5.1. There are bounded operators $H_{n,q}$ of the form

  $H_{n,q} = \sum_{k=q}^{2q-1} h_k^q \left(\frac{\sqrt{n}}{\sqrt{n} - A}\right)^k, \qquad h_k^q = \binom{2q-1}{k}\sum_{l=0}^{k-q}(-1)^l\binom{k}{l} = \binom{2q-1}{k}\binom{k-1}{k-q}(-1)^{k-q},$

such that, for all $v \in D(A^q)$, we have

  $\|H_{n,q}v - v\| \le \frac{C(q,N)}{n^{q/2}}\,\|A^q v\| \quad \text{and} \quad \|H_{n,q}\| \le C(q,N),$
Theorem 1.10 on page 55 in [8]) and thus (cid:13)(cid:13)1−(cid:80)2q−1hq(1−B)−k(cid:13)(cid:13) (cid:13) k=q k (cid:13)≤C(q,N). (cid:13) Bq (cid:13) (cid:13) (cid:13) We will now use this estimate for B = √1 A, which generates a semigroup satisfying (cid:107)e√tnA(cid:107)≤N n to bound the difference (cid:107)H v −v(cid:107). Since g ∈ M, 1 ∈ M and (cid:80)2q−1hq(1−z)−k ∈ M are n,q k=q k functions that belong to our extended functional calculus, we obtain according to Lemma 3.3 (cid:13) √  (cid:13) (cid:13) 2q−1 (cid:18) (cid:19)k (cid:13) (cid:107)Hn,qv−v(cid:107)=(cid:13)(cid:13)(cid:13)1− (cid:88) hqk √n−nA v(cid:13)(cid:13)(cid:13) (cid:13) k=q (cid:13) (cid:13) (cid:16) (cid:17)−k(cid:13) ≤(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)1−(cid:80)2kq=−q1(cid:16)h√1qk A1(cid:17)−q √1nA (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)·(cid:13)(cid:13)(cid:13)(cid:16)√1nA(cid:17)qv(cid:13)(cid:13)(cid:13)≤ C(nq,q2N)(cid:107)Aqv(cid:107). (cid:13) n (cid:13) √ √ The bound on H follows, since (cid:107)( n−A)−1(cid:107) ≤ N/ n (e.g. Theorem 1.10 on page 55 in [8]) √n,q√ and therefore (cid:107) n( n−A)−1(cid:107)≤N. √ √ In the next lemma, we study the best approximation W to n( n−A)−1 in the resolvent n subspace R (γ,A):=span(cid:8)(γ−A)−1,(γ−A)−2,...,(γ−A)−n(cid:9), γ >0. (5.1) n Lemma 5.2. For any p∈N and any n∈N, there are operators W of the form n n (cid:88) 1 W = wn ∈R (γ,A) with wn ∈C n k(γ−A)k n k k=1 such that √ (cid:13) (cid:13) (cid:13) n (cid:13) C(γ,p,N) (cid:13)(cid:13)√n−A −Wn(cid:13)(cid:13)≤ np2 and (cid:107)Wn(cid:107)≤C(γ,p,N) with n-independent generic constants C(γ,p,N). Proof. Weestimatethebestapproximationinthespace(5.1). Thisbestapproximationexists, since the space is finite. By standard calculations with the resolvent, we obtain R (γ,A)=span(cid:8)(γ−A)−1,A(γ−A)−2,...,Am−1(γ−A)−m(cid:9), γ >0. m We now proceed analogously to the proof of Theorem 4.1 in [13]. 
By using

  $\frac{\sqrt{n}}{\sqrt{n} - A} = \sqrt{n}\int_0^\infty e^{sA}e^{-\sqrt{n}\,s}\,ds, \qquad \frac{A^{m-1}}{(\gamma - A)^m} = \int_0^\infty e^{sA}e^{-\gamma s}(-1)^{m-1}L_{m-1}(\gamma s)\,ds$

with the $m$-th Laguerre polynomial

  $L_m(x) = \sum_{k=0}^m \binom{m}{k}\frac{(-1)^k}{k!}\,x^k,$

it follows that

  $\left\|\frac{\sqrt{n}}{\sqrt{n} - A} - \sum_{k=1}^m a_k\frac{A^{k-1}}{(\gamma - A)^k}\right\| \le \frac{N}{\gamma}\cdot E_m\bigl(e^s\mathcal{F}f_{(0)}(s/\gamma)\bigr), \qquad e^s\mathcal{F}f_{(0)}(s/\gamma) = \sqrt{n}\,e^{(1-\frac{\sqrt{n}}{\gamma})s},$

where $\mathcal{F}f_{(0)}$ is the Fourier transform of $f(z) = \frac{\sqrt{n}}{\sqrt{n}-z}$ restricted to $\mathrm{Re}\,z = 0$. It remains to estimate

  $E_m\bigl(e^s\mathcal{F}f_{(0)}(s/\gamma)\bigr) = \inf_{a_1,\ldots,a_m}\left\|e^{-s}\Bigl(\sqrt{n}\,e^{(1-\frac{\sqrt{n}}{\gamma})s} - \sum_{k=1}^m a_k(-1)^{k-1}L_{k-1}(s)\Bigr)\right\|_1.$

Let $L^1_{\omega_0}(\mathbb{R}_+)$ be the space of Lebesgue integrable functions with respect to the weight function $\omega_0$. Moreover, we equip the space $W^1_r(\omega_0) := \bigl\{g \in L^1_{\omega_0}(\mathbb{R}_+) : \|g^{(r)}\varphi^r\omega_0\|_1 < \infty\bigr\}$ with the norm

  $\|g\|_{W^1_r(\omega_0)} := \|g\,\omega_0\|_1 + \|g^{(r)}\varphi^r\omega_0\|_1, \quad \text{where } \varphi(s) = \sqrt{s},\ \omega_0(s) = e^{-s}.$

According to [22], there exists for any $r \ge 1$ a constant $C(r)$ such that we have for any $g \in W^1_r(\omega_0)$

  $E_m(g) \le \frac{C(r)}{m^{r/2}}\,\|g^{(r)}\varphi^r\omega_0\|_1,$

where $C(r)$ neither depends on $m$ nor on $g$. Our function

  $g(s) = e^s\mathcal{F}f_{(0)}(s/\gamma) = \sqrt{n}\,e^{(1-\frac{\sqrt{n}}{\gamma})s}$

is even smoother, and we obtain

  $\bigl\|g^{(r)}(s)(\sqrt{s})^r e^{-s}\bigr\|_1 = \sqrt{n}\int_0^\infty\Bigl|\Bigl(1-\tfrac{\sqrt{n}}{\gamma}\Bigr)^r s^{r/2}\,e^{(1-\frac{\sqrt{n}}{\gamma})s}\Bigr|\,e^{-s}\,ds = \sqrt{n}\,\Bigl|1-\tfrac{\sqrt{n}}{\gamma}\Bigr|^r\int_0^\infty s^{r/2}e^{-\frac{\sqrt{n}}{\gamma}s}\,ds = \sqrt{n}\,\Bigl|1-\tfrac{\sqrt{n}}{\gamma}\Bigr|^r\cdot\frac{\gamma}{\sqrt{n}}\cdot\Bigl(\frac{\gamma}{\sqrt{n}}\Bigr)^{r/2}\Gamma\bigl(\tfrac{r}{2}+1\bigr) = \gamma\Bigl(\frac{\sqrt{n}}{\gamma}\Bigr)^{r/2}\Bigl|\frac{\gamma}{\sqrt{n}}-1\Bigr|^r\,\Gamma\bigl(\tfrac{r}{2}+1\bigr).$

Hence, we have

  $E_m(g) \le C(\gamma,r)\Bigl(\frac{\sqrt{n}}{m}\Bigr)^{r/2}\Bigl|\frac{\gamma}{\sqrt{n}}-1\Bigr|^r\,\Gamma\bigl(\tfrac{r}{2}+1\bigr)$

and therefore, with the choice $r = 2p$ and $m = n$,
  $E_n(g) \le \frac{C(\gamma,2p)}{n^{p/2}}\,\Bigl|\frac{\gamma}{\sqrt{n}}-1\Bigr|^{2p} \le \frac{C(\gamma,p)}{n^{p/2}}.$

Choosing the coefficients $a_k$ in accordance with the coefficients $w_k^n$ that belong to a best approximation, the first statement is proved. Since

  $\left\|\sum_{k=1}^n w_k^n\frac{1}{(\gamma - A)^k}\right\| \le \left\|\frac{\sqrt{n}}{\sqrt{n}-A}\right\| + \left\|\frac{\sqrt{n}}{\sqrt{n}-A} - \sum_{k=1}^n w_k^n\frac{1}{(\gamma - A)^k}\right\| \le N + C(\gamma,p) = C(\gamma,p,N),$

the second statement is an immediate consequence.
The choice $\gamma = \sqrt{n}$ in Lemma 5.2 obviously gives error zero, since the resolvent $\sqrt{n}(\sqrt{n}-A)^{-1}$ can be represented exactly in the resolvent subspace $\mathcal{R}_n(\sqrt{n}, A)$.
We now continue with two further lemmas that state how well a vector $v \in D(A^q)$ can be approximated in resolvent Krylov subspaces of type (4.3).
Lemma 5.3. There exist operators $\tilde{S}_{n,q}$ with

  $\tilde{S}_{n,q}v \in K_{(2q-1)(n-1)+q}\bigl((\gamma - A)^{-1}, (\gamma - A)^{-q}v\bigr)$

such that for all $v \in D(A^q)$ and $n \in \mathbb{N}$ the bounds

  $\|\tilde{S}_{n,q}v - v\| \le \frac{C(\gamma,q,N)}{n^{q/2}}\bigl(\|v\| + \|A^q v\|\bigr), \qquad \|\tilde{S}_{n,q}\| \le C(\gamma,q,N)$

hold true with constants $C(\gamma,q,N)$ not depending on $n$. Moreover, we have $A^l\tilde{S}_{n,q}v = \tilde{S}_{n,q}A^l v$ for $v \in D(A^l)$.
Proof. We choose the coefficients $h_k^q$ as in Lemma 5.1 and set

  $\tilde{S}_{n,q} = \sum_{k=q}^{2q-1} h_k^q\, W_n^k, \qquad h_k^q = \binom{2q-1}{k}\binom{k-1}{k-q}(-1)^{k-q},$

where the $W_n^k$ are the $k$-th powers of the operators $W_n$ in Lemma 5.2. Since these operators $W_n$ are uniformly bounded according to Lemma 5.2, it is clear that the operators $\tilde{S}_{n,q}$ are uniformly bounded with respect to $n$ as well. From the definition of $W_n$, we conclude

  $A^l W_n v = W_n A^l v \quad \text{for all } v \in D(A^l),$

and hence the same holds true for $\tilde{S}_{n,q}$. For fixed $q$, let now $v \in D(A^q)$ be arbitrary.
It follows that

  $\|\tilde{S}_{n,q}v - v\| \le \|\tilde{S}_{n,q} - H_{n,q}\|\,\|v\| + \|H_{n,q}v - v\| \le \|\tilde{S}_{n,q} - H_{n,q}\|\,\|v\| + \frac{C(q,N)}{n^{q/2}}\,\|A^q v\|$

with Lemma 5.1. Since we can write

  $W_n^k - \left(\frac{\sqrt{n}}{\sqrt{n}-A}\right)^k = \sum_{l=0}^{k-1} W_n^{k-l-1}\left(W_n - \frac{\sqrt{n}}{\sqrt{n}-A}\right)\left(\frac{\sqrt{n}}{\sqrt{n}-A}\right)^l,$

and since $W_n$ and $\sqrt{n}(\sqrt{n}-A)^{-1}$ are bounded independently of $n$, we obtain with Lemma 5.2

  $\left\|W_n^k - \left(\frac{\sqrt{n}}{\sqrt{n}-A}\right)^k\right\| \le \frac{C(\gamma,q,N)}{n^{q/2}} \quad \text{for } k = q,\ldots,2q-1,$

and therefore

  $\|\tilde{S}_{n,q} - H_{n,q}\| \le \frac{C(\gamma,q,N)}{n^{q/2}}.$

Altogether, the first bound in our lemma is proved. Due to the special form of the operators $W_n$, the statement $\tilde{S}_{n,q}v \in K_{(2q-1)(n-1)+q}((\gamma - A)^{-1}, (\gamma - A)^{-q}v)$ is clear.
Lemma 5.4. There exist operators $S_{n,q}$ with

  $S_{n,q}v \in K_{\lfloor n/2\rfloor}\bigl((\gamma - A)^{-1}, (\gamma - A)^{-q}v\bigr)$

such that for all $v \in D(A^q)$ and $n \ge 2q$, we have

  $\|S_{n,q}v - v\| \le \frac{C(\gamma,q,N)}{n^{q/2}}\bigl(\|v\| + \|A^q v\|\bigr), \qquad \|S_{n,q}\| \le C(\gamma,q,N).$
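As a closing aside, the Laguerre-transform representation used in the proof of Lemma 5.2 can be spot-checked in the scalar case, where $e^{sA}$ is just $e^{sa}$ with $\mathrm{Re}\,a \le 0$. This is an illustrative check only; the values of $a$, $\gamma$, $m$ are arbitrary test choices:

```python
import math

# Scalar check of A^{m-1}(gamma - A)^{-m}
#   = int_0^inf e^{sA} e^{-gamma*s} (-1)^{m-1} L_{m-1}(gamma*s) ds.
def laguerre(m, x):
    """L_m(x) = sum_{k=0}^m C(m,k) (-1)^k / k! x^k (formula from the proof)."""
    return sum(math.comb(m, k) * (-1)**k / math.factorial(k) * x**k
               for k in range(m + 1))

a, gamma, m = -0.5, 2.0, 3
S, npts = 40.0, 40001                      # truncation and grid for quadrature
h = S / (npts - 1)
vals = [math.exp(s * a) * math.exp(-gamma * s) * (-1)**(m - 1)
        * laguerre(m - 1, gamma * s)
        for s in (i * h for i in range(npts))]
lhs = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # trapezoidal rule
rhs = a**(m - 1) / (gamma - a)**m
# lhs and rhs agree up to quadrature error
```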
