HYPERBOLICITY OF MINIMIZERS AND REGULARITY OF VISCOSITY SOLUTIONS FOR RANDOM HAMILTON-JACOBI EQUATIONS

KONSTANTIN KHANIN AND KE ZHANG

Abstract. We show that for a family of randomly kicked Hamilton-Jacobi equations, the unique global minimizer is almost surely hyperbolic. Furthermore, we prove that the unique forward and backward viscosity solutions, though in general only Lipschitz, are smooth in a neighbourhood of the global minimizer. Our result generalizes the result of E, Khanin, Mazel and Sinai ([8]) to arbitrary dimensions, and extends the result of Iturriaga and Khanin in [12].

1. Introduction

We start by considering the inviscid Burgers equation
\[
\partial_t u + (u \cdot \nabla)u = f^\omega(y,t), \qquad y \in \mathbb{T}^d,\ t \in \mathbb{R}, \tag{1}
\]
where $f^\omega(y,t) = -\nabla F^\omega(y,t)$ is a random force given by the potential $F^\omega$. The inviscid equation can be viewed as the limit of the viscous Burgers equation as the viscosity approaches zero; for this viewpoint, we refer to [10] and the references therein. In this paper, we will focus only on the inviscid case. The theory of (1) was developed by E, Khanin, Mazel and Sinai in [8], for the one-dimensional configuration space ($d = 1$) and the "white noise" potential
\[
F^\omega(y,t) = \sum_{i=1}^{M} F_i(y)\,\dot W_i(t), \tag{2}
\]
where $F_i : \mathbb{T}^d \to \mathbb{R}$ are smooth functions, and the $\dot W_i$ are independent white noises.

In [12], Iturriaga and Khanin considered the "kicked" potential
\[
F^\omega(y,t) = \sum_{j \in \mathbb{Z}} F_j^\omega(y)\,\delta(t - j), \tag{3}
\]
where the $F_j^\omega$ are chosen independently from the same distribution, and $\delta(\cdot)$ is the delta function. The potential (3) can be considered as a discrete version of (2). The theory of (3) runs parallel to the theory of (2), and has the advantage of being technically simpler.

In these works, only solutions of the type $u = \nabla\phi$ are considered, which converts (1) to the Hamilton-Jacobi equation
\[
\partial_t \phi + \tfrac{1}{2}(\nabla\phi)^2 + F^\omega(y,t) = 0. \tag{4}
\]
Hence, for the solutions of interest, it is equivalent to study the viscosity solutions of (4). Moreover, for each $b \in \mathbb{R}^d$, we may consider solutions of (4) with $\int \nabla\phi(y,t)\,dy = b$, as the vector $b$ is invariant under the evolution.

The study of these solutions is closely related to the concept of minimizing orbits in Lagrangian systems. More precisely, for each $b \in \mathbb{R}^d$, $s < t$, $x, x' \in \mathbb{T}^d$, we define the minimal action function by
\[
A^{\omega,b}_{s,t}(x,x') = \inf \int_s^t \tfrac{1}{2}(\dot\zeta(\tau))^2 - \dot\zeta \cdot b - F^\omega(\zeta(\tau),\tau)\,d\tau, \tag{5}
\]
where the infimum is taken over all absolutely continuous curves $\zeta : [s,t] \to \mathbb{T}^d$ with $\zeta(s) = x$ and $\zeta(t) = x'$. A curve $\gamma : I \to \mathbb{T}^d$ is called a minimizer if the infimum in $A^{\omega,b}_{s,t}(\gamma(s),\gamma(t))$ is achieved at $\gamma|_{[s,t]}$ for all $[s,t] \subset I \subset \mathbb{R}$. For the cases $I = [t_0,\infty)$, $I = (-\infty,t_0]$ or $I = \mathbb{R}$, the curve $\gamma$ is called a forward minimizer, a backward minimizer or a global minimizer, respectively.

Deferring the precise definitions to the next section, we roughly reinterpret the main results of [8] (using the point of view of [12]) as follows. For $d = 1$, $b \in \mathbb{R}$, under some nondegeneracy conditions, the following hold almost surely.

C1. There exist unique viscosity solutions on the sets $\mathbb{T}^d \times [t_0,\infty)$ and $\mathbb{T}^d \times (-\infty,t_0]$, up to a constant translation;
C2. For Lebesgue a.e. $x \in \mathbb{T}^d$, there exist unique forward and backward minimizers;
C3. ("One force, one solution principle") There exists a unique global minimizer supporting a unique invariant measure, properly interpreted.
C4. The unique invariant measure has no zero Lyapunov exponents (and hence is hyperbolic), if properly interpreted.
C5. The unique forward and backward viscosity solutions are smooth in a neighbourhood of the global minimizer. Furthermore, the graph of the gradient of the viscosity solutions is equal to the local stable and unstable manifolds of the global minimizer.

The conclusion C5 in [8] is actually stronger, but we will not discuss it in this paper.
In [12], Iturriaga and Khanin generalized conclusions C1-C3, for both the "kicked" and "white noise" potentials, to arbitrary dimensions. Their approach is variational, and is related to Aubry-Mather theory and weak KAM theory (see, for example, [13], [14], [9]). The variational approach of [12] is the starting point of this paper.

We describe our main result roughly as follows (see Theorem 2.3 and Theorem 2.4 for accurate statements):

Main result. For the "kicked" potential, conclusions C4 and C5 hold in arbitrary dimensions. In other words, the unique invariant measure is hyperbolic, and the viscosity solutions are smooth near the global minimizer.

Remark. As mentioned before, it is expected that the methods applied to the "kicked" case in this paper should apply to the "white noise" case. We choose to convey the ideas of our method through the technically simpler "kicked" case, and treat the "white noise" case in a later work.

The proofs of C4 and C5 in the one-dimensional case depend strongly on one-dimensionality, and seem difficult to generalize. To prove C4 in arbitrary dimensions, we devise a completely different strategy. The main new ingredient is the use of the Green bundles (see [11]). These bundles have been useful in proving uniform hyperbolicity for Hamiltonian flows, and have only recently been used to study Lyapunov exponents, due to the work of Arnaud ([2], [3]). Furthermore, we relate the Green bundles to the nondegeneracy of the minimum for a variational problem, and employ techniques from variational analysis. These also seem to be new features of this paper.

Since the viscosity solutions are at most Lipschitz in general, the regularity result we obtain in C5 is highly nontrivial. Although we do not state it explicitly, these solutions have the same Hölder regularity as the potentials (if the potentials are $C^{k+\alpha}$, then the solutions are $C^{k+\alpha}$).
This fact is due to the smoothness of the invariant manifolds for hyperbolic systems.

When the global minimizer is nonuniformly hyperbolic, there is no clear-cut relation between the viscosity solutions and the stable/unstable manifolds. In general, the viscosity solutions, as a product of the variational method, may bear no direct relation to the stable/unstable manifolds. Our proof of C5 from C4 uses precise information about the variational problem (more than what is needed to prove C4), and requires a careful adaptation of nonuniform hyperbolicity theory. We would like to mention that for nonrandom systems, under the easier assumption that the global minimizer is uniformly hyperbolic, an analogous result is known (see [5]).

The outline of the paper is as follows. In section 2, we introduce the notations, recall the results of Iturriaga and Khanin in [12], and formulate the main theorems. In section 3, we describe the variational set-up, and formulate some statements in variational analysis; the proofs of these statements are deferred to the end of the paper. In section 4, we define the Green bundles, and establish the connection between the variational problem and the Green bundles. In section 5, we show that the transversality of the Green bundles implies nonzero exponents, proving Theorem 2.3. Sections 4 and 5 make use of the results of Arnaud in [2] and [3]. In sections 6 and 7, we prove Theorem 2.4 using variational arguments and Pesin theory. In the last three sections, the statements formulated in section 3 are proved.

2. Formulation of the main results

We restrict ourselves to the case of "kicked" potentials
\[
F(y,t) = \sum_{j \in \mathbb{Z}} F_j(y)\,\delta(t - j).
\]
Here we assume that the potentials $F_j$ are chosen independently from a distribution $P \in \mathcal{P}(C^{2+\alpha}(\mathbb{T}^d))$, $0 < \alpha \le 1$ (some of the results hold under weaker regularity assumptions).

Since we will be considering solutions $\phi$ of the type $\int \nabla\phi\,dy = b$, we write $\phi(y,t) = b \cdot y + \psi(y,t)$, where $\int \nabla\psi\,dy = 0$.
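For smooth $\phi$ this reduction is a one-line formal computation (a sketch only; at the viscosity level the statement is justified in the references): substituting $\phi(y,t) = b \cdot y + \psi(y,t)$ into (4),

```latex
\partial_t \phi + \tfrac{1}{2}(\nabla\phi)^2 + F^\omega
  = \partial_t \psi + \tfrac{1}{2}\,(\nabla\psi + b)^2 + F^\omega(y,t) = 0 ,
```

so $\psi$ satisfies a Hamilton-Jacobi equation with the shifted Hamiltonian $\tfrac{1}{2}(p+b)^2 + F^\omega$, and the constraint $\int \nabla\phi\,dy = b$ is exactly the normalization $\int \nabla\psi\,dy = 0$.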
It’s easy to see that ψ is a viscosity · ∇ R solution of the Hamilton-Jacobi equation R ∂ ψ(x,t)+Hb( ψ(x,t),t) = 0, (6) t ∇ withtheHamiltonianHb(x,p,t) = 1(p+b)2+Fω(x,t). Thecorresponding Lagrangian 2 is given by L(x,v,t) = 1v2 b v. Note that this is exactly the Lagrangian we defined 2 − · Aω,b with. According to the Lax-Oleinik variational principle, given x,x Td and s,t ′ ∈ s < t, we have ψ(x,t) = inf ψ(x,s)+Aω,b(x,x) . ′ s,t ′ x Td ∈ n o Note that Fω(y,t) = 0 for all t / Z. As a consequence, any curve ζ realizing the ∈ minimum in the definition of Aω,b must be linear between integer values of ζ. In this s,t sense, the viscosity solutions are completely determined by their values at t Z. For b Rd, m,n Z, m < n, we define the discrete version of the action f∈unction ∈ ∈ by n 1 1 n 1 Aω,b (x,x) = inf − (x˜ x˜ )2 b (x˜ x˜ ) − Fω(x˜ ) , (7) m.n ′ ( 2 i+1 − i − · n − m − i i ) i=m i=m X X where the infimum is taken over all (x˜ )n , x˜ Rd such that x˜ = x and x˜ = x j j=m j ∈ n m ′ (mod Zd). The sequence (x˜ )n corresponds to thelift ofthe curve ζ in(5) atinteger j j=m values, and we call it a configuration following the language of twist diffeomorphisms. The discrete version of the variational principle is given by ψ(x,n) = inf ψ(x,m)+Aω,b (x,x) , ′ m,n ′ x Td ∈ n o where x,x Td and m < n Z. Throughout the paper, we may drop the supscript ′ ∈ ∈ ω and b when there is no risk of confusion. The solution ψ(x,n) is closely related to the family of maps Φω : Td Rd Td Rd j × → × x x x +v Fω(x )modZd. Φω : j j+1 = j j −∇ j j , (8) j "vj# 7→ "vj+1# " vj −∇Fjω(xj) # MINIMIZERS AND VISCOSITY SOLUTIONS 5 The maps belong to the so-called standard family, and are examples of symplectic, exact and monotonically twist diffeomorphisms. For m,n Z, m < n, denote ∈ Φω (x,v) = Φω Φω(x,v). m,n n 1 ◦···◦ m − For any n Z and (x ,v ) Td Rd, we define (x ,v ) = Φ (x ,v ) if j > n and n n j j n,j n n ∈ ∈ × (x ,v ) = (Φ ) 1(x ,v ) if j < n. 
We call $(x_j,v_j)_{j \ge n}$ the forward orbit of $(x_n,v_n)$, $(x_j,v_j)_{j \le n}$ the backward orbit of $(x_n,v_n)$, and $(x_j,v_j)_{j \in \mathbb{Z}}$ the full orbit of $(x_n,v_n)$.

Assume that $(\tilde x_j)_{j=m}^{n}$ is a configuration that attains the infimum in (7); then $x_j = \tilde x_j \bmod \mathbb{Z}^d$, $j = m, \ldots, n$, and $v_j = \tilde x_j - \tilde x_{j-1}$, $j = m+1, \ldots, n$, satisfy the relation $\Phi_j^\omega(x_j,v_j) = (x_{j+1},v_{j+1})$. We also define $v_m$ by the relation $\Phi_m^\omega(x_m,v_m) = (x_{m+1},v_{m+1})$. An orbit $(x_j,v_j)_j$ defines a configuration $\tilde x_j$ which is unique up to a lift of the first point $x_m$. We say that the orbit $(x_j,v_j)_{j=m}^{n}$ is a minimizer for the action $A^{\omega,b}_{m,n}(x_m,x_n)$ if the orbit generates a configuration that attains the infimum in $A^{\omega,b}_{m,n}(x_m,x_n)$. We define forward, backward and global minimizers in the same way as in the continuous case.

The following assumptions on the distribution $P \in \mathcal{P}(C^2(\mathbb{T}^d))$ were introduced in [12].

Assumption 1. For any $y \in \mathbb{T}^d$, there exists $G_y \in \operatorname{supp} P$ such that $G_y$ has a maximum at $y$, and there exists $b > 0$ such that $G_y(y) - G_y(x) \ge b\,|y - x|^2$.

Assumption 2. $0 \in \operatorname{supp} P$.

Assumption 3. There exists $G \in \operatorname{supp} P$ such that $G$ has a unique maximum.

We define the backward Lax-Oleinik operator $K^{\omega,b}_{m,n} : C(\mathbb{T}^d) \to C(\mathbb{T}^d)$, for $m < n$, $m, n \in \mathbb{Z}$, by the following expression:
\[
K^{\omega,b}_{m,n}\varphi(x) = \inf_{x_m \in \mathbb{T}^d} \big\{ \varphi(x_m) + A^{\omega,b}_{m,n}(x_m,x) \big\}. \tag{9}
\]
The following is proved in [12] under the weaker assumption that $F_j^\omega \in C^1(\mathbb{T}^d)$:

Theorem 2.1. [12] (1) Assume assumption 1 or 2 holds. For each $n_0 \in \mathbb{Z}$, for a.e. $\omega \in \Omega$, we have the following statements.
• There exists a Lipschitz function $\psi_-(x,n)$, $n \le n_0$, such that for any $m < n \le n_0$,
\[
K^{\omega,b}_{m,n}\psi_-(x,m) = \psi_-(x,n).
\]
• For any $\varphi \in C(\mathbb{T}^d)$ and $n \le n_0$, we have that
\[
\lim_{m \to -\infty} \inf_{c \in \mathbb{R}} \big\| K^{\omega,b}_{m,n}\varphi - \psi_-(\cdot,n) - c \big\| = 0.
\]
• For $n \le n_0$ and Lebesgue a.e. $x \in \mathbb{T}^d$, the gradient $\nabla\psi_-(x,n)$ exists. Denote $x_n^- = x$ and $v_n^- = \nabla\psi_-(x_n^-,n) + b$; we have that $(x_j^-,v_j^-) = (\Phi^\omega_{j,n})^{-1}(x_n^-,v_n^-)$, $j \le n$, is a backward minimizer.
(2) Assume that assumption 3 holds. Then the conclusions of the first case hold for $b = 0$.

Similar theorems hold for the forward minimizers. For $\varphi \in C(\mathbb{T}^d)$, $m, n \in \mathbb{Z}$, $m < n$, we define the forward Lax-Oleinik operator as follows:
\[
\tilde K^{\omega,b}_{m,n}\varphi(x) = \sup_{x_n \in \mathbb{T}^d} \big\{ \varphi(x_n) - A^{\omega,b}_{m,n}(x,x_n) \big\}. \tag{10}
\]
Under the same assumptions as in Theorem 2.1, we have that there exists a Lipschitz function $\psi_+(x,n)$, $n \ge n_0$, such that
\[
\tilde K^{\omega,b}_{m,n}\psi_+(x,n) = \psi_+(x,m).
\]
For $n \ge n_0$ and Lebesgue a.e. $x \in \mathbb{T}^d$, $v_n^+ = -\nabla\psi_+(x,n) + b$ exists. Write $x_n^+ = x$; we have that $(x_j^+,v_j^+) = \Phi^\omega_{n,j}(x_n^+,v_n^+)$, $j \ge n$, is a forward minimizer.

We further reduce the choice of our potential to one generated by a finite family of smooth potentials, multiplied by i.i.d. random vectors. These potentials emulate the behaviour of the "white noise" case.

Assumption 4. Assume that
\[
F_j^\omega(x) = \sum_{i=1}^{M} \xi_j^i(\omega)\,F_i(x), \tag{11}
\]
where $F_i : \mathbb{T}^d \to \mathbb{R}$ are smooth functions, and the vectors $\xi_j(\omega) = (\xi_j^i(\omega))_{i=1}^{M}$ are identically distributed vectors in $\mathbb{R}^M$ with an absolutely continuous distribution.

We have the following theorem from [12].

Theorem 2.2. [12] (1) Assume that assumption 4 holds and one of assumptions 1 and 2 holds. If
\[
(F_1, \ldots, F_M) : \mathbb{T}^d \to \mathbb{R}^M \tag{12}
\]
is one-to-one, then for all $b \in \mathbb{R}^d$ and a.e. $\omega$ there exists a unique $(x_0^\omega,v_0^\omega) \in \mathbb{T}^d \times \mathbb{R}^d$ such that the full orbit of $(x_0^\omega,v_0^\omega)$ is a global minimizer.
(2) The same conclusion is valid if assumption 3 holds and $b = 0$.

As the random potential is generated by a stationary random process, the time shift $\theta^m$ is a metric isomorphism of the probability space $\Omega$ satisfying
\[
F^\omega(y,n+m) = F^{\theta^m\omega}(y,n), \qquad m \in \mathbb{Z}.
\]
The family of maps $\Phi_j^\omega$ then defines a random transformation $\hat\Phi$ on the space $\mathbb{T}^d \times \mathbb{R}^d \times \Omega$ given by
\[
\hat\Phi(x,v,\omega) = (\Phi_0^\omega(x,v), \theta\omega). \tag{13}
\]
Let $(x_j^\omega,v_j^\omega)$ be the global minimizer in Theorem 2.2. We have that the probability measure
\[
\nu(d(x,v),d\omega) = \delta_{x_0^\omega,v_0^\omega}(d(x,v))\,P(d\omega)
\]
is invariant and ergodic under the transformation (13).
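The backward operator (9) is easy to approximate on a grid by applying the one-step action from (7) repeatedly. The sketch below is illustrative only: it takes $d = 1$, a placeholder potential, and restricts the minimization over lifts to the shifts $\{-1,0,1\}$ (an assumption that minimizers do not jump more than one period per step); none of these choices come from the paper.

```python
import numpy as np

def lax_oleinik_step(phi, grid, F_vals, b=0.0, shifts=(-1, 0, 1)):
    """One-step backward Lax-Oleinik operator (9) on a periodic grid:
    (K phi)(x') = min_x [phi(x) + A(x, x')], where the one-step action (7) is
    A(x, x') = min over lifts of 0.5*(x' - x)^2 - b*(x' - x) - F(x)."""
    x = grid[:, None]     # start points
    xp = grid[None, :]    # end points
    A = np.full((len(grid), len(grid)), np.inf)
    for p in shifts:
        d = xp + p - x
        A = np.minimum(A, 0.5 * d**2 - b * d - F_vals[:, None])
    return (phi[:, None] + A).min(axis=0)

N = 200
grid = np.arange(N) / N
F_vals = 0.3 * np.cos(2 * np.pi * grid)   # placeholder kick potential
phi = np.zeros(N)
for _ in range(10):
    phi = lax_oleinik_step(phi, grid, F_vals)
    phi -= phi.mean()                     # quotient out the additive constant
```

The normalization in the last line reflects the fact that the operator commutes with adding constants, so convergence statements such as Theorem 2.1 are naturally formulated modulo constants.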
The map $D\Phi_0^\omega : \mathbb{T}^d \times \mathbb{R}^d \to Sp(2d)$ defines a cocycle over the transformation (13), where $Sp(2d)$ is the group of all $2d \times 2d$ symplectic matrices. Under the first part of Assumption 5 below, the Lyapunov exponents for this cocycle are well defined. Denote them by $\lambda_1(\nu), \ldots, \lambda_{2d}(\nu)$. Due to the symplectic nature of the cocycle we have
\[
\lambda_1(\nu) \le \cdots \le \lambda_d(\nu) \le 0 \le \lambda_{d+1}(\nu) \le \cdots \le \lambda_{2d}(\nu).
\]
To show the Lyapunov exponents are nonzero, we require an additional assumption.

Assumption 5. Under assumption 4, assume first of all that $E(|\xi_j|) < \infty$. Secondly, $\xi_j$ is given by a probability density $\rho : \mathbb{R}^M \to [0,\infty)$ satisfying the following condition: let $\hat c_i$ denote the variable $(c_1, \ldots, c_{i-1}, c_{i+1}, \ldots, c_M)$. For each $1 \le i \le M$, there exist functions $\rho_i(c_i) \in L^\infty(\mathbb{R})$ and $\hat\rho_i(\hat c_i) \in L^1(\mathbb{R}^{M-1})$ such that $\rho(c) \le \rho_i(c_i)\,\hat\rho_i(\hat c_i)$.

Remark. The second part of Assumption 5 is rather mild. For example, it is satisfied if $\xi_j^1, \ldots, \xi_j^M$ are independent random variables with bounded densities, or if $\xi_j$ is given by a Gaussian distribution with a nondegenerate covariance matrix. We only need to avoid the case when the density is degenerate in a certain direction.

We shall replace the one-to-one condition from Theorem 2.2 with a stronger condition, requiring that the map (12) be an embedding.

Theorem 2.3. (1) Assume that assumptions 4 and 5 hold, and one of assumptions 1 or 2 holds. Assume in addition that the map (12) is an embedding. Then for all $b \in \mathbb{R}^d$, for a.e. $\omega$, the Lyapunov exponents of $\nu$ satisfy
\[
\lambda_d(\nu) < 0 < \lambda_{d+1}(\nu).
\]
(2) For the case $b = 0$, assumption 1 or 2 can be replaced with the weaker assumption 3. The same conclusions hold.

A corollary of Theorem 2.3 is that for a.e. $\omega$, the orbit $(x_k^\omega,v_k^\omega)_{k \in \mathbb{Z}}$ is uniformly hyperbolic (see [4]). With a slightly stronger regularity assumption on the map (12), there exist local unstable manifolds $W^u(x_k,v_k)$ and stable manifolds $W^s(x_k,v_k)$.
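For $d = 1$, differentiating (8) gives the cocycle matrices $D\Phi_j^\omega = \begin{pmatrix} 1 - \nabla^2 F_j^\omega & 1 \\ -\nabla^2 F_j^\omega & 1 \end{pmatrix}$, whose determinant is $1$, and the exponents can be estimated by the standard QR (Benettin-type) procedure. The sketch below is purely illustrative: it uses placeholder potentials $F_j(x) = (k_j/2\pi)\cos(2\pi x)$ with i.i.d. Gaussian strengths $k_j$, and it follows an arbitrary orbit rather than the global minimizer of the theorem. By symplecticity the two estimates should sum to zero.

```python
import numpy as np

def lyapunov_exponents(x0, v0, ks, n_transient=100):
    """Benettin/QR estimate of the two Lyapunov exponents of the cocycle DPhi_j
    along an orbit, for placeholder kicks F_j(x) = (k_j/2pi) cos(2pi x)."""
    x, v = x0, v0
    Q = np.eye(2)
    sums = np.zeros(2)
    count = 0
    for j, k in enumerate(ks):
        grad_F = -k * np.sin(2 * np.pi * x)               # F_j'(x_j)
        hess_F = -2 * np.pi * k * np.cos(2 * np.pi * x)   # F_j''(x_j)
        v = v - grad_F                                     # map (8)
        x = (x + v) % 1.0
        D = np.array([[1.0 - hess_F, 1.0],
                      [-hess_F, 1.0]])                     # DPhi_j, det = 1
        Q, R = np.linalg.qr(D @ Q)
        if j >= n_transient:                               # discard a transient
            sums += np.log(np.abs(np.diag(R)))
            count += 1
    return sums / count

rng = np.random.default_rng(0)
ks = rng.normal(0.0, 1.0, size=5000)   # i.i.d. kick strengths (an assumption)
lam = lyapunov_exponents(0.3, 0.0, ks)
```

The positive estimate plays the role of $\lambda_{d+1}(\nu)$ in the notation above; for the statement of Theorem 2.3 the exponents must of course be computed along the global minimizer, which this toy orbit does not track.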
Our next theorem states that, near $(x_k,v_k)$, $W^u$ and $W^s$ coincide with the sets $W_k^- = \{(x, \nabla_x\psi_-(x,k) + b)\}$ and $W_k^+ = \{(x, -\nabla_x\psi_+(x,k) + b)\}$. The functions $\psi_\pm(\cdot,k)$ are only Lipschitz, and $W_k^\pm$ are only defined over a set of $x$ of full (Lebesgue) measure. Since $W^u$ and $W^s$ are smooth manifolds, our theorem states that $\psi_\pm$ are in fact smooth in a neighbourhood of $x_k$.

Theorem 2.4. (1) Assume that assumptions 4 and 5 hold, and one of assumptions 1 or 2 holds. Assume in addition that, for $0 < \alpha \le 1$, the map (12) is a $C^{2+\alpha}$ embedding. Then for all $b \in \mathbb{R}^d$, for a.e. $\omega$, there exists a neighbourhood $U(\omega)$ of $(x_0,v_0)$ such that
\[
W_0^- \cap U \subset W^u(x_0,v_0), \qquad W_0^+ \cap U \subset W^s(x_0,v_0).
\]
Furthermore, there exists a neighbourhood $V(\omega)$ of $x_0$ such that $\psi_\pm(x,0)$ are smooth on $V(\omega)$.
(2) For the case $b = 0$, assumption 1 or 2 can be replaced with the weaker assumption 3. The same conclusions hold.

3. Viscosity solutions and the minimizers

In this section we first deduce some useful properties of the action functional, and introduce a variational problem closely related to the global minimizer. The derivation of the variational problem mostly follows [12].

We say that a function $f : \mathbb{T}^d \to \mathbb{R}$ is $C$-semi-concave on $\mathbb{T}^d$ if for any $x \in \mathbb{T}^d$, there exists a linear form $l_x : \mathbb{R}^d \to \mathbb{R}$ such that for any $y \in \mathbb{T}^d$,
\[
f(y) - f(x) \le l_x(y - x) + \frac{C}{2}\,d(x,y)^2.
\]
Here $d(x,y)$ is understood as the distance on the torus, and the vector $y - x$ is interpreted as any vector from $x$ to $y$ on the torus. The linear form $l_x$ is called a subderivative of $f$ at $x$.

Lemma 3.1 ([9], Proposition 4.7.3). If $f$ is continuous and $C$-semi-concave on $\mathbb{T}^d$, then there exists $C' > 0$ depending only on $C$ such that $f$ is $C'$-Lipschitz.

Let $C_j^\omega = \|F_j\|_{C^2}$. The action function $A^{\omega,b}_{m,n}$ has the following properties.

Lemma 3.2. For any $b \in \mathbb{R}^d$, the action function $A^{\omega,b}_{m,n}$ satisfies the following properties:
• For any $m < k < n$,
\[
A^{\omega,b}_{m,n}(x,x') = \inf_{x_k \in \mathbb{T}^d} \big\{ A^{\omega,b}_{m,k}(x,x_k) + A^{\omega,b}_{k,n}(x_k,x') \big\}.
\]
• If $(x_j,v_j)_{j=m}^{n}$ is a minimizer, then $A^{\omega,b}_{m,n}(x_m,x_n) = \sum_{j=m}^{n-1} A^{\omega,b}_{j,j+1}(x_j,x_{j+1})$.
• The function $A^{\omega,b}_{m,n}(x,x')$ is $1$-semi-concave in the second component, and is $(1+C_m^\omega)$-semi-concave in the first component.
• If $(x_j,v_j)_{j=m}^{n}$ is a minimizer, then for any $k$ such that $m < k < n$, the derivatives $\partial_2 A^{\omega,b}_{m,k}(x_m,x_k)$ and $\partial_1 A^{\omega,b}_{k,n}(x_k,x_n)$ exist. Furthermore,
\[
\partial_2 A^{\omega,b}_{m,k}(x_m,x_k) = -\partial_1 A^{\omega,b}_{k,n}(x_k,x_n) = v_k - b.
\]
• If $(x_j,v_j)_{j=m}^{n}$ is a minimizer, then $-v_m + b$ is a subderivative of $A^{\omega,b}_{m,n}(\cdot,x_n)$ and $v_n - b$ is a subderivative of $A^{\omega,b}_{m,n}(x_m,\cdot)$.

Proof. The first two conclusions follow directly from the definition. For the third statement, note that, as a function of $x$,
\[
A^{\omega,b}_{m,m+1}(x,x') = \inf_{\tilde x' = x' \bmod \mathbb{Z}^d} \Big\{ \tfrac{1}{2}(\tilde x' - x)^2 - b \cdot (\tilde x' - x) - F_m^\omega(x) \Big\}.
\]
For each different lift $\tilde x'$, the function $\tfrac{1}{2}(\tilde x' - x)^2 - b \cdot (\tilde x' - x) - F_m^\omega(x)$ is $C^2$ with a $C^2$ bound $1 + C_m^\omega$, and hence is $(1 + C_m^\omega)$-semi-concave. It follows directly from the definition that a minimum of $C$-semi-concave functions is still $C$-semi-concave. Similarly, we conclude that $A^{\omega,b}_{m,m+1}$ is $1$-semi-concave in the second component.

To prove the semi-concavity in general, let $(x_j,v_j)_{j=m}^{n}$ be a minimizer for $A_{m,n}(x_m,x_n)$. Note that
\[
A_{m,n}(x'_m,x_n) - A_{m,n}(x_m,x_n) \le A_{m,m+1}(x'_m,x_{m+1}) + A_{m+1,n}(x_{m+1},x_n) - A_{m,n}(x_m,x_n) \le A_{m,m+1}(x'_m,x_{m+1}) - A_{m,m+1}(x_m,x_{m+1}), \tag{14}
\]
and hence $(1 + C_m^\omega)$-semi-concavity of $A_{m,m+1}$ implies $(1 + C_m^\omega)$-semi-concavity of $A_{m,n}$ (in the first component). Similarly, the semi-concavity of $A_{m,n}$ in the second component depends only on $A_{n-1,n}$.

Note that if $(x_m,v_m)$, $(x_{m+1},v_{m+1})$ is a minimizer of $A^{\omega,b}_{m,m+1}(x_m,x_{m+1})$, we have $\partial_2 A^{\omega,b}_{m,m+1} = v_{m+1} - b$ and $\partial_1 A^{\omega,b}_{m,m+1} = -(v_m - b)$. This implies the fourth conclusion.
Using (14) again, we conclude that if $(x_j,v_j)$ is a minimizer for $A_{m,n}(x_m,x_n)$, then a subderivative of $A_{m,m+1}(x_m,x_{m+1})$ in the first component is also a subderivative of $A_{m,n}(x_m,x_n)$ in the first component. The fifth conclusion follows. □

In particular, we have the following corollary of the third conclusion of Lemma 3.2.

Corollary 3.3. For any $\varphi \in C(\mathbb{T}^d)$, the function $K^{\omega,b}_{m,n}\varphi(x)$ is $1$-semi-concave; the function $-\tilde K^{\omega,b}_{m,n}\varphi(x)$ is $(1+C_m^\omega)$-semi-concave.

We say that the interval $[m_0,n_0]$ has an $\epsilon$-narrow place for the action if there exists $[m,n] \subset [m_0,n_0]$ such that:
(1) There exist $M_1, M_2 \subset \mathbb{T}^d$ such that any minimizer $(x_i)_{i=m_0}^{n_0}$ satisfies $x_m \in M_1$ and $x_n \in M_2$.
(2) For any $x_1, x_2 \in M_1$ and $x'_1, x'_2 \in M_2$, we have that
\[
\big| A^{\omega,b}_{m,n}(x_1,x'_1) - A^{\omega,b}_{m,n}(x_2,x'_2) \big| \le \epsilon.
\]

Proposition 3.4. [12] Fix $b \in \mathbb{R}^d$, and assume that for any $\epsilon > 0$ there exists an $\epsilon$-narrow place contained in each of the intervals $(-\infty,n_0]$ and $[n_0,\infty)$. Then there exists a unique Lipschitz function $\psi_-(x,j)$ such that for any $n \le n_0$,
\[
\lim_{m \to -\infty} \sup_{\varphi \in C(\mathbb{T}^d)/\mathbb{R}} \big\| K^{\omega,b}_{m,n}\varphi - \psi_-(\cdot,n) \big\| = 0.
\]
Similarly, there exists a unique Lipschitz function $\psi_+(x,j)$ such that for any $m \ge n_0$,
\[
\lim_{n \to \infty} \sup_{\varphi \in C(\mathbb{T}^d)/\mathbb{R}} \big\| \tilde K^{\omega,b}_{m,n}\varphi - \psi_+(\cdot,m) \big\| = 0.
\]

Corollary 3.5. For $n \le n_0$, $\psi_-(\cdot,n)$ is $1$-semi-concave; for $m \ge n_0$, $-\psi_+(\cdot,m)$ is $(1+C_m^\omega)$-semi-concave.

We have that, for a.e. $\omega$, $\epsilon$-narrow places exist for one-sided intervals.

Proposition 3.6. [12] Assume that assumption 1 or 2 holds. Then for each $b \in \mathbb{R}^d$ and $n_0 \in \mathbb{Z}$, for a.e. $\omega$, for any $\epsilon > 0$, there exists an $\epsilon$-narrow place contained in each of the intervals $(-\infty,n_0]$ and $[n_0,\infty)$. The same holds with assumption 3 and $b = 0$.

Since the functions $\psi_-$ and $-\psi_+$ are semi-concave, by Lemma 3.1 they are Lipschitz. It follows from the Rademacher theorem that they are Lebesgue almost everywhere differentiable. The points of differentiability are tied to the minimizers.

Proposition 3.7.
(1) If $x_0^-$ is such that $\nabla\psi_-(x_0^-,0)$ exists, then for $v_0^- = \nabla\psi_-(x_0^-,0) + b$, the backward orbit of $(x_0^-,v_0^-)$ is a backward minimizer.
(2) If $x_0^+$ is such that $\nabla\psi_+(x_0^+,0)$ exists, then for $v_0^+ = -\nabla\psi_+(x_0^+,0) + b$, the forward orbit of $(x_0^+,v_0^+)$ is a forward minimizer.
(3) Assume that, for any $\epsilon > 0$, there exists an $\epsilon$-narrow place contained in each of the intervals $(-\infty,0]$ and $[0,\infty)$. Then the full orbit of $(x_0,v_0)$, where $v_0 = \nabla\psi_-(x_0,0) + b$, is a global minimizer if and only if $x_0$ is a point of minimum for the function $\psi_-(x,0) - \psi_+(x,0)$.

Theorem 2.1 follows from Proposition 3.4, Proposition 3.6 and the first two statements of Proposition 3.7. Furthermore, to prove the uniqueness of the global minimizer, we only need to show that the function $\psi_-(x,0) - \psi_+(x,0)$ has a unique minimum on $\mathbb{T}^d$.

We need to consider potentials of the type (11). Given a family of potentials $F_j^\omega = \sum_i \xi_j^i(\omega) F_i(x)$, let $c = \{\xi_0^1, \ldots, \xi_0^M\}$. We treat $c$ as a parameter of the system. In the following lemma, we will show that the function $\psi_-(x,0) - \psi_+(x,0)$ decomposes into a semi-concave part independent of $c$ and a smooth function depending on $c$ and $x$.

Lemma 3.8. There exists a semi-concave function $\psi$ such that
\[
\psi_-(x,0) - \psi_+(x,0) = \psi(x) - \sum_{i=1}^{M} c_i F_i(x).
\]
Proof. The functions $A_{m,0}$ for $m < 0$ and the functions $A_{k,n}$ for $1 \le k < n$ are independent of $c$. As a consequence, we have that the functions $\psi_-(x,m)$ for $m \le 0$ and the functions $\psi_+(x,k)$ for $k \ge 1$ are all independent of $c$.