Effective dynamics for a kinetic Monte-Carlo model with slow and fast time scales

Salma Lahbabi^{1,2,3} and Frédéric Legoll^{4,2}

1 CERMICS, École des Ponts ParisTech, Université Paris-Est, 6 et 8 avenue Blaise Pascal, 77455 Marne-La-Vallée Cedex 2, France. [email protected]
2 INRIA Rocquencourt, MICMAC team-project, Domaine de Voluceau, B.P. 105, 78153 Le Chesnay Cedex, France
3 CNRS & Laboratoire de Mathématiques (UMR 8088), Université de Cergy-Pontoise, 95000 Cergy-Pontoise Cedex, France
4 Laboratoire Navier, École des Ponts ParisTech, Université Paris-Est, 6 et 8 avenue Blaise Pascal, 77455 Marne-La-Vallée Cedex 2, France. [email protected]

December 11, 2013

Abstract

We consider several multiscale-in-time kinetic Monte Carlo models, in which some variables evolve on a fast time scale, while the others evolve on a slow time scale. In the first two models we consider, a particle evolves in a one-dimensional potential energy landscape which has some small and some large barriers, the latter dividing the state space into metastable regions. In the limit of infinitely large barriers, we identify the effective dynamics between these macro-states, and prove the convergence of the process towards a kinetic Monte Carlo model. We next consider a third model, which consists of a system of two particles. The state of each particle evolves on a fast time scale while conserving its energy. In addition, the particles can exchange energy on a slow time scale. Considering the energy of the first particle, we identify its effective dynamics in the limit of an asymptotically small ratio between the characteristic times of the fast and the slow dynamics. For all models, our results are illustrated by representative numerical simulations.

1 Introduction

Langevin dynamics is commonly used in computational statistical physics to model the evolution of atomistic systems at finite temperature. The state of the system evolves according to a stochastic differential equation, and is thus modelled as a real-vector-valued Markov process. Generically, the state space of such atomistic systems can be decomposed into several metastable regions, separated by high energy barriers. It is therefore natural to introduce kinetic Monte-Carlo models as a simplification of the continuous-in-space reference model, where the state space is coarse-grained into discrete states, each of which corresponds to a metastable region of the continuous model. We refer e.g. to [11] for a formalization of this idea. The resulting dynamics is a time-continuous Markov chain, also called a jump process.

In this work, we consider such a jump process, with the particularity that two different time scales are present in the system. On a typical trajectory, many jumps of the fast degrees of freedom occur before a significant evolution of the slowly varying variables is observed. Therefore, a direct discretization is numerically very costly (this problem is known as the small barrier problem). The aim of this work is to find an effective dynamics for the slow variables (which turns out to be again a kinetic Monte Carlo model) while filtering out the fast variables. This effective dynamics is derived in the regime of large time scale separation between the slow and the fast variables.

We will successively perform this derivation for three different models. First, in Section 2, we consider a particle subjected to a potential energy presenting two macro-states separated by a high energy barrier.
Inside each macro-state, there are finitely many micro-states separated by relatively low energy barriers (see Fig. 1). The ratio between the low energy barriers and the large energy barriers is characterized by a parameter $\epsilon$ that we will take asymptotically small. This ratio encodes the difference of time scales between the dynamics within a macro-state (only low energy barriers have to be overcome, and the dynamics is therefore fast) and the global dynamics (for which large energy barriers have to be overcome, making this dynamics slow). See Section 2.1 for a complete description of the model.

We are interested in the long time behavior of functions of the slow variables. We consider in this article the simplest case of such a function, that is, the macro-state in which the particle is located. At the price of additional technicalities, our approach carries over to more general functions of the slow variables.

Under an irreducibility assumption on the dynamics within the macro-states, we prove that, in the limit of asymptotically large time scale separation (namely when $\epsilon$ goes to zero), the dynamics of the slow variable converges to a jump process over the two macro-states. The transition rates of this limiting process are, in some sense, the weighted averages of the transition rates of the reference model. We underline that our convergence is a convergence on the path of the system, and not only on the state of the system at any given time. Our main result, Theorem 2.3, is presented in Section 2.1 and proved in Section 2.2.

In Section 2.3, we present detailed numerical results illustrating our theoretical conclusions. In particular, we monitor the probability distribution of the first waiting time in a macro-state, and check that this distribution indeed converges to the asymptotic distribution.

In Section 3, we turn to our second model, which is a generalization of the model considered in Section 2, where the potential energy presents infinitely many macro-states instead of two. To simplify the problem, we assume that the internal dynamics within each macro-state are identical (see Section 3.1 for a detailed presentation of the model). In this case, the effective dynamics is a time-continuous random walk with Poissonian waiting times, as stated in our main result of that section, Theorem 3.1. We provide some representative numerical results in Section 3.2.

We finally turn in Section 4 to our third model, which is different in spirit from the models studied in Sections 2 and 3. One interest of this last section is to show that the arguments employed to analyze the first two models can be used to study a model different in nature. The system at hand in Section 4 contains two particles, each one being described by $k$ spin-like variables. The system evolves either due to the internal evolution of each particle (which occurs on a fast time scale), or due to the interaction between the two particles (which occurs on a slow time scale). In the first case, the energy of each particle is preserved, while in the second, there is an exchange of energy between the two particles. Note that the total energy of the system is preserved in both cases. Our quantity of interest is the energy of the first particle, which is indeed a slow observable (see Section 4.1 for a complete description of the model). We show that the dynamics of the first particle energy converges to a jump process on the (finite) set of admissible energies, this set being determined by the initial energy (see Section 4.2, Theorem 4.1, for our main result).
We collect in Section 4.3 some numerical illustrations.

The difficulty of the question we address stems from the fact that the slow observable is not a Markov process: this is a closure problem. A typical tool in this context is the Mori-Zwanzig projection formalism, which is described in detail in [6]. This leads to approximating the slow observable by a process which has some memory in time. In our work, we assume that a time-scale separation is present in the system. Memory effects may then be neglected, and the slow observables approximated by a Markov process. As is often the case in such settings, an essential ingredient of our proof is an averaging principle (see [15] for a comprehensive review of that principle in various contexts). We refer to [5, 14, 17, 18] for related works in the framework of discrete-time Markov chains in a discrete state space.

As pointed out above, kinetic Monte Carlo models are, in some sense, obtained as a coarse-grained approximation of real-valued Markov processes, such as the Langevin equation (or its overdamped limit). In that framework, the construction and the analysis of effective dynamics have been undertaken in several works, see e.g. [12, 13] and the comprehensive bibliography contained therein.

Throughout this paper, we use several well-known results that we recall in Appendix A below.

2 A particle in a potential energy landscape with two macro-states

In this section we study the dynamics of a particle in a potential energy with two macro-states (see Fig. 1). The state of the particle is represented by a macroscopic variable (the index of the macro-state), which can take here only two values, and a microscopic variable (the index of the micro-state within the macro-state). We are concerned with the long time behaviour of the macroscopic variable. In Section 2.1, we present the model and state our convergence result (Theorem 2.3), the proof of which is given in Section 2.2. Numerical results illustrating our theoretical conclusions are gathered in Section 2.3.

2.1 Presentation of the model and main result

We now formalize the model described above. We introduce a parameter $\epsilon$ which represents the ratio between the characteristic time of the internal dynamic inside a given macro-state (fast time scale) and the characteristic time of evolution of the macro-state, namely the characteristic time the system spends in a given macro-state before going to the other one.

Figure 1: Example of a potential energy with two macro-states of energy wells.

For simplicity, we assume that both macro-states contain the same number of micro-states. The macro-states are labelled by 0 and 1, whereas the micro-states are labelled as $1, 2, \dots, m$. We set $M = \{1,2,\dots,m\}$.

The state of the particle is modelled by a time-continuous Markov chain $Y^\epsilon_t = \big(X^\epsilon_t, Z^\epsilon_t\big)$, which takes its values in the space $E = M \times \{0,1\}$. The first coordinate of $Y^\epsilon_t$ represents the micro-state of the particle inside a given macro-state, and thus takes its value in $M$. The second coordinate determines in which macro-state the particle is located at time $t$: $Z^\epsilon_t = 0$ or $1$.

We denote by $Q^\epsilon$ the transition matrix of the process $Y^\epsilon_t$. Let $Q_0$ and $Q_1$ be two $m \times m$ matrices that determine the internal dynamic within each macro-state, and let $C_{0,1}$ and $C_{1,0}$ be two $m \times m$ matrices that determine the coupling between micro-states that belong to different macro-states. The transition rates of $Y^\epsilon_t$ are given by
\[
Q^\epsilon\big((x,z),(x',z)\big) = Q_z\big(x,x'\big), \quad z = 0 \text{ or } 1, \quad x \neq x',
\qquad
Q^\epsilon\big((x,z),(x',z')\big) = \epsilon\, C_{z,1-z}\big(x,x'\big) \quad \text{for } z \neq z'.
\]
Thus, $Q^\epsilon$ is of the form
\[
Q^\epsilon = \begin{pmatrix} Q_0 & \epsilon\, C_{0,1} \\ \epsilon\, C_{1,0} & Q_1 \end{pmatrix}.
\]

Remark 2.1. As always for Markov jump processes, the diagonal entries of the transition matrix are irrelevant. Our convention is to take them equal to zero.
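To fix ideas, here is a minimal sketch (ours, not from the paper) in Python/NumPy of how the rate matrix $Q^\epsilon$ above can be assembled from the blocks $Q_0$, $Q_1$, $C_{0,1}$, $C_{1,0}$. The function name, the ordering of the states of $E$, and the toy rate values are our own choices for illustration.

```python
import numpy as np

def assemble_rate_matrix(Q0, Q1, C01, C10, eps):
    """Rate matrix Q^eps of Y^eps on E = M x {0,1}, with states ordered as
    (1,0), ..., (m,0), (1,1), ..., (m,1).  Off-diagonal entries follow the
    block structure above; the diagonal is set to zero, in line with the
    convention of Remark 2.1."""
    Qeps = np.block([[Q0,        eps * C01],
                     [eps * C10, Q1       ]])
    np.fill_diagonal(Qeps, 0.0)
    return Qeps

# Tiny usage example with m = 2 micro-states and arbitrary rates.
Q0 = np.array([[0.0, 2.0], [1.0, 0.0]])
Q1 = np.array([[0.0, 3.0], [0.5, 0.0]])
C01 = np.ones((2, 2))
C10 = np.ones((2, 2))
print(assemble_rate_matrix(Q0, Q1, C01, C10, eps=0.01))
```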
The process $Y^\epsilon_t$ is a jump process. This means that, when it is in a state $(x,z)$, then

• it stays there for a time $S$, which is a random variable distributed according to an exponential distribution of parameter
\[
q^\epsilon(x,z) := \sum_{\substack{(x',z') \in E \\ (x',z') \neq (x,z)}} Q^\epsilon\big((x,z),(x',z')\big),
\]
that is, $P(S \leq t) = 1 - \exp\big(-q^\epsilon(x,z)\, t\big)$;

• at this time $S$, it jumps to another state. The probability that it jumps to the state $(x',z') \neq (x,z)$ is given by
\[
\frac{Q^\epsilon\big((x,z),(x',z')\big)}{q^\epsilon(x,z)}.
\]

Note that the paths of a jump process are by convention right continuous, with left limits (they are thus càdlàg functions).

We are interested in the behaviour of a macroscopic observable, that is, a function of the slow variable $Z^\epsilon_t$. The dynamic inside a given macro-state, i.e. when the variable $z$ does not change, has a characteristic time of the order of $O(1)$ (i.e. independent of $\epsilon$), whereas the characteristic time for the particle to go from one macro-state to the other is of the order of $O\big(\epsilon^{-1}\big)$. We therefore consider henceforth the rescaled-in-time process $\big(Z^\epsilon_{t/\epsilon}\big)_t$. With a slight abuse of notation, we keep denoting by $Y^\epsilon_t$ the rescaled process $Y^\epsilon_{t/\epsilon}$, which is a jump process of intensity matrix $Q^\epsilon$ now given by
\[
Q^\epsilon = \begin{pmatrix} \epsilon^{-1} Q_0 & C_{0,1} \\ C_{1,0} & \epsilon^{-1} Q_1 \end{pmatrix}. \tag{1}
\]

We assume that
\[
\text{the matrices } Q_0 \text{ and } Q_1 \text{ are irreducible,} \tag{2}
\]
therefore admitting unique invariant measures denoted by $\pi_0$ and $\pi_1$, respectively.

Remark 2.2. Due to our convention on the transition matrix (see Remark 2.1), the invariant measure $\pi$ of a transition matrix $Q = \{q_{i,j}\}_{1 \leq i,j \leq m}$ satisfies $\pi^T Q = \pi^T \Delta$, where $\Delta$ is a diagonal matrix with $\Delta_{ii} = \sum_{j=1}^m q_{i,j}$.

Definitions and notations

We denote by $D_{\mathbb{R}}[0,\infty)$ the set of càdlàg functions defined on $[0,\infty)$ and valued in $\mathbb{R}$, and by $C_{\mathbb{R}}[0,\infty)$ the set of continuous functions defined on $[0,\infty)$ and valued in $\mathbb{R}$. Endowed with the Skorohod metric (see e.g. [4, p. 116–118]), $D_{\mathbb{R}}[0,\infty)$ is a complete separable space.

A family of probability measures $P_n$ on $D_{\mathbb{R}}[0,\infty)$ is said to weakly converge to a probability measure $P$ on $D_{\mathbb{R}}[0,\infty)$ if, for any bounded continuous function $\Phi$ on $D_{\mathbb{R}}[0,\infty)$,
\[
\lim_{n \to \infty} \int \Phi \, dP_n = \int \Phi \, dP.
\]
A family of random variables $X_n$ valued in $D_{\mathbb{R}}[0,\infty)$ is said to converge in distribution to $X \in D_{\mathbb{R}}[0,\infty)$ if the distribution of $X_n$ weakly converges to the distribution of $X$. Otherwise stated, the family $X_n$ converges in distribution to $X$ if, for any bounded continuous function $\Phi$ on $D_{\mathbb{R}}[0,\infty)$, we have
\[
\lim_{n \to \infty} E[\Phi(X_n)] = E[\Phi(X)].
\]
Throughout this article, we use the symbol $\Rightarrow$ for the convergence in distribution of càdlàg stochastic processes or the weak convergence of their corresponding distributions.

Main result

We are now in a position to present the main result of this section. For $z \in \{0,1\}$, we define
\[
C_{z,1-z}(x) = \sum_{x' \in M} C_{z,1-z}\big(x,x'\big)
\]
and
\[
\lambda_z = \sum_{x \in M} C_{z,1-z}(x)\, \pi_z(x) = \sum_{x \in M} \pi_z(x) \sum_{x' \in M} C_{z,1-z}\big(x,x'\big). \tag{3}
\]

Theorem 2.3. Let $Y^\epsilon_t = (X^\epsilon_t, Z^\epsilon_t)$ be the jump process of intensity matrix (1), starting from an initial condition $Y_0 = (X_0, Z_0)$ independent of $\epsilon$. We make the assumption (2). We denote by $P^\epsilon$ the distribution of the process $(Z^\epsilon_t)$ and by $P$ the distribution of the jump process of initial condition $Z_0$ and of intensity matrix
\[
\begin{pmatrix} 0 & \lambda_0 \\ \lambda_1 & 0 \end{pmatrix}, \tag{4}
\]
where $\lambda_0$ and $\lambda_1$ are defined by (3). Then, we have $P^\epsilon \Rightarrow P$ as $\epsilon$ goes to 0.
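As an illustration of formula (3), here is a short Python/NumPy sketch (ours, not from the paper; the helper names and toy matrices are hypothetical) that computes the invariant measures $\pi_0$, $\pi_1$ in the sense of Remark 2.2 and the effective rates $\lambda_0$, $\lambda_1$ entering the limiting intensity matrix (4).

```python
import numpy as np

def invariant_measure(Q):
    """Invariant measure in the sense of Remark 2.2: pi^T Q = pi^T Delta,
    i.e. pi is the probability vector in the left kernel of Q - Delta."""
    m = Q.shape[0]
    G = Q - np.diag(Q.sum(axis=1))     # generator associated with Q (zero-diagonal convention)
    A = np.vstack([G.T, np.ones(m)])   # append the normalization sum(pi) = 1
    b = np.zeros(m + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def effective_rate(Qz, Cz):
    """lambda_z of formula (3): row sums of C_{z,1-z} averaged against pi_z."""
    pi_z = invariant_measure(Qz)
    return float(pi_z @ Cz.sum(axis=1))

# Usage with the same toy matrices as in the earlier sketch:
Q0 = np.array([[0.0, 2.0], [1.0, 0.0]])
Q1 = np.array([[0.0, 3.0], [0.5, 0.0]])
C01 = np.ones((2, 2))
C10 = np.ones((2, 2))
print(effective_rate(Q0, C01), effective_rate(Q1, C10))
```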
Note that, in (4), we have used the convention detailed in Remark 2.1. The above result confirms the intuition according to which, when $\epsilon$ goes to zero, the internal dynamic within each macro-state is speeded up, thus attaining a local equilibrium where configurations are distributed according to the invariant measures $\pi_0$ and $\pi_1$ within the macro-states. In the limit when $\epsilon$ goes to 0, the transition from one macro-state $z$ to the other one, $1-z$, occurs with the frequency $\lambda_z$, which is a weighted average (over the micro-states $x$, with weights given by the invariant measure $\pi_z$) of the frequencies $C_{z,1-z}(x)$. In turn, these frequencies are the transition frequencies from the micro-state $x$ of the macro-state $z$ to the other macro-state.

As already emphasized in the introduction, we point out that the above theorem states a convergence result on the path $(Z^\epsilon_t)_{t \geq 0}$, and not only of the random variable $Z^\epsilon_t$ at any time $t$.

2.2 Proofs

To simplify the notation, we first consider the case when both macro-states are similar: in that case, $Q_0 = Q_1 = Q$ and $C_{0,1} = C_{1,0} = C$. The proof of Theorem 2.3 is performed in Section 2.2.2, and uses some intermediate results shown in Section 2.2.1. We briefly mention in Section 2.2.3 how to adapt the proof to handle the general case.

The following computation will be very useful in what follows. Recall that the generator of the process $Y^\epsilon_t$ is given by
\[
L^\epsilon \varphi(x,z) = \epsilon^{-1} \sum_{x' \in M} Q\big(x,x'\big) \big(\varphi(x',z) - \varphi(x,z)\big)
+ \sum_{x' \in M} C\big(x,x'\big) \big(\varphi(x',1-z) - \varphi(x,z)\big).
\]
We refer the reader to the textbook [4, Section 4.2] for more details on semi-groups and generators associated to jump processes.

Taking $\varphi(x,z) = \mathbf{1}_{z=1}(x,z)$ in the above relation, we obtain
\[
L^\epsilon \mathbf{1}_{z=1}(x,z) = - \sum_{x' \in M} C\big(x,x'\big)\, \mathbf{1}_{z=1}(x,z) + \sum_{x' \in M} C\big(x,x'\big)\, \mathbf{1}_{z=0}(x,z),
\]
and thus, taking $(x,z) = Y^\epsilon_t = (X^\epsilon_t, Z^\epsilon_t)$, we have
\[
L^\epsilon \mathbf{1}_{z=1}(Y^\epsilon_t) = \sum_{x' \in M} C\big(X^\epsilon_t, x'\big) \big(1 - 2 Z^\epsilon_t\big) = C(X^\epsilon_t) \big(1 - 2 Z^\epsilon_t\big),
\]
where $C(x) = \sum_{x' \in M} C(x,x')$. We now define the process $(M^\epsilon_t)_{t \geq 0}$ by
\[
M^\epsilon_t = \mathbf{1}_{z=1}(Y^\epsilon_t) - \mathbf{1}_{z=1}(Y^\epsilon_0) - \int_0^t L^\epsilon \mathbf{1}_{z=1}(Y^\epsilon_s)\, ds
= Z^\epsilon_t - Z_0 - \int_0^t \sum_{x' \in M} C\big(X^\epsilon_s, x'\big) \big(1 - 2 Z^\epsilon_s\big)\, ds. \tag{5}
\]
Using Proposition A.1, we see that $M^\epsilon_t$ is a martingale with respect to the filtration $\mathcal{F}^\epsilon_t = \sigma(Y^\epsilon_s,\, s \leq t)$, and that its quadratic variation is given by
\[
\langle M^\epsilon \rangle_t
= \int_0^t \Big( L^\epsilon \mathbf{1}_{z=1}(Y^\epsilon_s) - 2\, \mathbf{1}_{z=1}(Y^\epsilon_s)\, L^\epsilon \mathbf{1}_{z=1}(Y^\epsilon_s) \Big)\, ds
= \int_0^t \Big( C(X^\epsilon_s)(1 - 2 Z^\epsilon_s) - 2 Z^\epsilon_s\, C(X^\epsilon_s) (1 - 2 Z^\epsilon_s) \Big)\, ds
= \int_0^t C(X^\epsilon_s)\, (1 - 2 Z^\epsilon_s)^2\, ds
= \int_0^t C(X^\epsilon_s)\, ds
= \int_0^t g(X^\epsilon_s)\, ds + \lambda t, \tag{6}
\]
where $\lambda = \lambda_0 = \lambda_1$ (see (3)) and
\[
g(x) = C(x) - \lambda = \sum_{x' \in M} C\big(x,x'\big) - \lambda. \tag{7}
\]
We have used in the above computation the fact that $(1 - 2 Z^\epsilon_s)^2 = 1$, a direct consequence of the fact that $Z^\epsilon_s = 0$ or $1$.

In what follows, we will use the fact that
\[
Z^\epsilon_t = Z_0 + \int_0^t f(Y^\epsilon_s)\, ds + \int_0^t \lambda \big(1 - 2 Z^\epsilon_s\big)\, ds + M^\epsilon_t \tag{8}
\]
with
\[
f(x,z) = \Big( \sum_{x' \in M} C\big(x,x'\big) - \lambda \Big) \big(1 - 2z\big), \tag{9}
\]
which is a straightforward reformulation of (5).
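Before turning to the intermediate results, here is a small simulation sketch (ours, in Python/NumPy; all function names and numerical values are hypothetical) that can be used to sanity-check the decomposition (5): it draws trajectories of the jump process of intensity matrix (1), in the simplified case $Q_0 = Q_1 = Q$ and $C_{0,1} = C_{1,0} = C$, using the standard kinetic Monte Carlo (Gillespie-type) procedure recalled in Section 2.1, and verifies that the empirical mean of $M^\epsilon_T$ is close to zero, as expected for a martingale started at 0.

```python
import numpy as np

def simulate_trajectory(Q, C, eps, x0, z0, T, rng):
    """One cadlag trajectory on [0, T] of the rescaled process of intensity
    matrix (1) with Q0 = Q1 = Q and C01 = C10 = C: micro-state moves at rates
    Q(x,x')/eps, macro-state flips at rates C(x,x')."""
    m = Q.shape[0]
    t, x, z = 0.0, x0, z0
    times, xs, zs = [0.0], [x0], [z0]
    while True:
        fast = Q[x] / eps                  # rates towards (x', z); diagonal of Q is zero
        slow = C[x]                        # rates towards (x', 1 - z)
        total = fast.sum() + slow.sum()
        t += rng.exponential(1.0 / total)  # exponential holding time of parameter q^eps
        if t >= T:
            break
        k = rng.choice(2 * m, p=np.concatenate([fast, slow]) / total)
        if k < m:
            x = k                          # jump within the macro-state
        else:
            x, z = k - m, 1 - z            # jump to the other macro-state
        times.append(t); xs.append(x); zs.append(z)
    return np.array(times), np.array(xs), np.array(zs)

def martingale_at_time_T(times, xs, zs, C, T):
    """M^eps_T = Z_T - Z_0 - int_0^T C(X_s) (1 - 2 Z_s) ds, see (5)."""
    Crow = C.sum(axis=1)                   # C(x) = sum_x' C(x, x')
    holding = np.diff(np.append(times, T)) # time spent in each visited state
    integral = np.sum(Crow[xs] * (1 - 2 * zs) * holding)
    return zs[-1] - zs[0] - integral

rng = np.random.default_rng(0)
Q = np.array([[0.0, 2.0], [1.0, 0.0]])
C = np.ones((2, 2))
samples = []
for _ in range(2000):
    traj = simulate_trajectory(Q, C, eps=0.05, x0=0, z0=0, T=1.0, rng=rng)
    samples.append(martingale_at_time_T(*traj, C=C, T=1.0))
print(np.mean(samples), np.std(samples) / np.sqrt(len(samples)))  # mean should be ~0
```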
2.2.1 Some intermediate results

The following results are useful in the proof of Theorem 2.3.

Lemma 2.4. Let $L$ be an $m \times m$ matrix and let $x \in \mathbb{R}^m$. Assume that for any $y \in \mathbb{R}^m$ such that $y^T L = 0$, we have $y^T x = 0$. Then, there exists $z \in \mathbb{R}^m$ such that $L z = x$.

Proof. We denote by $C_1, \dots, C_m$ the columns of the matrix $L$ and by $V = \mathrm{Span}\{C_1, \dots, C_m\}$. The assumption on $x$ is that $x \in \big(V^\perp\big)^\perp = V$. Therefore, $x$ can be written in the form $x = \sum_{i=1}^m \alpha_i C_i$ for some coefficients $(\alpha_i)_{1 \leq i \leq m}$. Let $z = \sum_{i=1}^m \alpha_i e_i$, with $(e_i)_{1 \leq i \leq m}$ the canonical basis of $\mathbb{R}^m$. We check that $z$ satisfies $L z = x$.

Lemma 2.5. Let $F = \{0,1\}$, $Z_0$ be a random variable valued in $F$, $\lambda_0, \lambda_1 \geq 0$, and $(Z_t)_{t \geq 0}$ be a stochastic process on $F$. If the process
\[
M_t = Z_t - Z_0 - \int_0^t \big( \lambda_0 - (\lambda_0 + \lambda_1) Z_s \big)\, ds
\]
is a martingale with respect to the natural filtration of $(Z_t)_{t \geq 0}$, then $(Z_t)_{t \geq 0}$ is a Markov jump process of initial condition $Z_0$ and of intensity matrix given by
\[
R = \begin{pmatrix} 0 & \lambda_0 \\ \lambda_1 & 0 \end{pmatrix}. \tag{10}
\]

Proof. We use the uniqueness result of the martingale problem associated to the Markov jump process with intensity matrix $R$, introduced by D.W. Stroock and S.R.S. Varadhan (see e.g. [8, Theorem 21.11]). We recall a simple version of that result in Lemma A.2 below. In view of that result, we only need to check that, for any bounded function $\varphi : F \to \mathbb{R}$, the process
\[
M^\varphi_t = \varphi(Z_t) - \varphi(Z_0) - \int_0^t L \varphi(Z_s)\, ds
\]
is a martingale, where $L$ is the generator of the jump process associated to the intensity matrix (10), which reads
\[
L \varphi(z) = \sum_{z' \in F} R(z,z') \big( \varphi(z') - \varphi(z) \big).
\]
We note that
\[
L \varphi(z=0) = \lambda_0 \big( \varphi(1) - \varphi(0) \big), \qquad L \varphi(z=1) = \lambda_1 \big( \varphi(0) - \varphi(1) \big).
\]
