Refined Second Law of Thermodynamics for fast random processes

Erik Aurell^{1,2,3}, Krzysztof Gawędzki^{4}, Carlos Mejía-Monasterio^{5}, Roya Mohayaee^{6}, Paolo Muratore-Ginanneschi^{7}

1 ACCESS Linnaeus Center, KTH, 10044 Stockholm, Sweden
2 Computational Biology Department, KTH, AlbaNova University Center, 10691 Stockholm, Sweden
3 Department of Information and Computer Science, Aalto University, PO Box 15400, 00076 Aalto, Finland
4 CNRS, Laboratoire de Physique, ENS Lyon, Université de Lyon, 46 Allée d'Italie, 69364 Lyon, France
5 Laboratory of Physical Properties, Department of Rural Engineering, Technical University of Madrid, Avenida Complutense s/n, 28040 Madrid, Spain
6 CNRS, Institut d'Astrophysique de Paris, Université Pierre et Marie Curie, 98 bis boulevard Arago, 75014 Paris, France
7 Department of Mathematics and Statistics, University of Helsinki, P.O. Box 68, FIN-00014 Helsinki, Finland

arXiv:1201.3207v1 [cond-mat.stat-mech] 16 Jan 2012

Abstract

We establish a refined version of the Second Law of Thermodynamics for Langevin stochastic processes describing mesoscopic systems driven by conservative or non-conservative forces and interacting with thermal noise. The refinement is based on the Monge-Kantorovich optimal mass transport. The general discussion is illustrated by a numerical analysis of a model for micron-size particles manipulated by optical tweezers.

I. INTRODUCTION

In recent years, an increased interest in fluctuations of mesoscopic systems interacting with a noisy environment has led to the development of "Stochastic Thermodynamics", which revisited relations between thermodynamical principles and statistical description within simple models based on stochastic differential equations, see [33, 34] and references therein. The aim of this note is to make a junction between two circles of ideas in the context of Stochastic Thermodynamics of systems whose evolution is described by the overdamped Langevin equation. One circle concerns the versions [13, 20] of the Second Law of Thermodynamics and of its reformulation in the framework of Thermodynamics of Computation by Landauer [6, 26]. The other circle deals with the optimal control problems in Stochastic Thermodynamics that were recently connected in [1] to the Monge-Kantorovich optimal mass transport and the Burgers equation. The result of the junction will be a refinement, relevant for fast processes, of the Second Law of Stochastic Thermodynamics. Our improvement of the Second Law does not go in the direction of a better control of fluctuations of thermodynamical quantities [35], as do the various Fluctuation Relations studied intensively in recent years, see [14, 16, 23]. Instead, it establishes the optimal lower bound on the total entropy production in non-equilibrium processes of fixed duration.

The paper is organized as follows. In Sec. II, we define for the overdamped Langevin evolution with conservative driving forces the concepts of performed work, heat release, and entropy production, and we recall the basic laws of Stochastic Thermodynamics. Sec. III contains a brief discussion of the relation between the Second Law of Thermodynamics and the Landauer principle. In Sec. IV, we replace the minimization of the total entropy production in overdamped Langevin processes that interpolate in a fixed time window between given statistical states by a minimization problem considered by Benamou-Brenier in [4] and shown there to be equivalent to the Monge-Kantorovich optimal mass transport problem that is the subject of Sec. V.
The latter two sections briefly review the classical mathematical results about the optimal mass transport [36] needed in our argument. In particular, the approach of [4] establishes a direct connection between the Monge-Kantorovich problem and the inviscid Burgers equation for potential velocities that plays a crucial role below. On the basis of the above results, we establish in Sec. VI the refined version of the Second Law of Stochastic Thermodynamics, applying it to the example of Gaussian processes considered already in a similar context in [1] and, in a special case, in [30]. Sec. VII discusses the corresponding refinement of the Landauer principle, illustrating it by the numerical analysis of a simple model of a micron-size particle in time-dependent optical traps. Sec. VIII extends the refined Second Law to the case of Langevin evolutions with non-conservative forces, showing that the preceding analysis covers also that case. Conclusions and remarks about open problems make up Sec. IX.

Acknowledgements. K.G. thanks S. Ciliberto and U. Seifert for inspiring discussions. His work was partly done within the framework of the STOSYMAP project ANR-11-BS01-015-02. R.M.'s work was partly supported by the OTARIE project ANR-07-BLAN-0235. P.M.-G. acknowledges the support of the Center of Excellence "Analysis and Dynamics" of the Academy of Finland.

II. STOCHASTIC THERMODYNAMICS FOR LANGEVIN EQUATION

We consider small statistical-mechanical systems, for example composed of mesoscopic particles, driven by time-dependent conservative forces and interacting with a noisy environment. The temporal evolution of such a system may often be well described by the overdamped stochastic Langevin equation

\[ dx = -M\nabla U(t,x)\,dt + d\zeta(t) \tag{2.1} \]

in the d-dimensional space of configurations, with a smooth potential U(t,x) and a white noise dζ(t) whose covariance is

\[ \langle d\zeta^a(t)\,d\zeta^b(t')\rangle = 2D^{ab}\,\delta(t-t')\,dt, \tag{2.2} \]

where ⟨·⟩ denotes the expectation value. The mobility and diffusivity matrices M = (M^{ab}) and D = (D^{ab}) occurring above are assumed positive and x-independent (the latter assumption is for the sake of simplicity and could be relaxed at the cost of a few corrective terms). To assure that the noise models the thermal environment at absolute temperature T, we impose the Einstein relation

\[ D = k_B T\,M, \tag{2.3} \]

where k_B is the Boltzmann constant. The potentials U_t(x) ≡ U(t,x) are assumed to be sufficiently confining so that the solutions of the stochastic equation (2.1) do not explode in finite time. Given a probability density ρ_i(x) at the initial time t = 0, they then define for t ≥ 0 a, in general non-stationary, Markov diffusion process x(t). The relation

\[ \frac{d}{dt}\,\langle g(x(t))\rangle = \langle (L_t g)(x(t))\rangle, \tag{2.4} \]

holding for smooth functions g(x), determines the 2nd order differential operator

\[ L_t = -(\nabla U_t)\cdot M\nabla + k_B T\,\nabla\cdot M\nabla, \tag{2.5} \]

the (time-dependent) generator of the diffusion x(t). The instantaneous distributions of the process, describing its statistical properties at fixed times, are given by the probability densities

\[ \rho(t,x) = \langle\delta(x - x(t))\rangle \equiv \exp\!\Big[-\frac{R(t,x)}{k_B T}\Big], \tag{2.6} \]

that we assume smooth, positive, and with finite moments. They evolve according to the Fokker-Planck equation

\[ \partial_t\rho = L_t^\dagger\rho, \tag{2.7} \]

where L_t^\dagger is the 2nd order differential operator adjoint to L_t. Explicitly,

\[ L_t^\dagger = \nabla\cdot M(\nabla U_t) + k_B T\,\nabla\cdot M\nabla. \tag{2.8} \]
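As a side illustration (not part of the original text), the dynamics (2.1) with the Einstein relation (2.3) is easy to simulate with the Euler-Maruyama scheme. The sketch below uses a one-dimensional moving harmonic trap; the potential and all numerical values are assumptions chosen only for the example.

```python
# Minimal Euler-Maruyama sketch of the overdamped Langevin equation (2.1)
# with the Einstein relation D = k_B T M (2.3).  One-dimensional example;
# the trap U(t,x) = k (x - a t)^2 / 2 and all parameters are illustrative
# assumptions, not the model studied in the paper.
import numpy as np

rng = np.random.default_rng(0)

M, kBT = 1.0, 1.0               # mobility and k_B T, so D = kBT * M
k, a = 5.0, 1.0                 # assumed stiffness and trap speed
t_f, n_steps, n_traj = 1.0, 1000, 20000
dt = t_f / n_steps

def grad_U(t, x):
    """Gradient of the assumed potential U(t,x) = k (x - a t)^2 / 2."""
    return k * (x - a * t)

# sample the initial density rho_i: here the Gibbs state of U(0,.)
x = rng.normal(0.0, np.sqrt(kBT / k), size=n_traj)

for step in range(n_steps):
    t = step * dt
    d_zeta = rng.normal(0.0, np.sqrt(2.0 * kBT * M * dt), size=n_traj)
    x = x - M * grad_U(t, x) * dt + d_zeta          # Eq. (2.1)

print("final sample mean and variance:", x.mean(), x.var())
```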
In what follows, it will be crucial that the Fokker-Planck equation (2.7) may be rewritten as the advection equation

\[ \partial_t\rho + \nabla\cdot(\rho v) = 0 \tag{2.9} \]

in the deterministic velocity field

\[ v(t,x) = -M\big(\nabla U + k_B T\,\rho^{-1}\nabla\rho\big)(t,x) = -M\nabla(U - R)(t,x). \tag{2.10} \]

The time-dependent vector field v(t,x), called the current velocity in [29], has the interpretation of the mean local velocity of the process x(t) defined by the limiting procedure

\[ v(t,x) = \lim_{\epsilon\to 0}\,\frac{\langle\delta(x - x(t))\,(x(t+\epsilon) - x(t-\epsilon))\rangle}{2\epsilon\,\langle\delta(x - x(t))\rangle} \tag{2.11} \]

(the limit has to be taken after the expectation, as the trajectories of the diffusion process are not differentiable).

The setup of the Langevin equation permits simple definitions of thermodynamical quantities. The fluctuating (i.e. trajectory-dependent) work performed on the system between the initial time t = 0 and the final time t = t_f is given by the Jarzynski expression [22]

\[ W = \int_0^{t_f}\partial_t U(t,x(t))\,dt \tag{2.12} \]

and the fluctuating heat released into the environment during the same time interval by the formula

\[ Q = -\int_0^{t_f}\nabla U(t,x(t))\circ dx(t) \tag{2.13} \]

with the Stratonovich stochastic integral (symbolized by "∘"). The expectation value of work is then given by the identity

\[ \langle W\rangle = \int_0^{t_f}\!dt\int\partial_t U(t,x)\,\rho(t,x)\,dx, \tag{2.14} \]

where dx denotes the standard d-dimensional volume element. In order to calculate the expectation value of heat release, one rewrites the definition (2.13) in terms of the Itô stochastic integral, including a corrective term:

\[ Q = -\int_0^{t_f}\nabla U(t,x(t))\cdot dx(t) \,-\, k_B T\int_0^{t_f}\nabla\cdot M\nabla U(t,x(t))\,dt
   = \int_0^{t_f}\big((\nabla U)\cdot M(\nabla U) - k_B T\,(\nabla\cdot M\nabla U)\big)(t,x(t))\,dt \,-\, \int_0^{t_f}\nabla U(t,x(t))\cdot d\zeta(t). \tag{2.15} \]

The last term does not contribute to the expectation value due to the martingale property of the Itô integral, so that

\[ \langle Q\rangle = \int_0^{t_f}\!dt\int\big((\nabla U)\cdot M(\nabla U) - k_B T\,(\nabla\cdot M\nabla U)\big)(t,x)\,\rho(t,x)\,dx, \tag{2.16} \]

or, upon integration by parts over space,

\[ \langle Q\rangle = \int_0^{t_f}\!dt\int\big(\nabla U - \nabla R\big)(t,x)\cdot M(\nabla U)(t,x)\,\rho(t,x)\,dx \tag{2.17} \]

(here and below, we assume that the spatial boundary terms in integration by parts vanish; this is assured for confining potentials and a fast decaying initial density of the process).

The First Law of Thermodynamics, expressing the conservation of energy, takes in the context of overdamped Langevin dynamics the form of the identity

\[ W - Q = \Delta U, \tag{2.18} \]

where

\[ \Delta U = U(t_f,x(t_f)) - U(0,x(0)) \tag{2.19} \]

is the difference of the potential energy between the end point and the initial point of the process trajectory. Eq. (2.18) holds for the fluctuating quantities and not only as the relation

\[ \langle W\rangle - \langle Q\rangle = \langle\Delta U\rangle \tag{2.20} \]

for the expectation values, with

\[ \langle\Delta U\rangle = \int U(t_f,x)\,\rho_f(x)\,dx - \int U(0,x)\,\rho_i(x)\,dx, \tag{2.21} \]

where ρ_i ≡ ρ_0 and ρ_f ≡ ρ_{t_f}.
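The fluctuating work and heat just defined can be accumulated along simulated trajectories. The following sketch (an added illustration, using the same assumed moving-trap potential as above) approximates the Stratonovich integral in (2.13) by the midpoint rule and checks the averaged First Law (2.20).

```python
# Sketch of the fluctuating work (2.12) and heat (2.13) along simulated
# trajectories of Eq. (2.1); the Stratonovich integral is approximated by the
# midpoint rule.  The moving harmonic trap and all parameter values are
# illustrative assumptions.  On average, W - Q should approximately match
# Delta U, Eq. (2.20).
import numpy as np

rng = np.random.default_rng(1)
M, kBT, k, a = 1.0, 1.0, 5.0, 1.0
t_f, n_steps, n_traj = 1.0, 2000, 20000
dt = t_f / n_steps

U     = lambda t, x: 0.5 * k * (x - a * t) ** 2
dU_dt = lambda t, x: -k * a * (x - a * t)          # partial_t U
dU_dx = lambda t, x: k * (x - a * t)               # nabla U

x = rng.normal(0.0, np.sqrt(kBT / k), size=n_traj) # Gibbs state of U(0,.)
x0 = x.copy()
W = np.zeros(n_traj)
Q = np.zeros(n_traj)

for step in range(n_steps):
    t = step * dt
    W += dU_dt(t, x) * dt                                       # Eq. (2.12)
    d_zeta = rng.normal(0.0, np.sqrt(2.0 * kBT * M * dt), size=n_traj)
    x_new = x - M * dU_dx(t, x) * dt + d_zeta                   # Eq. (2.1)
    Q -= dU_dx(t + 0.5 * dt, 0.5 * (x + x_new)) * (x_new - x)   # Eq. (2.13)
    x = x_new

dU_traj = U(t_f, x) - U(0.0, x0)                                # Eq. (2.19)
print("<W> - <Q> =", W.mean() - Q.mean(), "  <Delta U> =", dU_traj.mean())
```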
In [30] and [1] it was assumed that at the initial and the final times the potential may undergo jumps from U_i(x) to U(0,x) and from U(t_f,x) to U_f(x), leading to the modified expression for the work

\[ W = U(0,x(0)) - U_i(x(0)) + \int_0^{t_f}\partial_t U(t,x(t))\,dt + U_f(x(t_f)) - U(t_f,x(t_f)), \tag{2.22} \]

with the contributions from the initial and final jumps of the potential included. Of course, Eq. (2.22) may be obtained from (2.12) by an appropriate limiting procedure where the jumps are smoothened over short initial and final time intervals. Within such a procedure, the process itself is not modified in the limit on the time interval [0,t_f] and the limiting heat release is still given by expression (2.13).

The First Law (2.18) continues to hold, provided we replace formula (2.19) for ΔU by

\[ \Delta U = U_f(x(t_f)) - U_i(x(0)). \tag{2.23} \]

The expectation value of work may now be expressed in the form

\[ \langle W\rangle = \int U_f(x)\,\rho_f(x)\,dx - \int U_i(x)\,\rho_i(x)\,dx + \langle Q\rangle \tag{2.24} \]

with the average heat release given by Eq. (2.17).

Let us pass to the discussion of the Second Law of Thermodynamics in the context of the Langevin dynamics (2.1) (eventual jumps of the potential at the ends of the time interval will not affect the formulae below). The instantaneous entropy of the system is given by the usual Gibbs-Shannon formula

\[ S_{\rm sys}(t) = -k_B\int\ln(\rho(t,x))\,\rho(t,x)\,dx. \tag{2.25} \]

For its time derivative, one obtains from the Fokker-Planck equation (2.7) the expression

\[ \frac{d}{dt}S_{\rm sys}(t) = -k_B\int\ln(\rho_t(x))\,(L_t^\dagger\rho_t)(x)\,dx
  = \frac{1}{T}\int R_t(x)\,\nabla\cdot\big[M(\nabla U_t - \nabla R_t)\,\rho_t\big](x)\,dx
  = -\frac{1}{T}\int(\nabla R_t)(x)\cdot M(\nabla U_t - \nabla R_t)(x)\,\rho_t(x)\,dx, \tag{2.26} \]

where the last equality follows again by integration by parts. Integrating over time, one gets for the change of the entropy of the system in the time interval [0,t_f] the formula

\[ \Delta S_{\rm sys} \equiv S_{\rm sys}(t_f) - S_{\rm sys}(0) = -\frac{1}{T}\int_0^{t_f}\!dt\int(\nabla R)(t,x)\cdot M(\nabla U - \nabla R)(t,x)\,\rho(t,x)\,dx. \tag{2.27} \]

Since the system evolves interacting with the thermal environment, the entropy of the latter also changes. The change of entropy of the environment is related to the average heat release by the thermodynamic formula

\[ \Delta S_{\rm env} = \frac{1}{T}\,\langle Q\rangle. \tag{2.28} \]

For the total entropy production, Eqs. (2.17) and (2.27) give

\[ \Delta S_{\rm tot} = \Delta S_{\rm sys} + \Delta S_{\rm env}
  = \frac{1}{T}\int_0^{t_f}\!dt\int(\nabla U - \nabla R)(t,x)\cdot M(\nabla U - \nabla R)(t,x)\,\rho(t,x)\,dx
  = \frac{1}{T}\int_0^{t_f}\!dt\int\big(v\cdot M^{-1}v\big)(t,x)\,\rho(t,x)\,dx, \tag{2.29} \]

where the last line is expressed in terms of the mean local velocity given by Eq. (2.10). Similar formulae for the entropy production appeared e.g. in [2, 12, 27, 32]. In the obvious way, identity (2.29) implies the Second Law of Stochastic Thermodynamics:

\[ \Delta S_{\rm tot} \geq 0, \tag{2.30} \]

stating that the total entropy production, composed of the changes of entropy of the system and of the environment, has to be non-negative. Inequality (2.30) may also be rewritten as a lower bound for the average heat release:

\[ \langle Q\rangle \geq -T\,\Delta S_{\rm sys}. \tag{2.31} \]

III. LANDAUER PRINCIPLE

In the form (2.31), the Second Law of Stochastic Thermodynamics is closely related to the Landauer principle [6, 26], stating that the erasure of one bit of information during a computation process conducted in a thermal environment requires a release of heat equal (on average) to at least (ln 2) k_B T. As an example, consider a bi-stable system that may be in two distinct states and undergoes a process that at the final time leaves it always in, say, the second of those states. Such a device may be realized in the context of Stochastic Thermodynamics by an appropriately designed Langevin evolution that starts from the Gibbs state corresponding to a potential with two symmetric wells separated by a high barrier and ends in a Gibbs state corresponding to a potential with only one of those wells [13]. The change of the system entropy in such a process is approximately

\[ \Delta S_{\rm sys} = -(\ln 1)\,k_B + 2\cdot\tfrac{1}{2}(\ln\tfrac{1}{2})\,k_B = -(\ln 2)\,k_B \tag{3.1} \]

and Landauer's lower bound for the average heat release follows from inequality (2.31). Note that in this situation we fix the initial and the final state of the Langevin evolution, inquiring how much heat is released during a process that interpolates between those states.
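For a rough numerical counterpart of the estimate (3.1), added here for illustration with an assumed quartic double well rather than the optical-trap model of Sec. VII, one can evaluate the Gibbs-Shannon entropy (2.25) of the two Gibbs densities directly:

```python
# Numerical counterpart of the estimate (3.1): Gibbs-Shannon entropies
# (2.25) of a two-well and of a one-well Gibbs density.  The quartic
# potential and the barrier height are illustrative assumptions; for a high
# barrier the entropy change approaches -(ln 2) k_B.
import numpy as np

kB, T = 1.0, 1.0
x = np.linspace(-3.0, 3.0, 12001)
dx = x[1] - x[0]

def gibbs(U):
    p = np.exp(-U / (kB * T))
    return p / (p.sum() * dx)

def entropy(p):                       # Eq. (2.25) on the grid
    q = p[p > 0]
    return -kB * np.sum(q * np.log(q)) * dx

barrier = 20.0                                                 # assumed barrier height
U_two_wells = barrier * (x**2 - 1.0) ** 2                      # two symmetric wells
U_one_well  = U_two_wells + 1e3 * np.where(x < 0, x**2, 0.0)   # left well removed

dS_sys = entropy(gibbs(U_one_well)) - entropy(gibbs(U_two_wells))
print("Delta S_sys =", dS_sys, "  -kB ln 2 =", -kB * np.log(2.0))
```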
As is well known, in order to saturate the lower bounds (2.30) or (2.31), one has to move infinitely slowly, so that the system passes at intermediate times through a sequence of equilibrium states. Suppose, however, that we cannot afford to go too slowly. Indeed, in computational devices we are interested in fast dynamics that arrives at the final state quickly but produces as little heat as possible. We are therefore naturally led to two questions:

• What is the lower bound for the total entropy production or the average heat release in the process that interpolates between given states in a time interval of fixed length?

• What is the dynamical protocol that leads to such a minimal total entropy production or heat release?

These questions make sense in more general setups, but we shall study them below in the context of Stochastic Thermodynamics of processes described by the Langevin equation (2.1). The initial and final states will be given by probability densities ρ_i(x) and ρ_f(x). The dynamical protocols will be determined by specifying for 0 ≤ t ≤ t_f a time-dependent steering potential U(t,x), that will be called the "control" below. In such a setup, the question about the minimum of total entropy production or average heat release becomes an optimization problem in Control Theory [15, 19]. It was recently discussed, together with the optimization of the average performed work, in refs. [1, 30], see also [2, 18].

IV. OPTIMAL CONTROL OF ENTROPY PRODUCTION

We shall describe below a relation of the minimization problem for the total entropy production or the average heat release to the optimal mass transport [36] and the inviscid Burgers dynamics [10]. To our knowledge, such a relation was first established in ref. [1] using stochastic optimization. Nevertheless, connections between stochastic control and the (viscous) Burgers equation, and between the Fokker-Planck equation and optimal mass transport, are old themes, see e.g. Chapter VI of [15], or [21] in a particular case, for the first ones and [24] for the second ones. Here, inspired by the discussion in [2], we shall minimize the total entropy production given by Eq. (2.29) by a direct argument in the spirit of deterministic optimal control.

Our strategy is based on the subsequent use of the obvious fact that if a minimizer of a function on a bigger set lies in a smaller one, then it realizes also the minimum of the function over the smaller set. We shall minimize the functional

\[ \mathcal{A}[v,\rho_i] = \frac{1}{T}\int_0^{t_f}\!dt\int\big(v\cdot M^{-1}v\big)(t,x)\,\rho(t,x)\,dx, \tag{4.1} \]

where ρ(t,x) is determined by the advection equation (2.9) from the initial density ρ_i and the velocity field v(t,x), over all velocity fields v under the constraint that ρ(t_f,x) = ρ_f(x). Such an extended minimization problem was considered in [4]. The crucial but simple additional step will be the observation that the optimal velocity field v(t,x) for which the constrained minimum is attained is a local mean velocity for a certain control U(t,x). Such an optimal control then realizes the Langevin dynamics that interpolates on the time interval [0,t_f] between the densities ρ_i and ρ_f with minimal total entropy production ΔS_tot.

In [4], see also [5], it was shown how one may reduce the constrained minimization of the functional (4.1) to the optimal mass transport problem. Here is a slight modification of that argument. We shall admit smooth velocity fields v for which the Lagrangian trajectories x(t) solving the equation

\[ \dot{x}(t) = v(t,x(t)), \tag{4.2} \]

where the dot stands for the t-derivative, do not blow up. E.g., we may take v bounded by a linear function of |x|.
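As a quick sanity check of this formulation (an illustration added here, anticipating the Lagrangian representation (4.3)-(4.4) derived just below), the functional (4.1) can be estimated by pushing samples of ρ_i along the ODE (4.2) and averaging v·M^{-1}v over particles and time. The affine velocity field and the Gaussian initial density are assumptions chosen so that the exact value is available for comparison.

```python
# Monte-Carlo sketch of the functional A[v, rho_i] of Eq. (4.1): samples of
# rho_i are pushed along dx/dt = v(t,x), Eq. (4.2), and v M^{-1} v is
# averaged over particles and time (a particle version of Eq. (4.4) below).
# The affine velocity field and the Gaussian initial density are assumptions
# chosen so that the exact value of A is known for comparison.
import numpy as np

rng = np.random.default_rng(2)
M, T = 1.0, 1.0                       # mobility and temperature
c, lam, sigma = 1.0, 0.5, 0.7         # assumed drift, contraction rate, initial width
t_f, n_steps, n_part = 2.0, 2000, 50000
dt = t_f / n_steps

v = lambda t, x: c - lam * x          # bounded by a linear function of |x|

x = rng.normal(0.0, sigma, size=n_part)          # samples of rho_i
A = 0.0
for step in range(n_steps):
    vel = v(step * dt, x)
    A += np.mean(vel * vel / M) * dt / T         # (1/T) int v M^{-1} v rho dx dt
    x = x + vel * dt                             # Euler step of Eq. (4.2)

A_exact = (c**2 + lam**2 * sigma**2) * (1.0 - np.exp(-2.0 * lam * t_f)) / (2.0 * lam * M * T)
print("Monte-Carlo A =", A, "  exact A =", A_exact)
```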
The solution of the advection equation (2.9) is then given by the formula

\[ \rho(t,x) = \int\delta(x - x(t;x_i))\,\rho_i(x_i)\,dx_i, \tag{4.3} \]

where x(t;x_i) denotes the Lagrangian trajectory that passes through x_i at time t = 0. The substitution of Eq. (4.3) into definition (4.1) results in the identity

\[ \mathcal{A}[v,\rho_i] = \frac{1}{T}\int_0^{t_f}\!dt\int\dot{x}(t;x_i)\cdot M^{-1}\dot{x}(t;x_i)\,\rho_i(x_i)\,dx_i. \tag{4.4} \]

Since the velocity field v(t,x) may be recovered from its Lagrangian flow x(t;x_i), the minimization of A[v,ρ_i] over velocity fields may be replaced by the minimization of the right hand side of (4.4) over Lagrangian flows such that the map x_i ↦ x(t_f;x_i) ≡ x_f(x_i) is constrained by the condition

\[ \rho_f(x) = \int\delta(x - x_f(x_i))\,\rho_i(x_i)\,dx_i, \tag{4.5} \]

or, equivalently, denoting by ∂(x_f(x_i))/∂(x_i) the Jacobian of the map x_i ↦ x_f(x_i), by the requirement that

\[ \rho_f(x_f(x_i))\,\frac{\partial(x_f(x_i))}{\partial(x_i)} = \rho_i(x_i). \tag{4.6} \]

In other words, the Lagrangian map x_i ↦ x_f(x_i) should transport the initial density ρ_i into the final one ρ_f. Upon exchange of the order of integration, the minimization of the functional (4.4) may be done in three steps:

• First, we fix a smooth Lagrangian map

\[ x_i \mapsto x_f(x_i) \tag{4.7} \]

with a smooth inverse x_f ↦ x_i(x_f) such that constraint (4.6) holds.

• Second, for each x_i, we minimize

\[ \int_0^{t_f}\dot{x}(t;x_i)\cdot M^{-1}\dot{x}(t;x_i)\,dt \tag{4.8} \]

over the curves [0,t_f] ∋ t ↦ x(t;x_i) starting from x_i and ending at x_f(x_i). Due to the positivity of the matrix M, the minimal curves are just the straight lines

\[ [0,t_f] \ni t \mapsto x(t;x_i) = \frac{t_f - t}{t_f}\,x_i + \frac{t}{t_f}\,x_f(x_i) \tag{4.9} \]

with constant time-derivative \dot{x}(t;x_i) = \frac{1}{t_f}\big(x_f(x_i) - x_i\big).

• Third, we minimize the "quadratic cost functional"

\[ \mathcal{K}[x_f(\cdot)] = \int\big(x_f(x_i) - x_i\big)\cdot M^{-1}\big(x_f(x_i) - x_i\big)\,\rho_i(x_i)\,dx_i \tag{4.10} \]

over the maps x_i ↦ x_f(x_i) satisfying constraint (4.6).

In principle, the above three-step minimization is over a broader class of maps x(t;x_i) which might be non-invertible for fixed intermediate t, not representing the Lagrangian flow of any velocity field v(t,x). As we shall see in the next section, however, the minimizer (4.9) represents such a flow if x_f(x_i) minimizes the cost function (4.10) under constraint (4.6).

V. MONGE-KANTOROVICH MASS TRANSPORT AND BURGERS EQUATION

The minimization of the quadratic cost function (4.10) over invertible Lagrangian maps x_i ↦ x_f(x_i) satisfying constraint (4.6) is the celebrated Monge-Kantorovich optimal mass transport problem [25, 28] related to the inviscid Burgers equation [4, 5, 36]. For the reader's convenience, we shall briefly recall that relation in the present section.

Observe that constraint (4.6) may be rewritten in the equivalent form in terms of the inverse Lagrangian maps as the identity

\[ \rho_f(x_f) = \rho_i(x_i(x_f))\,\frac{\partial(x_i(x_f))}{\partial(x_f)}. \tag{5.1} \]

In the latter form, it implies for the infinitesimal variation δx_i(x_f) of the inverse Lagrangian map the condition

\[ \delta x_i(x_f)\cdot(\nabla_{x_i}\rho_i)(x_i(x_f)) \,+\, \rho_i(x_i(x_f))\,\frac{\partial x_f^a}{\partial x_i^b}\,\frac{\partial\,\delta x_i^b(x_f)}{\partial x_f^a} = 0, \tag{5.2} \]

where the 2nd term comes from the variation of the Jacobian ∂(x_i(x_f))/∂(x_f). The last equation may be rewritten as a no-divergence requirement:

\[ \nabla_{x_i}\cdot\big(\rho_i(x_i)\,\delta x_i(x_f(x_i))\big) = 0. \tag{5.3} \]

Changing variables in the expression (4.10) and using constraint (5.1), we may re-express the cost function in an equivalent form involving the final density:

\[ \mathcal{K}[x_f(\cdot)] = \int\big(x_f - x_i(x_f)\big)\cdot M^{-1}\big(x_f - x_i(x_f)\big)\,\rho_f(x_f)\,dx_f. \tag{5.4} \]

The variation of the latter is

\[ \delta\mathcal{K}[x_f(\cdot)] = -2\int\big(x_f - x_i(x_f)\big)\cdot M^{-1}\,\delta x_i(x_f)\,\rho_f(x_f)\,dx_f
  = -2\int\big(x_f(x_i) - x_i\big)\cdot M^{-1}\,\delta x_i(x_f(x_i))\,\rho_i(x_i)\,dx_i. \tag{5.5} \]
For the extremal maps x_i ↦ x_f(x_i), the variation (5.5) has to vanish for all δx_i(x_f(x_i)) satisfying (5.3). This occurs if and only if M^{-1}(x_f(x_i) - x_i) is a gradient, i.e. if there exists a function F(x_i) such that

\[ x_f(x_i) = M\nabla F(x_i). \tag{5.6} \]

Substituting this relation into expression (4.6) for the constraint, one infers that the function F solves the Monge-Ampère equation

\[ \rho_f\big(M\nabla F(x_i)\big)\,\det\!\Big(M^{ac}\,\frac{\partial^2 F}{\partial x_i^b\,\partial x_i^c}(x_i)\Big) = \rho_i(x_i) \tag{5.7} \]

and, in particular, that

\[ \det\!\Big(M^{ac}\,\frac{\partial^2 F}{\partial x_i^b\,\partial x_i^c}(x_i)\Big) > 0 \tag{5.8} \]

(in the above relations, the mobility matrix M may be absorbed by the linear change of variables x ↦ x' = M^{-1/2}x). The crucial input from the theory of the Monge-Kantorovich optimal mass transport is the result that the minimizer x_i ↦ x_f(x_i) of the cost function exists and is the unique extremum corresponding to a function F which is convex [17, 36]. Note that it then follows from Eq. (5.8) that the Hessian matrix of F is everywhere strictly positive. Now, interpolating between ½ x_i·M^{-1}x_i and the function F(x_i), set

\[ F_t(x_i) = \frac{t_f - t}{2t_f}\,x_i\cdot M^{-1}x_i + \frac{t}{t_f}\,F(x_i) \tag{5.9} \]

for 0 ≤ t ≤ t_f. Hence

\[ M\nabla F_t(x_i) = \frac{t_f - t}{t_f}\,x_i + \frac{t}{t_f}\,x_f(x_i) = x(t;x_i), \tag{5.10} \]

giving the linear interpolation between x_i and x_f(x_i), just like in (4.9). Since

\[ \frac{\partial x^a(t;x_i)}{\partial x_i^b} = M^{ac}\,\frac{\partial^2 F_t}{\partial x_i^b\,\partial x_i^c}(x_i) = \frac{t_f - t}{t_f}\,\delta^a_b + \frac{t}{t_f}\,M^{ac}\,\frac{\partial^2 F}{\partial x_i^b\,\partial x_i^c}(x_i), \tag{5.11} \]

it follows that the matrix \big(M^{-1}_{ab}\,\partial x^b(t;x_i)/\partial x_i^c\big), equal to the Hessian matrix of F_t, is also everywhere positive for the minimizer and even bounded below by the matrix \frac{t_f - t}{t_f}M^{-1}. This implies that the map x_i ↦ x(t;x_i) is locally invertible and injective for all t. The latter property is a consequence of the "monotonicity" expressed by the inequalities

\[ \big(x_i^1 - x_i^0\big)\cdot M^{-1}\big(x(t;x_i^1) - x(t;x_i^0)\big)
  = \int_0^1\big(x_i^{1a} - x_i^{0a}\big)\,\frac{\partial^2 F_t}{\partial x_i^a\,\partial x_i^b}\big((1-s)x_i^0 + s\,x_i^1\big)\,\big(x_i^{1b} - x_i^{0b}\big)\,ds
  \;\geq\; \frac{t_f - t}{t_f}\,(x_i^1 - x_i^0)\cdot M^{-1}(x_i^1 - x_i^0) \;>\; 0, \tag{5.12} \]

which also imply that x(t;x_i^1) has to sweep the whole space when (x_i^1 - x_i^0)·M^{-1}(x_i^1 - x_i^0) increases from zero to infinity. Hence the global invertibility of the maps x_i ↦ x(t;x_i). It then makes sense to define a function Ψ(t,x) by the relation

\[ \Psi(t,x) = \frac{1}{t}\Big[\frac{1}{2}\,x\cdot M^{-1}x \,-\, x_i\cdot M^{-1}x \,+\, F_t(x_i)\Big]_{x(t;x_i)=x}. \tag{5.13} \]

Note that the derivative over x_i of the term [···] on the right hand side vanishes for x(t;x_i) = x due to Eq. (5.10). It follows that

\[ \partial_t\Psi(t,x) = -\frac{1}{2t^2}\,(x - x_i)\cdot M^{-1}(x - x_i), \qquad \nabla\Psi(t,x) = \frac{1}{t}\,M^{-1}(x - x_i) \tag{5.14} \]

for x(t;x_i) = x. Comparing the last two equations, we infer that the function Ψ(t,x) satisfies the non-linear evolution equation

\[ \partial_t\Psi + \frac{1}{2}\,(\nabla\Psi)\cdot M(\nabla\Psi) = 0 \tag{5.15} \]

that implies the inviscid Burgers equation (the Euler equation without pressure)

\[ \partial_t v + (v\cdot\nabla)v = 0 \tag{5.16} \]

for the velocity field

\[ v(t,x) = M\nabla\Psi(t,x). \tag{5.17} \]

Eqs. (5.10) and (5.14) entail that

\[ \dot{x}(t;x_i) = \frac{1}{t_f}\big(x_f(x_i) - x_i\big) = \frac{1}{t}\big(x(t;x_i) - x_i\big) = M(\nabla\Psi)(t,x(t;x_i)) = v(t,x(t;x_i)), \tag{5.18} \]

so that the interpolating maps x(t;x_i) provide the Lagrangian flow of the Burgers velocity field v and the latter is conserved along that flow. This is a general fact: Lagrangian trajectories of a velocity field solving the inviscid Burgers equation have constant velocities.
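In one dimension with a scalar mobility, the optimal map is simply the monotone rearrangement x_f(x_i) = CDF_f^{-1}(CDF_i(x_i)), the derivative of a convex function. The sketch below (an added illustration; the Gaussian end densities and the use of scipy are assumptions of the example) evaluates the cost (4.10) for this map and compares it with the closed-form value for Gaussians.

```python
# One-dimensional sketch of the Monge-Kantorovich minimizer: for a scalar
# mobility the optimal Lagrangian map is the monotone rearrangement
# x_f(x_i) = CDF_f^{-1}(CDF_i(x_i)), and the cost (4.10) reduces to
# (1/M) * integral of (x_f(x_i) - x_i)^2 rho_i(x_i) dx_i.  Gaussian end
# densities (assumed) make the minimal cost available in closed form:
# K_min = [(m_f - m_i)^2 + (s_f - s_i)^2] / M.
import numpy as np
from scipy.stats import norm

M_mob = 1.0                         # scalar mobility
m_i, s_i = 0.0, 1.0                 # assumed initial Gaussian rho_i
m_f, s_f = 2.0, 0.5                 # assumed final Gaussian rho_f

x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]
rho_i = norm.pdf(x, m_i, s_i)

# monotone (hence optimal) transport map, a 1D instance of Eq. (5.6)
x_f = norm.ppf(norm.cdf(x, m_i, s_i), m_f, s_f)

K_num = np.sum((x_f - x) ** 2 * rho_i) * dx / M_mob     # Eq. (4.10)
K_exact = ((m_f - m_i) ** 2 + (s_f - s_i) ** 2) / M_mob
print("numerical K_min =", K_num, "  closed form =", K_exact)
```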
Let us define the intermediate densities ρ(t,x) that interpolate over the time interval [0,t_f] between ρ_i and ρ_f by Eq. (4.3), so that they evolve according to the advection equation (2.9) in the Burgers velocity field v of Eq. (5.17). It is the assumption that the initial and final densities are smooth that assures that such velocities do not involve shocks on the time interval [0,t_f].

Summarizing the above discussion, we infer that the Burgers velocity field v(t,x) of Eq. (5.17), together with the densities ρ(t,x) of Eq. (4.3), minimize the functional A[v,ρ_i] of Eq. (4.1) over the space of velocities v(t,x) and densities ρ(t,x) that evolve for 0 ≤ t ≤ t_f by the advection equation (2.9) interpolating between ρ_i and ρ_f. The minimal value of the functional A[v,ρ_i] under the above constraint is

\[ \mathcal{A}_{\rm min} = \frac{1}{t_f\,T}\,\mathcal{K}_{\rm min}, \tag{5.19} \]

where K_min is the value of the quadratic cost function (4.10) on the minimizer x_i ↦ x_f(x_i). These are the main results of [4, 5], see also Chapter 8 of [36] for more details. That A_min had to be inversely proportional to the length of the time interval could have been inferred directly by a rescaling of time in the functional (4.1) [31].

Below, we shall use the following factorization property of the optimal mass transport problem with the cost function (4.10), holding if the mobility matrix has the block form

\[ M = \begin{pmatrix} M^1 & 0 \\ 0 & M^2 \end{pmatrix}. \tag{5.20} \]

If, with respect to the corresponding decomposition of the d-dimensional space, both the initial and the final densities have the product form

\[ \rho_i(x) = \rho_i^1(x^1)\,\rho_i^2(x^2), \qquad \rho_f(x) = \rho_f^1(x^1)\,\rho_f^2(x^2) \tag{5.21} \]

for x = (x^1,x^2), then the Lagrangian map minimizing the cost function (4.10) also factorizes into the product of the minimizers of the lower dimensional problems:

\[ x_f(x_i) = M\nabla F(x_i) = \big(x_f^1(x_i^1),\,x_f^2(x_i^2)\big) = \big(M^1\nabla F^1(x_i^1),\,M^2\nabla F^2(x_i^2)\big) \tag{5.22} \]

and the minimal cost is the sum of the lower-dimensional ones. This follows from the uniqueness of the minimizer and its characterization in terms of the gradient of a convex function. The corresponding Burgers potential Ψ(t,x) is then the sum, and the interpolating density ρ(t,x) the product, of the ones obtained from the lower dimensional minimizers.

VI. SECOND LAW OF STOCHASTIC THERMODYNAMICS AT SHORT TIMES

Let us denote by R(t,x) the dynamical potential related by Eq. (2.6) to the optimal densities ρ(t,x). Set

\[ U(t,x) = R(t,x) - \Psi(t,x), \tag{6.1} \]

where Ψ is the Burgers potential (5.13). Eq. (5.17) for the optimal velocity may be rewritten as

\[ v = M\nabla(R - U) \tag{6.2} \]

and the advection equation (2.9) for ρ becomes

\[ \partial_t\rho = -\nabla\cdot\big(\rho\,M\nabla(R - U)\big) = L_t^\dagger\rho, \tag{6.3} \]

where L_t is the time-dependent generator (2.5) for the Langevin process with the control U. We infer that the optimal ρ describes the instantaneous probability densities of such a process with initial values distributed with density ρ_i, and that the optimal v is its mean local velocity. It then follows from relation (2.29) that the control U provides the optimal protocol on the time interval [0,t_f] that evolves the initial state ρ_i to the final state ρ_f under the Langevin dynamics (2.1) with the minimal total entropy production equal to A_min of Eq. (5.19). We obtain this way a refinement, for finite time intervals, of the Second Law (2.30) of Stochastic Thermodynamics:

Theorem. For the Langevin dynamics (2.1) on the time interval [0,t_f] that evolves between the states ρ_i and ρ_f,

\[ \Delta S_{\rm tot} \;\geq\; \frac{1}{t_f\,T}\,\mathcal{K}_{\rm min}, \tag{6.4} \]

with the inequality saturated by the optimal evolution with the time-dependent potential U(t,x) constructed above.

Here, as in relation (2.30), ΔS_tot = ΔS_sys + ΔS_env denotes the total entropy change, composed of the change of entropy of the system ΔS_sys and the change of entropy of the thermal environment ΔS_env = (1/T)⟨Q⟩ during the process. The theorem states that the total change of entropy during the Langevin evolution (2.1) is not smaller than the minimal quadratic cost function (involving the mobility matrix M) for the transport of the initial probability distribution to the final one, divided by the product of the time length t_f of the process by the temperature T of the environment. Since the cost function is strictly positive whenever the initial and final probability distributions are different, it follows that the shorter the time length of the process and the smaller the temperature, the bigger the minimal total entropy production. The latter may approach zero only for (adiabatically slow) processes taking a very long time. Inequality (6.4) then provides a quantitative refinement of the Second Law of Stochastic Thermodynamics (2.30) for processes whose time span does not exceed t_f.
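The theorem can be checked on a solvable case, added here as an illustration: for a harmonic trap the density stays Gaussian, so the total entropy production (2.29) of an arbitrary (generally suboptimal) protocol and the transport bound K_min/(t_f T) for the Gaussian end states it connects can both be computed from ordinary differential equations for the mean and the variance. The protocol and parameters below are assumptions.

```python
# Check of the refined Second Law (6.4) on a solvable case: for a harmonic
# trap U(t,x) = kappa(t) (x - a(t))^2 / 2 the density stays Gaussian, so both
# Delta S_tot of Eq. (2.29) and the bound K_min/(t_f T) follow from ODEs for
# the mean m(t) and variance var(t).  The protocol and parameters are
# illustrative assumptions; any admissible protocol must obey the bound for
# the Gaussian end states it actually connects.
import numpy as np

mu, kBT, T = 1.0, 1.0, 1.0            # mobility, k_B T, bath temperature
t_f, n_steps = 1.0, 20000
dt = t_f / n_steps

kappa = lambda t: 1.0 + t             # assumed stiffness protocol
a     = lambda t: 2.0 * t             # assumed trap-centre protocol

m, var = 0.0, kBT / kappa(0.0)        # start in the Gibbs state of U(0,.)
m_i, s_i = m, np.sqrt(var)
S_tot = 0.0

for step in range(n_steps):
    t = step * dt
    # mean local velocity (2.10) is affine for a Gaussian density:
    # v(t,x) = alpha*(x - m) + beta
    alpha = -mu * kappa(t) + mu * kBT / var
    beta  = -mu * kappa(t) * (m - a(t))
    S_tot += (alpha**2 * var + beta**2) / (mu * T) * dt    # Eq. (2.29)
    m   += beta * dt                                       # d<x>/dt
    var += 2.0 * alpha * var * dt                          # d var/dt

m_f, s_f = m, np.sqrt(var)
K_min = ((m_f - m_i)**2 + (s_f - s_i)**2) / mu             # optimal 1D Gaussian cost
print("Delta S_tot =", S_tot, "  bound K_min/(t_f T) =", K_min / (t_f * T))
```

The printed inequality holds for any choice of kappa(t) and a(t); it is saturated only by the optimal protocol constructed in the text.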
In order to determine the optimal protocol U(t,x) of Eq. (6.1), one has to find subsequently:

1. the minimizer x_i ↦ x_f(x_i) = M∇F(x_i) of the cost function of Eq. (4.10) under the constraint (4.6), such that K_min = K[x_f(·)];

2. the solution Ψ given by Eq. (5.13) of the Burgers equation (5.15) for potentials;

3. the solution ρ given by Eq. (4.3) of the advection equation (2.9) in the Burgers velocity field v = M∇Ψ.

The refined Second Law (6.4) may be rewritten as a refinement of the lower bound (2.31) for the heat release in processes with fixed initial and final densities, which takes the form

\[ \langle Q\rangle \;\geq\; \frac{1}{t_f}\,\mathcal{K}_{\rm min} - T\,\Delta S_{\rm sys} \tag{6.5} \]

and is saturated for the same optimal protocol as the inequality (6.4). If one admits initial and final jumps of the control U_t, as discussed in Sec. II, then the problem considered in [1, 30] of minimization of the average work (2.24) for fixed initial control U_i, initial density ρ_i, and final control U_f, but for arbitrary final density ρ_f, is very closely related to the problem of minimizing the heat release. Indeed, we may first minimize ⟨Q⟩ for fixed ρ_i and ρ_f and then minimize the right hand side of Eq. (2.24) over ρ_f. This gives the inequality

\[ \langle W\rangle \;\geq\; \min_{\rho_f}\Big[\frac{1}{t_f}\,\mathcal{K}_{\rm min} + \int(U_f - R_f)(x)\,\rho_f(x)\,dx\Big] - \int(U_i - R_i)(x)\,\rho_i(x)\,dx, \tag{6.6} \]

where, as before, K_min denotes the minimal value of the cost function (4.10) for the transport of ρ_i to ρ_f. The above bound is saturated for the protocol U(t,x) that minimizes the average heat release for the fixed final density ρ_f corresponding to the minimizer of the expression in the square brackets on the right hand side.
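Finally, an added numerical illustration of the refined bound (6.5) in the bit-erasure setting of Sec. III (the double-well potential, barrier height and t_f are assumptions, not the model analysed in Sec. VII): for fast erasure the minimal average heat exceeds the Landauer value (ln 2) k_B T by the transport term K_min/t_f.

```python
# Refined Landauer bound (6.5) for finite-time bit erasure, evaluated
# numerically: K_min is the quadratic transport cost (4.10) from the two-well
# to the one-well Gibbs density, obtained from the monotone map
# x_f = CDF_f^{-1}(CDF_i(x)).  The quartic potential, barrier height and t_f
# are illustrative assumptions, not the optical-trap model of Sec. VII.
import numpy as np

kB, T, mu, t_f = 1.0, 1.0, 1.0, 0.1
x = np.linspace(-3.0, 3.0, 12001)
dx = x[1] - x[0]

def gibbs(U):
    p = np.exp(-U / (kB * T))
    return p / (p.sum() * dx)

def entropy(p):                       # Eq. (2.25) on the grid
    q = p[p > 0]
    return -kB * np.sum(q * np.log(q)) * dx

U_i = 20.0 * (x**2 - 1.0) ** 2                                  # two wells
U_f = U_i + 1e3 * np.where(x < 0, x**2, 0.0)                    # one well
rho_i, rho_f = gibbs(U_i), gibbs(U_f)

# crude inverse-CDF construction of the monotone (optimal) 1D map
cdf_i = np.cumsum(rho_i) * dx
cdf_f = np.cumsum(rho_f) * dx
x_map = np.interp(cdf_i, cdf_f, x)

K_min  = np.sum((x_map - x) ** 2 * rho_i) * dx / mu             # Eq. (4.10)
dS_sys = entropy(rho_f) - entropy(rho_i)                        # about -kB ln 2

print("Landauer bound      -T dS_sys        =", -T * dS_sys)
print("refined bound  K_min/t_f - T dS_sys  =", K_min / t_f - T * dS_sys)
```

With the assumed values the transport term dominates the Landauer term, illustrating how the minimal heat cost grows as t_f shrinks.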
