Analyzing a stochastic process driven by Ornstein-Uhlenbeck noise

B. Lehle and J. Peinke
Institute of Physics, University of Oldenburg, D-26111 Oldenburg, Germany

arXiv:1702.00032v1 [physics.data-an] 31 Jan 2017

A scalar Langevin-type process X(t) that is driven by Ornstein-Uhlenbeck noise η(t) is non-Markovian. However, the joint dynamics of X and η is described by a Markov process in two dimensions. But even though there exists a variety of techniques for the analysis of Markov processes, it is still a challenge to estimate the process parameters solely based on a given time series of X. Such a partially observed 2D process could, e.g., be analyzed in a Bayesian framework using Markov chain Monte Carlo methods. Alternatively, an embedding strategy can be applied, where first the joint dynamics of X and its temporal derivative Ẋ is analyzed. Subsequently the results can be used to determine the process parameters of X and η. In this paper, we propose a more direct approach that is purely based on the moments of the increments of X, which can be estimated for different time increments τ from a given time series. From a stochastic Taylor expansion of X, analytic expressions for these moments can be derived, which can be used to estimate the process parameters by a regression strategy.

PACS numbers: 02.50.Ey, 02.50.Ga
Keywords: Markov processes, Stochastic processes

I. INTRODUCTION

A stochastically forced, first-order differential equation provides an appropriate description for the evolution of many physical, chemical or biological systems. For simplicity, we restrict ourselves to the evolution of the scalar quantity X(t) in the following. Additionally, we assume that the coefficient functions in its evolution equation do not explicitly depend on time. Thus, we consider an equation of the form

    ∂X/∂t = f(X) + g(X)η(t),    (1)

where η(t) denotes the stochastic force. Such an equation arises not only for the "obvious" case, where a deterministic system is driven by some external stochastic force, but also for complex dynamical systems consisting of a large number of subsystems. Here, the phenomenon of self-organisation can give rise to a dynamic of so-called order parameters that "enslave" the dynamic of the microscopic subsystems [1], leading to an equation of the above form. However, now the stochastic force η(t) can no longer be considered to be external but is an intrinsic part of the system dynamics.

So far, the statistical properties of η(t) have not been specified. In practice, this force quite often is treated as Gaussian white noise. Frequently, the central limit theorem can be invoked, which then justifies the assumption of a Gaussian probability density. The assumption of delta-correlated noise, however, is an idealization. Real-world systems usually have some finite correlation time θ. How strongly the correlations of η(t) affect the statistics of X(t) depends on the ratio of θ and the characteristic time-scale T of X(t) [2]. For θ ≪ T the force η(t) can be approximated by delta-correlated noise, leading to a Markovian description. The probably most famous example is given by Einstein's description of Brownian motion by a Wiener process [3]. Even though the true process is non-Markovian on a microscopic scale, the Markov property can be taken for given for increments larger than some limit time scale. This approach has also successfully been applied to other problems like the description of turbulent velocity increments by a process in scale [4, 5]. Here, the limit time scale is replaced by its spatial analogue, which is denoted as Markov-Einstein coherence length in [6].

Although there are many systems where θ is sufficiently small and can be neglected, in a variety of systems such an idealization leads to notable differences. For such systems, it is no longer justified to ignore the correlations of η(t). However, if we want to account for these correlations, we need a description of the evolution of η(t) that goes beyond a "purely random Gaussian process" [8]. The most natural and simple generalization of Gaussian white noise is provided by exponentially correlated Gaussian noise, as generated by a stationary Ornstein-Uhlenbeck process. Even if this is not the most general description of colored noise, the assumption that η(t) is a stationary process obeying

    ∂η/∂t = −(1/θ)η + (1/θ)ξ(t),    θ > 0,    (2)

makes Eq. (1) applicable to a much larger class of problems. Here, ξ(t) denotes Gaussian white noise with ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = δ(t − t′). The characteristic time-scale of η(t) is determined by the parameter θ. In the limit θ → 0, the case of Gaussian white noise is recovered.

Based on this assumption for η(t), Eqs. (1) and (2) describe a Markov process in two dimensions. However, the analysis of this process is hampered by the fact that usually only a 1D series of values of X(t) will be available in practice.
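Equations (1) and (2) are straightforward to integrate numerically. The following sketch (not part of the paper; the function names and step sizes are illustrative) advances the Ornstein-Uhlenbeck noise with its exact one-step update, so that its stationary variance is 1/(2θ) for any step size, and advances X with a simple Euler step:

```python
import numpy as np

def simulate(f, g, theta, dt, n_steps, seed=0):
    """Euler scheme for Eq. (1), dX = [f(X) + g(X) eta] dt, where eta is the
    stationary Ornstein-Uhlenbeck noise of Eq. (2).  The noise itself is
    advanced with its exact one-step (AR(1)) update."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / theta)                      # one-step memory factor
    s = np.sqrt((1.0 - a * a) / (2.0 * theta))   # innovation std. deviation
    eta = rng.normal(0.0, np.sqrt(0.5 / theta))  # eta(0) from the stationary PDF
    x = np.empty(n_steps)
    x[0] = 0.0
    for n in range(1, n_steps):
        x[n] = x[n - 1] + (f(x[n - 1]) + g(x[n - 1]) * eta) * dt
        eta = a * eta + s * rng.normal()
    return x

# linear drift, additive noise: X relaxes on a time-scale of unity, and with
# the unit noise strength its variance should be close to 1/2 for small theta
xs = simulate(f=lambda x: -x, g=lambda x: 1.0, theta=0.01, dt=0.005,
              n_steps=200_000)
```

The exact update of the noise avoids any step-size restriction from the fast time-scale θ; only the Euler step for X limits the accuracy.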
In the mathematical community this problem is known as "partially observed diffusions" [9, 10]. There are approaches to deal with such problems. In a Bayesian framework, e.g., one could use Markov chain Monte Carlo methods for an estimation of f, g and θ (see, e.g., [11]). Alternatively, an embedding approach could be used, where first a series of velocities Ẋ(t) is calculated from the values of X(t) and subsequently the 2D system [X(t), Ẋ(t)] is analyzed. The drift and diffusion functions of this 2D system can then be used to determine f, g and θ. However, some care has to be taken with this latter approach, because the velocities need to be estimated numerically. This leads to spurious correlations that may affect the results [12].

Here, we propose a more direct approach that is purely based on the moments of the conditional increments

    ΔX(τ, t_0)|_{x_0} := X(t_0 + τ)|_{x_0} − x_0,    (3)

where (..)|_{x_0} denotes conditioning on X(t_0) = x_0. In the following, we restrict ourselves to a statistically stationary process X(t), i.e., the moments of ΔX do not depend on t_0 and can be estimated from a single time series of X(t) for different values of τ and x_0. Our strategy for parameter estimation is based on a stochastic Taylor expansion, which allows us to express the increments ΔX by an infinite sum that involves multiple integrals with respect to a noise-generating process. Since the Ornstein-Uhlenbeck process can be solved analytically, explicit expressions for the correlations of these integrals can be given. Assuming τ and θ to be small as compared to the characteristic time-scale of X(t), the moments of ΔX can be approximated by a finite base of functions r_i(τ, θ) that are weighted by coefficients λ_i(x_0, θ). These functions are used to fit the moments of ΔX and thus allow for an estimation of the process parameters. In a first step, the parameter θ is estimated by a non-linear minimization procedure. Subsequently, by linear fitting, the coefficients λ_i are estimated, which are then used to determine f and g.

It may be noted that in the limit θ → 0 the functions r_i reduce to powers of τ. The above approach then recovers the so-called direct estimation method, which is applicable to processes driven by Gaussian white noise. Due to its simplicity of use, this method has found wide-spread use. For an overview see, e.g., [13].

The standard rules for integro-differential equations apply to the calculations in this paper for θ > 0, i.e., for correlated driving noise. Therefore, we adopt the Stratonovich definition of stochastic integrals in the limit θ = 0. By this choice, the results will remain valid in unchanged form also in the white-noise limit [14].

This paper is structured as follows. In Sec. II, we compile some properties of the stochastic force η(t). Subsequently, the stochastic Taylor expansion of X(t) is given in Sec. III, which is used in Sec. IV to provide a series representation for the moments of the conditional increments of X. The functional form of the series terms is discussed in Sec. V. Subsequently, a series truncation is performed in Sec. VI, which then is used in Sec. VII to formulate a strategy for parameter estimation. To verify the analytical results, a numerical example will finally be given in Sec. VIII.

II. STOCHASTIC FORCE

We assume that η(t) is a stationary Ornstein-Uhlenbeck process obeying Eq. (2). Ornstein-Uhlenbeck processes are well understood and can be solved analytically. The realization of η(t ≥ t_0) can explicitly be expressed in terms of an initial value η(t_0) and the realization of ξ(t ≥ t_0). Since we focus in the following on the stationary process, we are free to choose t_0 ≡ 0, which simplifies notation. One then finds

    η(t) = η(0) e^{−t/θ} + (1/θ) ∫_0^t e^{(s−t)/θ} ξ(s) ds.    (4)

This equation holds for arbitrary values of η(0). It describes a realization of the process η(t), i.e., a trajectory in time, in terms of η(0) and a trajectory of ξ(t). Expectation values of functionals of η(t) thus are obtained by averaging over the realizations of η(0) and ξ(t). As these quantities are statistically independent, averaging may be performed in two steps using

    ⟨..⟩_{ξ,η(0)} = ⟨⟨..⟩_ξ⟩_{η(0)} = ⟨⟨..⟩_{η(0)}⟩_ξ.    (5)

In the next section, η(t) will be expressed as derivative of a noise-generating process V(t) with V(0) ≡ 0. This implies

    V(t) := ∫_0^t η(s) ds.    (6)

This process plays a comparable role for η(t) as the Wiener process does for Gaussian white noise. Using Eq. (4), we also may describe a trajectory of V(t) directly in terms of η(0) and a trajectory of ξ(t),

    V(t) = η(0) θ (1 − e^{−t/θ}) + ∫_0^t [1 − e^{(s−t)/θ}] ξ(s) ds.    (7)

It may easily be checked that V(t) approaches the Wiener process W(t) := ∫_0^t ξ(s) ds in the limit θ → 0.

Finally, we consider some properties of the stationary process. With Eq. (4) the stationary probability density function (PDF) of η(t) is found to be a Gaussian with variance 1/(2θ) and vanishing mean. Furthermore, the autocorrelation function is found to be

    ⟨η(t)η(t′)⟩ = (1/(2θ)) e^{−|t−t′|/θ}.    (8)

This means that η(t) is normalized in the following sense: its strength, i.e., the integral over its autocorrelation function, is constant and equals unity. Therefore, in the limit θ → 0 the autocorrelation approaches δ(t − t′) and η(t) approaches Gaussian white noise ξ(t).
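The stationary statistics just stated can be checked directly. A small sketch (not part of the paper; parameter values are illustrative) generates the stationary noise via its exact discrete-time update and compares the sample variance and the autocorrelation at a lag of one correlation time with Eq. (8):

```python
import numpy as np

# Exact discrete-time realization of the stationary OU noise of Eqs. (2), (4):
# eta_{n+1} = e^{-dt/theta} eta_n + Gaussian innovation.
theta, dt, n = 0.05, 0.01, 200_000
rng = np.random.default_rng(1)
a = np.exp(-dt / theta)
s = np.sqrt((1.0 - a * a) / (2.0 * theta))
eta = np.empty(n)
eta[0] = rng.normal(0.0, np.sqrt(0.5 / theta))   # stationary initial value
for i in range(1, n):
    eta[i] = a * eta[i - 1] + s * rng.normal()

var = float(eta.var())                        # Eq. (8): expect 1/(2 theta) = 10
lag = round(theta / dt)                       # one correlation time
acf = float(np.mean(eta[:-lag] * eta[lag:]))  # expect 10 * e^{-1} = 3.68
```

With these values, the integral over the autocorrelation function is θ · (1/(2θ)) · 2 = 1, i.e., the noise indeed has unit strength.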
Such expansions are described in great detail, e.g., in [15], and we will closely follow these lines.

The starting point is Eq. (1) in the form dX = f dt + g η dt. Expressing η(t) as derivative of the noise-generating process V(t) defined by Eq. (6), this may be written as

    dX(t) = f[X(t)] dt + g[X(t)] dV(t).    (9)

Next, the infinitesimal increment of an arbitrary, smooth function h(X) is considered. Since X(t) is continuously differentiable, the standard chain rule of differentiation applies,

    dh[X(t)] = (∂h[X(t)]/∂X(t)) dX(t).    (10)

Expressing dX by Eq. (9) and introducing the operators

    L_0 := f[X(t)] ∂/∂X(t),    (11a)
    L_1 := g[X(t)] ∂/∂X(t),    (11b)

this may be written as

    dh{t} = [L_0 h]{t} dt + [L_1 h]{t} dV(t).    (12)

Here, we use the notation {t} to indicate that all arguments of a function or expression are to be evaluated at time t. Now the actual expansion of h can be started. In integral form, Eq. (12) reads

    h{t} = h{0} + ∫_0^t [L_0 h]{s} ds + ∫_0^t [L_1 h]{s} dV(s).    (13)

Since we consider f, g and h to be smooth, L_0 h and L_1 h are also smooth functions of X. Consequently, Eq. (13) can be applied,

    [L_0 h]{s} = [L_0 h]{0} + ∫_0^s [L_0 L_0 h]{s′} ds′ + ∫_0^s [L_1 L_0 h]{s′} dV(s′),    (14)

    [L_1 h]{s} = [L_1 h]{0} + ∫_0^s [L_0 L_1 h]{s′} ds′ + ∫_0^s [L_1 L_1 h]{s′} dV(s′).    (15)

Inserting these results into Eq. (13) then yields

    h{t} = h{0} + ∫_0^t [L_0 h]{0} ds + ∫_0^t ∫_0^s [L_0 L_0 h]{s′} ds′ ds + ∫_0^t ∫_0^s [L_1 L_0 h]{s′} dV(s′) ds
         + ∫_0^t [L_1 h]{0} dV(s) + ∫_0^t ∫_0^s [L_0 L_1 h]{s′} ds′ dV(s) + ∫_0^t ∫_0^s [L_1 L_1 h]{s′} dV(s′) dV(s).    (16)

The single integrals from Eq. (13), which had time-dependent integrands [L_i h]{s}, are now replaced by single integrals with constant integrands [L_i h]{0} plus additional double integrals with time-dependent integrands [L_i L_j h]{s′}. Expressing these functions by Eq. (13) will put the game on the next level, leading to constant double integrals plus variable triple integrals, and so on. In the end, one is left with an infinite sum of multiple integrals, which only depend on t and the realization of V(t), that are multiplied by coefficient functions that only depend on values and derivatives of f, g and h at X(0),

    h{t} = h{0} + [L_0 h]{0} ∫_0^t ds + [L_1 h]{0} ∫_0^t dV(s) + [L_0 L_0 h]{0} ∫_0^t ∫_0^s ds′ ds + [L_1 L_0 h]{0} ∫_0^t ∫_0^s dV(s′) ds + ...    (17)

Using a multi-index α, defined as

    α := (α_1, ..., α_n),  n ∈ ℕ,  α_i ∈ {0, 1},    (18)

the expansion of h[X(t)] can be compactly written as

    h[X(t)] = h[X(0)] + Σ_α c_α[X(0)] J_α(t),    (19)

where the coefficient functions c_α are given by

    c_(α_1,...,α_n)[X(0)] := [L_{α_1} ... L_{α_n} h]{0},    (20)

and the integrals J_α by

    J_(α_1,...,α_n)(t) := ∫_{s_n=0}^{t} ∫_{s_{n−1}=0}^{s_n} ··· ∫_{s_1=0}^{s_2} dZ_{α_1}(s_1) ··· dZ_{α_n}(s_n),    (21)

with

    dZ_j(s) := ds for j = 0,  dV(s) for j = 1.    (22)

In general, these integrals are functionals of the realization of V(t), respectively η(t), and thus stochastic quantities. Only for α_1 = ... = α_n = 0 the integrals become purely deterministic and evaluate to

    J_(0,...,0)(t) = t^n / n!.    (23)

So far, the expansion of some arbitrary function h(X) has been considered. Being interested in the expansion of X(t) itself, we choose h(X) ≡ X in the following. Additionally, we fix the value X(0) to x_0, which then leaves us with

    X(t)|_{x_0} = x_0 + Σ_α c_α(x_0) J_α(t)|_{x_0},    (24)

where the coefficient functions are now defined as

    c_(α_1,...,α_n)(x_0) := [L_{α_1} ... L_{α_n} X]{0}|_{x_0}.    (25)

Omitting arguments and using a prime to denote derivatives with respect to X, the first few of these functions (to be evaluated at x_0) read

    c_(0) = f,  c_(0,0) = f f′,  c_(1,0) = g f′, ...
    c_(1) = g,  c_(0,1) = f g′,  c_(1,1) = g g′, ... .    (26)

The conditioning of J_α in Eq. (24) deserves some comment. After all, a realization of J_α(t) does not depend on X(0) but is a pure functional of η(t), which itself is a functional of η(0) and the realization of ξ(t). However, the PDF of η(0) will, in general, depend on X(0) [see, e.g., Appendix D for the expectation value of η(0)|_{x_0}]. As a consequence, ensemble averages of any conditioned functional F[η(t)]|_{x_0} need to be calculated by averaging over the realizations of ξ(t) and over the conditional realizations η(0)|_{x_0}. Averaging may still be performed in two steps, e.g., by

    ⟨F[η(t)]|_{x_0}⟩ = ⟨⟨F[η(t)]⟩_ξ⟩_{η(0)|x_0}.    (27)

IV. CONDITIONAL MOMENTS OF ΔX

We now turn to mean and variance of the conditional process increments of X(t),

    M^(1)(τ, x) := ⟨ΔX(τ)|_x⟩,    (28a)
    M^(2)(τ, x) := ⟨[ΔX(τ)|_x − M^(1)(τ, x)]^2⟩,    (28b)

where the increments are denoted by

    ΔX(τ)|_x := X(t + τ)|_{X(t)=x} − x.    (29)

Since we are conditioning on the value of X at some arbitrary time t, we denote this value by x instead of by x_0. Additionally, we suppress the function argument t, because the statistical properties of the increments ΔX do not depend on time for a stationary process. Stationarity also implies that the moments M^(k) can be estimated from a given time series of X(t) by replacing the above ensemble averages by time averages (tacitly assuming ergodicity). Using the results from the previous section, we already have an analytical description for the increments,

    ΔX(τ)|_x = Σ_α c_α(x) J_α(τ)|_x.    (30)

Hence, the conditional moments are given by

    M^(1)(τ, x) = Σ_α c_α(x) φ_α(τ, x),    (31a)
    M^(2)(τ, x) = Σ_{α,β} c_α(x) c_β(x) φ_{α,β}(τ, x),    (31b)

with (omitting arguments)

    φ_α := ⟨J_α|_x⟩,    (32a)
    φ_{α,β} := ⟨J_α|_x J_β|_x⟩ − ⟨J_α|_x⟩⟨J_β|_x⟩.    (32b)

As a result, we now have analytic descriptions of the moments — but unfortunately in terms of infinite series. In order to obtain approximate descriptions with a finite number of terms, the functional form of φ_α(τ, x) needs to be investigated in the following. This also provides us with the functional form of φ_{α,β}(τ, x), because a product J_α J_β can be expressed by a sum of integrals J_γ (see Appendix A),

    J_α J_β = Σ_{γ∈M(α,β)} J_γ,    (33)

which implies

    φ_{α,β} = Σ_{γ∈M(α,β)} φ_γ − φ_α φ_β.    (34)

V. FUNCTIONAL FORM OF φ_α

The starting point for the calculation of φ_α is the definition of the integral J_α, as provided by Eq. (21). Let us consider an index vector α of length n and denote the number of its non-zero entries by m. Using dV = η dt, Eq. (21) may then be written as n-fold integral with respect to time over an m-fold product of η,

    J_α(τ) = ∫_{Ω(τ)} [ Π_{α_j=1} η(s_j) ] ds_1 ··· ds_n.    (35)

Here, the shortcut Ω(τ) has been introduced to denote the integration domain (a simplex in ℝ^n with 0 ≤ s_i ≤ s_{i+1} and s_n ≤ τ). The ensemble average of J_α|_x then reads

    φ_α(τ, x) = ∫_{Ω(τ)} C_η(s_{j_1}, ..., s_{j_m}, x) ds_1 ··· ds_n,    (36)

where the values j_1, ..., j_m denote the positions of the non-zero entries in α and C_η the m-point correlation function of η(t)|_x. For arbitrary times t_1, ..., t_m this function is defined as

    C_η(t_1, ..., t_m, x) := ⟨⟨η(t_1) ··· η(t_m)⟩_ξ⟩_{η(0)|x}.    (37)

According to Eq. (4), η may be split up into one part depending only on η(0) and another one depending only on ξ,

    η(t) = Y e^{−t/θ}/√θ + u(t),    (38)

with

    Y := √θ η(0),    (39)
    u(t) := (1/θ) ∫_0^t e^{(s−t)/θ} ξ(s) ds.    (40)

Consequently, C_η can be expressed in terms of conditional moments of Y and correlation functions of u(t), denoted as

    C(t_1, ..., t_k) := ⟨u(t_1) ··· u(t_k)⟩_ξ.    (41)

For example, we find

    C_η(t_1, t_2, x) = ⟨Y^2|x⟩ e^{−(t_1+t_2)/θ}/θ + ⟨Y|x⟩ (e^{−t_1/θ}/√θ) C(t_2) + ⟨Y|x⟩ (e^{−t_2/θ}/√θ) C(t_1) + C(t_1, t_2).    (42)
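The operator products of Eqs. (20) and (25) can be generated mechanically. The following sketch (illustrative only; the helper names D, L and c are not from the paper) builds the coefficient functions by nested numerical differentiation and reproduces the first rows of Eq. (26) for a simple test pair f, g:

```python
def D(func, h=1e-5):
    """Central-difference derivative of func with respect to X."""
    return lambda x: (func(x + h) - func(x - h)) / (2.0 * h)

def L(coeff, func):
    """Operator L_i of Eqs. (11a)-(11b): coeff(X) * d(func)/dX."""
    d = D(func)
    return lambda x: coeff(x) * d(x)

def c(alpha, f, g, x0):
    """Coefficient c_alpha(x0) = [L_{a1} ... L_{an} X]{0}|_{x0}, Eq. (25);
    alpha is a tuple with entries in {0, 1} selecting L_0 (drift) or L_1."""
    func = lambda x: x
    for a in reversed(alpha):        # the rightmost operator acts first
        func = L(f if a == 0 else g, func)
    return func(x0)

# check the first rows of Eq. (26) at x0 = 0.3 for f = -x, g = 1 + x^2
f = lambda x: -x
g = lambda x: 1.0 + x * x
x0 = 0.3
c0  = c((0,), f, g, x0)     # f      = -0.3
c1  = c((1,), f, g, x0)     # g      =  1.09
c10 = c((1, 0), f, g, x0)   # g f'   = -1.09
c11 = c((1, 1), f, g, x0)   # g g'   =  0.654
```

The nesting order matters: c_(1,0) = L_1 L_0 X = g f′ differs from c_(0,1) = L_0 L_1 X = f g′, in line with Eq. (26).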
Explicit expressions for C can be found by virtue of Eq. (40). It turns out that the correlation functions of u(t) have the same structure as those of Gaussian white noise ξ(t) (see Appendix B). The k-point correlation of u vanishes for odd values of k, while for even values it can be expressed by a sum of products of the two-point correlation [Eq. (B6)].

Since C(t_1, ..., t_k) vanishes for odd k, the expressions for C_η may contain either only even or only odd moments of Y|_x. To provide an example:

    C_η(t_1, t_2, t_3, x) = ⟨Y^3|x⟩ e^{−(t_1+t_2+t_3)/θ}/θ^{3/2} + ⟨Y|x⟩ (e^{−t_1/θ}/√θ) C(t_2, t_3)
                          + ⟨Y|x⟩ (e^{−t_2/θ}/√θ) C(t_1, t_3) + ⟨Y|x⟩ (e^{−t_3/θ}/√θ) C(t_1, t_2).    (43)

With the above results, the integral on the right-hand side of Eq. (36) can be evaluated, which leads to

    φ_α(τ, x) = Σ_{2k≤m} ⟨Y^{m−2k}|x⟩ a_k(τ).    (44)

Here, we introduced the shortcuts a_k(τ) to denote the functions that stem from the integrations with respect to time. Actually, these functions depend also on the index vector α, but we suppressed this argument for notational simplicity. For an explicit example see Appendix C.

Later on, it proves to be useful to re-arrange the right-hand side of this equation by expressing the powers of Y in terms of Hermite polynomials in Y. Reordering terms then yields

    φ_α(τ, x) = Σ_{2k≤m} ⟨H_{m−2k}(Y)|x⟩ b_k(τ),    (45)

where the k-th Hermite polynomial is defined as

    H_k(y) := (−1)^k e^{y^2} (∂/∂y)^k e^{−y^2},    (46)

and the functions b_k are linear combinations of the functions a_k. For example, a right-hand side of the form ⟨Y^2|x⟩ a_0 + a_1 becomes ⟨H_2(Y)|x⟩ b_0 + b_1 with b_0 = a_0/4 and b_1 = a_1 + a_0/2.

By mathematical induction, it may be shown that the functions a_k(τ), and thus also the functions b_k(τ), are linear combinations of the functions

    r̃_{0b}(τ) := θ^{ℓ(α)} [1 − e^{−bτ/θ}],    (47a)
    r̃_{a0}(τ) := θ^{ℓ(α)} (τ/θ)^a,    (47b)
    r̃_{ab}(τ) := θ^{ℓ(α)} (τ/θ)^a e^{−bτ/θ},    (47c)

with

    a, b ∈ ℕ,  a ≤ ℓ(α),  b ≤ m,    (48)

and

    ℓ(α) := n − m/2 = Σ_{i=1}^n (1 − α_i/2).    (49)

We thus may express b_k in the form Σ λ̃_{ij} r̃_{ij}. Note that the coefficients λ̃_{ij} do not depend on θ, because this dependency is completely accounted for by the functions r̃_{ij}. This property will be useful in the next section, when we consider the magnitude of individual terms. However, since the functions r̃_{ij} depend on ℓ(α), this property cannot be sustained for the following base of functions r_{ij}, which is used to describe the τ-dependency of b_k, and thus of φ_α, for arbitrary vectors α,

    B := {r_{0b}(τ) | b ∈ ℕ} ∪ {r_{a0}(τ) | a ∈ ℕ} ∪ {r_{ab}(τ) | a, b ∈ ℕ},    (50)

with

    r_{0b}(τ) := 1 − e^{−bτ/θ},    (51a)
    r_{a0}(τ) := τ^a / a!,    (51b)
    r_{ab}(τ) := (τ/θ)^a e^{−bτ/θ}.    (51c)

Note that the product of any two functions of this base lies in the linear span of B. According to Eq. (34), therefore, B not only provides a base for the functions φ_α but also for the functions φ_{α,β}.

VI. SERIES TRUNCATION

With the results from the previous section, the series representation of the moments M^(k), Eq. (31), can now be expressed in terms of functions r_i ∈ B,

    M^(k)(τ, x) = Σ_{r_i∈B} λ_i^(k)(x) r_i(τ),    (52)

where each coefficient function λ_i consists of an infinite sum of terms. These terms, in general, are formed by powers of θ, the functions c_α from the Taylor expansion [Eq. (25)], and the expectation values ⟨H_n(Y)|x⟩.

In order to approximate M^(k) by a finite number of functions, it becomes necessary to make some assumptions on the magnitude of the individual terms. First, we assume that X(t) has been normalized to ensure a characteristic time-scale of unity and coefficient functions c_α of order O(1). Second, we assume

    τ = O(ε),  θ = O(ε^2),    (53)

where ε has been introduced to denote a quantity that is small as compared to unity.
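The closure of the base B under products can be checked numerically. A small sketch (θ and the τ-grid are assumed, illustrative values): since τ^2 e^{−τ/θ} = θ^2 r_{2,1}(τ), the product r_{2,0} r_{0,1} equals r_{2,0} − (θ^2/2) r_{2,1} identically and thus stays in the linear span of B:

```python
import math
import numpy as np

theta = 0.05
tau = np.linspace(1e-4, 0.3, 200)

def r0b(b):            # Eq. (51a)
    return 1.0 - np.exp(-b * tau / theta)

def ra0(a):            # Eq. (51b)
    return tau**a / math.factorial(a)

def rab(a, b):         # Eq. (51c)
    return (tau / theta)**a * np.exp(-b * tau / theta)

# r_{2,0} * r_{0,1} = r_{2,0} - (theta^2/2) r_{2,1}, an exact identity
lhs = ra0(2) * r0b(1)
rhs = ra0(2) - 0.5 * theta**2 * rab(2, 1)
err = float(np.max(np.abs(lhs - rhs)))   # machine precision
```

The same bookkeeping works for any pair of base functions: exponents add, and the factors θ^a needed to switch between τ^a and (τ/θ)^a are absorbed into the expansion coefficients.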
We may identify ε as the largest increment τ for which we assume that our truncated description of M^(k) holds. The terms ⟨H_n(Y)|x⟩, finally, are (for the time being) treated as O(1) terms, because Y := θ^{1/2} η is a Gaussian random variable with a constant variance of 1/2. It remains to ask for the magnitude of r_i(τ). According to Eq. (51), this is a term of order O(ε^a) for r_i = r_{a0}, whereas for r_i = r_{0b} or r_i = r_{ab} it may be treated as a term of order O(1), because in this case the value range of r_i is finite and depends only on a and b.

We now focus on a description of M^(1), in which only terms up to order O(ε^3) are considered. According to Eq. (47) and the above assumptions, a function φ_α may only give rise to terms of order O(θ^j τ^k) with j + k = ℓ(α). Therefore, the lowest-order contributions in terms of ε are of order O(ε^{ℓ(α)}). We thus may write Eq. (31a) in the form

    M^(1)(τ, x) = Σ_{ℓ(α)≤3} c_α(x) φ_α(τ, x) + O(ε^4).    (54)

Evaluating these functions φ_α and re-sorting terms then provides an approximation of M^(1) in terms of seven base functions r_i. Of course, the resulting coefficients of these functions are only truncated versions of the coefficients λ_i in Eq. (52). The coefficient of r_{1,0} ≡ τ, e.g., is only accurate up to order O(θ) — but this will be sufficient for our purposes.
given time series of X, the moments M(k) can be estimated Sofar,theexpectationvalues(cid:104)Hn(Y)|x(cid:105)havebeentreated foranumberofN time-incrementsτν with astermsoforderO(1).Actually,however,thisisonlyalower τ ≤ τ , ν = 1,...,N. (61) limitfortheirorderofmagnitude. Thisbecomesobviousby ν max looking at the Fokker-Planck equation of the stationary 2D- These estimates of M(k) can be fitted by means of Eq. (58) process [X(t),η(t)], where it turns out that (cid:104)H1(Y)|x(cid:105) is of in a least-square sense. Estimates of θ and λ(k) thus may be orderO(ε)(seeAppendixD), i obtainedbyminimizingtheresiduals (cid:104)H (Y)|x(cid:105) = 2θ1/2(cid:104)η|x(cid:105) = −θ1/22f(x). (55) (cid:88)N (cid:104) 1 g(x) R(k)(x,λ(k),θ) := M(k)(x,τ ) ν ν=1 For n > 1, such explicit results are not available. Never- 3 theless, the magnitude of terms can be shown to obey (see −(cid:88)λ(k)r (τ ,θ)(cid:105)2, (62) i i ν AppendixE) i=1 (cid:104)H (Y)|x(cid:105) = O(θn/2) = O(εn). (56) wherethevaluesλ(k)havebeencombinedintothevectorλ(k) n i forsyntacticalconvenience. Additionally, thedependencyof With these findings, a number of terms become sufficiently the functions r on θ has been made explicit by the syntax. i small to be neglected, which leads to an approximation of Due to this dependency, a non-linear approach is needed for M(1)intermsofthefunctionbase{r ,r ,r ,r }. Ad- 0,1 1,0 2,0 3,0 theminimizationoftheaboveresiduals. ditionally, it turns out that terms (cid:104)Hn(Y)|x(cid:105) with n > 1 are It may be noted that minimizing R(k) includes the esti- nolongerpresentinthecoefficientsofthesefunctions. mation of θ for each value of x and k — despite the fact As a last step, we switch to a modified base that θ is a constant. This is neither the most efficient nor {r0,1,r1,r2,r3}, where the new base functions ri are the most accurate way for parameter estimation. 
Instead, we will estimate θ only once, based on the autocovariance A(τ) := ⟨X(τ)X(0)⟩, or rather on its increments

    ΔA(τ_ν) := A(τ_ν) − A(0) = ⟨[X(τ_ν) − X(0)] X(0)⟩.    (63)

Estimates of ΔA are much more accurate than estimates of M^(k), because the latter are based on far less data, due to the conditioning on x. An approximation of ΔA that is accurate up to terms of order O(ε^3) is given by (see Appendix G)

    ΔA(τ) ≈ Σ_{i=1}^3 λ_i r_i(τ, θ).    (64)

An estimate of θ may thus be obtained by minimizing

    R(λ, θ) := Σ_{ν=1}^N [ ΔA(τ_ν) − Σ_{i=1}^3 λ_i r_i(τ_ν, θ) ]^2.    (65)

For fixed θ, the optimal values λ_i*(θ) can be obtained by linear regression. This means: we can explicitly calculate

    λ*(θ) = argmin_λ R(λ, θ)    (66)

as well as the corresponding residual value R[λ*(θ), θ]. The optimal value θ*, which corresponds to the global minimum R[λ*(θ*), θ*], is thus formally given by

    θ* = argmin_θ R[λ*(θ), θ].    (67)

In practice, θ* may be found numerically, e.g., by using some recursive strategy to search for the minimum of R[λ*(θ), θ] within the interval [0, θ_max]. We safely may choose θ_max = τ_max, because, according to our assumptions on the magnitude of τ and θ, Eq. (53), we anyway need to rely on θ < τ_max. Otherwise our series truncation, Eq. (58), would no longer be valid.

Once θ has been estimated, estimates of λ_i^(k) become accessible by a linear regression strategy, which allows us to explicitly calculate

    λ*^(k)(x, θ*) = argmin_{λ^(k)} R^(k)(x, λ^(k), θ*).    (68)

As a last step, it remains to determine f and g using the estimates of θ and λ_1^(k). This is achieved by writing Eq. (60) in the form (omitting arguments and dropping asterisks)

    f = λ_1^(1) − (1/2) g g′ − (1/2) θ {f′ g g′ − f g′ g′},    (69a)
    g = [ λ_1^(2) − θ {f′ g g − f g g′} ]^{1/2}.    (69b)

Because θ is assumed to be small, f and g can be determined by a fixed-point iteration. For given values f^(n) and g^(n), the right-hand sides of the above equations provide the definitions for f^(n+1) and g^(n+1). However, as this requires the evaluation of spatial derivatives of f^(n) and g^(n), one needs to simultaneously iterate the values at different locations x_i. The required derivatives can then be estimated by some numerical differencing scheme. Appropriate starting values are provided by f^(0)(x_i) = λ_1^(1)(x_i) and g^(0)(x_i) = [λ_1^(2)(x_i)]^{1/2}.

VIII. NUMERICAL EXAMPLE

In the following, we investigate a numerical test case, for which we use a non-symmetric, heavy-tailed process with multiplicative noise,

    Ẋ = f(X) + g(X) η(t),    (70)
    η̇ = −(1/θ) η + (1/θ) ξ(t),    (71)

with

    f(x) = −x + (1/2) x^2 − (1/4) x^3,    (72)
    g(x) = 1 + (1/4) x^2.    (73)

We use this system of equations to generate discrete time series of X(t), consisting of 10^7 points, using a sampling time step dt = 0.005. Integration is performed using the Euler scheme with an internal time step δt = 0.02 × min(θ, dt). The global time-scale of X(t) can be estimated from its autocorrelation function and approximately equals unity for small values of θ. In Fig. 1 excerpts of the generated time series are shown for different values of θ, and in Fig. 2 the corresponding probability densities p(X) and the increments ΔA of the autocorrelation functions are provided. While stronger correlations of the driving noise lead to notably smoother time series, almost no effect on the probability density can be seen.

FIG. 1: Excerpts of the generated time series. For increasing values of θ, the curves become smoother.

FIG. 2: PDFs of the experimental data (a) and increments ΔA(τ) of their autocorrelation (b).

For the analysis of a given time series, the regression functions r_i(τ, θ) play a central role. Therefore, we give them explicitly here again,

    r_1(τ, θ) = τ − θ(1 − e^{−τ/θ}),    (74a)
    r_2(τ, θ) = τ^2/2 − θ r_1(τ, θ),    (74b)
    r_3(τ, θ) = τ^3/6 − θ r_2(τ, θ).    (74c)

The actual analysis can be summarized as:

1) Estimate the correlation time θ by non-linear fitting of the increments of the autocovariance of X with the functions r_i(τ, θ).

2) Use the estimated value θ* to estimate the values λ_1^(k)(x, θ*) by linear fitting of the moments M^(k)(τ, x) with the functions r_i(τ, θ*).

3) Use the estimated values λ_1*^(k)(x, θ*) to calculate estimates for f and g using Eq. (69).

These steps will now be detailed. We first consider the estimation of the correlation time θ. As mentioned in Sec. VII, an estimate θ* may be found by minimizing the residual R[λ*(θ), θ], where the vector λ*(θ) is obtained from a linear fit of ΔA(τ) using the functions r_i(τ, θ). To find the minimum of R in an interval [θ_min, θ_max], we use a recursive strategy. First, the residual is evaluated for a number of equidistant values θ_i covering the whole interval. Next, the interval is narrowed and repositioned such that it only covers the vicinity of the value θ_i*, for which the residual was found to be smallest. These steps can now be repeated until the desired numerical accuracy is reached.
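Step 3), the fixed-point iteration of Eq. (69), can be sketched as a closed-loop consistency test: λ_1^(1) and λ_1^(2) are computed from the known f, g of Eqs. (72)-(73) via Eq. (60), and the iteration is then required to recover f and g. Derivatives are estimated here by differentiating a global polynomial fit, one possible choice of the "numerical differencing scheme" mentioned in Sec. VII; the grid, polynomial degree and iteration count are illustrative assumptions:

```python
import numpy as np

theta = 0.05
x = np.linspace(-1.0, 1.0, 201)
f_true = -x + 0.5 * x**2 - 0.25 * x**3      # Eq. (72)
g_true = 1.0 + 0.25 * x**2                  # Eq. (73)

def d(y):
    # derivative via a global degree-4 polynomial fit (exact for the
    # polynomials above, and it damps grid-scale noise)
    return np.polyval(np.polyder(np.polyfit(x, y, 4)), x)

# lambda_1^(1), lambda_1^(2) from Eq. (60), using the same derivative operator
lam1 = f_true + 0.5 * g_true * d(g_true) \
       + 0.5 * theta * (d(f_true) * g_true * d(g_true) - f_true * d(g_true)**2)
lam2 = g_true**2 + theta * (d(f_true) * g_true**2 - f_true * g_true * d(g_true))

# fixed-point iteration of Eq. (69), started from f^(0), g^(0) as in the text
f_est, g_est = lam1.copy(), np.sqrt(lam2)
for _ in range(30):
    df, dg = d(f_est), d(g_est)
    f_est, g_est = (lam1 - 0.5 * g_est * dg
                    - 0.5 * theta * (df * g_est * dg - f_est * dg**2),
                    np.sqrt(lam2 - theta * (df * g_est**2 - f_est * g_est * dg)))
```

Since the corrections on the right-hand sides are O(θ) (for f, also the O(1) term g g′/2 is re-evaluated each sweep), the iteration contracts quickly for small θ.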
In our example, we first use the values ΔA(ν dt) with 1 ≤ ν ≤ 60 for the fits. This corresponds to a maximum time increment τ_max = 0.3. Since we use a truncated series representation for the description of ΔA, the value of τ_max affects the systematic errors of the fits and should be chosen as small as possible. Therefore, once we have calculated θ* with τ_max = 0.3, we restrict the maximum increment to τ*_max = √θ*, which is consistent with our assumptions on the magnitude of terms, and repeat the calculation of θ*.

Estimates for θ that are obtained by following this strategy are shown in Fig. 3 for the range 0.001 ≤ θ ≤ 0.1. Even if θ seems to be slightly underestimated for θ > 0.01, the overall accuracy is quite good.

FIG. 3: Ratio θ*/θ of estimated and true correlation time.

With an estimate θ* at hand, the coefficients λ_1^(k) are obtained from linear fits of the moments M^(k) using the functions r_i(τ, θ*). For the estimation of these moments, we use a binning approach, where the range −2 ≤ x ≤ 3 is divided into 25 bins. For each bin we estimate the values M^(k)(ν dt, x) with 1 ≤ ν ≤ 60 from the data. Here, x is taken to be the position of the bin-center. In Fig. 4, estimated values and resulting fits of M^(k) at x = −0.9 are shown for different values of θ. The values obtained from the data can be fitted excellently with the functions r_i. The mean error is only about 2.5 × 10^{-4}.

FIG. 4: Estimates of M^(1) (a) and M^(2) (b), obtained for a bin centered at x = −0.9. Estimated values are shown as symbols and the corresponding fits as solid lines. Additionally, the moments in the limit θ → 0 are indicated by dashed lines.

Finally, we consider the estimates of the coefficients λ_1^(k) and of the functions f, f + gg′/2 and g. We do not show the results for θ = 0.001, because these would look almost identical to the results for θ = 0.01, which are presented in Fig. 5. Here, we find that the estimates of f and g are in very good accordance with the true values. Additionally, it shows that — for the given value of θ — the values of λ_1^(1) and [λ_1^(2)]^{1/2} are almost identical to the values of f + gg′/2 and g, respectively. But this will change when larger values of θ are considered.

FIG. 5: Estimates of λ_1^(1), f and f + gg′/2 (a) and [λ_1^(2)]^{1/2} and g (b) for θ = 0.01. Estimated values are shown as symbols and the true functions as solid lines.

In Fig. 6 the results for θ = 0.05 are shown, where [λ_1^(2)]^{1/2} clearly deviates from g.
However, since we account for this values and resulting fits of M(k) at x = −0.9 are shown for deviationbyEq.(69),westillobtainaccurateestimatesforf different values of θ. The values obtained from the data can andg. 9 where θ is assumed to be small as compared to the charac- teristic time-scale of X. Actually, the method presented in this paper is accurate up to first order terms in θ. In princi- ple, however, also higher order approximations are possible. The method may be seen as generalization of the direct esti- mation method [13], which also formally is recovered in the limitθ →0. The applicability and accuracy of our approach has been demonstrated by a numerical example, where reasonable ac- curateresultsareobtainedevenforvaluesofθaslargeasten percentoftheglobaltime-scale. Forsmallervaluesofθ, the resultsare(asidefinite-sizefluctuations)closetoexact. The presented approach is straightforward to implement and neither demanding with regard to memory nor to CPU power. An analysis of a series of 107 values is performed FIG.6: Estimatesofλ(1),f andf +gg(cid:48)/2(a)and[λ(2)]1/2 andg withinafewsecondsonastandarddesktopPC. 1 1 (b)forθ = 0.05. Estimatedvaluesareshownassymbolsandthe It would be interesting to also apply our approach to an truefunctionsassolidlines. analysisofturbulentvelocityincrementsasdescribedin[4,5], wherethedescriptionoftheincrementsisapproximatedbya Markov-processinscale. Theobservedconditionalmoments Toalsoshowthelimitationsofourapproach,theresultsfor oftheprocess-incrementsshowastrikingsimilaritytothemo- θ = 0.1 arepresented inFig. 7. Here, theestimates become ments M(k) [Fig. 4] of a process that is driven by Ornstein- less accurate outside the range −1 < x < 2. Especially for Uhlenbeck noise. 
However, since the drift- and diffusion x > 2theestimatesofλ(1) nowshowfluctuationsthatham- functions for this problem clearly depend on scale, our ap- 1 pertheestimationofthespatialderivativesthatareneededto proachfirstneedstobeextendedtonon-stationaryprocesses, calculatef andgusingEq.(69). whichisataskforthefuture. AppendixA:ProductsofintegralsJ α AccordingtoEq.(21),theintegralsJ aredefinedas α (cid:90) τ (cid:90) sn (cid:90) s2 J (τ) := ··· (α1,...,αn) sn=0 sn−1=0 s1=0 ×dZ (s )···dZ (s ), (A1) α1 1 αn n with (cid:26) ds ,j =0 dZ (s) := , (A2) j dV(s) ,j =1 Thereareanumberofobvioussolutions,like FIG.7: Estimatesofλ(1),f andf +gg(cid:48)/2(a)and[λ(2)]1/2 andg 1 1 J (τ) = τ, J (τ) = V(τ). (A3) (b)forθ=0.1.Estimatedvaluesareshownassymbolsandthetrue (0) (1) functionsassolidlines. Additionally, it is possible to express a product of two inte- grals by a sum of single integrals: According to the above definition,anintegralJ maybewrittenas α IX. CONCLUSIONS (cid:90) τ J (τ) := J (s)dZ (s), (A4) α α− αn s=0 Aparameter-freeapprochhasbeendevelopedthatallowsfor wherethesyntax(α ,...,α )−:=(α ,...,α )hasbeen theanalysisofastochasticprocessX(t)thatisdrivenbyex- 1 n 1 n−1 introduced. ThedifferentialincrementofJ isthusgivenby ponentiallycorrelated,Gaussiannoise.Thisanalysisispurely α basedonthemomentsoftheconditionalincrementsofX and dJ (τ) = J (τ)dZ (τ). (A5) providesestimatesforthedrift-anddiffusion-functionsofthe α α− αn process as well as for the correlation time θ of the driving ForthedifferentialincrementofaproductJ J onefinds α β noise. It should be noted that we use a perturbative approach, d(J J ) = J dJ +J dJ . (A6) α β α β β α 10 (cid:69) Inintegratedform,weobtain(assumingβtohavemcompo- ×ds ···ds 1 n nents) 1 (cid:90) t1 (cid:90) tn (cid:10) (cid:11) (cid:90) τ = θn ··· ξ(s1)···ξ(sn) Jα(τ)Jβ(τ) = Jα(s)Jβ−(s)dZβm(s) ns1=0 sn=0 s=(cid:90)0τ ×(cid:89)e(si−ti)/θds ···ds . (B1) 1 n + J (s)J (s)dZ (s). 
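As a quick numerical sanity check of these definitions (our own aside, not part of the paper): for purely deterministic index vectors, i.e. all α_i = 0, we have dZ_0(s) = ds, and the nested integrals of Eq. (A1) can be evaluated by repeated cumulative Riemann sums. The product formula then predicts, e.g., J_{(0)} J_{(0)} = 2 J_{(0,0)}.

```python
import numpy as np

def J_zeros(n_indices, tau, n_steps=2000):
    """J_{(0,...,0)}(tau) of Eq. (A1) with all alpha_i = 0, i.e. dZ_0 = ds,
    evaluated by repeated cumulative Riemann sums (O(1/n_steps) bias)."""
    ds = tau / n_steps
    vals = np.ones(n_steps)          # innermost integrand is 1
    for _ in range(n_indices):
        vals = np.cumsum(vals) * ds  # integrate from 0 up to each grid point
    return float(vals[-1])
```

The stochastic integrals (indices α_i = 1) would require a sampled path of V(t) and are not checked here; for the deterministic case, the sums reproduce J_{(0)}(τ) = τ, the product identity J_{(0)}J_{(0)} = 2J_{(0,0)}, and J_{(0,…,0)}(τ) = τⁿ/n! to within the discretization error.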
Applying Eq. (A7) recursively to the products within the integrals leads, in the end, to a sum of integrals J_γ with index vectors of length n + m,

    J_\alpha J_\beta = \sum_{\gamma \in M(\alpha,\beta)} J_\gamma.   (A8)

Here, the vectors γ ∈ M(α,β) represent all \binom{n+m}{n} possibilities to mix the indices of α and β while keeping their relative ordering. This means that the position of α_i in γ must always precede that of α_{i+1}, and the same must hold for the components of β. As an example, one obtains

    M[(\alpha_1,\alpha_2),(\beta_1)] = \{ (\beta_1,\alpha_1,\alpha_2),\; (\alpha_1,\beta_1,\alpha_2),\; (\alpha_1,\alpha_2,\beta_1) \}.   (A9)

With this rule at hand, one finds, e.g.,

    J_{(0)}\, J_{(0)} = 2\, J_{(0,0)},       (A10a)
    J_{(0)}\, J_{(0,0)} = 3\, J_{(0,0,0)},   (A10b)
        ⋮
    J_{(1)}\, J_{(1)} = 2\, J_{(1,1)},       (A10c)
    J_{(1)}\, J_{(1,1)} = 3\, J_{(1,1,1)},   (A10d)
        ⋮

which implies (assuming a vector of length n)

    J_{(0,\dots,0)}(\tau) = \frac{1}{n!}\,[J_{(0)}(\tau)]^n = \frac{1}{n!}\,\tau^n,        (A11)
    J_{(1,\dots,1)}(\tau) = \frac{1}{n!}\,[J_{(1)}(\tau)]^n = \frac{1}{n!}\,[V(\tau)]^n.   (A12)

One may also derive relations like

    J_{(0)}\, J_{(1)} = J_{(1,0)} + J_{(0,1)},                           (A13)
    J_{(0)}^2\, J_{(1)} = 2\,[J_{(1,0,0)} + J_{(0,1,0)} + J_{(0,0,1)}],  (A14)
    J_{(0)}\, J_{(1)}^2 = 2\,[J_{(0,1,1)} + J_{(1,0,1)} + J_{(1,1,0)}],  (A15)
    J_{(1,0)}\, J_{(1,0)} = 2\, J_{(1,0,1,0)} + 4\, J_{(1,1,0,0)}.       (A16)

Appendix B: Correlation functions of u(t)

Using the definition of u(t) [Eq. (40)], the definition of C [Eq. (41)] directly translates to that of u(t). One finds

    C(t_1,\dots,t_n) = \frac{1}{\theta^n} \Big\langle \int_{s_1=0}^{t_1} \cdots \int_{s_n=0}^{t_n} \prod_{i=1}^{n} e^{(s_i-t_i)/\theta}\, \xi(s_i)\; ds_1 \cdots ds_n \Big\rangle
                     = \frac{1}{\theta^n} \int_{s_1=0}^{t_1} \cdots \int_{s_n=0}^{t_n} \big\langle \xi(s_1) \cdots \xi(s_n) \big\rangle \prod_{i=1}^{n} e^{(s_i-t_i)/\theta}\; ds_1 \cdots ds_n.   (B1)

As ξ(t) is Gaussian white noise, the expectation values ⟨ξ(t_1)⋯ξ(t_n)⟩ are well known. For odd values of n they vanish,

    \langle \xi(t_1) \cdots \xi(t_n) \rangle = 0, \qquad n = 2k+1.   (B2)

All even correlations can be expressed in terms of two-point correlations. For n = 2k, this leads to a sum of k-fold products of delta functions. This sum contains 1 × 3 × ⋯ × (n−1) terms, which is the number of possibilities to permute the function arguments of such a product when only distinguishable functions are allowed (functions may be indistinguishable due to the symmetry of the delta function or due to the commutativity of multiplication). Up to n = 4 this reads

    \langle \xi(t_1)\,\xi(t_2) \rangle = \delta(t_1 - t_2),   (B3)
    \langle \xi(t_1)\,\xi(t_2)\,\xi(t_3)\,\xi(t_4) \rangle = \delta(t_1-t_2)\,\delta(t_3-t_4) + \delta(t_1-t_3)\,\delta(t_2-t_4) + \delta(t_1-t_4)\,\delta(t_2-t_3).   (B4)

According to Eq. (B1), it follows immediately that C also vanishes for odd values of n,

    C(t_1,\dots,t_n) = 0, \qquad n = 2k+1.   (B5)

For n = 2 one finds

    C(t_1,t_2) = \frac{1}{\theta^2} \int_{s_1=0}^{t_1} \int_{s_2=0}^{t_2} \delta(s_1 - s_2)\, e^{(s_1+s_2-t_1-t_2)/\theta}\; ds_1\, ds_2
               = \frac{1}{\theta^2} \int_{s=0}^{\min(t_1,t_2)} e^{(2s-t_1-t_2)/\theta}\; ds
               = \frac{1}{2\theta} \begin{cases} e^{(t_1-t_2)/\theta} - e^{(-t_1-t_2)/\theta}, & t_1 \le t_2 \\ e^{(t_2-t_1)/\theta} - e^{(-t_2-t_1)/\theta}, & t_1 > t_2. \end{cases}   (B6)

The higher-order correlation functions of u(t) can be expressed in terms of two-point correlations, just like in the case of Gaussian white noise. This can be seen by inserting the expressions for ⟨ξ(t_1)⋯ξ(t_{2k})⟩ into Eq. (B1): for each of the k-fold products of the delta function, the integral factorizes into a k-fold product of integrals of the form of Eq. (B6). Therefore, the structure of the correlation functions of ξ(t) [Eq. (41)] carries over, and one obtains, e.g.,

    C(t_1,t_2,t_3,t_4) = C(t_1,t_2)\,C(t_3,t_4) + C(t_1,t_3)\,C(t_2,t_4) + C(t_1,t_4)\,C(t_2,t_3).   (B7)
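The closed form (B6) can be checked directly against a numerical quadrature of the single integral from which it was obtained. The sketch below is our own illustration with arbitrary test values; the case distinction of Eq. (B6) is merged into the equivalent expression with |t_1 − t_2|.

```python
import numpy as np

def C2_closed(t1, t2, theta):
    """Two-point correlation of u(t), the case-free form of Eq. (B6):
    C = [exp(-|t1-t2|/theta) - exp(-(t1+t2)/theta)] / (2 theta)."""
    return (np.exp(-abs(t1 - t2) / theta)
            - np.exp(-(t1 + t2) / theta)) / (2.0 * theta)

def C2_quadrature(t1, t2, theta, n=100000):
    """Trapezoidal evaluation of (1/theta^2) int_0^{min(t1,t2)} exp((2s-t1-t2)/theta) ds."""
    s = np.linspace(0.0, min(t1, t2), n)
    f = np.exp((2.0 * s - t1 - t2) / theta) / theta**2
    return float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)
```

For t_1, t_2 ≫ θ, the second exponential is negligible and C(t_1,t_2) reduces to the stationary Ornstein-Uhlenbeck correlation e^{−|t_1−t_2|/θ}/(2θ).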
