An Introduction to Interacting Simulated Annealing

Juergen Gall, Bodo Rosenhahn, and Hans-Peter Seidel
Max-Planck Institute for Computer Science
Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany

Abstract. Human motion capturing can be regarded as an optimization problem where one searches for the pose that minimizes a previously defined error function based on some image features. Most approaches for solving this problem use iterative methods like gradient descent. They work quite well as long as they do not get distracted by local optima. We introduce a novel approach for global optimization that is suitable for the tasks as they occur during human motion capturing. We call the method interacting simulated annealing since it is based on an interacting particle system that converges to the global optimum similarly to simulated annealing. We provide a detailed mathematical discussion that includes convergence results and annealing properties. Moreover, we give two examples that demonstrate possible applications of the algorithm, namely a global optimization problem and a multi-view human motion capturing task including segmentation, prediction, and prior knowledge. A quantitative error analysis also indicates the performance and the robustness of the interacting simulated annealing algorithm.

1 Introduction

1.1 Motivation

Optimization problems arise in many applications of computer vision. In pose estimation, e.g. [28], and human motion capturing, e.g. [31], functions are minimized at various processing steps. For example, the marker-less motion capture system [26] minimizes in a first step an energy function for the segmentation. In a second step, correspondences between the segmented image and a 3D model are established. The optimal pose is then estimated by minimizing the error given by the correspondences. These optimization problems also occur, for instance, in model fitting [17,31]. The problems are mostly solved by iterative methods such as gradient descent. These methods work very well as long as the starting point is near the global optimum; however, they easily get stuck in a local optimum. To deal with this, several randomly selected starting points are used and the best solution is kept, in the hope that at least one of them is near enough to the global optimum, cf. [26]. Although this improves the results in many cases, it does not ensure that the global optimum is found.

In this chapter, we introduce a global optimization method based on an interacting particle system that overcomes the dilemma of local optima and that is suitable for the optimization problems as they arise in human motion capturing. In contrast to many other optimization algorithms, a distribution instead of a single value is approximated by a particle representation, similarly to particle filters [10]. This property is beneficial, particularly for tracking, where the right parameters are not always exactly at the global optimum, depending on the image features that are used.

1.2 Related Work

A popular global optimization method inspired by statistical mechanics is known as simulated annealing [14,18]. Similarly to our approach, a function $V \geq 0$ interpreted as energy is minimized by means of an unnormalized Boltzmann-Gibbs measure that is defined in terms of $V$ and an inverse temperature $\beta > 0$ by

$$g(dx) = \exp(-\beta V(x))\,\lambda(dx), \qquad (1.1)$$

where $\lambda$ is the Lebesgue measure. This measure has the property that the probability mass concentrates at the global minimum of $V$ as $\beta \to \infty$.
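This concentration effect is easy to verify numerically. The following minimal sketch is our own illustration, not part of the original text; the toy energy $V$ and all names in it are assumptions chosen for the example. It evaluates the normalized Boltzmann-Gibbs weights of (1.1) on a grid and reports how much mass lies near the global minimum as $\beta$ grows:

```python
import numpy as np

# Toy energy with its global minimum at x = -2 and a local minimum near x = 2.
def V(x):
    return (x**2 - 4.0)**2 / 16.0 + (x + 2.0)**2 / 8.0

xs = np.linspace(-6.0, 6.0, 4001)
for beta in [0.1, 1.0, 10.0, 100.0]:
    g = np.exp(-beta * V(xs))   # unnormalized Boltzmann-Gibbs density (1.1)
    g /= g.sum()                # normalize on the grid
    near_min = g[np.abs(xs + 2.0) < 0.5].sum()
    print(f"beta = {beta:6.1f}: mass within 0.5 of the global minimum = {near_min:.3f}")
```

For small $\beta$ the mass is spread over both basins; for large $\beta$ virtually all of it sits in the global one, which is exactly the property that the annealing schemes below exploit.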
The key idea behind simulated annealing is taking a random walk through the search space while $\beta$ is successively increased. The probability of accepting a new value in the space is given by the Boltzmann-Gibbs distribution. While values with less energy than the current value are accepted with probability one, the probability that values with higher energy are accepted decreases as $\beta$ increases. Other related approaches are fast simulated annealing [30], using a Cauchy-Lorentz distribution, and generalized simulated annealing [32], based on Tsallis statistics.

Interacting particle systems [19] approximate a distribution of interest by a finite number of weighted random variables $X^{(i)}$ called particles. Provided that the weights $\Pi^{(i)}$ are normalized such that $\sum_i \Pi^{(i)} = 1$, the set of weighted particles determines a random probability measure by

$$\sum_{i=1}^{n} \Pi^{(i)}\, \delta_{X^{(i)}}. \qquad (1.2)$$

Depending on the weighting function and the distribution of the particles, the measure converges to a distribution $\eta$ as $n$ tends to infinity. When the particles are independently and identically distributed according to $\eta$ and uniformly weighted, i.e. $\Pi^{(i)} = 1/n$, the convergence follows directly from the law of large numbers [3].

Interacting particle systems are mostly known in computer vision as particle filters [10], where they are applied to solve non-linear, non-Gaussian filtering problems. However, these systems also apply to trapping analysis, evolutionary algorithms, statistics [19], and optimization, as we demonstrate in this chapter. They usually consist of two steps, as illustrated in Figure 1. During a selection step, the particles are weighted according to a weighting function and then resampled with respect to their weights, where particles with a large weight generate more offspring than particles with a lower weight. In a second step, the particles mutate or are diffused.

Fig. 1. Operation of an interacting particle system. After weighting the particles (black circles), the particles are resampled and diffused (gray circles).

1.3 Interaction and Annealing

Simulated annealing approaches are designed for global optimization, i.e. for searching for the global optimum in the entire search space. Since they are not capable of focusing the search on regions of interest in dependency on the previously visited values, they are not suitable for tasks in human motion capturing. Our approach, in contrast, is based on an interacting particle system that uses Boltzmann-Gibbs measures (1.1) similar to simulated annealing. This combination not only ensures the annealing property, as we will show, but also exploits the distribution of the particles in the space as a measure of the uncertainty in an estimate. The latter allows an automatic adaption of the search to regions of interest during the optimization process. The principle of the annealing effect is illustrated in Figure 2.

A first attempt to fuse interaction and annealing strategies for human motion capturing has become known as the annealed particle filter [9]. Even though this heuristic is not based on a mathematical background, it already indicates the potential of such a combination. Indeed, the annealed particle filter can be regarded as a special case of interacting simulated annealing where the particles are predicted for each frame by a stochastic process, see Section 3.1.
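To make the two-step operation of Figure 1 concrete, here is a minimal sketch of one weighting/resampling/diffusion sweep. It is our own illustration, not code from the paper; the Boltzmann-Gibbs weights $\exp(-\beta V)$ and the Gaussian diffusion are assumptions chosen to match the setting of this chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_sweep(x, V, beta, sigma):
    """One selection/mutation sweep of an interacting particle system.

    x     : (n, d) array of particles
    V     : energy function mapping an (n, d) array to n values
    beta  : inverse temperature of the Boltzmann-Gibbs weights
    sigma : standard deviation of the Gaussian diffusion (mutation)
    """
    w = np.exp(-beta * V(x))                          # weighting
    w /= w.sum()
    offspring = rng.choice(len(x), size=len(x), p=w)  # resampling
    return x[offspring] + sigma * rng.normal(size=x.shape)  # diffusion
```

Particles with large weights are drawn many times in the resampling step and therefore generate more offspring, exactly as described above.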
1.4 Outline

The interacting annealing algorithm is introduced in Section 3.1 and its asymptotic behavior is discussed in Section 3.2. The given convergence results are based on Feynman-Kac models [19], which are outlined in Section 2. Since a general treatment including proofs is out of the scope of this introduction, we refer the interested reader to [11] or [19]. While our approach is evaluated on a standard global optimization problem in Section 4.1, Section 4.2 demonstrates the performance of interacting simulated annealing in a complete marker-less human motion capture system that includes segmentation, pose prediction, and prior knowledge.

Fig. 2. Illustration of the annealing effect with three runs. Due to annealing, the particles migrate towards the global maximum without getting stuck in the local maximum.

1.5 Notations

We always regard $E$ as a subspace of $\mathbb{R}^d$ and let $\mathcal{B}(E)$ denote its Borel $\sigma$-algebra. $B(E)$ denotes the set of bounded measurable functions, $\delta_x$ is the Dirac measure concentrated in $x \in E$, $\|\cdot\|_2$ is the Euclidean norm, and $\|\cdot\|_\infty$ is the well-known supremum norm. Let $f \in B(E)$, let $\mu$ be a measure on $E$, and let $K$ be a Markov kernel on $E$.¹ We write

$$\langle \mu, f \rangle = \int_E f(x)\,\mu(dx), \qquad \langle \mu, K \rangle(B) = \int_E K(x, B)\,\mu(dx) \quad \text{for } B \in \mathcal{B}(E).$$

Furthermore, $U[0,1]$ denotes the uniform distribution on the interval $[0,1]$, and

$$\mathrm{osc}(\varphi) := \sup_{x,y \in E} \{\, |\varphi(x) - \varphi(y)| \,\} \qquad (1.3)$$

is an upper bound for the oscillations of $\varphi \in B(E)$.

¹ A Markov kernel is a function $K : E \times \mathcal{B}(E) \to [0,1]$ such that $K(\cdot, B)$ is $\mathcal{B}(E)$-measurable for all $B$ and $K(x, \cdot)$ is a probability measure for all $x$. An example of a Markov kernel is given in Equation (1.11). For more details on probability theory and Markov kernels, we refer to [3].

2 Feynman-Kac Model

Let $(X_t)_{t \in \mathbb{N}_0}$ be an $E$-valued Markov process with family of transition kernels $(K_t)_{t \in \mathbb{N}_0}$ and initial distribution $\eta_0$. We denote by $P_{\eta_0}$ the distribution of the Markov process, i.e. for $t \in \mathbb{N}_0$,

$$P_{\eta_0}(d(x_0, x_1, \ldots, x_t)) = K_{t-1}(x_{t-1}, dx_t) \cdots K_0(x_0, dx_1)\,\eta_0(dx_0),$$

and by $E_{\eta_0}[\cdot]$ the expectation with respect to $P_{\eta_0}$. The sequence of distributions $(\eta_t)_{t \in \mathbb{N}_0}$ on $E$ defined for any $\varphi \in B(E)$ and $t \in \mathbb{N}_0$ as

$$\langle \eta_t, \varphi \rangle := \frac{\langle \gamma_t, \varphi \rangle}{\langle \gamma_t, 1 \rangle}, \qquad \langle \gamma_t, \varphi \rangle := E_{\eta_0}\!\left[ \varphi(X_t)\, \exp\!\left( -\sum_{s=0}^{t-1} \beta_s V(X_s) \right) \right],$$

is called the Feynman-Kac model associated with the pair $(\exp(-\beta_t V), K_t)$. The Feynman-Kac model as defined above satisfies the recursion relation

$$\eta_{t+1} = \langle \Psi_t(\eta_t), K_t \rangle, \qquad (1.4)$$

where the Boltzmann-Gibbs transformation $\Psi_t$ is defined by

$$\Psi_t(\eta_t)(dy_t) = \frac{E_{\eta_0}\!\left[ \exp\!\left( -\sum_{s=0}^{t-1} \beta_s V(X_s) \right) \right]}{E_{\eta_0}\!\left[ \exp\!\left( -\sum_{s=0}^{t} \beta_s V(X_s) \right) \right]}\, \exp(-\beta_t V(y_t))\,\eta_t(dy_t).$$

The particle approximation of the flow (1.4) depends on a chosen family of Markov transition kernels $(K_{t,\eta_t})_{t \in \mathbb{N}_0}$ satisfying the compatibility condition

$$\langle \Psi_t(\eta_t), K_t \rangle = \langle \eta_t, K_{t,\eta_t} \rangle.$$

A family $(K_{t,\eta_t})_{t \in \mathbb{N}_0}$ of kernels is not uniquely determined by these conditions.
As in [19, Chapter 2.5.3], we choose

$$K_{t,\eta_t} = S_{t,\eta_t} K_t, \qquad (1.5)$$

where

$$S_{t,\eta_t}(x_t, dy_t) = \epsilon_t \exp(-\beta_t V(x_t))\,\delta_{x_t}(dy_t) + \left( 1 - \epsilon_t \exp(-\beta_t V(x_t)) \right) \Psi_t(\eta_t)(dy_t), \qquad (1.6)$$

with $\epsilon_t \geq 0$ and $\epsilon_t \|\exp(-\beta_t V)\|_\infty \leq 1$. The parameters $\epsilon_t$ may depend on the current distribution $\eta_t$.

3 Interacting Simulated Annealing

Similar to simulated annealing, one can define an annealing scheme $0 \leq \beta_0 \leq \beta_1 \leq \ldots \leq \beta_t$ in order to search for the global minimum of an energy function $V$. Under some conditions that will be stated in Section 3.2, the flow of the Feynman-Kac distribution becomes concentrated in the region of global minima of $V$ as $t$ goes to infinity. Since it is not possible to sample from the distribution directly, the flow is approximated by a particle set, as is done by a particle filter. We call the algorithm for the flow approximation interacting simulated annealing (ISA).

3.1 Algorithm

The particle approximation for the Feynman-Kac model is completely described by Equation (1.5). The particle system is initialized by $n$ identically, independently distributed random variables $X_0^{(i)}$ with common law $\eta_0$, determining the random probability measure $\eta_0^n := \sum_{i=1}^n \delta_{X_0^{(i)}} / n$. Since $K_{t,\eta_t}$ can be regarded as the composition of a pair of selection and mutation Markov kernels, we split the transitions into the following two steps:

$$\eta_t^n \xrightarrow{\;\text{Selection}\;} \check{\eta}_t^n \xrightarrow{\;\text{Mutation}\;} \eta_{t+1}^n,$$

where

$$\eta_t^n := \frac{1}{n} \sum_{i=1}^n \delta_{X_t^{(i)}}, \qquad \check{\eta}_t^n := \frac{1}{n} \sum_{i=1}^n \delta_{\check{X}_t^{(i)}}.$$

During the selection step each particle $X_t^{(i)}$ evolves according to the Markov transition kernel $S_{t,\eta_t^n}(X_t^{(i)}, \cdot)$. That means $X_t^{(i)}$ is accepted with probability $\epsilon_t \exp(-\beta_t V(X_t^{(i)}))$, and we set $\check{X}_t^{(i)} = X_t^{(i)}$. Otherwise, $\check{X}_t^{(i)}$ is randomly selected with distribution

$$\sum_{i=1}^n \frac{\exp(-\beta_t V(X_t^{(i)}))}{\sum_{j=1}^n \exp(-\beta_t V(X_t^{(j)}))}\, \delta_{X_t^{(i)}}.$$

The mutation step consists in letting each selected particle $\check{X}_t^{(i)}$ evolve according to the Markov transition kernel $K_t(\check{X}_t^{(i)}, \cdot)$.

Algorithm 1: Interacting Simulated Annealing
Requires: parameters $(\epsilon_t)_{t \in \mathbb{N}_0}$, number of particles $n$, initial distribution $\eta_0$, energy function $V$, annealing scheme $(\beta_t)_{t \in \mathbb{N}_0}$, and transitions $(K_t)_{t \in \mathbb{N}_0}$

1. Initialization
   - Sample $x_0^{(i)}$ from $\eta_0$ for all $i$
2. Selection
   - Set $\pi^{(i)} \leftarrow \exp(-\beta_t V(x_t^{(i)}))$ for all $i$
   - For $i$ from 1 to $n$:
     - Sample $\kappa$ from $U[0,1]$
     - If $\kappa \leq \epsilon_t \pi^{(i)}$, set $\check{x}_t^{(i)} \leftarrow x_t^{(i)}$
     - Else, set $\check{x}_t^{(i)} \leftarrow x_t^{(j)}$ with probability $\pi^{(j)} / \sum_{k=1}^n \pi^{(k)}$
3. Mutation
   - Sample $x_{t+1}^{(i)}$ from $K_t(\check{x}_t^{(i)}, \cdot)$ for all $i$ and go to step 2
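The following compact Python sketch of Algorithm 1 is our own illustration, not code from the paper. It fixes a Gaussian mutation kernel $K_t$ for concreteness and takes $\epsilon_t$ as a constant, with $\epsilon = 0$ yielding the plain multinomial resampling case of Equation (1.7) below; all function and variable names are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def isa(V, sample_eta0, betas, sigmas, n=200, eps=0.0):
    """Sketch of Algorithm 1 (interacting simulated annealing).

    V           : energy function, maps an (n, d) array to n values
    sample_eta0 : callable m -> (m, d) array of initial particles
    betas       : annealing scheme beta_0 <= beta_1 <= ...
    sigmas      : standard deviations of the Gaussian mutation kernels K_t
    eps         : selection parameter; eps = 0 gives multinomial resampling
    """
    x = sample_eta0(n)                                # 1. initialization
    for beta, sigma in zip(betas, sigmas):
        pi = np.exp(-beta * V(x))                     # 2. selection weights
        keep = rng.uniform(size=n) <= eps * pi        #    accept x_t^(i) ...
        idx = rng.choice(n, size=n, p=pi / pi.sum())  #    ... or resample it
        x = np.where(keep[:, None], x, x[idx])
        x = x + sigma * rng.normal(size=x.shape)      # 3. Gaussian mutation
    return x

# Example: minimize V(x) = (||x||^2 - 1)^2 over a box in R^2.
x = isa(lambda x: ((x**2).sum(axis=1) - 1.0)**2,
        lambda m: rng.uniform(-3.0, 3.0, size=(m, 2)),
        betas=[(t + 1)**0.7 for t in range(30)],
        sigmas=[0.5 * 0.9**t for t in range(30)])
print("particles concentrate near the unit circle:", x[:3])
```

Shrinking `sigmas` over the annealing run mimics the dynamic variance scheme discussed in Section 3.2.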
There are several ways to choose the parameter $\epsilon_t$ of the selection kernel (1.6) that defines the resampling procedure of the algorithm, cf. [19]. If

$$\epsilon_t := 0 \quad \forall\, t, \qquad (1.7)$$

the selection can be done by multinomial resampling. Provided that²

$$n \geq \sup_t\, \exp(\beta_t\, \mathrm{osc}(V)),$$

another selection kernel is given by

$$\epsilon_t(\eta_t) := \frac{1}{n\, \langle \eta_t, \exp(-\beta_t V) \rangle}. \qquad (1.8)$$

In this case the expression $\epsilon_t \pi^{(i)}$ in Algorithm 1 is replaced by $\pi^{(i)} / \sum_{k=1}^n \pi^{(k)}$. A third kernel is determined by

$$\epsilon_t(\eta_t) := \frac{1}{\inf\{\, y \in \mathbb{R} : \eta_t(\{ x \in E : \exp(-\beta_t V(x)) > y \}) = 0 \,\}}, \qquad (1.9)$$

yielding the expression $\pi^{(i)} / \max_{1 \leq k \leq n} \pi^{(k)}$ instead of $\epsilon_t \pi^{(i)}$.

² The inequality satisfies the condition $\epsilon_t \|\exp(-\beta_t V)\|_\infty \leq 1$ for Equation (1.6).

Pierre del Moral showed in [19, Chapter 9.4] that for any $t \in \mathbb{N}_0$ and $\varphi \in B(E)$ the sequence of random variables

$$\sqrt{n} \left( \langle \eta_t^n, \varphi \rangle - \langle \eta_t, \varphi \rangle \right)$$

converges in law to a Gaussian random variable $W$ when the selection kernel (1.6) is used to approximate the flow (1.4). Moreover, it turns out that when (1.8) is chosen, the variance of $W$ is strictly smaller than in the case with $\epsilon_t = 0$.

We remark that the annealed particle filter [9] relies on interacting simulated annealing with $\epsilon_t = 0$. The operation of the method is illustrated by

$$\eta_t^n \xrightarrow{\;\text{Prediction}\;} \hat{\eta}_{t+1}^n \xrightarrow{\;\text{ISA}\;} \eta_{t+1}^n.$$

The ISA is initialized by the predicted particles $\hat{X}_{t+1}^{(i)}$ and performs the selection and mutation steps $M$ times. Afterwards the particles $X_{t+1}^{(i)}$ are obtained by an additional selection. This shows that the annealed particle filter uses a simulated annealing principle to locate the global minimum of a function $V$ at each time step.

3.2 Convergence

This section discusses the asymptotic behavior of the interacting simulated annealing algorithm. For this purpose, we introduce some definitions in accordance with [19] and [15].

Definition 1. A kernel $K$ on $E$ is called mixing if there exists a constant $0 < \varepsilon < 1$ such that

$$K(x_1, \cdot) \geq \varepsilon\, K(x_2, \cdot) \qquad \forall\, x_1, x_2 \in E. \qquad (1.10)$$

The condition can typically only be established when $E \subset \mathbb{R}^d$ is a bounded subset, which is the case in many applications like human motion capturing. For example, the (bounded) Gaussian distribution on $E$,

$$K(x, B) := \frac{1}{Z} \int_B \exp\!\left( -\tfrac{1}{2} (x - y)^T \Sigma^{-1} (x - y) \right) dy, \qquad Z := \int_E \exp\!\left( -\tfrac{1}{2} (x - y)^T \Sigma^{-1} (x - y) \right) dy, \qquad (1.11)$$

is mixing if and only if $E$ is bounded. Moreover, a Gaussian with a high variance satisfies the mixing condition with a larger $\varepsilon$ than a Gaussian with a lower variance.
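As a hedged numerical illustration of Definition 1 (our own sketch, not from the paper), one can estimate the largest mixing constant $\varepsilon$ of the one-dimensional bounded Gaussian kernel (1.11) on $E = [0,1]$ by discretizing $E$ and taking the smallest density ratio over all pairs of starting points:

```python
import numpy as np

def mixing_constant(sigma, m=100):
    """Grid estimate of the largest eps with K(x1,.) >= eps * K(x2,.)
    for the bounded Gaussian kernel (1.11) on E = [0, 1]."""
    xs = np.linspace(0.0, 1.0, m)
    k = np.exp(-0.5 * (xs[:, None] - xs[None, :])**2 / sigma**2)
    k /= k.sum(axis=1, keepdims=True)      # normalize each row K(x, .)
    return (k[:, None, :] / k[None, :, :]).min()

for sigma in [0.1, 0.5, 2.0]:
    print(f"sigma = {sigma}: eps ~ {mixing_constant(sigma):.4f}")
```

As claimed above, a larger variance yields an $\varepsilon$ closer to 1, while a narrow kernel mixes poorly.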
Definition 2. The Dobrushin contraction coefficient of a kernel $K$ on $E$ is defined by

$$\beta(K) := \sup_{x_1, x_2 \in E}\; \sup_{B \in \mathcal{B}(E)} |K(x_1, B) - K(x_2, B)|. \qquad (1.12)$$

Furthermore, $\beta(K) \in [0,1]$ and $\beta(K_1 K_2) \leq \beta(K_1)\,\beta(K_2)$. When the kernel $M$ is a composition of several mixing Markov kernels, i.e. $M := K_s K_{s+1} \cdots K_t$, and each kernel $K_k$ satisfies the mixing condition for some $\varepsilon_k$, the Dobrushin contraction coefficient can be estimated by $\beta(M) \leq \prod_{k=s}^{t} (1 - \varepsilon_k)$.

The asymptotic behavior of the interacting simulated annealing algorithm is affected by the convergence of the flow of the Feynman-Kac distribution (1.4) to the region of global minima of $V$ as $t$ tends to infinity, and by the convergence of the particle approximation to the Feynman-Kac distribution at each time step $t$ as the number of particles $n$ tends to infinity.

Convergence of the flow. We suppose that $K_t = K$ is a Markov kernel satisfying the mixing condition (1.10) for an $\varepsilon \in (0,1)$ and that $\mathrm{osc}(V) < \infty$. A time mesh is defined by

$$t(n) := n \left\lfloor 1 + c(\varepsilon) \right\rfloor, \qquad c(\varepsilon) := \frac{1 - \ln(\varepsilon/2)}{\varepsilon^2} \quad \text{for } n \in \mathbb{N}_0. \qquad (1.13)$$

Let $0 \leq \beta_0 \leq \beta_1 \leq \ldots$ be an annealing scheme such that $\beta_t = \beta_{t(n+1)}$ is constant in the interval $(t(n), t(n+1)]$. Furthermore, we denote by $\check{\eta}_t$ the Feynman-Kac distribution after the selection step, i.e. $\check{\eta}_t = \Psi_t(\eta_t)$. According to [19, Proposition 6.3.2], we have

Theorem 1. Let $b \in (0,1)$ and $\beta_{t(n+1)} = (n+1)^b$. Then for each $\delta > 0$,

$$\lim_{n \to \infty} \check{\eta}_{t(n)}\!\left( V \geq V_\star + \delta \right) = 0,$$

where $V_\star = \sup\{\, v \geq 0 : V \geq v \text{ a.e.} \,\}$.

The rate of convergence is $d / n^{1-b}$, where $d$ is increasing with respect to $b$ and $c(\varepsilon)$ but does not depend on $n$, as given in [19, Theorem 6.3.1]. This theorem establishes that the flow of the Feynman-Kac distribution $\check{\eta}_t$ becomes concentrated in the region of global minima as $t \to +\infty$.

Convergence of the particle approximation. Del Moral established the following convergence theorem [19, Theorem 7.4.4].

Theorem 2. For any $\varphi \in B(E)$,

$$E_{\eta_0}\!\left[ \left| \langle \eta_{t+1}^n, \varphi \rangle - \langle \eta_{t+1}, \varphi \rangle \right| \right] \leq \frac{2\,\mathrm{osc}(\varphi)}{\sqrt{n}} \left( 1 + \sum_{s=0}^{t} r_s\, \beta(M_s) \right),$$

where

$$r_s := \exp\!\left( \mathrm{osc}(V) \sum_{r=s}^{t} \beta_r \right), \qquad M_s := K_s K_{s+1} \cdots K_t,$$

for $0 \leq s \leq t$.

Assuming that the kernels $K_s$ satisfy the mixing condition with $\varepsilon_s$, we get a rough estimate for the number of particles

$$n \geq \frac{4\,\mathrm{osc}(\varphi)^2}{\delta^2} \left( 1 + \sum_{s=0}^{t} \exp\!\left( \mathrm{osc}(V) \sum_{r=s}^{t} \beta_r \right) \prod_{k=s}^{t} (1 - \varepsilon_k) \right)^{2} \qquad (1.14)$$

needed to achieve a mean error less than a given $\delta > 0$.

Fig. 3. Impact of the mixing condition satisfied for $\varepsilon_s = \varepsilon$. Left: parameter $c(\varepsilon)$ of the time mesh (1.13). Right: rough estimate of the number of particles needed to achieve a mean error less than $\delta = 0.1$.

Optimal transition kernel. The mixing condition is not only essential for the convergence result of the flow as stated in Theorem 1, but also influences the time mesh by the parameter $\varepsilon$. In view of Equation (1.13), kernels with $\varepsilon$ close to 1 are preferable, e.g. Gaussian kernels on a bounded set with a very high variance. The right-hand side of (1.14) can also be minimized if Markov kernels $K_s$ are chosen such that the mixing condition is satisfied for an $\varepsilon_s$ close to 1, as shown in Figure 3. However, we have to consider two facts. First, the inequality in Theorem 2 provides an upper bound of the accumulated error of the particle approximation up to time $t+1$. It is clear that the accumulation of the error is reduced when the particles are highly diffused, but it also means that the information carried by the particles from the previous time steps is mostly lost by the mutation. Secondly, we cannot sample from the measure $\check{\eta}_t$ directly; instead we approximate it by $n$ particles. Now the following problem arises: the mass of the measure concentrates on a small region of $E$ on one hand and, on the other hand, the particles are spread over $E$ if $\varepsilon$ is large. As a result we get a degenerated system where the weights of most of the particles are zero and thus the global minima are estimated inaccurately, particularly for small $n$. If we choose a kernel with a small $\varepsilon$, in contrast, the convergence rate of the flow is very slow.
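The trade-off behind Figure 3 can be reproduced with a few lines of Python (our own hedged sketch; the annealing scheme and the oscillation bounds are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

def c(eps):
    """Time-mesh parameter c(eps) from Equation (1.13)."""
    return (1.0 - np.log(eps / 2.0)) / eps**2

def particle_estimate(eps, betas, osc_V=1.0, osc_phi=1.0, delta=0.1):
    """Rough lower bound (1.14) on n when every K_s is mixing with the
    same constant eps, as assumed in Figure 3."""
    t = len(betas) - 1
    total = 1.0
    for s in range(t + 1):
        r_s = np.exp(osc_V * sum(betas[s:]))      # r_s from Theorem 2
        contraction = (1.0 - eps)**(t - s + 1)    # bound on beta(M_s)
        total += r_s * contraction
    return 4.0 * osc_phi**2 / delta**2 * total**2

betas = [(k + 1)**0.5 for k in range(5)]
for eps in [0.2, 0.5, 0.9]:
    print(f"eps = {eps}: c(eps) = {c(eps):7.2f}, "
          f"n >= {particle_estimate(eps, betas):12.1f}")
```

The numbers confirm the dilemma described above: only $\varepsilon$ close to 1 keeps the time mesh and the particle bound manageable, yet exactly those strongly diffusing kernels destroy the information carried by the particles.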
Since neither a small nor a large $\varepsilon$ is suitable in practice, we suggest a dynamic variance scheme instead of a fixed kernel $K$. It can be implemented by Gaussian kernels $K_t$ with covariance matrices $\Sigma_t$ proportional to the sample covariance after resampling. That is, for a constant $c > 0$,

$$\Sigma_t := \frac{c}{n-1} \sum_{i=1}^{n} \left( x_t^{(i)} - \mu_t \right)_{\rho} \left( x_t^{(i)} - \mu_t \right)_{\rho}^{T}, \qquad \mu_t := \frac{1}{n} \sum_{i=1}^{n} x_t^{(i)}, \qquad (1.15)$$

where $((x)_\rho)_k = \max(x_k, \rho)$ for a $\rho > 0$. The value $\rho$ ensures that the variance does not become zero. The elements off the diagonal are usually set to zero in order to reduce computation time.

Optimal parameters. The computation cost of the interacting simulated annealing algorithm with $n$ particles and $T$ annealing runs is $O(n_T)$, where

$$n_T := n \cdot T. \qquad (1.16)$$

While more particles give a better particle approximation of the Feynman-Kac distribution, the flow becomes more concentrated in the region of global minima as the number of annealing runs increases. Therefore, finding the optimal values is a trade-off between the convergence of the flow and the convergence of the particle approximation, provided that $n_T$ is fixed.

Another important parameter of the algorithm is the annealing scheme. The scheme given in Theorem 1 ensures convergence for any energy function $V$, even for the worst one in the sense of optimization, as long as $\mathrm{osc}(V) < \infty$, but it is too slow for most applications, as is also the case for simulated annealing. In our experiments the schemes

$$\beta_t = \ln(t + b) \quad \text{for some } b > 1 \quad \text{(logarithmic)}, \qquad (1.17)$$
$$\beta_t = (t + 1)^b \quad \text{for some } b \in (0,1) \quad \text{(polynomial)} \qquad (1.18)$$

performed well. Note that, in contrast to the time mesh (1.13), these schemes are no longer constant on a time interval.
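To close, here is a small hedged sketch of the dynamic variance scheme and the two annealing schedules. It is our own illustration: the diagonal variance floor is a simplification of the componentwise clamping in (1.15), and all names are assumptions of this sketch:

```python
import numpy as np

def dynamic_sigma(x, c=1.0, rho=1e-3):
    """Diagonal dynamic variance scheme in the spirit of Equation (1.15):
    per-dimension sample variance of the resampled particles, scaled by c
    and floored at rho so the mutation never degenerates (off-diagonal
    entries are set to zero, as suggested in the text)."""
    return np.sqrt(c * np.maximum(x.var(axis=0, ddof=1), rho))

def beta_logarithmic(t, b=2.0):
    """Logarithmic annealing scheme (1.17), b > 1."""
    return np.log(t + b)

def beta_polynomial(t, b=0.7):
    """Polynomial annealing scheme (1.18), b in (0, 1)."""
    return (t + 1.0)**b
```

These pieces plug directly into the `isa` sketch from Section 3.1 by recomputing the mutation standard deviation from the particle set after each resampling step instead of prescribing a fixed `sigmas` list.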
