Numerical Dynamic Programming in Economics¹

John Rust
Yale University

Contents

1. Introduction
2. Markov Decision Processes (MDP's) and the Theory of Dynamic Programming
   2.1 Definitions of MDP's, DDP's, and CDP's
   2.2 Bellman's Equation, Contraction Mappings, and Blackwell's Theorem
   2.3 Error Bounds for Approximate Fixed Points of Approximate Bellman Operators
   2.4 A Geometric Series Representation for MDP's
   2.5 Examples of Analytic Solutions to Bellman's Equation for Specific "Test Problems"
   2.6 Euler Equations and Euler Operators
3. Computational Complexity and Optimal Algorithms
   3.1 Discrete Computational Complexity
   3.2 Continuous Computational Complexity
   3.3 Computational Complexity of the Approximation Problem
4. Numerical Methods for Contraction Fixed Points
5. Numerical Methods for MDP's
   5.1 Discrete Finite Horizon MDP's
   5.2 Discrete Infinite Horizon MDP's
       5.2.1 Successive Approximation, Policy Iteration, and Related Methods
       5.2.2 Numerical Comparison of Methods in Auto Replacement Problem
       5.2.3 New Approaches to Solving Large Scale Problems
   5.3 Continuous Finite Horizon MDP's
       5.3.1 Discrete Approximation Methods
       5.3.2 Smooth Approximation Methods
   5.4 Continuous Infinite Horizon MDP's
       5.4.1 Discrete Approximation Methods
       5.4.2 Smooth Approximation Methods
6. Conclusions
7. References

¹ Revised November 1994 draft for Handbook of Computational Economics, H. Amman, D. Kendrick and J. Rust (eds.). I am grateful for helpful comments by Hans Amman, Ken Judd, David Kendrick, Martin Puterman, and two not very anonymous referees, Charles Tapiero and John Tsitsiklis.

1. Introduction

This chapter surveys numerical methods for solving dynamic programming (DP) problems.
The DP framework has been extensively used in economic modeling because it is sufficiently rich to model almost any problem involving sequential decision making over time and under uncertainty.² By a simple re-definition of variables virtually any DP problem can be formulated as a Markov decision process (MDP) in which a decision maker who is in state $s_t$ at time $t = 1, \ldots, T$ takes an action $a_t$ that determines current utility $u(s_t, a_t)$ and affects the distribution of next period's state $s_{t+1}$ via a Markov transition probability $p(s_{t+1} \mid s_t, a_t)$. The problem is to determine an optimal decision rule $\alpha$ that solves

$$V(s) \equiv \max_{\alpha} E_{\alpha}\Big\{ \sum_{t=0}^{T} \beta^t u(s_t, a_t) \;\Big|\; s_0 = s \Big\},$$

where $E_\alpha$ denotes expectation with respect to the controlled stochastic process $\{s_t, a_t\}$ induced by the decision rule $\alpha \equiv \{\alpha_1, \ldots, \alpha_T\}$, and $\beta \in (0,1)$ denotes the discount factor. What makes these problems especially difficult is that instead of optimizing over an ex ante fixed sequence of actions $\{a_0, \ldots, a_T\}$ one needs to optimize over sequences of functions $\{\alpha_0, \ldots, \alpha_T\}$ that allow the ex post decision $a_t$ to vary as a best response to the current state of the process, i.e. $a_t = \alpha_t(s_t)$. The method of dynamic programming provides a constructive, recursive procedure for computing $\alpha$ using the value function $V$ as a "shadow price" to decentralize a complicated stochastic/multiperiod optimization problem into a sequence of simpler deterministic/static optimization problems.³ In infinite horizon problems $V$ is the unique solution to Bellman's equation, $V = \Gamma(V)$, where the Bellman operator $\Gamma$ is defined by:

$$\Gamma(V)(s) = \max_{a \in A(s)} \Big[ u(s,a) + \beta \int V(s')\, p(ds' \mid s, a) \Big], \qquad (1.1)$$

and the optimal decision rule can be recovered from $V$ by finding a value $\alpha(s) \in A(s)$ that attains the maximum in (1.1) for each $s \in S$.
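Equation (1.1) is easy to make concrete for a discrete MDP, where the integral over next-period states becomes a finite sum. The sketch below (in Python, with a hypothetical two-state, two-action problem; all numbers are made up for illustration) applies the Bellman operator repeatedly, the method of successive approximations, until it reaches an approximate fixed point $V = \Gamma(V)$:

```python
import numpy as np

def bellman_operator(V, u, p, beta):
    """Discrete-state version of equation (1.1):
    Gamma(V)(s) = max_a [ u(s,a) + beta * sum_s' p(s'|s,a) * V(s') ].
    u is an (S, A) utility array; p is an (S, A, S) transition array."""
    Q = u + beta * np.einsum("saj,j->sa", p, V)   # action-specific values
    return Q.max(axis=1), Q.argmax(axis=1)        # Gamma(V) and greedy rule

# Hypothetical two-state, two-action MDP (numbers chosen only for illustration).
u = np.array([[1.0, 0.0],
              [0.0, 2.0]])
p = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
beta = 0.95

# Successive approximations: since Gamma is a contraction with modulus beta,
# the iterates converge geometrically to the unique fixed point of (1.1).
V = np.zeros(2)
for _ in range(2000):
    V_new, alpha = bellman_operator(V, u, p, beta)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
```

The decision rule `alpha` recovered from the final iterate is exactly the maximizer in (1.1) at each state, as described above.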
We review the main theoretical results about MDP's in section 2, providing an intuitive derivation of Bellman's equation in the infinite horizon case and showing that the value function $V_\alpha$ corresponding to a policy $\alpha$ is simply a multidimensional generalization of a geometric series. This implies that $V_\alpha$ can be computed as the solution to a system of linear equations, the key step in the Bellman 1955, 1957 and Howard 1960 policy iteration algorithm. The Bellman operator has a particularly nice mathematical property: $\Gamma$ is a contraction mapping. A large number of numerical solution methods exploit the contraction property to yield convergent, numerically stable methods for computing approximate solutions to Bellman's equation, including the classic method of successive approximations. Since $V$ can be recovered from $\alpha$ and vice versa, the rest of this chapter focuses on methods for computing the value function $V$, with separate discussions of numerical problems involved in approximating $\alpha$ where appropriate.⁴

From the standpoint of computation, there is an important distinction between discrete MDP's, whose state and control variables can assume only a finite number of possible values, and continuous MDP's, whose state and control variables can assume a continuum of possible values. The value functions for discrete MDP's live in a subset $B$ of the finite-dimensional Euclidean space $R^{|S|}$ (where $|S|$ is the number of elements in $S$), whereas the value functions of continuous MDP's live in a subset $B$ of the infinite-dimensional Banach space $\mathcal{B}(S)$ of bounded, measurable real-valued functions on $S$.

² See Stokey and Lucas 1987 for examples of DP models in economic theory. See Rust 1994a, 1994b for examples of DP models in econometrics.

³ In finite horizon problems $V$ actually denotes an entire sequence of value functions, $V \equiv \{V_0^T, \ldots, V_T^T\}$, just as $\alpha$ denotes a sequence of decision rules. In the infinite-horizon case, the solution $(V, \alpha)$ reduces to a pair of functions of the current state $s$.
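The geometric-series representation can be verified numerically: holding a decision rule $\alpha$ fixed, $V_\alpha = u_\alpha + \beta P_\alpha V_\alpha$, so $V_\alpha = (I - \beta P_\alpha)^{-1} u_\alpha$, which is also the limit of the partial sums $\sum_t \beta^t P_\alpha^t u_\alpha$. A sketch with a hypothetical two-state MDP (all numbers illustrative):

```python
import numpy as np

def policy_value(alpha, u, p, beta):
    """Evaluate a fixed decision rule alpha by solving the linear system
    V = u_alpha + beta * P_alpha @ V, the key step of policy iteration."""
    S = u.shape[0]
    u_alpha = u[np.arange(S), alpha]        # per-state utility under alpha
    P_alpha = p[np.arange(S), alpha, :]     # induced Markov transition matrix
    return np.linalg.solve(np.eye(S) - beta * P_alpha, u_alpha)

# Hypothetical two-state, two-action MDP (illustrative numbers only).
u = np.array([[1.0, 0.0],
              [0.0, 2.0]])
p = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
beta, alpha = 0.95, np.array([0, 1])

V_alpha = policy_value(alpha, u, p, beta)

# The same value as a (truncated) multidimensional geometric series:
u_a = u[np.arange(2), alpha]
P_a = p[np.arange(2), alpha, :]
V_series, Pt = np.zeros(2), np.eye(2)
for t in range(500):
    V_series += (beta ** t) * (Pt @ u_a)
    Pt = Pt @ P_a
```

The linear solve and the truncated series agree to high precision, which is why policy evaluation reduces to solving a linear system.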
Thus, discrete MDP problems can be solved exactly (modulo rounding error in arithmetic operations), whereas the solutions to continuous MDP's can generally only be approximated. Approximate solution methods may also be attractive for solving discrete MDP's with a large number of possible states $|S|$ or actions $|A|$.

There are two basic strategies for approximating the solution to a continuous MDP: 1) discrete approximation, and 2) smooth approximation. Discrete approximation methods solve a finite state MDP problem that approximates the original continuous MDP problem over a finite grid of points in the state space $S$ and action space $A$. Since the methods for solving discrete MDP's have been well developed and exposited in the operations research literature (e.g. Bertsekas, 1987, Porteus, 1980 and Puterman, 1990, 1994), this chapter will only briefly review the standard approaches, focusing instead on recent research on discretization methods for solving continuous MDP's. Although smooth approximation methods also have a long history (dating back to Bellman, 1957, Bellman and Dreyfus, 1962, and Bellman, Kalaba and Kotkin, 1963), there has been a recent resurgence of interest in these methods as a potentially more efficient alternative to discretization methods (Christiano and Fisher, 1994, Johnson et al. 1993, Judd and Solnick 1994, Marcet and Marshall, 1994, and Smith 1991). Smooth approximation methods treat the value function $V$ and/or the decision rule $\alpha$ as smooth, flexible functions of $s$ and a finite-dimensional parameter vector $\theta$: examples include linear interpolation and a variety of more sophisticated approximation methods.⁴

⁴ Special care must be taken in problems with a continuum of actions, since discontinuities or kinks in numerical estimates of the conditional expectation of $V$ can create problems for numerical optimization algorithms used to recover $\alpha$. Also, certain solution methods focus on computing $\alpha$ directly rather than indirectly by first computing $V$.
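As a concrete instance of the discrete approximation strategy, the sketch below discretizes a simple continuous-state problem over an equi-spaced grid on $[0,1]$ and solves the resulting finite MDP by successive approximations. The problem is a deterministic "cake-eating" problem with utility $\sqrt{c}$; it is a hypothetical stand-in chosen because the grid is closed under the transition $w \mapsto w - c$, not one of this chapter's test problems:

```python
import numpy as np

# Discretize the continuous state space [0, 1] into N equi-spaced points.
# With an equi-spaced grid, w - c lands exactly on a grid point, so no
# interpolation is needed in this particular (hypothetical) example.
N, beta = 201, 0.9
grid = np.linspace(0.0, 1.0, N)

V = np.zeros(N)
for _ in range(1000):
    V_new = np.empty(N)
    for i in range(N):
        c = grid[: i + 1]                    # feasible consumption 0 <= c <= w
        # remaining cake w - c corresponds to grid indices i, i-1, ..., 0:
        V_new[i] = np.max(np.sqrt(c) + beta * V[i::-1])
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new
```

Note that the approximate value function inherits the monotonicity of the true solution, a property of discrete approximation methods the chapter returns to below.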
Such approximations include polynomial series expansions, neural networks, and cubic splines. We will provide special discussions of these methods in section 5. The objective of these methods is to choose a parameter $\hat\theta$ such that the implied approximations $V_{\hat\theta}$ or $\alpha_{\hat\theta}$ "best fit" the true solution $V$ and $\alpha$ according to some metric.⁵ In order to insure the convergence of a smooth approximation method, we need a parametrization that is sufficiently flexible to allow us to approximate arbitrary value functions $V$ in the set $B$. One way to do this is via an expanding sequence of parameterizations that are ultimately dense in $B$ in the sense that for each $V \in B$⁶

$$\lim_{k \to \infty} \inf_{\theta \in R^k} \| V_\theta - V \| = 0, \qquad (1.2)$$

where $\|V\| = \sup_{s \in S} |V(s)|$ denotes the usual "sup-norm". For example, consider an infinite horizon MDP problem with state space $S = [-1, 1]$. A natural choice for an expanding family of smooth approximations to $V$ is

$$V_\theta(s) = \sum_{i=1}^{k} \theta_i p_i(s), \qquad (1.3)$$

where $p_i(s) = \cos(i \cos^{-1}(s))$ is the $i$th Chebyshev polynomial.⁷ The Weierstrass approximation theorem implies that the Chebyshev polynomials are ultimately dense in the space $B = C[-1,1]$ of continuous functions on $[-1,1]$.⁸ Under the least squares criterion of goodness of fit, the problem is to choose a parameter $\theta \in \Theta \subset R^k$ to minimize the error function $\sigma_N$ defined by:

$$\sigma_N(\theta) \equiv \sqrt{ \sum_{i=1}^{N} \big| V_\theta(s_i) - \hat\Gamma(V_\theta)(s_i) \big|^2 }, \qquad (1.4)$$

where $\hat\Gamma$ is some computable approximation to the true Bellman operator $\Gamma$. Methods of this form are known as minimum residual (MR) methods since the parameter $\theta$ is chosen to set the residual function $R(V_\theta)(s) = V_\theta(s) - \hat\Gamma(V_\theta)(s)$ as close to the zero function as possible.
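To illustrate the parametric side of (1.2) and (1.3), the sketch below fits Chebyshev coefficients by least squares to a known smooth function standing in for a value function, and checks that the sup-norm error falls as $k$ grows. This is a deliberate simplification of the MR method: the residual is taken against the function itself rather than against $\hat\Gamma(V_\theta)$, and the target $e^s$ is a hypothetical choice:

```python
import numpy as np

def cheb_basis(s, k):
    """Columns are p_i(s) = cos(i * arccos(s)) for degrees i = 0, ..., k-1."""
    return np.cos(np.outer(np.arccos(s), np.arange(k)))

def fit_chebyshev(f, k, n_grid=200):
    """Least-squares Chebyshev coefficients for f on [-1, 1]."""
    nodes = np.cos(np.pi * (np.arange(n_grid) + 0.5) / n_grid)  # Chebyshev nodes
    theta, *_ = np.linalg.lstsq(cheb_basis(nodes, k), f(nodes), rcond=None)
    return theta

f = lambda s: np.exp(s)          # hypothetical smooth stand-in for V
s_test = np.linspace(-1.0, 1.0, 1001)

errors = []
for k in (3, 6, 12):
    theta = fit_chebyshev(f, k)
    sup_err = np.max(np.abs(cheb_basis(s_test, k) @ theta - f(s_test)))
    errors.append(sup_err)       # sup-norm error shrinks rapidly with k
```

For a smooth target the sup-norm error drops by orders of magnitude with each increase in $k$, which is the practical content of the denseness condition (1.2).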
The philosophy of these methods is that if the true value function $V$ can be well approximated by a flexible parametric function $V_\theta$ for a small number of parameters $k$, we will be able to find a better approximation to $V$ in significantly less cpu time by minimizing an error function such as $\sigma_N(\theta)$ in (1.4) than by "brute force" discretization methods.

At a sufficient level of abstraction the distinction between discrete and smooth approximation methods starts to evaporate, since discrete approximation based on $N$ grid points can always be regarded as a particular way of parametrizing the solution to the MDP problem (e.g. as a simple linear spline function that interpolates $V_N$ at the $N$ grid points). Also, as equation (1.4) illustrates, practical implementation of smooth approximation methods requires discretization methods of various sorts to compute various integrals, to provide grid points for interpolation, knots for splines, and so forth.

⁵ Other variants of smooth approximation methods use nonlinear equation solvers to find a value $\theta^*$ that satisfies a set of "orthogonality conditions". In finite horizon MDP's interpolation methods are used to store the value function. Other methods such as Johnson et al. 1993 use local polynomial interpolation of $V$ over a grid of points in $S$.

⁶ If the MDP problem has further structure that allows us to restrict the true value function to some subset of $B$ such as the space of continuous functions, $C(S)$, then the limit condition in equation (1.2) need only hold for each $V$ in this subset.

⁷ See Judd chapter 6 for a definition of Chebyshev and other families of orthogonal polynomials.

⁸ In fact we have the following error bound for equation (1.2): $\inf_{\theta \in R^k} \|V_\theta - V\| \le b \log(k)/k$ for some constant $b > 0$. See Judd chapter 6 for a formal statement of this result and Rivlin 1969 for a proof.
However there is a distinction between the two approaches that we believe is key to understanding their relative strengths and weaknesses: Discrete approximation methods focus on solving an approximate finite MDP problem whose associated Bellman operator $\Gamma_N : R^N \to R^N$ fully preserves the contraction mapping property of the true Bellman operator $\Gamma : B \to B$. In addition, an approximate fixed point $V_N$ of $\Gamma_N$ typically preserves the monotonicity and concavity properties of the true value function $V$. Smooth approximation methods such as the minimum residual method outlined in equations (1.3) to (1.4) transform the MDP problem into the problem of minimizing a smooth function $\sigma_N(\theta)$ that generally does not have any easily exploitable mathematical structure (beyond smoothness in $\theta$). Furthermore the parametrized value function $V_\theta$ may not retain any of the concavity or monotonicity properties that are known to hold for the true solution $V$. Of course, there are exceptions to this distinction as well. Section 5 describes a discrete approximation method of Coleman 1990, 1993 that computes the optimal decision rule $\alpha$ directly as the solution to a functional equation known as the Euler equation. The Euler equation has an associated Euler operator that maps policy functions into policy functions just as the Bellman equation has an associated Bellman operator that maps value functions into value functions. Unfortunately, the Euler operator is not a contraction mapping, which makes it more difficult to derive error bounds and prove the convergence of iterative methods such as successive approximations. Another exception is the smooth approximation method of Judd and Solnick 1994 that uses "shape preserving splines" to preserve concavity and monotonicity properties of $V$.

Approximate solution methods force us to confront a tradeoff between the maximum allowable error $\epsilon$ in the numerical solution and the amount of computer time (and storage space) needed to compute it.
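The contraction property that discrete approximation preserves can be checked numerically: for the discrete Bellman operator, $\|\Gamma(V) - \Gamma(W)\| \le \beta \|V - W\|$ in the sup-norm for any $V, W$ (Blackwell's theorem, reviewed in section 2.2). A sketch on a randomly generated MDP (the sizes and seed below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, beta = 20, 5, 0.95
u = rng.normal(size=(S, A))
p = rng.random(size=(S, A, S))
p /= p.sum(axis=2, keepdims=True)       # normalize rows into distributions

def Gamma(V):
    """Discrete Bellman operator for the randomly generated MDP."""
    return (u + beta * np.einsum("saj,j->sa", p, V)).max(axis=1)

# Empirical contraction moduli ||Gamma(V) - Gamma(W)|| / ||V - W||,
# which Blackwell's theorem bounds above by beta:
ratios = []
for _ in range(100):
    V, W = rng.normal(size=S), rng.normal(size=S)
    ratios.append(np.max(np.abs(Gamma(V) - Gamma(W))) / np.max(np.abs(V - W)))
```

Every observed ratio stays at or below $\beta$, which is exactly the structure that convergence proofs and error bounds for discrete methods exploit.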
Solution time will also be an increasing function of any relevant measure of the size or dimension $d$ of the MDP problem. It goes without saying that economists are interested in using the fastest possible algorithm to solve their MDP problem given any specified values of $(\epsilon, d)$. Economists are also concerned about using algorithms that are both accurate and numerically stable, since the MDP solution algorithm is often embedded or "nested" as a subroutine inside a larger optimization or equilibrium problem. Examples include computing competitive equilibria of stochastic general equilibrium models with incomplete markets (Hansen and Sargent, 1993, Imrohoroğlu and Imrohoroğlu, 1993, McGuire and Pakes, 1994), maximum likelihood estimation of unknown parameters of $u$ and $p$ using data on observed states and decisions of actual decision-makers (Eckstein and Wolpin, 1987, Rust, 1994 and Sargent, 1979), and computation and econometric estimation of Bayesian Nash equilibria of dynamic games (McKelvey and Palfrey, 1992). All of these problems are solved by "polyalgorithms" that contain MDP solution subroutines.⁹ Since the MDP problem must be repeatedly re-solved for various trial sequences of the "primitives" $(\beta, u, p)$, speed, accuracy, and numerical stability are critical.

A large number of alternative solution methods have been proposed in the last 40 years. Bellman was one of the original contributors to this literature, with pioneering work on both discrete and smooth approximation methods (e.g. the previously cited work on policy iteration and polynomial approximations of the value function). Recently there has been considerable controversy in the economics literature about the relative merits of discrete versus smooth approximation methods for solving continuous MDP problems.
Part of the controversy arose from a "horse race" in the 1990 Journal of Business and Economic Statistics (Taylor and Uhlig, 1990) in which a number of alternative solution methods competed in their ability to solve the classical Brock-Mirman stochastic growth model which we will describe in section 2.6. More recently Judd 1993 has claimed that

> Approximating continuous-state problems with finite state Markov chains limits the range of problems which can be analyzed. Fortunately, state-space discretization is unnecessary. For the past thirty years, the standard procedure in the Operations Research literature (see Bellman, 1963, Dantzig 1974, Daniel, 1976) has been to approximate value functions and policy rules over continuous state spaces with orthogonal polynomials, splines, or other suitable families of functions. This results in far faster algorithms and avoids the errors associated with making the problem unrealistically "lumpy". (p. 3)

This chapter offers some new perspectives on this debate by providing a conceptual framework for analyzing the accuracy and efficiency of various discrete and smooth approximation methods for continuous MDP problems. The framework is the theory of computational complexity (Garey and Johnson 1983, Traub and Woźniakowski 1980, and Traub, Wasilkowski and Woźniakowski (TWW), 1988). We provide a brief review of the main results of complexity theory in section 3. Complexity theory is of particular interest because it has succeeded in characterizing the form of optimal algorithms for various mathematical problems. Chow and Tsitsiklis 1989 used this theory to establish a lower bound on the amount of computer time required to solve a general class of continuous MDP's.

⁹ We will see that the MDP subroutine is itself a fairly complicated polyalgorithm consisting of individual subroutines for numerical integration, optimization, approximation, and solution of systems of linear and nonlinear equations.
A subsequent paper, Chow and Tsitsiklis 1991, presented a particular discrete approximation method — a multigrid algorithm — that is approximately optimal in a sense to be made precise in section 5.3. Thus, we will be appealing to complexity theory as a way of finding optimal strategies for finding optimal strategies.

There are two main branches of complexity theory, corresponding to discrete and continuous problems. Discrete (or algebraic) computational complexity applies to finite problems that can be solved exactly, such as matrix multiplication, the traveling salesman problem, linear programming, and discrete MDP's.¹⁰ The size of a discrete problem is indexed by an integer $d$ and the (worst case) complexity, comp($d$), denotes the minimal number of computer operations necessary to solve the hardest possible problem of size $d$ (or $\infty$ if there is no algorithm capable of solving the problem). Continuous computational complexity theory applies to continuous problems such as multivariate integration, function approximation, nonlinear programming, and continuous MDP problems. None of these problems can be solved exactly, but in each case the true solution can be approximated to within an arbitrarily small error tolerance $\epsilon$. Problem size is indexed by an integer $d$ denoting the dimension of the space that the continuous variable lives in (typically a subset of $R^d$), and the complexity, comp($\epsilon$, $d$), is defined as the minimal computational cost (cpu time) of solving the hardest possible $d$-dimensional problem to within a tolerance of $\epsilon$.

Complexity theory enables us to formalize an important practical limitation to our ability to solve increasingly detailed and realistic MDP problems: namely Bellman's 1955 curse of dimensionality. This is the well-known exponential rise in the amount of time and space required to compute the solution to a continuous MDP problem as the number of dimensions $d_s$ of the state variable or the number of dimensions $d_a$ of the control variable increases.

¹⁰ In the latter two problems we abstract from rounding error in computer arithmetic.
Although one typically thinks of the curse of dimensionality as arising from the discretization of continuous MDP's, it also occurs in finite MDP's that have many state and control variables. For example, a finite MDP with $d_s$ state variables, each of which can take on $|S| > 1$ possible values, has a total of $|S|^{d_s}$ possible states. The amount of computer time and storage required to solve such a problem increases exponentially fast as $d_s$ increases. Smooth approximation methods cannot escape the curse of dimensionality: for example, using the Chebyshev approximation in equation (1.3), the dimension $k$ of the parameter vector $\theta$ must increase at a sufficiently rapid rate as $d_s$ increases in order to guarantee that $\|V_{\hat\theta} - V\| \le \epsilon$. Indeed, the literature on approximation theory (Pinkus, 1985, Novak 1988) shows that $k$ must increase at rate $(1/\epsilon)^{(d/r)}$ in order to obtain uniform $\epsilon$-approximations of smooth multidimensional functions which are $r$-times continuously differentiable. Although there are certain nonlinear approximation methods such as neural networks that can be shown to require a polynomially rather than exponentially increasing number of parameters $k$ to obtain $\epsilon$-approximations for certain subclasses of functions (i.e. only $k = O(1/\epsilon^2)$ parameters are required to approximate the subclass of functions considered in Barron 1993, and Hornik, Stinchcombe and White 1992), these methods are still subject to the curse of dimensionality due to the curse of dimensionality involved in fitting the parameters $\theta$ of the neural net by some minimization method such as nonlinear least squares: the amount of computer time required to find an $\epsilon$-approximation to a global minimizer of an arbitrary smooth function increases exponentially fast as the number of parameters $k$ increases (Nemirovsky and Yudin, 1983), at least on a worst case basis.
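The arithmetic behind these growth rates is worth seeing once. The snippet below (with illustrative numbers) tabulates the state-space size $|S|^{d_s}$ of a tensor-product grid and the parameter count $k \approx (1/\epsilon)^{d/r}$ cited above from the approximation-theory literature:

```python
# Tensor-product grids: N points per state variable gives N**d_s states,
# so time and storage per Bellman iteration grow exponentially in d_s.
N = 100                                       # grid points per dimension
states = [N ** d for d in (1, 2, 3, 6)]       # 100, 10**4, 10**6, 10**12

# Smooth approximation faces the same curse: reaching uniform error eps for
# an r-times continuously differentiable function needs on the order of
# (1/eps)**(d/r) parameters (Pinkus 1985, Novak 1988).
eps, r = 1e-2, 2
k_needed = [(1.0 / eps) ** (d / r) for d in (1, 2, 4, 8)]
```

Already at six state variables a modest 100-point-per-dimension grid has $10^{12}$ states, and the parameter count for smooth approximation of an eight-dimensional, twice-differentiable function is of order $10^8$.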
Variations of smooth approximation methods that involve solutions of systems of $k$ nonlinear equations or which require multivariate interpolation or approximation are also subject to the curse of dimensionality (Sikorski, 1985, TWW, 1988).

Definition: A class of discrete MDP problems with $d_s$ state variables and $d_a$ control variables is subject to the curse of dimensionality if comp($d_s$, $d_a$) $= \Omega(2^{d_s + d_a})$. A class of continuous MDP problems is subject to the curse of dimensionality if comp($\epsilon$, $d_s$, $d_a$) $= \Omega(1/\epsilon^{(d_s + d_a)})$.¹¹

In the computer science literature, problems that are subject to the curse of dimensionality are called intractable.¹² If the complexity of the problem has an upper bound that only grows polynomially in $d$ we say that the problem is in the class $P$ of polynomial-time problems. Computer scientists refer to polynomial-time problems as tractable.¹³

¹¹ The notation $\Omega$ denotes a "lower bound" on complexity, i.e. comp($d$) $= \Omega(g(d))$ if $\lim_{d \to \infty} |g(d)/\text{comp}(d)| < \infty$.

¹² We prefer the terminology "curse of dimensionality" since the common use of the term "intractable" connotes a problem that can't be solved. Computer scientists have a specific terminology for problems that can't be solved in any finite amount of time: these problems have infinite complexity, and are classified as non-computable. However even though intractable problems are computable problems in the computer science terminology, as the problem grows large the lower bound on the solution time grows so quickly that these problems are not computable in any practical sense.

¹³ Here again it is important to note the difference between the common meaning of the term "tractable" and the computer science definition. Even so-called "tractable" polynomial-time problems can quickly become computationally infeasible if complexity satisfies comp($d$) $\ge O(d^k)$ for some large exponent $k$. However it seems to be a fortunate act of nature that the maximum exponent $k$ for most common polynomial-time problems is fairly small; typically $k \in [2,4]$.

A large number of mathematical problems have been proven to be intractable on a worst case basis.
These problems include multivariate integration, optimization, and function approximation (TWW, 1988), and solution of multidimensional partial differential equations (PDE's) and Fredholm integral equations (Werschulz, 1991). Lest the reader get too depressed about the potential usefulness of numerical methods at the outset, we note that there are some mathematical problems that are not subject to the curse of dimensionality. Problems such as linear programming and solution of ordinary differential equations have been proven to be in the class $P$ of polynomial time problems (Nemirovsky and Yudin, 1983 and Werschulz, 1991). Unfortunately Chow and Tsitsiklis 1989 proved that the general continuous MDP problem is also intractable. They showed that the complexity function for continuous MDP's with $d_s$ state variables and $d_a$ control variables is given by:

$$\text{comp}(\epsilon, d_s, d_a, \beta) = \Theta\left( \left( \frac{1}{(1-\beta)^2 \epsilon} \right)^{2 d_s + d_a} \right), \qquad (1.5)$$

where the symbol $\Theta$ denotes both an upper and lower bound on complexity.¹⁴ In subsequent work, Chow and Tsitsiklis 1991 developed a "one way multigrid" algorithm that comes within a factor of $1/|\log(\beta)|$ of achieving this lower bound on complexity, and in this sense is an approximately optimal algorithm for the MDP problem. As we will see, the multigrid algorithm is a particularly simple example of a discrete approximation method that is based on simple equi-spaced grids of $S$ and $A$.¹⁵ The fact that the Chow-Tsitsiklis lower bound on complexity increases exponentially fast in $d_s$ and $d_a$ tells us that the curse of dimensionality is an inherent aspect of continuous MDP problems that can't be circumvented by any solution algorithm, no matter how brilliantly designed. There are, however, three potential ways to legitimately circumvent the curse of dimensionality: 1) we can restrict attention to a limited class of MDP's such as the class of linear-quadratic MDP's, 2) we can use algorithms that work well on an average rather than worst case basis, or 3) we can use random rather than deterministic algorithms.
Chapter 5 by Sargent, McGrattan and Hansen demonstrates the payoffs to the first approach: they present highly efficient polynomial-time algorithms for solving the subclass of linear-quadratic

¹⁴ Formally, comp($d$) $= \Theta(g(d))$ if there exist constants $0 \le c_1 \le c_2$ such that $c_1 g(d) \le \text{comp}(d) \le c_2 g(d)$.

¹⁵ Discrete approximation methods have been found to be approximately optimal algorithms for other mathematical problems as well. For example Werschulz 1991 showed that the standard finite element method (FEM) is a nearly optimal complexity algorithm for solving linear elliptic PDE's.
