SQPlab -- A Matlab software for solving nonlinear optimization problems and optimal control problems

Version 0.4.4 (February 2009)

J. Charles Gilbert†

Contents

1 The optimization problem to solve
  1.1 The problem definition
  1.2 State constraints
2 Description of the method
  2.1 Osculating quadratic problem
    2.1.1 Solving the OQP completely
  2.2 Hessians and their approximation
    2.2.1 Newton methods
    2.2.2 Quasi-Newton methods
  2.3 Globalization techniques
    2.3.1 Globalization by linesearch
3 Usage
  3.1 The solver
  3.2 The simulator
    3.2.1 Free call
    3.2.2 Function and first order derivative computations
    3.2.3 Second order derivative computations
    3.2.4 Optimal control computations
  3.3 Calling sequence
  3.4 Other tools
    3.4.1 QR factorization
References
Index

The name of the code, SQPlab, stands for Sequential Quadratic Programming (SQP) laboratory. It is written in Matlab. The author uses this piece of software as a kind of laboratory for trying out techniques related to the SQP algorithm, but it has been designed so that it can be useful to many. It also accompanies the hanging chain project developed along part III of the book [1], which shows step by step how to implement many aspects of the algorithm. The code is still at an embryonic stage.

† INRIA-Rocquencourt, BP 105, F-78153 Le Chesnay Cedex, France; e-mail: [email protected].

1 The optimization problem to solve

1.1 The problem definition

SQPlab can solve a general nonlinear optimization problem of the form

    (P)    $\min_{x \in \mathbb{R}^n} f(x)$
           subject to  $l \le (x, c_I(x)) \le u$,
                       $c_E(x) = 0$,
                       $c_S(x) = 0$,                                          (1)

where $f : \mathbb{R}^n \to \mathbb{R}$, $c_I : \mathbb{R}^n \to \mathbb{R}^{m_I}$, and $c_E : \mathbb{R}^n \to \mathbb{R}^{m_E}$ are smooth nonlinear functions, possibly nonconvex. Smoothness means that at least first order differentiability is required. The notation $l \le (x, c_I(x)) \le u$ expresses in compact form bound constraints on $x$ and on $c_I(x)$. The bounds $l, u \in \bar{\mathbb{R}}^{n+m_I}$ must satisfy $l < u$ and may have infinite components (such components are simply ignored). Problem (P) can have $m_I \ge 0$ nonlinear inequality constraints and $m_E \ge 0$ equality constraints. The function $c_S : \mathbb{R}^n \to \mathbb{R}^{m_S}$ imposes additional equality constraints that are treated by SQPlab in a way appropriate to optimal control problems (see section 1.2); these $m_S \ge 0$ constraints are called state constraints below.

A solution of problem (P) is a vector $x_*$, made of $n$ components, that is feasible (i.e., that satisfies the constraints of problem (P)) and that gives $f$ a value not greater than the one given by any other feasible point.
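To make the data layout of (P) concrete, here is a minimal, hypothetical encoding of a small instance with $n = 2$ and $m_I = m_E = 1$; none of these anonymous functions come from SQPlab, they only illustrate which pieces the user must supply:

    % Hypothetical toy instance of (P) with n = 2, mI = 1, mE = 1, mS = 0.
    f  = @(x) 0.5*(x(1)^2 + 10*x(2)^2);         % smooth objective
    cI = @(x) x(1) + x(2);                      % inequality: lb(3) <= cI(x) <= ub(3)
    cE = @(x) x(1)*x(2) - 1;                    % equality: cE(x) = 0
    lb = [-inf; -inf; 0];                       % bounds on (x, cI(x)), n + mI entries
    ub = [ inf;  inf; 2];                       % infinite entries mean "no bound"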
To be concise, we write $c_B(x) \equiv x$ and let $c : \mathbb{R}^n \to \mathbb{R}^m$ be the function defined by $c(x) := (c_B(x), c_I(x), c_E(x), c_S(x))$; hence $m := n + m_I + m_E + m_S$. Occasionally, we will denote by $n_B$ the number of variables $x_i$ that are subject to a finite lower or upper bound.

For a vector $v \in \mathbb{R}^m$, we define the vector $v^{\#}$ by

    $(v^{\#})_i = \max(0,\; l_i - v_i,\; v_i - u_i)$   if $i \in B \cup I$,
    $(v^{\#})_i = v_i$                                 if $i \in E \cup S$.    (2)

Then all the constraints of (P) can be written $c(x)^{\#} = 0$. This does not make the problem easier, however, since $x \mapsto c(x)^{\#}$ is usually nondifferentiable; it is just a way of making the notation more concise.

The Lagrangian of the problem is the function $\ell : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ defined at $(x, \lambda)$ by

    $\ell(x, \lambda) = f(x) + \lambda^{\top} c(x)$.                           (3)

It is used to write the KKT optimality conditions of problem (P). Note that, for $i \in B \cup I$, $\lambda_i$ is actually the difference between the multiplier associated with the upper bound constraint $c_i(x) \le u_i$ and the one associated with the lower bound constraint $l_i \le c_i(x)$. If $x$ is a solution to (P) and if the constraints are qualified at $x$, there exists a vector $\lambda \in \mathbb{R}^m$ such that

    (a) $\nabla_x \ell(x, \lambda) = 0$,
    (b) $l \le (x, c_I(x)) \le u$, $c_E(x) = 0$, $c_S(x) = 0$,                 (4)
    (c) $\forall\, i \in B \cup I$: $\lambda_i^- (l_i - c_i(x)) = \lambda_i^+ (c_i(x) - u_i) = 0$,

where $t^+ := \max(t, 0)$ and $t^- := \max(-t, 0)$. In (c), infinite bounds are replaced by large numbers of the same sign, so that, for example, $\lambda_i$ must be nonnegative when $l_i = -\infty$ and $u_i$ is finite. The first condition expresses optimality proper, the second feasibility, and the third is known as the complementarity conditions. The components of the vector $\lambda$ in this equation are called the optimal KKT multipliers, the dual solutions, or the marginal costs.

1.2 State constraints

The constraint $c_S(x) = 0$ can be used to express state constraints in an optimal control setting. These constraints are assumed to be without singularities, in the sense that the Jacobian matrix

    $A_S(x) = c_S'(x)$

is assumed to be uniformly surjective, which means that

    $\exists\, \gamma > 0$, $\forall\, x \in X_S$, $\forall\, v \in \mathbb{R}^{m_S}$:  $\|A_S(x)^{\top} v\| \ge \gamma \|v\|$.    (5)

The $x$'s range over a large closed set $X_S$, which may not be the full space $\mathbb{R}^n$, but should include the solutions and the iterates generated by the algorithm.

When $A_S(x)$ is surjective, it has a right inverse, and this right inverse is assumed to be the value at $x$ of a smooth map

    $A_S^- : X_S \to \mathbb{R}^{n \times m_S} : x \mapsto A_S^-(x)$.

Hence

    $\forall\, x \in X_S$: $A_S^-(x)$ is injective and $A_S(x) A_S^-(x) = I_{m_S}$.    (6)

On the other hand, the null space $N(A_S(x))$, which is also the space tangent at $x$ to the manifold $\{x' \in \mathbb{R}^n : c_S(x') = c_S(x)\}$, has a basis formed of $n - m_S$ vectors. We assume that these vectors are given by a smooth map

    $Z_S^- : X_S \to \mathbb{R}^{n \times (n - m_S)} : x \mapsto Z_S^-(x)$.

More precisely, for all $x \in X_S$, the columns of $Z_S^-(x)$ form a basis of $N(A_S(x))$, or equivalently

    $\forall\, x \in X_S$: $Z_S^-(x)$ is injective and $A_S(x) Z_S^-(x) = 0$.    (7)

If state constraints are present ($m_S \ne 0$), SQPlab will ask the simulator to compute the products

    $A_S^-(x)\, v$  and  $Z_S^-(x)\, w$

for various $x \in X_S$, $v \in \mathbb{R}^{m_S}$, and $w \in \mathbb{R}^{n - m_S}$.

In optimal control problems, these operators $A_S^-$ and $Z_S^-$ can be deduced from a partition of the variables $x = (y, u)$ into state variables $y \in \mathbb{R}^{m_S}$ and control variables $u \in \mathbb{R}^{n - m_S}$. Consider the corresponding partition of the Jacobian $A_S(x)$:

    $A_S(x) = (\, B(x) \;\; N(x) \,)$,

where the square matrix $B(x)$ is supposed to be nonsingular for $x \in X_S$, with $\{B(x)\}_{x \in X_S}$ and $\{B(x)^{-1}\}_{x \in X_S}$ bounded. Then one can take

    $A_S^-(x) = \begin{pmatrix} B(x)^{-1} \\ 0 \end{pmatrix}$  and  $Z_S^-(x) = \begin{pmatrix} -B(x)^{-1} N(x) \\ I_{n-m_S} \end{pmatrix}$.
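With this $(y, u)$ partition, the two products requested from the simulator reduce to linear solves with $B(x)$. A minimal sketch, assuming $B$ and $N$ are available as dense matrices (the helper name is hypothetical, not part of SQPlab):

    % Products A_S^-(x)*v and Z_S^-(x)*w from the partition A_S(x) = [B N];
    % B is mS x mS and nonsingular, N is mS x (n-mS). Hypothetical helper.
    function [Av, Zw] = state_products (B, N, v, w)
      Av = [B \ v; zeros(size(N,2), 1)];        % A_S^-(x)*v = [B^{-1}*v; 0]
      Zw = [-(B \ (N*w)); w];                   % Z_S^-(x)*w = [-B^{-1}*N*w; w]
    end

In a real optimal control code, one would of course exploit the structure of $B(x)$ (e.g., a block bidiagonal matrix coming from the discretized state equation) rather than forming and factoring it densely.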
2 Description of the method

One iteration of the SQP algorithm (see part III of [1] for an introduction) is made of a sequence of stages, which we describe in order in this section. Only the elements that are useful for understanding the behavior of SQPlab are given.

2.1 Osculating quadratic problem

The SQP algorithm decomposes problem (P) into a sequence of quadratic subproblems (quadratic objective and linear constraints). Such a subproblem is called an osculating quadratic problem (OQP, or simply QP). At the current iterate $(x, \lambda) \in \mathbb{R}^n \times \mathbb{R}^m$, it reads (we drop the dependence of the functions on $(x, \lambda)$):

    $\min_{d \in \mathbb{R}^n}\; g^{\top} d + \frac{1}{2}\, d^{\top} M d$
    subject to  $\tilde{l} \le (d, A_I d) \le \tilde{u}$,                      (8)
                $c_{E \cup S} + A_{E \cup S}\, d = 0$,

where $g$ is the gradient $\nabla f(x)$, $M$ is either the Hessian of the Lagrangian $L := \nabla^2_{xx} \ell(x, \lambda)$ or an approximation to it, $A_I := c_I'(x)$, $A_{E \cup S} := c_{E \cup S}'(x)$, $\tilde{l} := l - c_{B \cup I}(x)$, and $\tilde{u} := u - c_{B \cup I}(x)$. It is classical to impose the positive semidefiniteness of $M$ (even when $L$ does not have that property), in order to avoid an OQP that would otherwise be NP-hard. The osculating QP is then convex. See section 2.2 for more details on the computation of $M$.

2.1.1 Solving the OQP completely

When $S = \varnothing$, SQPlab solves (8) with the QP solver quadprog from the Optimization Toolbox of Matlab.

When $S \ne \varnothing$, SQPlab eliminates the linearized state constraints from (8) as follows. Any solution to problem (8) verifies $c_S + A_S d = 0$, so that, with the operators $A_S^- \equiv A_S^-(x)$ and $Z_S^- \equiv Z_S^-(x)$ defined in section 1.2, it can be written

    $d = r + t$,                                                               (9)

where $r \in \mathcal{R}(A_S^-)$ is the restoration step and $t \in \mathcal{R}(Z_S^-)$ is the tangent step. The step $t$ is indeed tangent to the manifold $c_S^{-1}(c_S(x))$, which is "parallel" to the state constraint manifold $c_S^{-1}(0)$. Clearly, there holds

    $r := -A_S^- c_S \in \mathbb{R}^n$,                                        (10)

while $t := Z_S^- h$, with $h \in \mathbb{R}^{n - m_S}$ a solution to the tangent quadratic problem:

    $\min_{h \in \mathbb{R}^{n - m_S}}\; (g + M r)^{\top} Z_S^- h + \frac{1}{2}\, h^{\top} Z_S^{-\top} M Z_S^- h$
    subject to  $\tilde{l}' \le (Z_S^- h, A_I Z_S^- h) \le \tilde{u}'$,        (11)
                $c_E + A_E r + A_E Z_S^- h = 0$.

We have denoted by $\bar{g} := Z_S^{-\top} g$ the reduced gradient, $\tilde{l}' := l - c_{B \cup I}(x) - A_{B \cup I}\, r$, and $\tilde{u}' := u - c_{B \cup I}(x) - A_{B \cup I}\, r$. The $(n - m_S) \times (n - m_S)$ matrix $\bar{M} := Z_S^{-\top} M Z_S^-$ is called the reduced matrix. The present version of the software does not allow you to solve problems with $S \ne \varnothing$ and inequality constraints.
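For the case without state constraints, the reduction of (8) to a quadprog call is straightforward. The sketch below is a plausible rendering of that step, not SQPlab's actual code; the variable names are made up, and the two-sided inequality on $A_I d$ is rewritten as one-sided rows:

    % Sketch: solving the OQP (8) with quadprog when S is empty.
    % M (n x n, positive semidefinite), g (n x 1), AI (mI x n), AE (mE x n),
    % cE (mE x 1); ltB/utB are the first n components of ltilde/utilde
    % (bounds on d), ltI/utI the last mI components (bounds on AI*d).
    Aineq = [ AI; -AI ];                  % two-sided ltI <= AI*d <= utI becomes
    bineq = [ utI; -ltI ];                % one-sided rows (inf entries are harmless)
    [d, ~, exitflag, ~, lam] = quadprog (M, g, Aineq, bineq, AE, -cE, ltB, utB);
    % lam.ineqlin, lam.eqlin, lam.lower, lam.upper hold the QP multipliers
    % from which lambda^QP in (18) below can be assembled.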
2.2 Hessians and their approximation

2.2.1 Newton methods

When second derivatives are computed, the SQP algorithm is a Newton-like method applied to the optimality conditions. In particular, provided the starting point is close enough to a regular stationary point $(x_*, \lambda_*)$ (local convergence) and some mild assumptions are satisfied, the algorithm generates a sequence of primal-dual iterates $(x_k, \lambda_k)$ that converges quadratically to $(x_*, \lambda_*)$. The Newton algorithm is selected by setting (see section 3.1)

    options.algo_method = 'Newton';

The implementation of the Newton method has two variants, depending on the value of $m_S$.

When $m_S = 0$ (standard problems), this Newton-like method requires taking for the symmetric $n \times n$ matrix $M$ in the osculating quadratic problem (8) the Hessian of the Lagrangian

    $L \equiv L(x, \lambda) \equiv \nabla^2_{xx} \ell(x, \lambda)$,            (12)

which is the $n \times n$ symmetric matrix of the second order derivatives of the Lagrangian $\ell$ with respect to $x$. Its $(i, j)$ element is therefore given by

    $L_{ij} = \frac{\partial^2 \ell}{\partial x_i\, \partial x_j}(x, \lambda)$.    (13)

When $m_S \ne 0$ (optimal control problems), the osculating QP becomes (11), which requires the computation of the reduced Hessian of the Lagrangian

    $\bar{L} := Z_S^{-\top} L\, Z_S^- \equiv Z_S^-(x)^{\top} L(x, \lambda)\, Z_S^-(x)$    (14)

as well as the matrix-vector product

    $L\, r$.                                                                   (15)

2.2.2 Quasi-Newton methods

Quasi-Newton algorithms in SQPlab are selected by the setting (see section 3.1)

    options.algo_method = 'quasi-Newton';

They generate approximations $M_k$ of the Hessians by updating them with the BFGS formula (16) below (see [1] for example). Given two well chosen vectors $s_k$ and $y_k$ belonging to the same space, the updated matrix $M_{k+1}$ is given by

    $M_{k+1} = M_k - \frac{M_k s_k s_k^{\top} M_k}{s_k^{\top} M_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}$.    (16)

Since $y_k = M_{k+1} s_k$ and $M_{k+1}$ is a Hessian approximation, it makes sense to choose for $y_k$ the change in some gradient from $x_k$ to $x_{k+1} = x_k + s_k$. Obviously, the BFGS formula preserves symmetry. It also preserves positive definiteness when the monotonicity condition

    $y_k^{\top} s_k > 0$                                                       (17)

is satisfied [2].

In some cases (essentially unconstrained problems or problems with only state constraints), the monotonicity condition can nicely be realized by (piecewise) linesearch. When the structure of the problem does not allow the linesearch to guarantee the monotonicity condition, the condition is ensured by the following heuristics, known as the Powell correction [3] (even if there is no linesearch). If $y_k^{\top} s_k$ is sufficiently positive, in the sense that $y_k^{\top} s_k \ge \kappa\, s_k^{\top} M_k s_k$, where the constant $\kappa \simeq 0.2$, the vectors $y_k$ and $s_k$ are used in the BFGS formula. Otherwise, $y_k$ is replaced by

    $y_k^P := \theta_k y_k + (1 - \theta_k) M_k s_k$,

where $\theta_k$ is the greatest number in $[0, 1]$ such that $(y_k^P)^{\top} s_k \ge \kappa\, s_k^{\top} M_k s_k$. A similar technique can be used when $W_k := M_k^{-1}$ is updated by the inverse BFGS formula to approximate some inverse Hessian. In that case, it is more appropriate to modify $s_k$ into

    $s_k^P := \theta_k' s_k + (1 - \theta_k') W_k y_k$,

where $\theta_k'$ is the greatest number in $[0, 1]$ such that $(s_k^P)^{\top} y_k \ge \kappa\, y_k^{\top} W_k y_k$. This heuristics often provides good results, but can also yield ill-conditioned matrices $M_k$ or $W_k$.

When $m_S = 0$ (standard problems), the matrices $M_k$ try to approximate as well as possible the full Hessian of the Lagrangian $L(x_k, \lambda_k)$. If there is no constraint (or only active affine constraints), the monotonicity condition (17) is ensured by the Wolfe linesearch, which provides appropriate vectors $s_k = x_{k+1} - x_k = \alpha_k d_k$ and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$ satisfying (17) (see section 2.3). In the presence of constraints, one should take for $y_k$ the change in the gradient of the Lagrangian

    $y_k^{\ell} = \nabla_x \ell(x_{k+1}, \lambda_{k+1}) - \nabla_x \ell(x_k, \lambda_{k+1})$.

However, since the Lagrangian may never have a positive curvature along $d_k$, even close to the solution, the monotonicity condition $(y_k^{\ell})^{\top} s_k > 0$ cannot be ensured by linesearch along $d_k$. Therefore, the actual $y_k$ is obtained by applying the Powell correction to $y_k^{\ell}$.
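A compact sketch of a BFGS update guarded by the Powell correction just described (illustrative Matlab with a hypothetical function name; SQPlab's own implementation may differ in its details):

    % Damped BFGS update of M (Powell's correction), kappa ~ 0.2.
    function M = bfgs_powell (M, s, y, kappa)
      Ms = M*s; sMs = s'*Ms; ys = y'*s;
      if ys < kappa*sMs                         % monotonicity (17) too weak:
        theta = (1 - kappa)*sMs/(sMs - ys);     % greatest theta in [0,1] with
        y  = theta*y + (1 - theta)*Ms;          % (y^P)'*s >= kappa*s'*M*s
        ys = y'*s;                              % now ys = kappa*sMs exactly
      end
      M = M - (Ms*Ms')/sMs + (y*y')/ys;         % BFGS formula (16)
    end

The closed-form $\theta_k = (1 - \kappa)\, s_k^{\top} M_k s_k / (s_k^{\top} M_k s_k - y_k^{\top} s_k)$ is the standard way to realize "the greatest $\theta_k$ in $[0, 1]$"; the guard keeps $y_k^{\top} s_k$ bounded away from zero, so that (16) preserves positive definiteness.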
2.3 Globalization techniques

To ensure convergence from remote starting points, SQPlab combines a merit function and linesearch. Globalization can be deactivated by setting

    options.algo_globalization = 'unit stepsize';

in which case the local method described in section 2.1 applies.

2.3.1 Globalization by linesearch

This is the default globalization option of the solver; it can be requested explicitly by setting (see section 3.1)

    options.algo_globalization = 'linesearch';

With linesearch, the generated search directions need to have a descent property for some merit function. For the time being, this property cannot be ensured with Newton's method, so the algorithm can fail in that case. For the quasi-Newton approach, the descent property can be ensured by various techniques. The simplest one (the default) consists in modifying the vector $y_k$ used in the BFGS formula, using Powell's corrections as described in section 2.2.2. This is requested by setting (see section 3.1)

    options.algo_descent = 'Powell';

If the problem has no constraint, or only state constraints, the positive definiteness of the generated matrices can be ensured by using Wolfe's linesearch (or an extension of it). This is requested by setting (see section 3.1)

    options.algo_descent = 'Wolfe';

A merit function gathers the two aspects of problem (P): minimality of the criterion $f$ and feasibility of the constraints $c$. In SQPlab, we use the merit function $\Theta_{\mu,\sigma} : \mathbb{R}^n \to \mathbb{R}$ defined at $x \in \mathbb{R}^n$ by

    $\Theta_{\mu,\sigma}(x) = f(x) + \mu^{\top} c(x) + \sigma\, \|c(x)^{\#}\|_1$,

where $\mu$ is an updated multiplier estimate (it is nonzero only when there are state constraints and options.algo_descent is set to 'Wolfe'), $\sigma > 0$ is a penalty parameter, $\|\cdot\|_1$ is the $\ell_1$-norm, and the notation $c(x)^{\#}$ has been defined in (2).

Once the primal-dual solution $(d, \lambda^{QP})$ to the osculating quadratic problem (8) has been computed, the new iterate $(x_+, \lambda_+)$ is obtained by

    $x_+ = x + \alpha d$  and  $\lambda_+ = \lambda + \alpha(\lambda^{QP} - \lambda)$,    (18)

where the stepsize $\alpha > 0$ can be set to one (this is convenient in a neighborhood of the solution) or determined by linesearch. The latter forces the decrease of $\Theta_{\mu,\sigma}$.
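The decrease requirement on $\Theta_{\mu,\sigma}$ is typically enforced by a backtracking (Armijo-type) linesearch. The following is a minimal sketch of that idea under the simplifying assumption $\mu = 0$; the halving factor, the constant omega, and the names f, csharp, and Dtheta are illustrative choices, not SQPlab internals:

    % Illustrative Armijo backtracking on theta(x) = f(x) + sigma*norm(csharp(x),1)
    % (mu = 0 for simplicity). Dtheta < 0 bounds the directional derivative of
    % theta at x along d; f and csharp are hypothetical user functions.
    alpha = 1; omega = 1e-4;
    theta0 = f(x) + sigma*norm(csharp(x), 1);
    while f(x + alpha*d) + sigma*norm(csharp(x + alpha*d), 1) > theta0 + omega*alpha*Dtheta
      alpha = alpha/2;                          % halve the step until sufficient decrease
    end
    x = x + alpha*d;                            % accept the step, cf. (18)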
3 Usage

The arguments of the procedure sqplab are described in section 3.1. In section 3.3, we give the typical sequence of statements that must precede a call to the solver.

3.1 The solver

Here is the form of the sqplab procedure:

    [x, lm, info] = sqplab (@simul, x, lm, lb, ub, options)

Input arguments. The first two input arguments are mandatory; the other ones are optional. If an optional argument is present, those preceding it must also be present.

@simul: the function handle of the user-supplied simulator. If the simulator is named mysimul, use the handle @mysimul as the first argument of sqplab. In SQPlab, the simulator is indeed called by [...] = simul(...), not by [...] = feval(simul, ...). See section 3.2 for more details.

x: vector giving the initial guess of the solution to problem (P). The length of x is used to determine the number of variables $n$.

lm (optional): vector giving the initial guess of the dual solution (Lagrange or KKT multiplier):

- lm(1:n) = $\lambda_B^+ - \lambda_B^-$ is the difference between the multiplier $\lambda_B^+$ associated with the bound constraint $x \le u_B$ and the multiplier $\lambda_B^-$ associated with the bound constraint $l_B \le x$,
- lm(n+1:n+mi) = $\lambda_I^+ - \lambda_I^-$ is the difference between the multiplier $\lambda_I^+$ associated with the inequality constraint $c_I(x) \le u_I$ and the multiplier $\lambda_I^-$ associated with the inequality constraint $l_I \le c_I(x)$,
- lm(n+mi+1:n+mi+me) = $\lambda_E$ is the multiplier associated with the equality constraint $c_E(x) = 0$,
- lm(n+mi+me+1:n+mi+me+ms) = $\lambda_S$ is the multiplier associated with the state constraint $c_S(x) = 0$.

The dimensions mi = $m_I$, me = $m_E$, and ms = $m_S$ are known after the first call to the simulator (see section 3.2). It is often difficult to find a good value for the multiplier $\lambda$; it actually has the meaning of a marginal cost, specifying how the optimal cost varies when the corresponding constraint is perturbed. If you have no idea of that value, just set lm = [], letting sqplab choose the best multiplier itself. The default value computed by sqplab is the least-squares multiplier (this computation is done when lm is set to [] or is not present as an argument).

lb (optional): vector of dimension $n + m_I$ giving the lower bounds on x (first $n$ components) and on $c_I(x)$. It can have infinite components. The default value is -inf (no lower bound).

ub (optional): vector of dimension $n + m_I$ giving the upper bounds on x (first $n$ components) and on $c_I(x)$. It can have infinite components. The default value is inf (no upper bound).

options (optional): structure for tuning the behavior of sqplab. In the strings below, case is meaningless and multiple white spaces are treated as a single white space. The following fields can be used.

- options.algo_descent specifies the technique used to ensure descent of the direction solution to the osculating QP.
  - 'Powell': assumes that the Hessian of the Lagrangian is approximated by the BFGS formula (hence algo_method = 'quasi-Newton') and that the positive definiteness of the generated matrices is ensured by Powell's corrections.
  - 'Wolfe': assumes that the Hessian of the Lagrangian is approximated by the BFGS formula (hence algo_method = 'quasi-Newton') and that the positive definiteness of the generated matrices is ensured by the Wolfe linesearch.
- options.algo_globalization specifies the type of globalization technique to use.
  - 'unit stepsize': prevents sqplab from using a globalization technique. In other words, $\alpha$ in (18) is set to 1.
  - 'linesearch' (default): requires sqplab to force convergence with linesearch or piecewise linesearch.
- options.algo_method specifies the second order information used in the algorithm:
  - 'Newton' requires a Newton algorithm; second order derivatives will be computed, either in the form of the Hessian of the Lagrangian $L$ defined by (12)-(13) (when $m_S = 0$) or in the form of the reduced Hessian of the Lagrangian $\bar{L}$ defined by (14) (when $m_S \ne 0$); see section 2.2.1;
  - 'quasi-Newton' (default) requires a quasi-Newton algorithm; only first order derivatives will be computed; when $m_S = 0$, the full Hessian of the Lagrangian is approximated; when $m_S \ne 0$ (optimal control problems), the reduced Hessian of the Lagrangian is approximated, and at least two linearizations of the state constraints are performed at each iteration; see section 2.2.2.
- options.df1 can be used in unconstrained optimization to specify the expected decrease of the objective function at the first iteration. When it is positive (> 0), this value is used by the quasi-Newton algorithm to scale the initial matrix (using Fletcher's formula), hence avoiding useless stepsize trials at the first iteration. A good value of options.df1 is often difficult to find, but for least-squares problems the value f(x_1)/10 or f(x_1)/100 is often adequate. As a rule, a too large value is not dangerous: the stepsize is then reduced by SQPlab in a few internal iterations. On the other hand, an excessively small value of options.df1 could, due to rounding errors, force SQPlab to stop at the first iteration. The default value is 0 (unknown expected initial decrease).
- options.dxmin is a positive number specifying the precision to which the primal variables must be determined. If sqplab needs to make a step smaller than dxmin in the infinity norm to progress towards optimality, it stops. In this respect, a too small value of dxmin will force the solver to work for nothing at the very end, when rounding errors prevent it from making any progress. The value of dxmin is also used to detect active bounds (for the bound constraints on x and the bound constraints on $c_I(x)$), so this value intervenes in the complementarity conditions. The default value is 1.e-8.
- options.fout is the file identifier (FID) for the printed outputs. The default value is 1, which means that the outputs are written on the screen.
- options.inf is used to specify infinite (or nonexistent) bounds. A lower bound lb(i) <= -options.inf is considered to be infinitely negative (or nonexistent), and an upper bound ub(i) >= options.inf is considered to be infinitely positive (or nonexistent). The default value is inf (the largest number in Matlab).
- options.miter is the maximum number of iterations. The default value is 1000.
- options.tol gives the tolerances on optimality:
  - options.tol(1) is the tolerance on the gradient of the Lagrangian,
  - options.tol(2) is the tolerance on the feasibility,
  - options.tol(3) is the tolerance on the complementarity.

  More specifically, as soon as $(x, \lambda)$ satisfies

      $\|\nabla_x \ell(x, \lambda)\|_\infty \le$ options.tol(1),
      $\|c(x)^{\#}\|_\infty \le$ options.tol(2),

  and complementarity is satisfied, it is considered to be optimal.
- options.verbose is the verbosity level for the outputs:
  - = 0: nothing is printed; the only way to be informed of the behavior of sqplab is to look at the structure info (see below);
  - >= 1: error messages (default);
  - >= 2: initial setting and final status;
  - >= 3: one line per iteration;
  - >= 4: details on the iterations;
  - >= 5: details on the step computation and on the globalization;
  - = 6: some additional information is printed, generally requiring expensive computations, such as the evaluation of the eigenvalues of $M$.

Output arguments. All output arguments are optional. If an output argument is present, those preceding it must also be present.

x: vector of dimension $n$ giving the computed primal solution x.

lm: vector of dimension $m = n + m_I + m_E + m_S$ giving the computed dual solution or multiplier $\lambda$. See the description of the input variable lm for the meaning of its components.

info: structure providing various information on the minimization realized by sqplab. The following fields are meaningful.

- info.ae: value of the Jacobian of the equality constraint function $c_E$ at the final point x.
- info.ai: value of the Jacobian of the inequality constraint function $c_I$ at the final point x.
- info.ce: value of the equality constraint function $c_E$ at the final point x.
- info.ci: value of the inequality constraint function $c_I$ at the final point x.
- info.compl: value of the complementarity at the final point $(x, \lambda)$.
- info.cs: value of the state constraint function $c_S$ at the final point x.
- info.f: value of the cost function at the final point x.
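Putting the pieces together, a plausible calling sequence looks as follows; mysimul and the problem data are hypothetical, while the option fields and strings are the documented ones:

    % Sketch of a typical calling sequence (mysimul is a user-written simulator).
    x0 = [0.5; 1.5];                            % initial guess, n = 2
    lb = [-inf; -inf; 0];                       % lower bounds on (x, cI(x))
    ub = [ inf;  inf; 2];                       % upper bounds on (x, cI(x))
    options.algo_method        = 'quasi-Newton';
    options.algo_globalization = 'linesearch';
    options.algo_descent       = 'Powell';
    options.tol     = [1e-6 1e-6 1e-6];         % gradient, feasibility, complementarity
    options.miter   = 200;
    options.verbose = 3;                        % one line per iteration
    [x, lm, info] = sqplab (@mysimul, x0, [], lb, ub, options);
    fprintf ('final cost %g, ||ce|| = %g\n', info.f, norm(info.ce));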
