Optimization: Proceedings of the Fifth French-German Conference held in Castel-Novel (Varetz), France, Oct. 3-8, 1988
Lecture Notes in Mathematics
Edited by A. Dold, B. Eckmann and F. Takens

1405

S. Dolecki (Ed.)

Optimization
Proceedings of the Fifth French-German Conference held in Castel-Novel (Varetz), France, Oct. 3-8, 1988

Springer-Verlag
Berlin Heidelberg New York London Paris Tokyo Hong Kong

Editor
Szymon Dolecki
Département de Mathématiques, Faculté des Sciences
123 rue A. Thomas, 87060 Limoges, France

Mathematics Subject Classification (1980): 49-06

ISBN 3-540-51970-X Springer-Verlag Berlin Heidelberg New York
ISBN 0-387-51970-X Springer-Verlag New York Berlin Heidelberg

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in other ways, and storage in data banks. Duplication of this publication or parts thereof is only permitted under the provisions of the German Copyright Law of September 9, 1965, in its version of June 24, 1985, and a copyright fee must always be paid. Violations fall under the prosecution act of the German Copyright Law.

© Springer-Verlag Berlin Heidelberg 1989
Printed in Germany
Printing and binding: Druckhaus Beltz, Hemsbach/Bergstr.
2146/3140-543210 - Printed on acid-free paper

INTRODUCTION

I am pleased to present the proceedings of the Fifth French-German Conference on Optimization, held in the castle Castel-Novel in Varetz near Brive from the 3rd to the 8th of October 1988. Its aim was to review the work carried out by various research groups, to intensify the exchange of ideas and to evaluate the state of the art and the trends in the area of optimization. As a consequence of the spectacular growth in the speed of computation, one witnesses an increasing role of discrete optimization and of computational complexity questions.
In order to reflect these trends, a survey talk on discrete optimization was invited and a special session was dedicated to projective methods in linear programming. During the meeting, state-of-the-art talks were given on selected topics: identification (H. Bock, Heidelberg), nonsmooth optimization (A. D. Ioffe, Haifa), discrete optimization (B. Korte, Bonn), sensitivity analysis (K. Malanowski, Warszawa), projective methods (J.-Ph. Vial, Genève). The contents of the volume differ slightly from the program of the conference (the latter is recalled on pages V and VI). In fact, results published elsewhere do not appear here, as the rules of the Lecture Notes require. On the other hand, some authors unable to attend the meeting wished nevertheless to contribute to the proceedings. I am very grateful to the referees of this volume for the excellent work that they have done.

Previous French-German conferences were organized: first in Oberwolfach (16-24 March 1980) by A. Auslender, W. Oettli and J. Stoer; second in Confolant (16-20 March 1981) by J.-B. Hiriart-Urruty; third in Luminy (2-6 July 1984) by C. Lemaréchal; fourth in Irsee (21-26 April 1986) by K.-H. Hoffmann, J. Zowe, J.-B. Hiriart-Urruty and C. Lemaréchal.

The Equipe d'Analyse non linéaire et d'Optimisation did its best to prepare this meeting. In addition, I received valuable assistance from several members of the Scientific Committee; I owe them my gratitude. Let me express high appreciation to the institutions that supported this conference, which would not have taken place without their generosity. I gratefully acknowledge the cordial reception by the City of Brive, which also offered a recital (by Thérèse Dussaut and Jean Barthe), and by the Conseil Général de la Dordogne during the excursion to Sarlat and the Grottes de Lascaux.

Limoges, the 5th of June 1989
Szymon Dolecki

SCIENTIFIC COMMITTEE
S. Dolecki (Limoges), J.-B. Hiriart-Urruty (Toulouse), K.-H. Hoffmann (Augsburg), W.
Krabs (Darmstadt), C. Lemaréchal (Rocquencourt), J.-P. Penot (Pau), J. Stoer (Würzburg), J. Zowe (Bayreuth).

ORGANIZING COMMITTEE
S. Dolecki, J. Guillerme, C. Malivert, M. Théra (Equipe d'Analyse non linéaire et Optimisation de Limoges) and V. Piqueras (clerical assistant).

SUPPORTING INSTITUTIONS
Conseil Général de la Corrèze
Conseil Régional du Limousin
Département de Mathématiques de Limoges
Direction des Recherches Etudes et Techniques
Electricité de France
European Research Office
Ministère des Affaires Etrangères
Ministère de l'Education Nationale
Ministère de la Recherche et de la Technologie
Université de Limoges
Ville de Brive
Excursion partially supported by Conseil Général de la Dordogne
Technical support: Macorbur

SPONSORS
Centre National de la Recherche Scientifique
Faculté des Sciences de Limoges
Société Mathématique de France

TABLE OF CONTENTS
A smoothing technique for nondifferentiable optimization problems ... 1
A. Ben-Tal and M. Teboulle
Local study of Newton type algorithms for constrained problems ... 13
J. F. Bonnans
Numerical study of projective methods for linear programming ... 25
C. Fraley and J.-Ph. Vial
Applications and numerical convergence of the partial inverse method ... 39
H. Idrissi, O. Lefebvre and C. Michelot
On some recent developments in the theory of second order optimality conditions ... 55
A. Ioffe
On the method of analytic centers for solving smooth convex programs ... 69
F. Jarre
An optimization problem with a piecewise linear objective and conditional threshold constraints. Solution by partitioning ... 87
B. Lemaire and C. Lemaire-Misonne
ε-Regularized two-level optimization problems: approximation and existence results ... 99
P. Loridan and J.
Morgan
Variational convergence and perturbed proximal method for saddle point problems ... 115
K. Mouallif
Note on Pareto optimality and duality for certain nonlinear systems ... 141
Werner Oettli
An O(n^3 L) approximate center method for linear programming ... 147
C. Roos
Improvement and implementation of some algorithms for nonconvex optimization problems ... 159
Tran Vu Thieu
On a search problem in graph theory ... 171
Eberhard Triesch
Gradient Newton flows for complex polynomials ... 177
F. Twilt, P. Jonker and M. Streng
A unified approach to projective algorithms for linear programming ... 191
Jean-Philippe Vial

Optimization, Fifth French-German Conference Castel Novel 1988, Lecture Notes in Mathematics 1405, Springer-Verlag 1989, pp. 1-11

A SMOOTHING TECHNIQUE FOR NONDIFFERENTIABLE OPTIMIZATION PROBLEMS

A. BEN-TAL(1) AND M. TEBOULLE(2)

Abstract. We introduce a smoothing technique for nondifferentiable optimization problems. The approach is to replace the original problem by an approximate one which is controlled by a smoothing parameter. The recession function is instrumental in the construction of the approximate problem. An a priori bound on the difference between the optimal values of the original problem and the approximate one is explicitly derived in terms of the smoothing parameter. The relationships between the primal approximated problem and its corresponding dual are investigated.

1. INTRODUCTION

In this paper we introduce a smoothing mechanism for nondifferentiable optimization problems. The idea underlying our approach is to approximate the original nondifferentiable problem by a perturbed problem. The basic tool to generate such an approximate problem is through the use of recession functions.
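The mechanics of this recession-function smoothing can be previewed with a tiny numerical sketch. The choice g(t) = sqrt(1 + t^2) here is an illustrative assumption (it reappears as the one-dimensional building block of Example 2.1 below); its recession function is |y|, which the scaled functions ε g(y/ε) approach as ε → 0:

```python
import math

def g(t):
    # Illustrative smooth proper convex function: g(t) = sqrt(1 + t^2).
    # Its recession function is g0+(y) = |y|.
    return math.sqrt(1.0 + t * t)

def F_eps(y, eps):
    # Smoothed approximation F_eps(y) = eps * g(y / eps) = sqrt(y^2 + eps^2).
    return eps * g(y / eps)

# As eps -> 0, F_eps(y) decreases toward the recession function |y|.
for eps in (1.0, 0.1, 0.001):
    print(eps, F_eps(2.0, eps))
```

The printed values decrease toward |2.0| = 2.0, illustrating how the smoothing parameter trades smoothness against accuracy.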
The resulting approximate problem is a smooth optimization problem which contains a smoothing parameter. This parameter controls the accuracy of the approximation. When the parameter approaches zero, the original problem is recovered. In Section 2 we recall some basic properties of recession functions and present the framework for smoothing nondifferentiable optimization problems. Our approach is general enough to cover many interesting problems, including ℓ1-norm minimization and min-max optimization. This is illustrated via examples. In Section 3, we derive an a priori error bound on the difference between the optimal values of the original problem and the approximate one. The duality correspondence existing between the recession function and the support function leads naturally to exploring the relationships between the primal perturbed problem and its corresponding dual. These duality results are derived in Section 4 and some applications are given.

We will frequently refer to results in Rockafellar's book [3]. The notations and definitions used here are the same as in that book. Recall that the recession function of a function g is denoted by g0+, and the domain of g is denoted by dom g. The conjugate function of g is denoted by g* and is defined as

g*(z) = sup{ < x, z > - g(x) : x ∈ dom g },

where < ·, · > denotes the inner product in R^m. The support function of a set S is given by δ*(z|S) = sup{ < x, z > : x ∈ S } and the relative interior of S is denoted by ri S.

(1) Faculty of Industrial Engineering and Management, Technion, Israel Institute of Technology, Haifa 32000, Israel. Supported by NSF Grant ECS-8801240.
(2) Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore. Supported by AFOSR Grant 88-0218 and NSF Grant ECS-8802239.

2. THE SMOOTHING METHOD

We consider the following optimization problem:

(P)    inf_{x ∈ R^n} { G(x) := F(f1(x), ...,
fm(x)) },

where we assume that { fi(x), i = 1, ..., m } are real functions over R^n, and F is the recession function of some proper convex function g,

(2.1)    F(y) = g0+(y) = sup{ g(x + y) - g(x) : x ∈ dom g }.

The recession function is a positively homogeneous convex function (see [3, Theorem 8.5, p. 66]). In the sequel we assume that g : R^m → R is a closed proper convex function and that 0 ∈ dom g. Let us define

Fε(y) = ε g(y/ε).

Then from [3, Corollary 8.5.2, p. 67] we have

F(y) = lim_{ε→0} Fε(y)    for all y ∈ R^m.

Thus, for small ε > 0, we can approximate F by Fε. Hence, an approximate (perturbed) problem for (P) is:

(Pε)    inf_{x ∈ R^n} { Gε(x) := ε g(f1(x)/ε, ..., fm(x)/ε) }.

The usefulness of the approximation lies in the fact that frequently g (and hence Fε) is a smooth function, while F is not. Here ε > 0 plays the role of a smoothing parameter.

2.1. EXAMPLE. Consider the ℓ1-norm minimization problem:

(P1)    min_{x ∈ R^n} Σ_{i=1}^{m} |fi(x)|.

Here, F(y) = Σ_{i=1}^{m} |yi|. It is easily verified that F(y) is the recession function of

g(y) = Σ_{i=1}^{m} sqrt(1 + yi^2),

hence the resulting approximate problem is the smoothed problem:

(P1)ε    min_{x ∈ R^n} Σ_{i=1}^{m} sqrt(fi(x)^2 + ε^2).

This is precisely the nonlinear approximation problem suggested by El-Attar et al. [2] for solving the ℓ1-norm optimization problem.

2.2. EXAMPLE. Consider the continuous-discrete min-max optimization problem:

(P2)    min_{x ∈ R^n} max_{1≤i≤m} fi(x).

Here, F(y) = max_{1≤i≤m} yi, which is the recession function of

g(y) = log Σ_{i=1}^{m} e^{yi}.

Hence, the resulting approximation is the smoothed problem:

(P2)ε    min_{x ∈ R^n} ε log Σ_{i=1}^{m} exp(fi(x)/ε).

The approximate function here is similar to the penalty function used in Bertsekas [1, Section 5.1.3].

3. ERROR ANALYSIS

In this section we show that the difference between the optimal values of the original function and the approximate one is bounded by a term which is proportional to the smoothing parameter ε. We first recall the following result.

3.1.
LEMMA [3, Theorem 13.3, p. 116]. Let g be a proper convex function. Then

(g*0+)(x*) = δ*(x* | dom g) = sup_{x ∈ dom g} < x, x* >.

If g is closed, then

(g0+)(x) = δ*(x | dom g*) = sup_{x* ∈ dom g*} < x, x* >.

We denote by x* and xε* the minimizers of G(x) and Gε(x) respectively. We may now establish the main result of this section.

3.2. THEOREM. Let g be a closed proper convex function and suppose that the following assumption holds:

(A)    for all ε > 0,    ε g(z/ε) ≥ (g0+)(z)    for all z.

Then for every ε > 0,

0 ≤ G(xε*) - G(x*) ≤ ε g(0).

Proof. Since g is closed we have

g**(z) = g(z) = sup_{t ∈ dom g*} { < t, z > - g*(t) } ≤ sup_{t ∈ dom g*} < t, z > + sup_{t ∈ dom g*} (-g*(t)),

but sup_{t ∈ dom g*} (-g*(t)) = g(0) and, from Lemma 3.1, sup_{t ∈ dom g*} < t, z > = (g0+)(z). Hence,

g(z) - (g0+)(z) ≤ g(0)    for every z.

Applying the above inequality at z = f(x)/ε = (f1(x)/ε, ..., fm(x)/ε)^T and using the fact that g0+ is positively homogeneous, it follows that

(3.1)    Gε(x) - G(x) ≤ ε g(0).

Now, since x* = argmin G(x), we have G(x*) ≤ G(x) for every x, and thus in particular G(xε*) - G(x*) ≥ 0. From assumption (A) we have Gε(x) ≥ G(x) for every x. Hence

G(xε*) - G(x*) ≤ Gε(xε*) - G(x*).

But xε* = argmin Gε(x), so Gε(xε*) ≤ Gε(x) for every x, and thus in particular Gε(xε*) ≤ Gε(x*). Combining with the above inequality it follows that

G(xε*) - G(x*) ≤ Gε(xε*) - G(x*) ≤ Gε(x*) - G(x*) ≤ ε g(0),

the last inequality following from (3.1). []

Note that assumption (A) holds trivially for Examples 2.1 and 2.2.

3.3. EXAMPLE. From Example 2.1 we have g(0) = m, thus for the ℓ1-norm minimization problem the following holds: 0 ≤ G(xε*) - G(x*) ≤ ε·m.

3.4. EXAMPLE. From Example 2.2 we have g(0) = log m, hence for the min-max problem 0 ≤ G(xε*) - G(x*) ≤ ε·log m.

4. DUAL PROBLEMS

The dual correspondence existing between the recession function and the support function as given in Lemma 3.1 leads naturally to exploring the relationships between the primal problem and the
induced dual. In the sequel we assume that the functions f1(x), ..., fm(x) are convex and we denote f(x) = (f1(x), ..., fm(x))^T. Recall that the primal problem is

(P)    inf_{x ∈ R^n} { G(x) = (g0+)(f1(x), ..., fm(x)) }.

By Lemma 3.1 we have (g0+)(y) = δ*(y | dom g*). Thus (P) can be rewritten as the following min-max problem:

(P)    inf_{x ∈ R^n} δ*(f(x) | dom g*) = inf_{x ∈ R^n} sup_y { < y, f(x) > : y ∈ dom g* }.

Therefore a natural dual problem for (P) is

(D)    sup { H(y) : y ∈ dom g* },

where the dual objective function is

H(y) = inf { < y, f(x) > : x ∈ R^n }.

4.1. THEOREM. Under either of the following two conditions, a strong duality result holds for the pair of problems (P)-(D), i.e. inf(P) = max(D):

(a) the functions fi are affine and dom g* is closed;
(b) dom g* ⊂ R^m_+ and there exists x̄ ∈ R^n with f(x̄) < 0.

Proof. (a) Problem (P) can be rewritten equivalently as a linearly constrained convex problem

(P)    inf_{x,z} { δ*(z | dom g*) : f(x) = z }.

The corresponding Lagrangian is

L(x, z) = δ*(z | dom g*) + < y, f(x) > - < y, z >,    y ∈ R^m,

and thus the dual objective is

inf_{x,z} L(x, z) = inf_x < y, f(x) > + inf_z { δ*(z | dom g*) - < y, z > }
                  = H(y) - sup_z { < y, z > - δ*(z | dom g*) }
                  = H(y) - δ**(y | dom g*)
                  = H(y) - δ(y | dom g*),    since dom g* is closed.

Hence, the dual of (P) is

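Returning to the error bound of Theorem 3.2: for the min-max smoothing of Example 2.2 the estimate 0 ≤ Gε - G ≤ ε g(0) = ε log m in fact holds pointwise, and is easy to check numerically. A small self-contained sketch (the sample vector y is illustrative, not from the paper):

```python
import math

def smooth_max(y, eps):
    # G_eps of Example 2.2: eps * log(sum_i exp(y_i / eps)),
    # evaluated stably by factoring out the largest term.
    top = max(y)
    return top + eps * math.log(sum(math.exp((v - top) / eps) for v in y))

# Pointwise check of 0 <= G_eps - G <= eps * g(0), with g(0) = log m.
y = [0.3, -1.2, 0.7, 0.69]          # illustrative sample values
for eps in (1.0, 0.1, 0.01):
    gap = smooth_max(y, eps) - max(y)
    assert 0.0 <= gap <= eps * math.log(len(y))
    print(eps, gap)
```

As ε shrinks, the gap between the smoothed maximum and the true maximum decreases, in accordance with Example 3.4.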