Numerical Methods of Mathematical Optimization
With ALGOL and FORTRAN Programs

CORRECTED AND AUGMENTED EDITION

HANS P. KÜNZI
University of Zürich and Eidgenössische Technische Hochschule Zürich, Switzerland

H. G. TZSCHACH
Division of Mathematical Methods, IBM Deutschland, Berlin, Germany

C. A. ZEHNDER
Eidgenössische Technische Hochschule Zürich, Switzerland

Translated by
Werner C. Rheinboldt
Institute for Fluid Dynamics and Applied Mathematics, University of Maryland, College Park, Maryland
and
Cornelie J. Rheinboldt

ACADEMIC PRESS   New York   San Francisco   London   1971
A Subsidiary of Harcourt Brace Jovanovich, Publishers

COPYRIGHT © 1971 BY ACADEMIC PRESS, INC.
ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, RETRIEVAL SYSTEM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.

ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003

United Kingdom Edition published by
ACADEMIC PRESS, INC. (LONDON) LTD.
24/28 Oval Road, London NW1

LIBRARY OF CONGRESS CATALOG CARD NUMBER: 68-18673

PRINTED IN THE UNITED STATES OF AMERICA

First published in the German language under the title Numerische Methoden der mathematischen Optimierung (mit ALGOL und FORTRAN Programmen) and copyrighted in 1966 by B. G. Teubner Verlag, Stuttgart, Germany. This is the only authorized English edition, published with the consent of the publishing house B. G. Teubner, Stuttgart, of the original German edition, which appeared in the series "Leitfäden der angewandten Mathematik und Mechanik," edited by Professor Dr. H. Görtler.

PREFACE TO THE GERMAN EDITION

Compared to the already existing literature on linear and nonlinear optimization theory, this book differs both in content and presentation as follows:

The first part (Chapters 1 and 2) is devoted to the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. The chapter on linear optimization theory is somewhat more detailed than the one on nonlinear optimization. Besides the theoretical considerations, several algorithms of importance to the numerical application of optimization theory are discussed. As prerequisite mathematical knowledge, only the fundamentals of linear algebra, predominantly vector and matrix algebra, and the elements of differential calculus are required of the reader (the latter for the nonlinear optimization).

One difference between our presentation and earlier ones will undoubtedly be the fact that in the second part we have developed both an ALGOL and a FORTRAN program for each one of the algorithms described in the theoretical section. This should result in easy access to the application of the different optimization methods for everyone familiar with these two symbolic languages. (The difference between the ALGOL and FORTRAN programs is in the language only; computationally they proceed entirely in parallel.)

Intentionally, both parts have been kept largely independent of each other, so that the first part can be used as an independent theoretical presentation and the second part as a well-rounded and efficient program collection. The connection between the theory and the programs is assured by an intermediary text (Chapter 3) which also contains all those explanations needed for the use of the ALGOL and FORTRAN programs.
The first author listed carries the principal responsibility for the theoretical part; the other two prepared the program section. We hope that with this division into two separate parts we have succeeded in bringing the theory and the practical application of the optimization methods closer to each other. Anyone working in this subject area knows that without electronic computers, and therefore without computer programs, the actual application of linear and nonlinear optimization belongs more or less to the realm of a utopia.

We wish to thank Professors E. Stiefel and H. Görtler as well as Drs. P. Kall and Kirchgässner for their many suggestions and valuable recommendations for improvements. Mr. D. Latterman has assisted us greatly with the programming, and valuable assistance was also given us in our proofreading by Drs. Kleibohm and Tan. We should furthermore like to express our warm thanks to the publishers for their consideration of our numerous wishes and for their careful printing job.

Zürich and Berlin                              H. P. KÜNZI
Autumn 1966                                    H. G. TZSCHACH
                                               C. A. ZEHNDER

1 LINEAR OPTIMIZATION

1.1 General Formulation of Linear Optimization

Linear optimization concerns the optimization of a linear expression subject to a number of linear constraints, and can involve either a maximization or a minimization problem.

First, we will formulate the maximization problem. In that case, quantities x_1, x_2, ..., x_n are to be found for which the linear form, or objective function,

    B = \sum_{i=1}^{n} a_{0i} x_i

assumes a maximum subject to the constraints

    \sum_{i=1}^{n} a_{ji} x_i \le a_{j0}    (j = 1, 2, ..., m)    (1.1)

as well as the nonnegativity restrictions

    x_i \ge 0    (i = 1, 2, ..., n).

Here the coefficients a_{0i}, a_{ji}, and a_{j0} are given, and the x_i are the original unknown variables.

It will be useful for further discussion to transform the system of constraints in (1.1) into a system of equations of the form

    \sum_{i=1}^{n} a_{ji} x_i + x_{n+j} = a_{j0}    (j = 1, 2, ..., m)

by introducing additional variables (so-called slack variables) which have to satisfy the additional nonnegativity restrictions

    x_{n+j} \ge 0    (j = 1, 2, ..., m).

Using matrix and vector notation, we can then represent the maximization problem in the following shortened form: Maximize the objective function

    B = a_0^T x

subject to the constraints

    A x = a_{.0},    x \ge 0.    (1.2)

In contrast to (1.1), the symbols used in form (1.2) have been slightly modified. x^T is a row vector with (m + n) components, and the row vector a_0^T also has (m + n) components,

    a_0^T = (a_{01}, a_{02}, ..., a_{0n}, a_{0,n+1}, ..., a_{0,n+m}),

where a_{0,n+1} = a_{0,n+2} = ... = a_{0,n+m} = 0. A is an m × (m + n) matrix, given by

    A = \begin{pmatrix}
          a_{11} & a_{12} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 \\
          a_{21} & a_{22} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 \\
          \vdots &        &        & \vdots &   &   & \ddots &   \\
          a_{m1} & a_{m2} & \cdots & a_{mn} & 0 & 0 & \cdots & 1
        \end{pmatrix}    (1.3)

The constraint system in (1.2) consists of m equations in the (m + n) unknowns x_1, x_2, ..., x_N, where N = m + n. If A has rank m,¹ a set of n variables x_{v_1}, ..., x_{v_n} can always be chosen from among the m + n variables x_1, ..., x_{m+n} such that, when these x_{v_1}, ..., x_{v_n} are given arbitrary but fixed values, the system

    A x = a_{.0}    (1.4)

is uniquely solvable in terms of the remaining m variables.

¹ The structure of the matrix A could also be more general than that shown in (1.3).

Any vector x whose components satisfy both the system of constraints and the nonnegativity restrictions of (1.2) is called a feasible vector. A feasible vector which at the same time maximizes the objective function is called an optimal feasible vector.
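As a concrete aside, the construction of the augmented matrix A in (1.3) can be sketched in a few lines of program. The fragment below is in modern free-form Fortran rather than the classical FORTRAN of Part 2, and the m = 3, n = 2 system and all of its coefficients are the editor's invented illustration, not data from the book; it simply appends the identity block for the slack variables x_{n+1}, ..., x_{n+m}.

program slack_form
  implicit none
  integer, parameter :: m = 3, n = 2
  real :: a(m, n+m)                  ! the augmented matrix of (1.3)
  real :: rhs(m)                     ! the right-hand sides a_j0
  integer :: i, j

  a = 0.0
  ! coefficients a_ji of three sample inequalities  sum_i a_ji x_i <= a_j0
  a(1, 1:n) = [ 1.0, 1.0 ]
  a(2, 1:n) = [ 2.0, 1.0 ]
  a(3, 1:n) = [ 0.0, 1.0 ]
  rhs = [ 4.0, 5.0, 3.0 ]

  ! append the identity block of (1.3): inequality j gains a slack
  ! variable x_{n+j} >= 0 and thereby becomes an equation
  do j = 1, m
     a(j, n+j) = 1.0
  end do

  print '(a)', 'augmented matrix A of (1.3):'
  do j = 1, m
     print '(5f6.1)', (a(j, i), i = 1, n+m)
  end do
end program slack_form

The printed matrix has the structural coefficients in its first n columns and the unit columns of the slacks in the remaining m, exactly the pattern displayed in (1.3).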
Using these definitions we can formulate (without proof) the fundamental theorem of linear programming: If an optimal feasible vector exists at all, there always exists an optimal feasible vector with at least n zero components.

If the column vectors of the matrix A corresponding to the nonzero components of x are linearly independent, x is called a basic vector.² In its strict form we then have the

Fundamental Theorem of Linear Optimization. If an optimal feasible vector exists, there also exists a feasible basic vector which is optimal.

It is easily seen that the first formulation is derived from the second one. A proof of this fundamental theorem can be found in 1.4.

² Translator's Comment: The term "basic solution" is also used, and the associated nonzero variables are called the basic variables. In view of the fact that in the nondegenerate case they correspond to a basis of the space, the set of basic variables is also called the basis for short.

In the event only two basic variables (n = 2) occur in system (1.2), the result can be described graphically.

Fig. 1. Geometric interpretation of a linear optimization problem in the plane.

Suppose the constraint system in (1.2) has three equations, i.e., that m = 3 and n = 2; then a geometrical interpretation follows from Fig. 1. The first inequality, to which the slack variable x_3 belongs, is satisfied by all points contained in the hatched half-plane bordered by the straight line x_3 = 0. The same holds for x_4 and x_5. Two other half-planes are determined by the nonnegativity restrictions x_1 \ge 0, x_2 \ge 0, and the intersection of all these half-planes constitutes the convex domain P_0P_1P_2P_3P_4.

If we now consider the objective function for different values B_i, we are obviously faced with the problem of finding the "outermost corner" of the convex polygon P_0P_1P_2P_3P_4 with respect to the family of straight lines B_i. This corner is represented by a basic solution³ because, in the case of Fig. 1, the optimal feasible solution vector x^T = (x_1, x_2, x_3, x_4, x_5) for the corner point has the positive components x_1, x_2, and x_3, while x_4 and x_5 are zero.

³ See Karlin [82, Chapter 6.1, pp. 161-162].

Fig. 2. Geometrical interpretation of a degenerate linear optimization problem in the plane.

The different methods of linear optimization are primarily directed toward finding an efficient algorithm for calculating as quickly as possible the "outermost corner" P_3 drawn in Fig. 1. One of the main purposes of this book has been to describe such algorithms from both a theoretical as well as a numerical standpoint.

If the solution point (for n = 2) lies in the intersection of 3 or more straight lines (Fig. 2), we speak of a degenerate linear optimization problem. In anticipation of our further discussion, it should be noted that in the general case, when n > 2, a degenerate solution has more than n vanishing variables; for n = 2 this is evident from Fig. 2 (see also 1.5).

As was mentioned in the beginning, we can also consider minimization problems. Every maximization problem can be trivially changed into a minimization problem, and for problem (1.1) this means the following: Minimize

    -B = -\sum_{i=1}^{n} a_{0i} x_i

subject to the constraints

    -\sum_{i=1}^{n} a_{ji} x_i \ge -a_{j0}    (j = 1, ..., m)    (1.5)

and the nonnegativity restrictions

    x_i \ge 0    (i = 1, ..., n).

Algorithms must also be found for the minimization problems, permitting the determination of an optimal feasible solution.
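As a concrete illustration of this sign change (the numbers here are the editor's, not taken from the book), consider

    \max B = 3x_1 + 2x_2 \quad \text{subject to} \quad x_1 + x_2 \le 4, \quad x_1 + 3x_2 \le 6, \quad x_1, x_2 \ge 0.

Written in the form (1.5) this reads

    \min\,(-B) = -3x_1 - 2x_2 \quad \text{subject to} \quad -x_1 - x_2 \ge -4, \quad -x_1 - 3x_2 \ge -6, \quad x_1, x_2 \ge 0,

and any x minimizing -B maximizes B, with B_{max} = -(-B)_{min}; here B_{max} = 12 at x_1 = 4, x_2 = 0.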
It is up to the reader to depict the minimization problem graphically in the case n = 2.

For reasons which will become clear in connection with the duality principle of linear optimization (see 1.6), it is desirable to construct the minimization problem in a way which is symmetrical to the maximization problem (1.1). Minimize

    C = \sum_{j=1}^{m} a_{j0} w_j

subject to the constraints

    \sum_{j=1}^{m} a_{ji} w_j \ge a_{0i}    (i = 1, 2, ..., n)    (1.6)

and the nonnegativity restrictions

    w_j \ge 0    (j = 1, 2, ..., m).

Using vector and matrix notation as was done in problem (1.2), this can be stated as follows: Minimize

    C = a_{.0}^T w

subject to the constraints

    A' w = a_0,    w \ge 0,    (1.7)

where w^T = (w_1, ..., w_{m+n}) and

    A' = \begin{pmatrix}
           a_{11} & \cdots & a_{m1} & -1 &  0 & \cdots &  0 \\
           a_{12} & \cdots & a_{m2} &  0 & -1 & \cdots &  0 \\
           \vdots &        & \vdots &    &    & \ddots &    \\
           a_{1n} & \cdots & a_{mn} &  0 &  0 & \cdots & -1
         \end{pmatrix}

Here the surplus variables w_{m+1}, ..., w_{m+n} turn the inequalities of (1.6) into equations, in analogy to the slack variables of (1.2).

In 1.6 it will be proved that for the solutions of the problems (1.1) and (1.6) the following fundamental rule holds:

    B_{max} = C_{min}.    (1.8)

1.2 The Simplex Method

Starting from the problem formulation (1.2), we shall now try to find an algorithm which yields the desired optimal value of the objective function. It will be assumed that the linear expressions on the left side of the constraints in (1.2) are linearly independent. For the time being we shall furthermore suppose that the coefficients a_{j0} are nonnegative. This restriction will be dropped later.

One of the best known methods for calculating the optimal solution of (1.2) is the so-called simplex method, first published by Dantzig [22] in 1948. This iterative method is derived from the fundamental theorem (cf. 1.1). The first step is to look for a nondegenerate basic solution⁴ of the system (1.2), that is, a basic solution with exactly n zero variables. If this basic solution has at least one component x_i < 0, it is not feasible and has to be dropped from further consideration. However, if the solution is feasible (i.e., if x_i \ge 0 for all i), it serves as the starting point for the following iterative process, provided, of course, that this process is still required. As we shall prove in 1.4, the simplex method leads in finitely many steps to the optimal solution.

Suppose we are at the kth step of the iteration but that the optimum has not been reached as yet. We then proceed to the (k + 1)st step by letting one variable become positive which at the kth step was zero, i.e.,

⁴ The case of degenerate basic solutions will be discussed in 1.5.
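Although the excerpt breaks off here, the exchange step just outlined can be sketched in a short program. The following self-contained fragment, in modern free-form Fortran rather than the classical FORTRAN of Part 2, implements the tableau form of the simplex iteration for the case a_{j0} \ge 0: the slack variables supply the first feasible basic solution, and each step lets one zero variable enter the basis while one positive variable leaves. It reuses the small example from the minimization illustration above; the data and every name in the program are the editor's illustration, not the authors' code, and degenerate steps (Section 1.5) are not guarded against.

program simplex_sketch
  implicit none
  integer, parameter :: m = 2, n = 2      ! constraints, structural variables
  real :: t(0:m, 0:n+m)                   ! row 0: objective; column 0: right-hand sides
  integer :: basis(m), p, q, j, k
  real :: ratio, best

  ! maximize B = 3 x1 + 2 x2  s.t.  x1 + x2 <= 4,  x1 + 3 x2 <= 6,  x >= 0
  t = 0.0
  t(0, 1:2) = [ -3.0, -2.0 ]              ! negated objective coefficients
  t(1, 1:2) = [ 1.0, 1.0 ];  t(1, n+1) = 1.0;  t(1, 0) = 4.0
  t(2, 1:2) = [ 1.0, 3.0 ];  t(2, n+2) = 1.0;  t(2, 0) = 6.0
  basis = [ n+1, n+2 ]                    ! slacks give the first feasible basic solution

  do
     q = minloc(t(0, 1:n+m), dim=1)       ! entering column: most negative reduced cost
     if (t(0, q) >= -1e-9) exit           ! no negative reduced cost: optimum reached
     p = 0;  best = huge(1.0)
     do j = 1, m                          ! ratio test picks the leaving row
        if (t(j, q) > 1e-9) then
           ratio = t(j, 0) / t(j, q)
           if (ratio < best) then
              best = ratio;  p = j
           end if
        end if
     end do
     if (p == 0) stop 'objective unbounded'
     t(p, :) = t(p, :) / t(p, q)          ! pivot: normalize row p ...
     do k = 0, m
        if (k /= p) t(k, :) = t(k, :) - t(k, q) * t(p, :)
     end do                               ! ... and eliminate column q elsewhere
     basis(p) = q                         ! variable x_q becomes basic in row p
  end do

  print '(a, f8.3)', 'maximum B = ', t(0, 0)
  do j = 1, m
     print '(a, i0, a, f8.3)', 'x', basis(j), ' = ', t(j, 0)
  end do
end program simplex_sketch

For the sample data a single pivot brings x_1 into the basis and the program reports B = 12 with x_1 = 4 and slack x_4 = 2, i.e., the "outermost corner" of the corresponding polygon, matching the hand computation above.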
