
Convergence of Iterations for Linear Equations PDF

187 Pages·1993·3.83 MB·English

Preview: Convergence of Iterations for Linear Equations

Lectures in Mathematics, ETH Zürich
Department of Mathematics, Research Institute of Mathematics
Managing Editor: Oscar E. Lanford

Olavi Nevanlinna
Convergence of Iterations for Linear Equations

Springer Basel AG

Author:
Olavi Nevanlinna
Institute of Mathematics
Helsinki University of Technology
SF-02150 Espoo
Finland

Library of Congress Cataloging-in-Publication Data
Nevanlinna, Olavi, 1948-
Convergence of iterations for linear equations / Olavi Nevanlinna.
p. cm. - (Lectures in mathematics ETH Zürich)
Includes bibliographical references and index.
ISBN 978-3-7643-2865-8
ISBN 978-3-0348-8547-8 (eBook)
DOI 10.1007/978-3-0348-8547-8
1. Iterative methods (Mathematics) 2. Convergence. 3. Equations--Numerical solutions. I. Title. II. Series.
QA297.8.N48 1993
511'.4--dc20

Deutsche Bibliothek Cataloging-in-Publication Data
Nevanlinna, Olavi: Convergence of iterations for linear equations / Olavi Nevanlinna. - Basel; Boston; Berlin: Birkhäuser, 1993
(Lectures in mathematics)
ISBN 978-3-7643-2865-8

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in other ways, and storage in data banks. For any kind of use permission of the copyright owner must be obtained.

© 1993 Springer Basel AG
Originally published by Birkhäuser Verlag in 1993
Camera-ready copy prepared by the author
Printed on acid-free paper produced from chlorine-free pulp
ISBN 978-3-7643-2865-8

CONTENTS

Preface

1. Motivation, problem and notation
   1.1 Motivation
   1.2 Problem formulation
   1.3 Usual tools
   1.4 Notation for polynomial acceleration
   1.5 Minimal error and minimal residual
   1.6 Approximation of the solution operator
   1.7 Location of zeros
   1.8 Heuristics
   Comments to Chapter 1

2. Spectrum, resolvent and power boundedness
   2.1 The spectrum
   2.2 The resolvent
   2.3 The spectral mapping theorem
   2.4 Continuity of the spectrum
   2.5 Equivalent norms
   2.6 The Yosida approximation
   2.7 Power bounded operators
   2.8 Minimal polynomials and algebraic operators
   2.9 Quasialgebraic operators
   2.10 Polynomial numerical hull
   Comments to Chapter 2

3. Linear convergence
   3.1 Preliminaries
   3.2 Generating functions and asymptotic convergence factors
   3.3 Optimal reduction factor
   3.4 Green's function for G∞
   3.5 Optimal polynomials for E
   3.6 Simply connected G∞(L)
   3.7 Stationary recursions
   3.8 Simple examples
   Comments to Chapter 3

4. Sublinear convergence
   4.1 Introduction
   4.2 Convergence of L^k(L - 1)
   4.3 Splitting into invariant subspaces
   4.4 Uniform convergence
   4.5 Nonisolated singularity and successive approximation
   4.6 Nonisolated singularity and polynomial acceleration
   4.7 Fractional powers of operators
   4.8 Convergence of iterates
   4.9 Convergence with speed
   Comments to Chapter 4

5. Superlinear convergence
   5.1 What is superlinear
   5.2 Introductory examples
   5.3 Order and type
   5.4 Finite termination
   5.5 Lower and upper bounds for optimal polynomials
   5.6 Infinite products
   5.7 Almost algebraic operators
   5.8 Estimates using singular values
   5.9 Multiple clusters
   5.10 Approximation with algebraic operators
   5.11 Locally superlinear implies superlinear
   Comments to Chapter 5

References
Definitions

PREFACE

These notes are based on lectures which were given in two phases.
In the fall of 1990, I gave a series of lectures at the Helsinki University of Technology, and in the 1992 summer term, at the ETH in Zürich. The former set of lectures was intended to serve as background material for a book on Picard-Lindelöf iteration (or "waveform relaxation"), but as the material was being written up, it began to take on a life of its own. Section 4 appeared as a separate technical report in May 1991. The present form is more encompassing than the "Nachdiplomsvorlesungen" given at the ETH, but the material covers the same topics, and I did not get as far as even starting the Picard-Lindelöf iteration.

These lectures try to present some tools which might be useful in understanding the convergence of iterative solvers for nonsymmetric problems. The book is not a survey of what is known. Quite the opposite. In places there is material which is new, and then there is material which in all likelihood is not new but for which referencing is apparently missing. The referencing is sometimes also missing in places where the tools can be assumed to be well known.

Many of those attending the lectures, both here at home and at the ETH, have read parts of different versions of the manuscript. I am very grateful for their help. They have contributed a lot, but, since I have kept changing the text, errors surely still exist. I hope that they are not too grievous and that whenever errors are found, I will be notified. Also any other comments would be welcome, in case I find enough courage to try to turn this into a real textbook. I can be reached via electronic mail at Olavi.Nevanlinna@hut.fi.

If there is a common theme in these lectures, it is avoiding the spectral decomposition. I hope this material will serve as a source of inspiration.

I am indebted to the personnel at the Forschungsinstitut at the ETH for making my stay very pleasant and to Rolf Jeltsch for being an excellent host.

I dedicate this book to my wife Marja.

Otaniemi, Finland, December 16, 1992
Olavi Nevanlinna

1. MOTIVATION, PROBLEM AND NOTATION

1.1. Motivation.

Parallel processing is changing our approach to scientific computing. With traditional computers the common attitude has been to think along the following, simplified, steps.

(1) One models the physical problem by a mathematical, idealized model. This often leads to a differential or integral equation.
(2) Then one replaces this mathematical problem with a finite dimensional approximation, e.g. with the finite element method.
(3) This leads to a linear system (or a sequence of such), often large and sparse, which is then solved either by a direct (elimination) method or by iteration.

Most of the research in numerical analysis on parallel processing today circulates around the last step: how to solve a large linear algebra problem efficiently with a parallel computer. I want to emphasize the following picture. We are given a collection of processors, or computers, connected together. A natural approach is to think that the original physical problem has somehow been decomposed into a set of similar smaller problems. Each smaller problem is then solved within a processor using some software which is composed along the steps above. As the subproblems are seldom independent, this leads us into iterative techniques. Notice that this iteration takes place on the first level and should typically be studied in a function space setting.

1.2. Problem formulation.
With this motivation we shall consider fixed point problems of the form

(1.2.1)    x = Lx + g,

where L is a bounded linear operator in a Banach space X and g is a given vector in X such that the equation has at least one solution. We can imagine L to show up e.g. from solving the subproblems modulo unknown couplings from the other subproblems. Thus an evaluation of Lu for a given u may be given only implicitly, often by running a piece of software. This more or less implies that the approaches to solving (1.2.1) are of an iterative nature, the simplest being the method of successive approximations

(1.2.2)    x^{k+1} = Lx^k + g,    k = 0, 1, 2, ...

We may think that moving the data representing the iterates x^k and forming linear combinations of given vectors is relatively fast compared with an evaluation of Lu. This leads us to consider the following approach: one asks whether the subspace created by repeated evaluation contains much better candidates for the solution than the latest created vector.

1.3. Usual tools.

Without oversimplifying much one can say that the basic mathematical tools used to predict the behavior of iterative methods fall into two classes. For analyzing the iteration (1.2.2) one looks at the spectral radius ρ(L) and the norm ||L||. In numerical analysis there is quite a lot of knowledge accumulated on the behavior of ρ(L) when L operates in a finite dimensional space. These results are often given for the following set-up: in order to solve Ax = b you split the matrix A into A = M - N and solve repeatedly the equation

(1.3.1)    Mx^{k+1} = Nx^k + b.

If the matrix A is symmetric and, say, positive definite, then it is known that working with the created subspace often pays off. (One should notice that multiplying the equation Ax = b from the left with the adjoint A* creates such a situation, but there are drawbacks to this approach as well.) Accelerating a preconditioned problem with the conjugate gradient (CG) method is today a standard approach. Preconditioning is roughly multiplication of the equation by an approximate inverse of A. A typical preconditioner would be obtained e.g. from an incomplete Cholesky factorization. The CG method orthogonalizes the created subspace, and the best approximation within the subspace can be computed. The standard tools to discuss the speed of convergence combine energy-type estimates with Chebyshev approximation. Adding the fact that the preconditioner should be chosen so as to cluster the spectrum, we have collected the tools that guide the present thinking.

1.4. Notation for polynomial acceleration.

Given a vector x^k ∈ X we denote by d^k the associated residual

(1.4.1)    d^k := Lx^k + g - x^k.

In the successive approximation (1.2.2) the residual satisfies d^k = x^{k+1} - x^k, and controlling the residual equals controlling the difference of the iterates. As the residual is always obtainable, we choose to work with it. Usually we would like to control the error e^k := x - x^k. Since x - Lx = g and x^k - Lx^k = g - d^k, we always have

(1.4.2)    (1 - L)e^k = d^k.

If 1 - L has a bounded inverse, so that (1.2.1) has a unique solution x = (1 - L)^{-1}g, then we have

(1.4.3)    e^k = (1 - L)^{-1}d^k.
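To make (1.2.2) and (1.3.1) concrete, here is a minimal sketch in Python/NumPy of the splitting iteration in finite dimensions. It assumes the Jacobi-type choice M = diag(A); that choice, the test matrix, and the function name successive_approximation are illustrative only and are not taken from the book.

    import numpy as np

    def successive_approximation(L, g, num_steps=50):
        # Method of successive approximations (1.2.2): x^{k+1} = L x^k + g.
        x = np.zeros_like(g)
        for _ in range(num_steps):
            x = L @ x + g
        return x

    # Small diagonally dominant test problem, made up for this sketch.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])

    M = np.diag(np.diag(A))    # illustrative splitting: M = diag(A)
    N = M - A                  # so that A = M - N
    L = np.linalg.solve(M, N)  # L = M^{-1} N
    g = np.linalg.solve(M, b)  # g = M^{-1} b

    x = successive_approximation(L, g)
    print(np.linalg.norm(A @ x - b))   # small, since rho(L) < 1 here

For this diagonally dominant A one has ρ(L) < 1, so the iterates converge linearly to the solution of Ax = b; for a general splitting nothing of the sort is guaranteed, which is what the tools of these lectures address.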
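The identities (1.4.1)-(1.4.3) can also be checked numerically on the same sketch. The snippet below reuses A, b, L, g and the final iterate x from above; both printed norms should be at the level of rounding error.

    # Numerical check of (1.4.1)-(1.4.3) for the sketch above.
    I = np.eye(len(g))
    d = L @ x + g - x                  # residual (1.4.1): d^k = Lx^k + g - x^k
    e = np.linalg.solve(A, b) - x      # error e^k = x - x^k
    print(np.linalg.norm((I - L) @ e - d))                # (1.4.2): (1 - L) e^k = d^k
    print(np.linalg.norm(e - np.linalg.solve(I - L, d)))  # (1.4.3): e^k = (1 - L)^{-1} d^k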
