Fast Algorithms
for Structured Matrices:
Theory and Applications
CONTEMPORARY
MATHEMATICS
323
Fast Algorithms
for Structured Matrices:
Theory and Applications
AMS-IMS-SIAM Joint Summer Research Conference
on Fast Algorithms in Mathematics,
Computer Science and Engineering
August 5-9, 2001
Mount Holyoke College, South Hadley,
Massachusetts
Vadim Olshevsky
Editor
American Mathematical Society
Providence, Rhode Island
Society for Industrial and Applied Mathematics
Philadelphia, PA
Editorial Board
Dennis DeTurck, managing editor
Andreas Blass, Andy R. Magid, Michael Vogelius
The AMS-IMS-SIAM Joint Summer Research Conference on "Fast Algorithms in Math-
ematics, Computer Science and Engineering" was held at Mount Holyoke College, South
Hadley, Massachusetts, August 5-9, 2001, with support from the National Science Foun-
dation, grant DMS 9973450.
2000 Mathematics Subject Classification. Primary 68Q25, 65Y20, 65F05, 65F10, 65G50,
65M12, 15A57, 15A18, 47N70, 47N40.
SIAM is a registered trademark
Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the authors and do not necessarily reflect the views of the National Science
Foundation.
Library of Congress Cataloging-in-Publication Data
AMS-IMS-SIAM Joint Summer Research Conference on Fast Algorithms in Mathematics, Com-
puter Science, and Engineering (2001 : Mount Holyoke College)
Fast algorithms for structured matrices : theory and applications : AMS-IMS-SIAM Joint
Summer Research Conference on Fast Algorithms in Mathematics, Computer Science, and En-
gineering, August 5-9, 2001, Mount Holyoke College, South Hadley, Massachusetts / Vadim Ol-
shevsky, editor.
p. cm. — (Contemporary mathematics, ISSN 0271-4132 ; 323)
Includes bibliographical references.
ISBN 0-8218-3177-1 (acid-free paper) — ISBN 0-89871-543-1 (acid-free paper)
1. Matrices—Congresses. 2. Fourier transformations—Congresses. 3. Algorithms—Congresses.
I. Olshevsky, Vadim, 1961-. II. Title. III. Contemporary mathematics (American Mathematical
Society); v. 323.
QA188.J65 2001
512.9'434-dc21 2003041790
Copying and reprinting. Material in this book may be reproduced by any means for edu-
cational and scientific purposes without fee or permission with the exception of reproduction by
services that collect fees for delivery of documents and provided that the customary acknowledg-
ment of the source is given. This consent does not extend to other kinds of copying for general
distribution, for advertising or promotional purposes, or for resale. Requests for permission for
commercial use of material should be addressed to the Acquisitions Department, American Math-
ematical Society, 201 Charles Street, Providence, Rhode Island 02904-2294, USA. Requests can
also be made by e-mail to reprint-permission@ams.org.
Excluded from these provisions is material in articles for which the author holds copyright. In
such cases, requests for permission to use or reprint should be addressed directly to the author(s).
(Copyright ownership is indicated in the notice in the lower right-hand corner of the first page of
each article.)
© 2003 by the American Mathematical Society. All rights reserved.
The American Mathematical Society retains all rights
except those granted to the United States Government.
Printed in the United States of America.
The paper used in this book is acid-free and falls within the guidelines
established to ensure permanence and durability.
Visit the AMS home page at http://www.ams.org/
Contents
Foreword vii
Pivoting for Structured Matrices and Rational Tangential Interpolation
VADIM OLSHEVSKY 1
Inversion of Toeplitz-Plus-Hankel Matrices with Arbitrary Rank Profile
GEORG HEINIG 75
A Lanczos-type Algorithm for the QR Factorization of Cauchy-like Matrices
DARIO FASINO AND LUCA GEMIGNANI 91
Fast and Stable Algorithms for Reducing Diagonal Plus Semiseparable
Matrices to Tridiagonal and Bidiagonal Form
DARIO FASINO, NICOLA MASTRONARDI, AND MARC VAN BAREL 105
A Comrade-Matrix-Based Derivation of the Eight Versions of Fast Cosine
and Sine Transforms
ALEXANDER OLSHEVSKY, VADIM OLSHEVSKY, AND JUN WANG 119
Solving Certain Matrix Equations by Means of Toeplitz Computations:
Algorithms and Applications
DARIO A. BINI, LUCA GEMIGNANI, AND BEATRICE MEINI 151
A Fast Singular Value Algorithm for Hankel Matrices
FRANKLIN T. LUK AND SANZHENG QIAO 169
A Modified Companion Matrix Method Based on Newton Polynomials
D. CALVETTI, L. REICHEL, AND F. SGALLARI 179
A Fast Direct Method for Solving the Two-dimensional Helmholtz
Equation, with Robbins Boundary Conditions
J. HENDRICKX, RAF VANDEBRIL, AND MARC VAN BAREL 187
Structured Matrices in Unconstrained Minimization Methods
CARMINE DI FIORE 205
Computation of Minimal State Space Realizations in Jacobson Normal Form
NAOHARU ITO, WILAND SCHMALE, AND HARALD K. WIMMER 221
High Order Accurate Particular Solutions of the Biharmonic Equation
on General Regions
ANITA MAYO 233
A Fast Projected Conjugate Gradient Algorithm for Training Support
Vector Machines
TONG WEN, ALAN EDELMAN, AND DAVID GORSICH 245
A Displacement Approach to Decoding Algebraic Codes
V. OLSHEVSKY AND M. AMIN SHOKROLLAHI 265
Some Convergence Estimates for Algebraic Multilevel Preconditioners
MATTHIAS BOLLHÖFER AND VOLKER MEHRMANN 293
Spectral Equivalence and Matrix Algebra Preconditioners for Multilevel
Toeplitz Systems: A Negative Result
D. NOUTSOS, S. SERRA CAPIZZANO, AND P. VASSALOS 313
Spectral Distribution of Hermitian Toeplitz Matrices Formally Generated
by Rational Functions
WILLIAM F. TRENCH 323
From Toeplitz Matrix Sequences to Zero Distribution of Orthogonal
Polynomials
DARIO FASINO AND STEFANO SERRA CAPIZZANO 329
On Lie Algebras, Submanifolds and Structured Matrices
KENNETH R. DRIESSEL 341
Riccati Equations and Bitangential Interpolation Problems with Singular
Pick Matrices
HARRY DYM 361
Functions with Pick Matrices having Bounded Number of Negative
Eigenvalues
V. BOLOTNIKOV, A. KHEIFETS, AND L. RODMAN 393
One-dimensional Perturbations of Selfadjoint Operators with Finite
or Discrete Spectrum
Yu. M. ARLINSKII, S. HASSI, H. S. V. DE SNOO, AND E. R.
TSEKANOVSKII 419
Foreword
Perhaps the most widely known example of fast algorithms is the fast Fourier
transform (FFT) algorithm. Its importance is widely acknowledged and nicely de-
scribed in numerous papers and monographs, e.g., as follows: "The fast Fourier
transform (FFT) is one of the truly great computational developments of this cen-
tury. It has changed the face of science and engineering so that it is not an exagger-
ation to say that life as we know it would be very different without FFT" (Charles
Van Loan, Computational Frameworks for the Fast Fourier Transform, SIAM Pub-
lications, 1992). There are many different mathematical languages which can be
used to derive and describe the FFT, and the "structured matrices language" is one
of them, yielding the following interpretation. Though the usual matrix-vector mul-
tiplication uses n(2n - 1) arithmetic operations, the special structure of the discrete
Fourier transform matrix allows us to reduce the latter complexity to the nearly
linear cost of O(n log n) operations. The practical importance of such a dramatic
speed-up is impossible to overestimate; even for moderately sized problems one can
compute the result hundreds of times faster.
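A minimal radix-2 Cooley-Tukey sketch in Python, assuming the length n is a power
of two, makes the contrast between the two operation counts concrete:

    import cmath

    def dft_naive(x):
        # Multiply by the n-by-n Fourier matrix directly: O(n^2) operations.
        n = len(x)
        return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
                for k in range(n)]

    def fft_radix2(x):
        # Cooley-Tukey recursion: O(n log n) operations; n must be a power of two.
        n = len(x)
        if n == 1:
            return list(x)
        even = fft_radix2(x[0::2])
        odd = fft_radix2(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

Both functions return the same discrete Fourier transform; only the operation count
differs, and for n in the thousands the gap already amounts to a factor of several hundred.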
This is a model example showing why reducing the computational burden via
structure exploitation is an important issue in many applied areas. Thus, it is not
surprising that in recent years the design of fast algorithms for structured matrices
has become an increasingly important activity in a diverse variety of branches of
the exact sciences. Unfortunately, until recently there was not much interaction
between the different branches. There were no comprehensive meetings bringing
together "all interested parties," and no cross-disciplinary publishing projects. Such
situations caused several disadvantages.
First, there clearly was a certain parallelism, and several algorithms have in-
dependently been rediscovered in different areas. For example, the Chebyshev
continued fraction algorithm for interpolation, the Stieltjes procedure for generat-
ing orthogonal polynomials, the Lanczos algorithm for computing the eigenvalues
of a symmetric matrix, and the Berlekamp-Massey algorithm for decoding BCH
codes are closely related. Another example: the classical Nevanlinna algorithm
for rational passive interpolation and the Darlington procedure for passive network
synthesis are variations on the same theme.
The second disadvantage of the lack of cross-disciplinary interaction is that
it was not clear that the research efforts in different branches are part of what
one can call a "full research cycle," starting from an actual application, through
developing deep theories, to the design of efficient algorithms and their implemen-
tation. Researchers in different branches often had somewhat narrower focuses.
Electrical engineers used structured matrices to efficiently solve applied problems.
Mathematicians exploited the structure to obtain elegant solutions for various fun-
damental problems. Computer scientists utilized the structure to speed up related
algorithms and to study the corresponding complexity problems. Numerical ana-
lysts took advantage of the structure to improve accuracy in many cases.
In recent years such an unfortunate situation has changed. We had a number of
cross-disciplinary conferences in Santa Barbara (USA, Aug. 1996), Cortona (Italy,
Sept. 1996 and Sept. 2000), Boulder (USA, July 1999), Chemnitz (Germany, Jan.
2000), and South Hadley (USA, Aug. 2001). In fact, it was the "cross-fertilization"
atmosphere of the South Hadley meeting that suggested the idea to pursue this
publishing project. We hope it demonstrates the following two points. First, the
approaches, ideas and techniques of engineers, mathematicians, and numerical an-
alysts nicely complement each other, and despite their differences in techniques
and agendas they can be considered as important parts of a joint research effort.
Second, the theory of structured matrices and the design of fast algorithms for them
seem to be positioned to bridge several diverse fundamental and applied areas.
The volume contains twenty-two survey and research papers devoted to a va-
riety of theoretical and practical aspects of design of fast algorithms for structured
matrices and related issues. It contains a number of papers on direct fast al-
gorithms and also on iterative methods. The convergence analysis of the latter
requires studying spectral properties of structured matrices. The reader will find
here several papers containing various affirmative and negative results in this direc-
tion. The theory of rational interpolation is one of the excellent sources providing
intuition and methods to design fast algorithms. This volume contains several com-
putational and theoretical papers on the topic. There are several papers on new
applications of structured matrices, e.g., the design of fast decoding algorithms,
computing state-space realizations, relations to Lie algebras, unconstrained opti-
mization, solving matrix equations, etc.
We hope that the reader will enjoy a plethora of different problems, different
focuses, and different methods that all contribute to one unified theory of fast
algorithms for structured matrices and related theoretical issues.
Vadim Olshevsky
Department of Mathematics
University of Connecticut
Storrs, CT 06269, USA
Contemporary Mathematics
Volume 323, 2003
Pivoting for Structured Matrices and Rational Tangential
Interpolation
Vadim Olshevsky
ABSTRACT. Gaussian elimination is a standard tool for computing triangu-
lar factorizations for general matrices, and thereby solving associated linear
systems of equations. As is well-known, when this classical method is im-
plemented in finite-precision arithmetic, it often fails to compute the solution
accurately because of the accumulation of small roundoffs accompanying each
elementary floating point operation. This problem motivated a number of in-
teresting and important studies in modern numerical linear algebra; for our
purposes in this paper we only mention that starting with the breakthrough
work of Wilkinson, several pivoting techniques have been proposed to stabilize
the numerical behavior of Gaussian elimination.
Interestingly, matrix interpretations of many known and new algorithms
for various applied problems can be seen as a way of computing triangular fac-
torizations for the associated structured matrices, where different patterns of
structure arise in the context of different physical problems. The special struc-
ture of such matrices [e.g., Toeplitz, Hankel, Cauchy, Vandermonde, etc.] often
allows one to speed up the computation of their triangular factorizations, i.e., to
efficiently obtain fast implementations of the Gaussian elimination procedure.
There is a vast literature about such methods which are known under different
names, e.g., fast Cholesky, fast Gaussian elimination, generalized Schur, or
Schur-type algorithms. However, without further improvements they are effi-
cient fast implementations of a numerically inaccurate [for indefinite matrices]
method.
In this paper we survey recent results on the fast implementation of vari-
ous pivoting techniques which allowed us to improve numerical accuracy for a
variety of fast algorithms. This approach led us to formulate new, more accu-
rate numerical methods for factorization of general and of J-unitary rational
matrix functions, for solving various tangential interpolation problems, new
Toeplitz-like and Toeplitz-plus-Hankel-like solvers, and new divided difference
schemes. We believe that similar methods can be used to design accurate fast
algorithms for other applied problems, and recent work of our colleagues
supports this anticipation.
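For orientation, a minimal Python sketch of Gaussian elimination with partial
pivoting (the classical PA = LU factorization; the function below is illustrative
and not taken from the paper) shows the elimination step and the row exchanges
whose fast, structure-preserving counterparts are the subject of this survey:

    def lu_partial_pivoting(A):
        # PA = LU for a nonsingular matrix given as a list of rows.
        # Partial pivoting keeps every multiplier bounded by 1 in magnitude.
        n = len(A)
        A = [row[:] for row in A]      # working copy; holds L (strictly lower) and U
        perm = list(range(n))          # row permutation defining P
        for k in range(n - 1):
            p = max(range(k, n), key=lambda i: abs(A[i][k]))   # pivot row
            if p != k:
                A[k], A[p] = A[p], A[k]
                perm[k], perm[p] = perm[p], perm[k]
            # assumes A[k][k] != 0 after the exchange (nonsingular input)
            for i in range(k + 1, n):
                A[i][k] /= A[k][k]                              # multiplier l_ik
                for j in range(k + 1, n):
                    A[i][j] -= A[i][k] * A[k][j]                # Schur complement update
        return perm, A

For a general n-by-n matrix this costs O(n^3) operations; the point of the fast
(Schur-type) algorithms discussed below is that for structured matrices the
factorization can typically be computed in O(n^2) operations, and the challenge
is to carry the row exchanges along without destroying the structure.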
1991 Mathematics Subject Classification. Primary 65F30, 15A57; Secondary 65Y20.
Key words and phrases. Fast algorithms, Pivoting, Structured matrices, Interpolation.
This work was supported in part by NSF contracts CCR 0098222 and 0242518.
© 2003 American Mathematical Society