Copyright by Joel Aaron Tropp 2004

The Dissertation Committee for Joel Aaron Tropp certifies that this is the approved version of the following dissertation:

Topics in Sparse Approximation

Committee:
Inderjit S. Dhillon, Supervisor
Anna C. Gilbert, Supervisor
E. Ward Cheney
Alan K. Cline
Robert W. Heath Jr.

Topics in Sparse Approximation

by Joel Aaron Tropp, B.A., B.S., M.S.

DISSERTATION
Presented to the Faculty of the Graduate School of The University of Texas at Austin in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY

THE UNIVERSITY OF TEXAS AT AUSTIN
August 2004

To my parents and my little sister, who have supported me through all my successes and failures.

Abstract

Topics in Sparse Approximation

Publication No.

Joel Aaron Tropp, Ph.D.
The University of Texas at Austin, 2004

Supervisors: Inderjit S. Dhillon, Anna C. Gilbert

Sparse approximation problems request a good approximation of an input signal as a linear combination of elementary signals, yet they stipulate that the approximation may involve only a few of the elementary signals. This class of problems arises throughout applied mathematics, statistics, and electrical engineering, but little theoretical progress has been made over the last fifty years. This dissertation offers four main contributions to the theory of sparse approximation.

The first two contributions concern the analysis of two types of numerical algorithms for sparse approximation: greedy methods and convex relaxation methods. Greedy methods make a sequence of locally optimal choices in an effort to obtain a globally optimal solution. Convex relaxation methods replace the combinatorial sparse approximation problem with a related convex optimization in the hope that their solutions will coincide. This work delineates conditions under which greedy methods and convex relaxation methods actually succeed in solving a well-defined sparse approximation problem in part or in full.
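To make the greedy strategy concrete, here is a minimal sketch of Orthogonal Matching Pursuit, the representative greedy method studied in the dissertation. It is an illustrative implementation, not the author's code: at each step it makes the locally optimal choice (the atom most correlated with the residual), then re-fits all selected coefficients by least squares. The interface (a dictionary matrix `Phi` with unit-norm columns, a signal `s`, and a sparsity level `m`) is an assumption for illustration.

```python
import numpy as np

def omp(Phi, s, m):
    """Orthogonal Matching Pursuit (illustrative sketch).

    Phi : d x N dictionary whose columns are unit-norm atoms
    s   : input signal of length d
    m   : number of atoms to select
    Returns the selected column indices and their coefficients.
    """
    residual = np.asarray(s, dtype=float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(m):
        # Locally optimal choice: atom most correlated with the residual.
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(k)
        # Orthogonal step: least-squares fit over all selected atoms.
        coef, *_ = np.linalg.lstsq(Phi[:, support], s, rcond=None)
        residual = s - Phi[:, support] @ coef
    return support, coef
```

For example, on an orthonormal dictionary an exactly two-sparse signal is recovered in two iterations with zero residual; the interesting analysis (Chapter 5) concerns when this still happens for non-orthogonal dictionaries.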
The conditions for both classes of algorithms are remarkably similar, in spite of the fact that the two analyses differ significantly.

The study of these algorithms yields geometric conditions on the collection of elementary signals which ensure that sparse approximation problems are computationally tractable. One may interpret these conditions as a requirement that the elementary signals should form a good packing of points in projective space. The third contribution of this work is an alternating projection algorithm that can produce good packings of points in projective space. The output of this algorithm frequently matches the best recorded solutions of projective packing problems. It can also address many related packing problems that have never been studied numerically.

Finally, the dissertation develops a novel connection between sparse approximation problems and clustering problems. This perspective shows that many clustering problems from the literature can be viewed as sparse approximation problems where the collection of elementary signals must be learned along with the optimal sparse approximation. This treatment also yields many novel clustering problems, and it leads to a numerical method for solving them.

Table of Contents

Abstract
List of Tables
List of Figures

Chapter 1. Introduction

Chapter 2. Sparse Approximation Problems
  2.1 Mathematical Setting
    2.1.1 Signal Space
    2.1.2 The Dictionary
    2.1.3 Coefficient Space
    2.1.4 Sparsity and Diversity
    2.1.5 Other Cost Functions
    2.1.6 Synthesis and Analysis Matrices
  2.2 Formal Problem Statements
    2.2.1 The Sparsest Representation of a Signal
    2.2.2 Error-Constrained Approximation
    2.2.3 Sparsity-Constrained Approximation
    2.2.4 The Subset Selection Problem
  2.3 Computational Complexity
    2.3.1 Orthonormal Dictionaries
    2.3.2 General Dictionaries

Chapter 3. Numerical Methods for Sparse Approximation
  3.1 Greedy Methods
    3.1.1 Matching Pursuit
    3.1.2 Orthogonal Matching Pursuit
    3.1.3 Stopping Criteria
    3.1.4 A Counterexample for MP
    3.1.5 History of Greedy Methods
  3.2 Convex Relaxation Methods
    3.2.1 The Sparsest Representation of a Signal
    3.2.2 Error-Constrained Approximation
    3.2.3 Subset Selection
    3.2.4 Sparsity-Constrained Approximation
    3.2.5 History of Convex Relaxation
  3.3 Other Methods
    3.3.1 The Brute Force Approach
    3.3.2 The Nonlinear Programming Approach
    3.3.3 The Bayesian Approach

Chapter 4. Geometry of Sparse Approximation
  4.1 Sub-dictionaries
  4.2 Summarizing the Dictionary
    4.2.1 Coherence
    4.2.2 Example: The Dirac–Fourier Dictionary
    4.2.3 Cumulative Coherence
    4.2.4 Example: Double Pulses
    4.2.5 Example: Decaying Atoms
  4.3 Operator Norms
    4.3.1 Calculating Operator Norms
  4.4 Singular Values
  4.5 The Inverse Gram Matrix
  4.6 The Exact Recovery Coefficient
    4.6.1 Structured Dictionaries
  4.7 Uniqueness of Sparse Representations
  4.8 Projective Spaces
  4.9 Minimum Distance, Maximum Correlation
  4.10 Packing Radii
  4.11 Covering Radii
  4.12 Quantization

Chapter 5. Analysis of Greedy Methods
  5.1 The Sparsest Representation of a Signal
    5.1.1 Greedy Selection of Atoms
    5.1.2 The Exact Recovery Theorem
    5.1.3 Coherence Estimates
    5.1.4 Is the ERC Necessary?
  5.2 Identifying Atoms from an Approximation
  5.3 Error-Constrained Sparse Approximation
    5.3.1 Coherence Estimates
  5.4 Sparsity-Constrained Approximation
    5.4.1 Coherence Estimates
  5.5 Comparison with Previous Work

Chapter 6. Analysis of Convex Relaxation Methods
  6.1 The Sparsest Representation of a Signal
    6.1.1 Coherence Estimates
  6.2 Fundamental Lemmata
    6.2.1 The Correlation Condition Lemma
    6.2.2 Proof of Correlation Condition Lemma
    6.2.3 Restricted Minimizers
    6.2.4 Is the ERC Necessary?
  6.3 Subset Selection
    6.3.1 Main Theorem
    6.3.2 Coherence Estimates
    6.3.3 Proof of Main Theorem
  6.4 Error-Constrained Sparse Approximation
    6.4.1 Main Theorem
    6.4.2 Coherence Estimates
    6.4.3 Comparison with Other Work
    6.4.4 Proof of the Main Theorem

Chapter 7. Numerical Construction of Packings
  7.1 Overview
    7.1.1 Our Approach
    7.1.2 Outline
  7.2 Packing on Spheres
    7.2.1 The Sphere
    7.2.2 Packings and Matrices
    7.2.3 Alternating Projection
    7.2.4 The Matrix Nearness Problems
    7.2.5 The Initial Matrix
    7.2.6 Theoretical Behavior of the Algorithm
    7.2.7 Numerical Experiments
  7.3 Packing in Projective Spaces
    7.3.1 Projective Spaces
    7.3.2 Packings and Matrices
    7.3.3 Implementation Details
    7.3.4 Numerical Experiments
  7.4 Packing in Grassmannian Spaces
    7.4.1 Grassmannian Spaces
    7.4.2 Metrics on Grassmannian Spaces
    7.4.3 Configurations and Matrices
    7.4.4 Packings with Chordal Distance
      7.4.4.1 Numerical Experiments
    7.4.5 Packings with Spectral Distance
      7.4.5.1 Numerical Experiments
    7.4.6 Packings with Fubini–Study Distance
      7.4.6.1 Numerical Experiments
  7.5 Bounds on Packing Radii
  7.6 Conclusions
    7.6.1 Future Work
  7.7 Tables and Figures
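The alternating projection idea behind Chapter 7 can be sketched in a few lines. The following is a simplified illustration, not the dissertation's exact algorithm: it alternates between projecting the Gram matrix onto the set of matrices whose off-diagonal entries are bounded by a target coherence `mu` (the packing constraint), and projecting back onto the rank-`d` positive-semidefinite matrices with unit diagonal via a spectral truncation. The specific coherence target, iteration count, and renormalization step are assumptions made for this sketch.

```python
import numpy as np

def alternating_projection_packing(d, N, mu, iters=500, seed=0):
    """Heuristic sketch: pack N unit vectors in R^d by alternating
    projection on the Gram matrix. Returns a d x N configuration."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((d, N))
    X /= np.linalg.norm(X, axis=0)          # unit-norm starting points
    for _ in range(iters):
        G = X.T @ X
        # Projection 1: force off-diagonal inner products into [-mu, mu].
        H = np.clip(G, -mu, mu)
        np.fill_diagonal(H, 1.0)
        # Projection 2: nearest rank-d PSD matrix, via eigendecomposition
        # (keep the d largest eigenvalues, clipped at zero).
        w, V = np.linalg.eigh(H)
        w = np.clip(w[-d:], 0.0, None)
        X = (V[:, -d:] * np.sqrt(w)).T
        X /= np.linalg.norm(X, axis=0)      # restore unit diagonal
    return X
```

In practice (Section 7.2) the two matrix nearness problems are solved more carefully and the method is adapted to projective and Grassmannian distances, but the alternation between a structural constraint set and a spectral constraint set is the core of the approach.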
