
Statistical Digital Signal Processing and Modeling PDF

622 Pages·1996·18.668 MB·English


STATISTICAL DIGITAL SIGNAL PROCESSING AND MODELING

MONSON H. HAYES
Georgia Institute of Technology

JOHN WILEY & SONS, INC.
New York, Chichester, Brisbane, Toronto, Singapore

Acquisitions Editor: Steven Elliot
Marketing Manager: Debra Riegert
Senior Production Editor: Tony VenGraitis
Designer: Laura Ierardi
Manufacturing Manager: Mark Cirillo
Production Service: Ingrao Associates

This book was set in Times Roman by Techsetters, Inc. and printed and bound by Hamilton Printing. The cover was printed by New England Book Components.

Recognizing the importance of preserving what has been written, it is a policy of John Wiley & Sons, Inc. to have books of enduring value published in the United States printed on acid-free paper, and we exert our best efforts to that end. The paper in this book was manufactured by a mill whose forest management programs include sustained yield harvesting of its timberlands. Sustained yield harvesting principles ensure that the number of trees cut each year does not exceed the amount of new growth.

Copyright © 1996, by John Wiley & Sons, Inc. All rights reserved. Published simultaneously in Canada.

Reproduction or translation of any part of this work beyond that permitted by Sections 107 and 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc.

ISBN 0-471-59431-8

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

This book is dedicated to my beautiful wife, Sandy, for all of her love, patience, and support, and for all of the sacrifices that she has made so that this book could become a reality. It is also dedicated to Michael, Kimberly, and Nicole, for the joy that they have brought into my life, and to my parents for all of their love and support of me in all of my endeavors.
CONTENTS

Preface

1 INTRODUCTION

2 BACKGROUND
2.1 Introduction
2.2 Discrete-Time Signal Processing
2.2.1 Discrete-Time Signals
2.2.2 Discrete-Time Systems
2.2.3 Time-Domain Descriptions of LSI Filters
2.2.4 The Discrete-Time Fourier Transform
2.2.5 The z-Transform
2.2.6 Special Classes of Filters
2.2.7 Filter Flowgraphs
2.2.8 The DFT and FFT
2.3 Linear Algebra
2.3.1 Vectors
2.3.2 Linear Independence, Vector Spaces, and Basis Vectors
2.3.3 Matrices
2.3.4 Matrix Inverse
2.3.5 The Determinant and the Trace
2.3.6 Linear Equations
2.3.7 Special Matrix Forms
2.3.8 Quadratic and Hermitian Forms
2.3.9 Eigenvalues and Eigenvectors
2.3.10 Optimization Theory
2.4 Summary
2.5 Problems

3 DISCRETE-TIME RANDOM PROCESSES
3.1 Introduction
3.2 Random Variables
3.2.1 Definitions
3.2.2 Ensemble Averages
3.2.3 Jointly Distributed Random Variables
3.2.4 Joint Moments
3.2.5 Independent, Uncorrelated and Orthogonal Random Variables
3.2.6 Linear Mean Square Estimation
3.2.7 Gaussian Random Variables
3.2.8 Parameter Estimation: Bias and Consistency
3.3 Random Processes
3.3.1 Definitions
3.3.2 Ensemble Averages
3.3.3 Gaussian Processes
3.3.4 Stationary Processes
3.3.5 The Autocovariance and Autocorrelation Matrices
3.3.6 Ergodicity
3.3.7 White Noise
3.3.8 The Power Spectrum
3.4 Filtering Random Processes
3.5 Spectral Factorization
3.6 Special Types of Random Processes
3.6.1 Autoregressive Moving Average Processes
3.6.2 Autoregressive Processes
3.6.3 Moving Average Processes
3.6.4 Harmonic Processes
3.7 Summary
3.8 Problems

4 SIGNAL MODELING
4.1 Introduction
4.2 The Least Squares (Direct) Method
4.3 The Padé Approximation
4.4 Prony's Method
4.4.1 Pole-Zero Modeling
4.4.2 Shanks' Method
4.4.3 All-Pole Modeling
4.4.4 Linear Prediction
4.4.5 Application: FIR Least Squares Inverse Filters
4.5 Iterative Prefiltering*
4.6 Finite Data Records
4.6.1 The Autocorrelation Method
4.6.2 The Covariance Method
4.7 Stochastic Models
4.7.1 Autoregressive Moving Average Models
4.7.2 Autoregressive Models
4.7.3 Moving Average Models
4.7.4 Application: Power Spectrum Estimation
4.8 Summary
4.9 Problems

5 THE LEVINSON RECURSION
5.1 Introduction
5.2 The Levinson-Durbin Recursion
5.2.1 Development of the Recursion
5.2.2 The Lattice Filter
5.2.3 Properties
5.2.4 The Step-Up and Step-Down Recursions
5.2.5 The Inverse Levinson-Durbin Recursion
5.2.6 The Schur Recursion*
5.2.7 The Cholesky Decomposition
5.2.8 The Autocorrelation Extension Problem
5.2.9 Inverting a Toeplitz Matrix
5.3 The Levinson Recursion
5.4 The Split Levinson Recursion*
5.5 Summary
5.6 Problems

6 LATTICE FILTERS
6.1 Introduction
6.2 The FIR Lattice Filter
6.3 Split Lattice Filter
6.4 IIR Lattice Filters
6.4.1 All-pole Filter
6.4.2 Other All-pole Lattice Structures
6.4.3 Lattice Filters Having Poles and Zeros
6.5 Lattice Methods for All-Pole Signal Modeling
6.5.1 The Forward Covariance Method
6.5.2 The Backward Covariance Method
6.5.3 Variations
6.5.4 Burg's Method
6.5.5 Modified Covariance Method
6.6 Stochastic Modeling
6.7 Summary
6.8 Problems

7 WIENER FILTERING
7.1 Introduction
7.2 The FIR Wiener Filter
7.2.1 Filtering
7.2.2 Linear Prediction
7.2.3 Noise Cancellation
7.2.4 Lattice Representation for the FIR Wiener Filter
7.3 The IIR Wiener Filter
7.3.1 Noncausal IIR Wiener Filter
7.3.2 The Causal IIR Wiener Filter
7.3.3 Causal Wiener Filtering
7.3.4 Causal Linear Prediction
7.3.5 Wiener Deconvolution
7.4 Discrete Kalman Filter
7.5 Summary
7.6 Problems

8 SPECTRUM ESTIMATION
8.1 Introduction
8.2 Nonparametric Methods
8.2.1 The Periodogram
8.2.2 Performance of the Periodogram
8.2.3 The Modified Periodogram
8.2.4 Bartlett's Method: Periodogram Averaging
8.2.5 Welch's Method: Averaging Modified Periodograms
8.2.6 Blackman-Tukey Approach: Periodogram Smoothing
8.2.7 Performance Comparisons
8.3 Minimum Variance Spectrum Estimation
8.4 The Maximum Entropy Method
8.5 Parametric Methods
8.5.1 Autoregressive Spectrum Estimation
8.5.2 Moving Average Spectrum Estimation
8.5.3 Autoregressive Moving Average Spectrum Estimation
8.6 Frequency Estimation
8.6.1 Eigendecomposition of the Autocorrelation Matrix
8.6.2 Pisarenko Harmonic Decomposition
8.6.3 MUSIC
8.6.4 Other Eigenvector Methods
8.7 Principal Components Spectrum Estimation
8.7.1 Bartlett Frequency Estimation
8.7.2 Minimum Variance Frequency Estimation
8.7.3 Autoregressive Frequency Estimation
8.8 Summary
8.9 Problems

9 ADAPTIVE FILTERING
9.1 Introduction
9.2 FIR Adaptive Filters
9.2.1 The Steepest Descent Adaptive Filter
9.2.2 The LMS Algorithm
9.2.3 Convergence of the LMS Algorithm
9.2.4 Normalized LMS
9.2.5 Application: Noise Cancellation
9.2.6 Other LMS-Based Adaptive Filters
9.2.7 Gradient Adaptive Lattice Filter
9.2.8 Joint Process Estimator
9.2.9 Application: Channel Equalization
9.3 Adaptive Recursive Filters
9.4 Recursive Least Squares
9.4.1 Exponentially Weighted RLS
9.4.2 Sliding Window RLS
9.4.3 Summary
9.5 Problems

Appendix: USE OF MATLAB PROGRAMS
A.1 Introduction
A.2 General Information
A.3 Random Processes
A.4 Signal Modeling
A.5 Levinson Recursion
A.6 Lattice Filters
A.7 Optimum Filters
A.8 Spectrum Estimation
A.9 Adaptive Filtering

Table of Symbols
Index

PREFACE

This book is the culmination of a project that began as a set of notes for a graduate-level course that is offered at Georgia Tech. In writing this book, there have been many challenges. One of these was the selection of an appropriate title for the book. Although the title that was selected is Statistical Digital Signal Processing and Modeling, any one of a number of other titles could equally well have been chosen.
For example, if the title of a book is to capture its central theme, then the title perhaps could have been Least Squares Theory in Signal Processing. If, on the other hand, the title should reflect the role of the book within the context of a course curriculum, then the title should have been A Second Course in Discrete-Time Signal Processing. Whatever the title, the goal of this book remains the same: to provide a comprehensive treatment of signal processing algorithms for modeling discrete-time signals, designing optimum digital filters, estimating the power spectrum of a random process, and designing and implementing adaptive filters.

In looking through the Table of Contents, the reader may wonder what the reasons were for choosing the collection of topics in this book. There are two. The first is that each topic that has been selected is not only important in its own right, but is also important in a wide variety of applications such as speech and audio signal processing, image processing, array processing, and digital communications. The second is that, as the reader will soon discover, there is a remarkable relationship between these topics that ties together a number of seemingly unrelated problems and applications. For example, in Chapter 4 we consider the problem of modeling a signal as the unit sample response of an all-pole filter. Then, in Chapter 7, we find that all-pole signal modeling is equivalent to the problem of designing an optimum (Wiener) filter for linear prediction. Since both problems require finding the solution to a set of Toeplitz linear equations, the Levinson recursion that is derived in Chapter 5 may be used to solve both problems, and the properties that are shown to apply to one problem may be applied to the other. Later, in Chapter 8, we find that an all-pole model performs a maximum entropy extrapolation of a partial autocorrelation sequence and leads, therefore, to the maximum entropy method of spectrum estimation.
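The Toeplitz structure mentioned above is what makes a single fast solver serve both the modeling and the prediction problem. As a rough illustration only (this is not one of the book's MATLAB programs; the function name, the Python language, and the restriction to real-valued data are our own assumptions), the order-recursive Levinson-Durbin solution of the all-pole normal equations might be sketched as:

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve the order-p all-pole (autocorrelation) normal equations.

    r : autocorrelation sequence r[0], ..., r[p] (real-valued here)
    Returns (a, eps), where a = [1, a(1), ..., a(p)] is the
    prediction-error filter and eps is the final modeling error.
    Exploiting the Toeplitz structure gives O(p^2) work instead of
    the O(p^3) of a general linear-equation solver.
    """
    r = np.asarray(r, dtype=float)
    a = np.zeros(p + 1)
    a[0] = 1.0
    eps = r[0]
    for j in range(1, p + 1):
        # Reflection coefficient for the order-j update
        gamma = -(r[j] + np.dot(a[1:j], r[j-1:0:-1])) / eps
        # Order update: combine the order-(j-1) filter with its reverse
        a[1:j] = a[1:j] + gamma * a[j-1:0:-1]
        a[j] = gamma
        # Error update; |gamma| < 1 keeps eps positive and decreasing
        eps *= 1.0 - gamma**2
    return a, eps
```

For instance, for the autocorrelation sequence r = [1.0, 0.5, 0.25] (consistent with a first-order pole at 0.5), the recursion returns a = [1, -0.5, 0] with eps = 0.75: the second-order model collapses to the underlying first-order one.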
This book possesses some unique features that set it apart from other treatments of statistical signal processing and modeling. First, each chapter contains numerous examples that illustrate the algorithms and techniques presented in the text. These examples play an important role in the learning process. However, of equal or greater importance is the working of new problems by the student. Therefore, at the end of each chapter, the reader will find numerous problems that range in difficulty from relatively simple exercises to more involved problems. Since many of the problems introduce extensions, generalizations, or applications of the material in each chapter, the reader is encouraged to read through the problems that are not worked. In addition to working problems, another important step in the learning process is to experiment with signal processing algorithms on a computer using either real or synthetic data. Therefore, throughout the book the reader will find MATLAB programs that have been written for most of the algorithms and techniques that are presented in the book and, at the end of each chapter, the reader will find a variety of computer exercises that use these programs. In these exercises, the student will study the performance of signal processing algorithms, look at ways to generalize the algorithms or make them more efficient, and write new programs.


