Computer Techniques and Algorithms in Digital Signal Processing
412 Pages · 1996

CONTRIBUTORS

Numbers in parentheses indicate the pages on which the authors' contributions begin.

J. R. Cruz (1), School of Electrical Engineering, The University of Oklahoma, Norman, Oklahoma 73019
Vassil Dimitrov (155), Technical University of Plovdiv, Sofia-1000, Bulgaria
Patrick Flandrin (105), Laboratoire de Physique, URA 1325 Centre National de la Recherche Scientifique, Ecole Normale Superieure de Lyon, 69634 Lyon, France
Joydeep Ghosh (301), Department of Electrical and Computer Engineering, College of Engineering, The University of Texas at Austin, Austin, Texas 78712
Georgios B. Giannakis (259), School of Engineering and Applied Science, Department of Electrical Engineering, University of Virginia, Charlottesville, Virginia 22903
K. Giridhar (339), Department of Electrical Engineering, Indian Institute of Technology, Madras 600036, India
Ronald A. Iltis (339), Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, California 93106
Graham A. Jullien (155), Department of Electrical Engineering, University of Windsor, Windsor, Ontario, Canada N9B 3P4
Cornelius T. Leondes (211), University of California, Los Angeles, Los Angeles, California 90024
Olivier Michel (105), Laboratoire de Physique, URA 1325 Centre National de la Recherche Scientifique, Ecole Normale Superieure de Lyon, 69634 Lyon, France
Bhaskar D. Rao (79), Electrical and Computer Engineering Department, University of California, San Diego, La Jolla, California 92093
Sanyogita Shamsunder (259), School of Engineering and Applied Science, Department of Electrical Engineering, University of Virginia, Charlottesville, Virginia 22903
John Shynk (339), Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, California 93106
Bryan W. Stiles (301), Department of Electrical and Computer Engineering, College of Engineering, The University of Texas at Austin, Austin, Texas 78712
Peter A. Stubberud (211), Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, Las Vegas, Nevada 89154
Richard M. Todd (1), School of Electrical Engineering, The University of Oklahoma, Norman, Oklahoma 73019

PREFACE

From about the mid-1950s to the early 1960s, the field of digital filtering, which was based on processing data from various sources on a mainframe computer, played a key role in the processing of telemetry data. During this period the processing of airborne radar data was based on analog computer technology. In this application area, an airborne radar used in tactical aircraft could detect the radar return from another low-flying aircraft in the environment of competing radar return from the ground. This was accomplished by the processing and filtering of the radar signal by means of analog circuitry, taking advantage of the Doppler frequency shift due to the velocity of the observed aircraft. This analog implementation lacked the flexibility and capability inherent in programmable digital signal processor technology, which was just coming onto the technological scene. Developments and powerful technological advances in integrated digital electronics coalesced soon after the early 1960s to lay the foundations for modern digital signal processing.
Continuing developments in techniques and supporting technology, particularly very-large-scale integrated digital electronics circuitry, have resulted in significant advances in many areas. These areas include consumer products, medical products, automotive systems, aerospace systems, geophysical systems, and defense-related systems. Therefore, this is a particularly appropriate time for Control and Dynamic Systems to address the theme of "Computer Techniques and Algorithms in Digital Signal Processing."

The first contribution to this volume, "Frequency Estimation and the QD Method," by Richard M. Todd and J. R. Cruz, is an in-depth treatment of recently developed algorithms for taking an input signal assumed to be a combination of several sinusoids and determining the frequencies of these sinusoids. Fast algorithm techniques are also presented for frequency estimation. Extensive computer analyses demonstrate the effectiveness of the various techniques described.

"Roundoff Noise in Floating Point Digital Filters," by Bhaskar D. Rao, describes the effect of finite word length in the implementation of digital filters. This is an issue of considerable importance because, among other reasons, finite word length introduces errors which must be understood and dealt with effectively in the implementation process. The types of arithmetic commonly employed are fixed and floating point arithmetics, and their effect on digital filters has been extensively studied and is reasonably well understood. More recently, with the increasing availability of floating point capability in signal processing chips, insight into algorithms employing floating point arithmetic is of growing interest and significance. This contribution is a comprehensive treatment of the techniques involved and includes examples to illustrate them.

The third contribution is "Higher Order Statistics for Chaotic Signal Analysis," by Olivier Michel and Patrick Flandrin.
Numerous engineering problems involving stochastic process inputs to systems have been based on the model of white Gaussian noise as an input to a linear system whose output is then the desired input to the system under study. This approach has proved to be very effective in numerous engineering problems, though it is not capable of handling possible nonlinear features of the system from which the time series in fact originates. In addition, it is now well recognized that irregularities in a signal may stem from a nonlinear, purely deterministic process exhibiting a high sensitivity to initial conditions. Such systems, referred to as chaotic systems, have been studied for a long time in the context of dynamical systems theory. However, the study of time series produced by such experimental chaotic systems is more recent and has motivated the search for new specific analytical tools and the need to describe the corresponding signals from a completely new perspective. This rather comprehensive treatment includes numerous examples of issues that will be of increasing applied significance in engineering systems.

"Two-Dimensional Transforms Using Number Theoretic Techniques," by Graham A. Jullien and Vassil Dimitrov, discusses various issues related to the computation of two-dimensional transforms using number theoretic techniques. For those not fully familiar with number theoretic methods, the authors have presented the basic groundwork for a study of the techniques and a number of examples of two-dimensional transforms developed by them. Several VLSI implementations which exemplify the effectiveness of number theoretic techniques in two-dimensional transforms are presented.

The next contribution is "Fixed Point Roundoff Effects in Frequency Sampling Filters," by Peter A. Stubberud and Cornelius T. Leondes. Under certain conditions, frequency sampling filters can implement linear phase filters more efficiently than direct convolution filters.
This chapter examines the effects that finite precision fixed point arithmetic can have on frequency sampling filters.

Next is "Cyclic and High-Order Sensor Array Processing," by Sanyogita Shamsunder and Georgios B. Giannakis. Most conventional sensor array processing algorithms avoid the undesirable Doppler effects due to relative motion between the antenna array and the source by limiting the observation interval. This contribution develops alternative, more effective algorithms for dealing with this issue and includes numerous illustrative examples.

"Two Stage Habituation Based Neural Networks for Dynamic Signal Classification," by Bryan W. Stiles and Joydeep Ghosh, is a comprehensive treatment of the application of neural network systems techniques to the important area of dynamic signal classification in signal processing systems. The literature and, in fact, the patents in neural network systems techniques are growing rapidly, as is the breadth of important applications. These applications include such major areas as speech signal classification, sonar systems, and geophysical systems signal processing.

The final contribution is "Blind Adaptive MAP Symbol Detection and a TDMA Digital Mobile Radio Application," by K. Giridhar, John J. Shynk, and Ronald A. Iltis. One of the most important application areas of signal processing on the international scene is that of mobile digital communications applications in such areas as cellular phones and mobile radios. The shift from analog to digital cellular phones with all their advantages is well underway and will be pervasive in Europe, Asia, and the United States. This contribution is an in-depth treatment of techniques in this area and is therefore a most appropriate contribution with which to conclude this volume.
This volume on computer techniques and algorithms in digital signal processing clearly reveals the significance and power of the techniques available and, with further development, the essential role they will play in a wide variety of applications. The authors are all to be highly commended for their splendid contributions, which will provide a significant and unique reference on the international scene for students, research workers, practicing engineers, and others for years to come.

Frequency Estimation and the QD Method

Richard M. Todd
J. R. Cruz
School of Electrical Engineering
The University of Oklahoma
Norman, OK 73019

I. Introduction to Frequency Estimation

In this chapter we discuss some recently developed algorithms for frequency estimation, that is, taking an input signal assumed to be a combination of several sinusoids and determining the frequencies of these sinusoids. Here we briefly review the better known approaches to frequency estimation, and some of the mathematics behind them.

I.A. Periodograms and Blackman-Tukey Methods

The original periodogram-based frequency estimation algorithm is fairly simple. Suppose we have as input a regularly sampled signal $x_n$ (where $n$ varies from 0 to $N-1$, so there are $N$ data points in total). To compute the periodogram, one simply takes the signal, applies a Discrete Fourier Transform, and takes the magnitude squared of the result. This gives one an estimate of the power spectral density (PSD) $\hat{P}_{xx}(f)$ of the input signal:

\[
\hat{P}_{xx}(f) = \frac{1}{N}\left|\sum_{n=0}^{N-1} x_n e^{-j2\pi fn}\right|^2 \qquad (1)
\]

where $j$ is the square root of minus one. If the input signal is assumed to be a sum of sinusoids, one can then derive estimates for the frequencies of these sinusoids from the peaks of the $\hat{P}_{xx}(f)$ estimate as a function of frequency; each peak will correspond to an input sinusoid. This approach to frequency estimation seems simple enough, but it has some problems.
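As a concrete illustration of Eq. (1), the periodogram and its peak-picking frequency estimate can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the chapter; the signal, sample count, and helper name are chosen here for the example.

```python
import numpy as np

def periodogram(x):
    """PSD estimate of Eq. (1): (1/N) |DFT of x|^2."""
    N = len(x)
    return np.abs(np.fft.fft(x)) ** 2 / N

# Two complex sinusoids at f = 0.1 and f = 0.25 (cycles/sample)
N = 256
n = np.arange(N)
x = np.exp(2j * np.pi * 0.1 * n) + np.exp(2j * np.pi * 0.25 * n)

P = periodogram(x)
freqs = np.fft.fftfreq(N)
f_hat = freqs[np.argmax(P)]   # frequency of the tallest peak
```

Here $f = 0.25$ happens to fall exactly on a DFT bin, so its peak is sharp, while the $f = 0.1$ component leaks across neighboring bins, a small instance of the bias effects discussed next.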
One would expect that the variance of this estimate would go to zero as the number of input data samples available increases, i.e., as the number of samples increases, the quality of the estimate of the spectral density would get better. Surprisingly, this does not happen: to first order, the variance of $\hat{P}_{xx}(f)$ does not depend on $N$ (see [1] for details).

CONTROL AND DYNAMIC SYSTEMS, VOL. 75
Copyright © 1996 by Academic Press, Inc. All rights of reproduction in any form reserved.

Intuitively, this lack of decrease of the variance can be explained as follows: the spectral estimate $\hat{P}_{xx}(f)$ can be written as

\[
\hat{P}_{xx}(f) = N\left|\sum_{k=0}^{N-1} h_{-k}\, x_k\right|^2 \qquad (2)
\]

where

\[
h_k = \frac{1}{N}\exp(j2\pi fk), \qquad k = -(N-1),\ldots,0 \qquad (3)
\]

turns out to be the impulse response of a bandpass filter centered around $f$. So the spectral estimate turns out to be, basically, a single sample of the output of a bandpass filter centered at $f$. Since only one sample of the output goes into the computation of the estimate, there is no opportunity for averaging to lower the variance of the estimate.

This suggests a simple way to improve the variance of the periodogram: split the input signal into $M$ separate pieces of length $N/M$, compute the periodogram of each one, and average them together. This does reduce the variance of the spectral estimates by a factor of $1/M$. Alas, this benefit does not come without a cost. It turns out that the spectral estimates from the periodogram not only have a (rather nasty) variance, they also have a bias; the expected value of the estimate turns out to be the true power spectral density convolved with $W_B(f)$, the Fourier transform of the Bartlett window function

\[
w_B[k] = 1 - \frac{|k|}{N}, \qquad |k| \le N-1 \qquad (4)
\]

As $N$ grows larger, $W_B(f)$ approaches a delta function, so the periodogram approaches an unbiased estimate as $N \to \infty$.
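The segment-and-average scheme just described is easy to sketch. The function below is an illustrative stand-in, not code from the chapter; the white-noise demo simply makes the variance reduction visible, since the true PSD of white noise is flat.

```python
import numpy as np

def averaged_periodogram(x, M):
    """Split x into M non-overlapping segments of length N//M,
    take the periodogram of each, and average the results.
    Variance drops roughly as 1/M; bias and resolution worsen."""
    L = len(x) // M
    segments = x[:M * L].reshape(M, L)
    P = np.abs(np.fft.fft(segments, axis=1)) ** 2 / L
    return P.mean(axis=0)

# For unit-variance white noise the true PSD is flat at 1;
# averaging over 16 segments tightens the estimate around that level.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
P_raw = np.abs(np.fft.fft(x)) ** 2 / len(x)   # single periodogram
P_avg = averaged_periodogram(x, 16)           # 16-segment average
```

The averaged estimate scatters far less around the flat true PSD than the raw periodogram does, at the cost of the coarser frequency grid (here 256 bins instead of 4096).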
But when we split the signal into $M$ pieces of size $N/M$, we are now computing periodograms on segments $1/M$th the size of the original, so the resulting periodograms have considerably worse bias than the original. We have improved the variance of the periodogram by splitting the signal into $M$ pieces, but at a cost of significantly increasing the bias, increasing the blurring effect of convolving the true PSD with the $W_B(f)$ function. As the variance improves, the resolution of the algorithm, the ability for it to detect two closely spaced sinusoids as being two separate sinusoids, goes down as well; with smaller segment sizes, the two peaks of the true PSD get blurred into one peak.

Blackman and Tukey invented another modification of the periodogram estimator. One can readily show that the original periodogram can be written as

\[
\hat{P}_{xx}(f) = \sum_{k=-(N-1)}^{N-1} \hat{r}_{xx}[k]\, \exp(-j2\pi fk) \qquad (5)
\]

where

\[
\hat{r}_{xx}[k] =
\begin{cases}
\dfrac{1}{N}\displaystyle\sum_{n=0}^{N-1-k} x_{n+k}\, x_n^{*} & \text{for } k = 0,\ldots,N-1 \\[4pt]
\hat{r}_{xx}^{*}[-k] & \text{for } k = -(N-1),\ldots,-1
\end{cases}
\qquad (6)
\]

is a (biased) estimate of the autocorrelation function of the signal. Hence, the periodogram can be thought of as the Discrete Fourier Transform of the estimated autocorrelation function of the signal. The Blackman-Tukey algorithm simply modifies the periodogram by multiplying the autocorrelation by a suitable window function before taking the Fourier Transform, thus giving more weight to those autocorrelation estimates in the center and less weight to those out at the ends (near $\pm N$), where the estimate depends on relatively few input sample values. As is shown in Kay [1], this results in a trade-off similar to that involved in the segmented periodogram algorithm; the variance improves, but at the expense of worsened bias and poorer resolution.

I.B. Linear Prediction Methods

A wide variety of frequency estimation algorithms are based on the idea of linear prediction.
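A minimal sketch of the Blackman-Tukey estimator of Eqs. (5)-(6): form the biased autocorrelation estimate, taper it with a lag window of half-width $L \ll N$, and Fourier-transform the result. This is illustrative code, not from the chapter; the triangular (Bartlett) lag window and the test signal are choices made here for concreteness.

```python
import numpy as np

def blackman_tukey(x, L, nfft=512):
    """Blackman-Tukey PSD: window the biased autocorrelation
    estimate of Eq. (6), then transform it as in Eq. (5)."""
    N = len(x)
    r = np.correlate(x, x, mode="full") / N   # r[k], k = -(N-1)..N-1
    lags = np.arange(-(N - 1), N)
    # Triangular lag window, zero beyond |k| = L
    w = np.maximum(1.0 - np.abs(lags) / (L + 1), 0.0)
    f = np.arange(nfft) / nfft - 0.5          # frequency grid
    E = np.exp(-2j * np.pi * np.outer(f, lags))
    return f, (E @ (r * w)).real

n = np.arange(256)
x = np.cos(2 * np.pi * 0.2 * n)
f, P = blackman_tukey(x, 64)
f_hat = abs(f[np.argmax(P)])   # peak lands near 0.2
```

Shrinking `L` smooths the spectrum (lower variance) while broadening the peaks (worse bias and resolution), which is exactly the trade-off described in the text.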
Linear prediction, basically, is assuming that one's signal satisfies some sort of linear difference equation, and using this model to further work with the signal. We will primarily concentrate in this section on the so-called AR (auto-regressive) models, as opposed to the MA (moving-average) model, or the ARMA model, which is a combination of AR and MA models. This is because, as we will show, the AR model is particularly applicable to the case we are interested in, the case of a sum of sinusoids embedded in noise.

An ARMA random process of orders $l$ and $m$ is a series of numbers $y_n$ which satisfy the recurrence relation

\[
y_n = \sum_{i=0}^{m} b_i e_{n-i} - \sum_{i=1}^{l} a_i y_{n-i} \qquad (7)
\]

where $e_{n-i}$ is a sequence of white Gaussian noise with zero mean and some known variance. The $a_i$ are called the auto-regressive coefficients, because they express the dependence of $y_n$ on previous values of $y$. The $b_i$ coefficients are called the moving average coefficients, because they produce a moving average of the Gaussian noise process. One can consider this process as being the sum of the output of two digital filters, one filtering the noise values $e_n$, and one providing feedback by operating on earlier values of the output of the ARMA process.

A MA process is just a special case of the ARMA process with $a_1 = \cdots = a_l = 0$, i.e.,

\[
y_n = \sum_{i=0}^{m} b_i e_{n-i} \qquad (8)
\]

(note that without loss of generality, we can take $b_0 = 1$ by absorbing that term into the variance of the noise process $e_n$). Similarly, an AR process is just the special case with $b_1 = \cdots = b_m = 0$, so

\[
y_n = -\sum_{i=1}^{l} a_i y_{n-i} + e_n \qquad (9)
\]

(note that, again, we can always take $b_0 = 1$.)

As mentioned above, the AR model is particularly suitable to describing a sum of sinusoids; we now show why this is so. Consider the AR process described by Eq. (9), and consider what happens when we filter it as follows:

\[
x_n = y_n + a_1 y_{n-1} + \cdots + a_l y_{n-l} \,, \qquad (10)
\]

that is to say, run it through a digital filter with coefficients $1, a_1, \ldots, a_l$. Looking at the definition of the AR process $y_n$, one can readily see that the output $x_n$ of our filter is nothing but the noise sequence:

\[
x_n = e_n \quad \forall n \,. \qquad (11)
\]

The above digital filter is called the predictor error filter, because it is the error between the actual signal $y_n$ and a prediction of that signal based on the previous $l$ values of the signal. It can be shown [1] that this particular filter is optimal in the sense that if you consider all possible filters of order $l$ which have their first coefficient set to 1, the one that produces an output signal of least total power is the one based on the AR coefficients of the original AR process. Furthermore, the optimal filter is the one which makes the output signal white noise, removing all traces of frequency dependence in the spectrum of the output signal.

Now suppose we have a signal composed of a sum of $m$ complex sinusoids plus some white Gaussian noise:

\[
y_n = \sum_{i=1}^{m} A_i \exp\!\big(j(\omega_i n + \phi_i)\big) + e_n \qquad (12)
\]

where $A_i$ are the amplitudes of the sinusoids, $\omega_i$ the frequencies, and $\phi_i$ the phases. Now, let us compute a polynomial $A(z)$ as follows:

\[
A(z) = 1 + \sum_{i=1}^{m} a_i z^{-i} = \prod_{i=1}^{m} \big(1 - z^{-1}\exp(j\omega_i)\big) \qquad (13)
\]

Now, let us take the above $a_i$ as coefficients of a proposed predictor error filter, and apply it to our sum of sinusoids. $A(z)$ is just the z-transform of the filter's impulse response. As is known from z-transform theory, each sinusoid in the input will be scaled by a factor $A(e^{j\omega})$ on passing through the filter. But for any given $\omega_i$,

\[
A(e^{j\omega_i}) = \prod_{k=1}^{m} \big(1 - \exp(-j\omega_i)\exp(j\omega_k)\big) = 0 \qquad (14)
\]

since the $k = i$ factor vanishes. So the terms corresponding to the sinusoidal frequencies in the input signal are completely blocked by the predictor error filter, and all that comes out of the filter is a scaled version of the noise $e_n$. Hence, a signal composed of a sum of $m$ complex sinusoids can be modeled as an AR process of order $m$.
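The annihilation property of Eqs. (13)-(14) is easy to check numerically. The sketch below is illustrative, not from the chapter: it builds $A(z)$ from two assumed frequencies, applies the predictor error filter to a noisy sum of complex sinusoids, and recovers the frequencies as the angles of the roots of $A(z)$.

```python
import numpy as np

# Assumed true frequencies (rad/sample) of two complex sinusoids
w_true = np.array([0.4, 1.1])

# Eq. (13): A(z) has roots exp(j w_i); np.poly expands the product
# into the coefficients 1, a_1, ..., a_m of the predictor error filter.
a = np.poly(np.exp(1j * w_true))

rng = np.random.default_rng(1)
n = np.arange(500)
noise = 0.01 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
y = np.exp(1j * w_true[0] * n) + np.exp(1j * w_true[1] * n) + noise

# Eqs. (10)-(11): FIR filtering with 1, a_1, ..., a_m annihilates the
# sinusoids, leaving only (filtered) noise after the initial transient.
x = np.convolve(y, a)[: n.size]
residual_power = np.mean(np.abs(x[2:]) ** 2)
input_power = np.mean(np.abs(y) ** 2)

# Eq. (14) read in reverse: the frequencies are the root angles of A(z).
w_hat = np.sort(np.angle(np.roots(a)))
```

In practice the $a_i$ are of course not known in advance; linear prediction methods estimate them from the data and then root the resulting polynomial, exactly as in the last line above.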
There are complications that arise when one considers the case of undamped sinusoids; in that case, the feedback filter for the AR process we would like to use as a model has unity gain at the frequencies of the sinusoids. This means that those components of the noise at those frequencies get fed back with unity strength over and over, so the variance of $y_n$ tends to grow without bound as $n \to \infty$. Thus, theoretically, one cannot model a signal of undamped sinusoids with an AR process. In practice, however, if one ignores this restriction and attempts to make an AR model, the resulting models and frequency estimates seem to work fairly well anyway.

I.B.1. The Prony Method

Probably the first use of linear prediction methods in frequency estimation, interestingly enough, dates back to 1795, to a work by Prony [2]. We suppose our signal to be composed solely of complex damped exponentials (with no additive noise present), e.g.,

\[
x_n = \sum_{i=1}^{p} A_i \exp\!\big(n(\alpha_i + j\omega_i)\big) \qquad (15)
\]
