Author manuscript, published in "Numerical Algorithms 50, 4 (2009) 439-467". DOI: 10.1007/s11075-008-9236-1

Numerical Algorithms manuscript No. (will be inserted by the editor)

Numerical differentiation with annihilators in noisy environment

Mamadou Mboup · Cédric Join · Michel Fliess

September 3, 2008

Abstract  Numerical differentiation in noisy environment is revisited through an algebraic approach. For each given order, an explicit formula yielding a pointwise derivative estimation is derived, using elementary differential algebraic operations. These expressions are composed of iterated integrals of the noisy observation signal. We show in particular that the introduction of delayed estimates affords significant improvement. An implementation in terms of a classical finite impulse response (FIR) digital filter is given. Several simulation results are presented.

Keywords  Numerical differentiation, Operational calculus, Orthogonal polynomials, Linear filtering

Mathematics Subject Classification (2000)  65D25, 44A40, 44A10

M. Mboup
UFR Mathématiques et Informatique, CRIP5, Université Paris Descartes,
45 rue des Saints-Pères, 75270 Paris cedex 06, France
Tel.: +33 1 44 55 35 25, Fax: +33 1 44 55 35 35
E-mail: [email protected]

C. Join
CRAN (CNRS-UMR 7039), Université Henri Poincaré (Nancy I), BP 239,
54506 Vandœuvre-lès-Nancy, France
E-mail: [email protected]

M. Fliess
Équipe MAX, LIX (CNRS-UMR 7161), École polytechnique,
91128 Palaiseau, France
E-mail: [email protected]

M. Mboup · C. Join · M. Fliess
EPI ALIEN, INRIA, France.

1 Introduction

Numerical differentiation, i.e., the derivative estimation of noisy time signals, is a longstanding, difficult, ill-posed problem in numerical analysis and in signal processing and control. It has attracted a lot of attention due to its importance in many fields of engineering and applied mathematics. A number of different approaches have been proposed. Methods based on observer design may be found in the control literature [7], [6], [18]. See also [19,22,34] for other approaches from the control literature. In signal processing, it is very common to cast the problem in terms of frequency domain digital filter design [31], [29], [5]. This is motivated by the observation that an ideal nth order differentiator has a frequency response of magnitude ω^n. Another interesting approach, due to [2], consists in inverting the minimum phase transfer function of a properly designed numerical integrator to obtain a digital differentiator. Similar ideas have also been presented by [8], in the continuous-time context.

All these interesting approaches have been developed as candidate alternatives to the very classical one, based on least-squares polynomial fitting or (spline) interpolation. When completed with a regularization step, this classical approach may be very efficient (see e.g. [20], [9]) in off-line applications.

In this paper, the numerical differentiation problem is revisited through the algebraic framework of parameter estimation initially presented in [15] (see also [14] and [25]). Our purpose is to improve a new approach which started in [11,16], and in [13,12,10], for solving various questions in control and in signal and image processing. Given a smooth signal x and an order n, a key point of our approach is to consider $\frac{d^n x(t)}{dt^n}\big|_{t=\tau}$, for each fixed τ ≥ 0, as a single parameter to be estimated from a noisy observation of the signal. A pointwise derivative estimation therefore follows by varying τ.
The general ideas leading to this estimation are exposed in section 2. As in the classical approaches, the starting point is an order N ≥ n truncation of the Taylor series expansion of the signal. One of the key features of the proposed approach is to operate in the operational calculus domain [36], [27,28]. Therein, an extensive use of differential elimination and a series of algebraic manipulations yield, back in the time domain, an explicit expression for the estimate of $\frac{d^n x(t)}{dt^n}\big|_{t=\tau}$ as an integral operator of the noisy observation within a short time interval [τ, τ+T]. The differential algebraic manipulations may be chosen in such a way that the time domain integral operators specialise to iterated integrals. This corresponds to a family of pointwise derivative estimators introduced in section 3. This section contains the main contributions of the paper. Therein, we establish a direct link between our algebraic numerical differentiators and the well known fact that a scalar product of the signal x(t) with a kernel having n vanishing moments essentially captures its nth order derivative, provided x is smooth. Note however that drawing an efficient numerical differentiator from this general principle is another story, especially in a noisy setting. More precisely, recall that (see e.g. [3, Theorem 2.3.5, p. 31]): if f(t) belongs to a reproducing kernel Hilbert space H, with reproducing kernel K_H(τ,t), analytic in both t and τ*, then we have

$$\frac{\partial^{\ell}}{\partial \tau^{*\,\ell}}\,\langle f(t),\, K_H(\tau,t)\rangle = f^{(\ell)}(\tau). \qquad (1)$$

We mention that the model space H and especially the evaluation point τ have to be judiciously chosen before equation (1) yields an efficient derivative estimator. But, once such choices are made, a high performance numerical differentiation method is obtained. Now, it is a happy fact that the proposed algebraic estimators inherently achieve these judicious choices. In particular, the estimators become efficient provided an appropriate time delay is admitted. We give an explicit expression for the corresponding delay. Section 4 tells us how to compute the estimates based on the samples of the noisy observation signal. For equally spaced samples, our numerical differentiators take the form of classical finite impulse response (FIR) digital filters. These filters are next used to present some simulation examples. To ease the presentation, all the proofs are deferred to an appendix.

2 Preliminaries

We consider the estimation of x^(n)(t), the nth order derivative of a smooth signal x(t) defined on an interval I ⊂ ℝ₊. In a concrete application, the signal x(t) is not directly available; rather, it is observed through a noise corruption. In all the sequel, ϖ(t) will denote the noise and we set y(t) = x(t) + ϖ(t) for the noisy observation. The problem is challenging since it is well known that differentiation tends to amplify the noise.

2.1 Operational domain

Let us ignore the noise for a moment. Assume that x(t) is analytic on I so that we may consider, without any loss of generality, the convergent Taylor expansion at t = 0

$$x(t) = \sum_{i \geq 0} x^{(i)}(0)\,\frac{t^i}{i!}. \qquad (2)$$

With N ≥ n, the truncated Taylor expansion

$$x_N(t) = \sum_{i=0}^{N} x^{(i)}(0)\,\frac{t^i}{i!} \qquad (3)$$

satisfies the differential equation $\frac{d^{N+1}}{dt^{N+1}} x_N(t) = 0$. In the operational domain, this reads as

$$s^{N+1}\hat{x}_N(s) = s^{N} x(0) + s^{N-1}\dot{x}(0) + \cdots + s^{N-n} x^{(n)}(0) + \cdots + x^{(N)}(0), \qquad (4)$$

where $\hat{x}_N(s)$ stands for the operational analog of $x_N(t)$. In all the sequel, the operational analog of a signal u(t) will be denoted by û(s) and, to ease the notation to follow, we will drop the argument s and write û for short.
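As an elementary illustration (added here for concreteness), take N = 1. The truncated model x₁(t) = x(0) + ẋ(0)t satisfies ẍ₁(t) = 0, and (4) reduces to

$$s^{2}\hat{x}_1(s) = s\,x(0) + \dot{x}(0),$$

so the sought derivative ẋ(0) appears as one coefficient of a purely algebraic relation in the operational variable s. Isolating such a coefficient is precisely the role of the annihilators introduced next.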
The basic step towards the estimation of x^(n)(t), t ≥ 0, is the estimation of the coefficient x^(n)(0) from the observation y(t). Indeed, assume that an algorithm is already designed for this task. Then, for any τ ≥ 0, it is clear that applying the same algorithm to the new signal observation y_τ(t) = Heaviside(t) y(t+τ) will yield an estimate for x^(n)(τ). We thus lay the focus on how to estimate x^(n)(0) in (4).

2.1.1 Annihilators

Of course, it is also possible to estimate all the coefficients x^(i)(0), i = 0, ..., N, simultaneously. However, not only are the coefficients x^(i)(0), i ≠ n, unnecessary for the estimation of x^(n)(t), as explained above, but simultaneous estimation is also more sensitive to noise and numerical computation errors [26]. These drawbacks are avoided in the proposed approach. All the terms s^(N−i) x^(i)(0) in (4) with i ≠ n are consequently considered as undesired perturbations, which we proceed to annihilate. For this, it suffices to find a linear differential operator, i.e.

$$\Pi = \sum_{\text{finite}} \varrho_{\ell}(s)\,\frac{d^{\ell}}{ds^{\ell}}, \qquad \varrho_{\ell}(s) \in \mathbb{C}(s), \qquad (5)$$

satisfying

$$\Pi\,\hat{x}_N = \varrho(s)\,x^{(n)}(0), \qquad (6)$$

for some rational function ϱ(s) ∈ ℂ(s). Such a linear differential operator, subsequently called an annihilator for x^(n)(0), obviously exists and is not unique. It is also clear that to each annihilator Π there is a unique ϱ(s) ∈ ℂ(s) such that (6) holds. We shall say that Π and ϱ(s) are associated.

2.1.2 Integral operators

A linear differential operator is said to be proper (resp. strictly proper) if, and only if, each ϱ_ℓ(s) in (5) is a proper (resp. strictly proper) rational function. We recall that a rational function a/b is said to be proper (resp. strictly proper) if, and only if, deg a ≤ deg b (resp. deg a < deg b). Now, if Π is an annihilator associated with ϱ(s), then left multiplying both sides of (6) by a rational function σ(s) immediately shows that σ(s)Π is also an annihilator, associated with σ(s)ϱ(s). It is therefore easy to see that

Lemma 1  An annihilator Π and its associated rational function ϱ as in (6) may always be chosen strictly proper.

Henceforth, we shall consider only strictly proper annihilators with strictly proper associated rational function. This means that only integral operators will be involved, once (6) is translated back in the time domain.

2.2 Time domain

We now replace the model x_N by the actual noisy observation y. With (5), equation (6) then becomes

$$\varrho(s)\,\tilde{x}^{(n)}_N(0) = \sum_{\text{finite}} \varrho_{\ell}(s)\,\frac{d^{\ell}}{ds^{\ell}}\,\hat{y}. \qquad (7)$$

This equation provides us with an operational estimator of x^(n)(0). It now remains to express the estimate x̃^(n)_N(0) directly in the time domain. For this, remember that ϱ(s) and the ϱ_ℓ(s) are strictly proper rational (transfer) functions. Let f(t) and h_ℓ(t) be their corresponding impulse responses and recall that, by the classical rules of operational calculus, d/ds corresponds to multiplication by −t. Then equation (7) readily reads in the time domain as

$$f(t)\,\tilde{x}^{(n)}_N(0) = \sum_{\text{finite}} \int_0^{t} h_{\ell}(t-\alpha)\,(-1)^{\ell}\alpha^{\ell}\,y(\alpha)\,d\alpha. \qquad (8)$$

Here, the variable t represents the estimation time. This equation has therefore to be considered for fixed t, say t = T. Now, for each estimation time T > 0 such that f(T) ≠ 0, we obtain a numerical estimate x̃^(n)_N(0).
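As a worked illustration (added here; the systematic construction is given in section 3), consider n = N = 1, so that (4) reads s²x̂₁ = s x(0) + ẋ(0). Dividing by s gives s x̂₁ = x(0) + ẋ(0)/s, and one differentiation with respect to s annihilates the unwanted coefficient x(0):

$$\frac{d}{ds}\bigl(s\,\hat{x}_1\bigr) = -\frac{\dot{x}(0)}{s^{2}},$$

so Π = (d/ds)·s is an annihilator for ẋ(0), associated with ϱ(s) = −1/s². It is not strictly proper, but multiplying by 1/s², as Lemma 1 allows, gives the strictly proper annihilator (1/s²)(d/ds)·s, associated with ϱ(s) = −1/s⁴. Replacing x̂₁ by ŷ and translating back to the time domain (1/s^{k+1} ↔ t^k/k!, d/ds ↔ multiplication by −t, 1/s ↔ integration) yields, at t = T,

$$\tilde{\dot{x}}(0) = -\frac{6}{T^{3}}\int_0^{T} (T - 2\alpha)\,y(\alpha)\,d\alpha,$$

a purely integral first order differentiator. This is the member n = 1, κ = µ = 0 of the family studied below.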
Examine this estimate. To do this, write R_N(t) = x(t) − x_N(t) from (2) and (3) and decompose the observation y as y(t) = x_N(t) + R_N(t) + ϖ(t). Rewriting (8) in the form

$$\tilde{x}^{(n)}_N(0) = \int_0^{T} h(\alpha;T)\,\bigl\{x_N(\alpha) + R_N(\alpha) + \varpi(\alpha)\bigr\}\,d\alpha = x^{(n)}(0) + e_{R_N}(0) + e_{\varpi}(0), \qquad (9)$$

shows that the estimate is corrupted by two sources of error:

• the mismodelling, stemming from the truncation of the Taylor expansion, which contributes to e_{R_N}(0) (the bias term), and
• the noisy environment, which produces the error e_ϖ(0) (the variance term).

Needless to say, the contribution of each one of these two sources may vary markedly from one estimator to another. So, the game of the next section will be to devise an adequate annihilator Π, along with its associated rational function ϱ. By adequateness, we mean the minimisation of the overall estimation error. However, for N fixed, minimizing the mismodelling effect is, most often, in opposition with reducing the noise error contribution. As an example, note that the estimate x̃^(n)_N(0) will vary with T in general. Now, for N fixed, choosing a smaller estimation time T will tend to reduce |e_{R_N}(0)|. But at the same time, a large T is required to filter out the noise, showing that the choice of T should obey a compromise.

3 Pointwise derivative estimation

Now that we have fixed the notations and the guidelines of our methodology, we investigate in this section some detailed properties and performance of a class of pointwise derivative estimators. These will be derived from a particular family of annihilators. And, as we shall shortly see, the Jacobi orthogonal polynomials [35] are inherently connected with these estimators. A least squares interpretation then naturally follows [25], [26], and this leads to one of the main contributions of the paper: the numerical differentiation becomes efficient provided an appropriately chosen time delay is tolerated.

3.1 Purely integral estimators

A linear differential operator Π is said to be in finite-integral form if, and only if, each ϱ_ℓ(s) in (5) is of the form ϱ_ℓ(s) = (1/s) H_ℓ(1/s) for some polynomial H_ℓ. We consider the family of annihilators defined below.

Lemma 2  For any κ, µ ∈ ℕ, the differential operator

$$\Pi^{N,n}_{\kappa,\mu} = \frac{1}{s^{N+\mu+1}}\,\frac{d^{\,n+\kappa}}{ds^{\,n+\kappa}}\cdot\frac{1}{s}\,\frac{d^{\,N-n}}{ds^{\,N-n}}\cdot s^{N+1} \qquad (10)$$

is a finite-integral form annihilator for x^(n)(0) and it is associated with

$$\varphi_{\kappa,\mu,N}(s) = \frac{(-1)^{\,n+\kappa}\,(n+\kappa)!\,(N-n)!}{s^{\,\mu+\kappa+N+n+2}}. \qquad (11)$$

This means that for each set of positive integers {κ, µ, N}, with N ≥ n ≥ 0, the equation

$$\varphi_{\kappa,\mu,N}(s)\,\tilde{x}^{(n)}_N(0) = \Pi^{N,n}_{\kappa,\mu}\,\hat{y} \qquad (12)$$

provides an operational estimator of the nth order derivative of x(t), observed by y(t). We thus obtain a family of estimators parametrized by κ, µ and N. In order to emphasize the dependence on these parameters, we introduce the notation x̃^(n)(0;κ,µ;N) for x̃^(n)_N(0). If N = n, the differential operator Π^{N,n}_{κ,µ} reduces to

$$\Pi^{n}_{\kappa,\mu} = \frac{1}{s^{\,n+\mu+1}}\,\frac{d^{\,n+\kappa}}{ds^{\,n+\kappa}}\cdot s^{n}. \qquad (13)$$

Accordingly, the simplified notations x̃^(n)(0;κ,µ) and φ_{κ,µ}(s) will be used instead of x̃^(n)(0;κ,µ;n) and φ_{κ,µ,n}(s), respectively.
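Lemma 2 is easy to check symbolically for small orders. The following sketch (our addition; it assumes SymPy is available and is not part of the paper) applies the operator (10) to the operational model (4) and verifies that every coefficient except x^(n)(0) is annihilated, the remaining factor being (11), as stated in (12).

```python
import sympy as sp

s = sp.symbols('s')
n, N, kappa, mu = 1, 2, 1, 0                  # a small example; q = N - n = 1
a = sp.symbols('a0:%d' % (N + 1))             # a[i] plays the role of x^(i)(0)

# Operational model (4): s^(N+1) xhat_N = sum_i s^(N-i) x^(i)(0)
xhat = sum(a[i] * s**(N - i) for i in range(N + 1)) / s**(N + 1)

# Operator (10), applied from right to left
expr = s**(N + 1) * xhat
expr = sp.diff(expr, s, N - n)
expr = expr / s
expr = sp.diff(expr, s, n + kappa)
expr = expr / s**(N + mu + 1)

# Expected right hand side of (12): phi_{kappa,mu,N}(s) * x^(n)(0), with phi from (11)
phi = (-1)**(n + kappa) * sp.factorial(n + kappa) * sp.factorial(N - n) \
      / s**(mu + kappa + N + n + 2)
print(sp.simplify(expr - phi * a[n]))         # prints 0: all other terms are annihilated
```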
The next result will play a central rôle in the rest of the paper.

Theorem 1  Let N, n, κ and µ be four positive integers, with N ≥ n. Set q = N − n. Then the nth order derivative estimate x̃^(n)(0;κ,µ;N), obtained from a Taylor expansion of order N, is uniquely expressible in the form

$$\tilde{x}^{(n)}(0;\kappa,\mu;N) = \sum_{\ell=0}^{q} \lambda_{\ell}\,\tilde{x}^{(n)}(0;\kappa_{\ell},\mu_{\ell}), \qquad \lambda_{\ell} \in \mathbb{Q}, \qquad (14)$$

where κ_ℓ = κ + q − ℓ and µ_ℓ = µ + ℓ. Moreover, if q ≤ n + κ, then the coefficients λ_ℓ satisfy

$$\sum_{\ell=0}^{q} \lambda_{\ell} = 1.$$

This theorem shows that an nth-order truncated Taylor expansion is appropriate for estimating the nth-order derivative. Also, it characterises x̃^(n)(0;κ,µ;N) as a point in the ℚ-affine hull of the set

$$S_{\kappa,\mu,q} = \bigl\{\tilde{x}^{(n)}(0;\kappa+q,\mu),\ \cdots,\ \tilde{x}^{(n)}(0;\kappa,\mu+q)\bigr\}, \qquad (15)$$

provided q ≤ n + κ. Suppose now that this affine hull is extended by allowing the barycentric coordinates λ_ℓ in (14) to be real rather than rational. By doing so, it is clear that any point in this new larger set will represent an nth-order derivative estimate of x(t) at the origin, in some meaningful sense. Characterising those points which minimise a given distance to x^(n)(0) is an important question. This is the program of the following two subsections.

3.2 Minimizing the bias term

3.2.1 Jacobi orthogonal polynomials

We show in this subsection how the preceding special class of strictly proper differential operators is intimately connected to the Jacobi orthogonal polynomials. This connection, in turn, will allow us to attach a least squares interpretation to our estimators (see [25] for more details).

To begin, let us recall that for a given signal û and positive integers α and β, the time domain analog of v̂ = (1/s^α) d^β û/ds^β is the iterated integral of order α of (−1)^β t^β u(t). Using the well known Cauchy formula for repeated integration, this reduces to a single integral

$$v(t) = \frac{1}{(\alpha-1)!}\int_0^{t} (t-\tau)^{\alpha-1}\,(-1)^{\beta}\tau^{\beta}\,u(\tau)\,d\tau.$$

By a direct application of this rule of operational calculus, and by recalling (13), (12) and (11), it becomes easy to see that, for any κ and µ,

$$\tilde{x}^{(n)}(0;\kappa,\mu) = \frac{(\mu+\kappa+2n+1)!}{(\mu+n)!\,(\kappa+n)!}\int_0^{T}\frac{(T-\tau)^{\mu+n}\,\tau^{\kappa+n}}{T^{\,\mu+\kappa+2n+1}}\;y^{(n)}(\tau)\,d\tau. \qquad (16)$$

Note that even though the noise function may not be differentiable, the above expression still makes sense because the derivation under the integral sign is only formal; it disappears upon integrating by parts. Nonetheless, to avoid any possible misunderstanding, we rewrite (16) by considering separately the contribution due to the signal x and that due to the noise ϖ. For the former, we keep the form of (16) and for the latter, we integrate by parts, to obtain

$$\tilde{x}^{(n)}(0;\kappa,\mu) = \gamma_{\kappa,\mu,n}\int_0^{1}\tau^{\kappa+n}(1-\tau)^{\mu+n}\,x^{(n)}(T\tau)\,d\tau + (-1)^{n}\,\frac{\gamma_{\kappa,\mu,n}}{T^{n}}\int_0^{1}\frac{d^{n}}{d\tau^{n}}\bigl\{\tau^{\kappa+n}(1-\tau)^{\mu+n}\bigr\}\,\varpi(T\tau)\,d\tau, \qquad (17)$$

where $\gamma_{\kappa,\mu,n} \triangleq \frac{(\mu+\kappa+2n+1)!}{(\mu+n)!\,(\kappa+n)!}$. Here we have also used a change of variable to reduce the estimation interval $I^T_0 = [0,T]$ to [0,1].

Consider now $\{P_i^{\{\kappa,\mu\}}(t)\}_{i\geq 0}$, the Jacobi orthogonal polynomials (cf. [35,1]) on the interval [0,1], associated to the weight function

$$w_{\kappa,\mu}(t) = t^{\kappa+n}(1-t)^{\mu+n}. \qquad (18)$$

In all the sequel, ‖·‖ will denote the norm induced by the inner product defined, for two real valued functions f and g, by

$$\langle f, g\rangle_{\{\kappa,\mu\}} = \int_0^{1} f(t)\,w_{\kappa,\mu}(t)\,g(t)\,dt. \qquad (19)$$

The shorthand notation ⟨·,·⟩ will be used wherever there is no possible confusion. Upon noting that $P_0^{\{\kappa,\mu\}}(t) \equiv 1$ and

$$\|P_0^{\{\kappa,\mu\}}\|^{2} = \frac{(\mu+n)!\,(\kappa+n)!}{(\mu+\kappa+2n+1)!} = \frac{1}{\gamma_{\kappa,\mu,n}}, \qquad (20)$$

we have the following:

Proposition 1  Let $x^{(n)}_{LS,1}(t)$ denote the first order least-squares polynomial approximation of x^(n)(t) in the interval [0,T]. Then x̃^(n)(0;κ,µ) is given by

$$\tilde{x}^{(n)}(0;\kappa,\mu) = x^{(n)}_{LS,1}(T\xi_1) + e_{\varpi}(0;\kappa,\mu), \qquad (21)$$

where

$$\xi_1 = \frac{\kappa+n+1}{\mu+\kappa+2(n+1)} \qquad (22)$$

is the root of $P_1^{\{\kappa,\mu\}}(t)$ and e_ϖ(0;κ,µ) is the noise contribution, as given by the second term in the right hand side of (17).
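The estimator (16)-(17) is straightforward to implement from samples of y alone, since the n formal differentiations can be moved onto the polynomial kernel by integration by parts. The following sketch (our addition; it assumes uniformly spaced samples and a simple trapezoidal quadrature, whereas the paper's own FIR implementation is the subject of its section 4) builds the corresponding weights for the anti-causal estimator (16) (the case N = n) and applies them to a noisy sine. By Proposition 1, used with the time origin moved to t₀, the value obtained approximates x^(n)(t₀ + Tξ₁) up to the noise contribution.

```python
import numpy as np
from math import factorial
from numpy.polynomial import polynomial as P

def derivative_estimator_weights(n, kappa, mu, T, M):
    """Weights of the anti-causal (kappa, mu) estimator of the n-th derivative
    from M+1 equally spaced samples on a window of length T (sketch: kernel of
    (16)-(17) after n integrations by parts, trapezoidal quadrature)."""
    gamma = factorial(mu + kappa + 2 * n + 1) / (factorial(mu + n) * factorial(kappa + n))
    # kernel tau^(kappa+n) * (1 - tau)^(mu+n), coefficients in increasing degree
    w = P.polymul(P.polypow([0.0, 1.0], kappa + n), P.polypow([1.0, -1.0], mu + n))
    g = P.polyder(w, n)                            # n-th derivative of the kernel
    tau = np.linspace(0.0, 1.0, M + 1)
    h = (-1) ** n * (gamma / T ** n) * P.polyval(tau, g)
    quad = np.full(M + 1, 1.0 / M)                 # trapezoidal weights on [0, 1]
    quad[0] = quad[-1] = 0.5 / M
    return h * quad

# Toy usage: estimate x'(t) for x(t) = sin(t) from noisy samples on [t0, t0 + T]
rng = np.random.default_rng(0)
n, kappa, mu, T, M, t0 = 1, 0, 0, 0.5, 200, 1.0
t = t0 + T * np.linspace(0.0, 1.0, M + 1)
y = np.sin(t) + 0.01 * rng.standard_normal(t.size)
estimate = derivative_estimator_weights(n, kappa, mu, T, M) @ y
xi1 = (kappa + n + 1) / (mu + kappa + 2 * (n + 1))   # eq. (22)
print(estimate, np.cos(t0 + T * xi1))                # compare with x'(t0 + T*xi1)
```

Note, from (22), that κ = µ gives ξ₁ = 1/2, so the derivative is read at the centre of the window; increasing κ (resp. µ) moves the evaluation point towards the end (resp. the beginning) of the window.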
3.2.2 Time-delayed derivative estimation

For a given τ ≥ 0, let us substitute, in all the preceding developments, y(t) by [Heaviside(t) y(τ+t)]. This simply amounts to moving the time origin from 0 to τ. As a result, we obtain x̃^(n)(τ;κ,µ), the estimate of x^(n)(τ). Observe however that the corresponding estimation is anti-causal, since the estimate at time τ is based on the signal observation during the time window $I^T_{\tau+} = [\tau, \tau+T]$. Causal estimation is readily achieved by substituting y(t) by −[Heaviside(t) y(τ−t)], for τ ≥ T. Doing so, we still obtain x̃^(n)(τ;κ,µ), the estimate of x^(n)(τ), but now from the signal observation on $I^T_{\tau-} = [\tau-T, \tau]$.

These sliding windows thus induce a map t ↦ x̃^(n)(t;κ,µ), as an image of the map t ↦ y(t) through all the preceding developments. The underlying operator will subsequently be called the (κ,µ)-algebraic numerical differentiator (AND) of order n. A (κ,µ)-AND will be termed minimal if it has been designed from the minimum truncation order of the Taylor expansion (3). It may be either causal or anti-causal, according to which estimation interval, $I^T_{\tau-}$ or $I^T_{\tau+}$, is used.

Now, it is straightforward to infer from Proposition 1 that, the noise contribution e_ϖ(t;κ,µ) apart,

$$\tilde{x}^{(n)}(t;\kappa,\mu) = x^{(n)}_{LS,1}(t - T\xi_1), \qquad t \geq T,$$

for the causal variant. As a consequence, we have:

Corollary 1  Let x(t) be observed through a noise corruption ϖ(t) by y(t) = x(t) + ϖ(t). The causal nth order minimal (κ,µ)-algebraic numerical differentiation of y(t) is time-delayed:

$$\tilde{x}^{(n)}(t;\kappa,\mu) \approx x^{(n)}(t-\tau_1), \qquad t \geq T. \qquad (23)$$

The delay τ₁ reads as τ₁ = Tξ₁, where ξ₁ is given by (22).

Similarly, we obviously have for the anti-causal variant,

$$\tilde{x}^{(n)}(t;\kappa,\mu) \approx x^{(n)}(t+\tau_1), \qquad t \geq 0. \qquad (24)$$

Clearly, the above approximations become exact for polynomial signals of degree not exceeding n, in the noise free context. But since ξ₁, in Proposition 1, is the root of $P_1^{\{\kappa,\mu\}}(t)$, we deduce:

Corollary 2  The approximations in (23) and (24) are exact for polynomial signals of degree up to n+1.
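A quick sanity check (added here) confirms Corollaries 1 and 2 on the estimator derived at the end of section 2 (n = 1, κ = µ = 0, for which ξ₁ = 2/4 = 1/2). In the noise free case, applying it on the window [t₀, t₀+T] to x(t) = t² gives

$$-\frac{6}{T^{3}}\int_0^{T}(T-2u)\,(t_0+u)^{2}\,du = 2t_0 + T = x'\!\left(t_0 + \tfrac{T}{2}\right),$$

so the anti-causal estimate is exactly the derivative advanced by τ₁ = T/2, even though x is of degree n+1.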
3.2.3 Delay and model complexity

In this paragraph, we investigate the effects of non minimality on the algebraic numerical differentiators. We shall first establish that non minimality makes it possible to compensate the previous delay. This is important because a delay may not be tolerable in some real time applications¹. We shall also show that, for a fixed modelling order N, the absence of delay entails a performance loss. Finally, the question of the "best" choice of the barycentric coordinates in (14) will be approached.

To begin, let us recall some well-known facts. Consider the subspace of L₂([0,1]) defined by

$$H_q = \operatorname{span}\bigl\{P_0^{\{\kappa,\mu\}}(t),\,P_1^{\{\kappa,\mu\}}(t),\,\cdots,\,P_q^{\{\kappa,\mu\}}(t)\bigr\}. \qquad (25)$$

Equipped with the inner product (19), H_q is clearly a reproducing kernel Hilbert space [4], [3], [32], with the reproducing kernel

$$K_q(\tau,t) = \sum_{i=0}^{q}\frac{P_i^{\{\kappa,\mu\}}(\tau)\,P_i^{\{\kappa,\mu\}}(t)}{\|P_i^{\{\kappa,\mu\}}\|^{2}}. \qquad (26)$$

The reproducing property implies that for any function f defined on [0,1], we have

$$\langle K_q(\tau,\cdot),\,f(\cdot)\rangle = f_q(\tau),$$

where f_q stands for the orthogonal projection of f on H_q. In the rest of the paper, we will set q = N − n and assume that q ≥ 0.

Theorem 2  Assume that q ≤ κ + n and let

$$x^{(n)}_{LS,q}(t) \triangleq \sum_{i=0}^{q}\frac{\langle P_i^{\{\kappa,\mu\}}(\tau),\,x^{(n)}(T\tau)\rangle}{\|P_i^{\{\kappa,\mu\}}\|^{2}}\,P_i^{\{\kappa,\mu\}}(t/T) \qquad (27)$$

denote the least-squares qth order polynomial approximation of x^(n)(·) in the interval $I^T_{0-} = [0,T]$. Then x̃^(n)(0;κ,µ;N) is given by

$$\tilde{x}^{(n)}(0;\kappa,\mu;N) = x^{(n)}_{LS,q}(0) + e_{\varpi}(0;\kappa,\mu;N), \qquad (28)$$

where e_ϖ(0;κ,µ;N) represents the corresponding noise contribution.

Remark 1  This is for the anti-causal variant. To obtain the causal counterpart, just replace y(t), for t ∈ [0,T], by −y(T−t).

We immediately deduce from the above theorem that the corresponding causal (κ,µ)-AND is delay-free:

$$\tilde{x}^{(n)}(t;\kappa,\mu;N) \approx x^{(n)}(t), \qquad t \geq T. \qquad (29)$$

Now, since $x^{(n)}_{LS,q}(Tt)$ is the orthogonal projection of x^(n)(Tt), t ∈ [0,1], on H_q, we may rewrite (28) as

$$\tilde{x}^{(n)}(0;\kappa,\mu;N) = \langle K_q(0,t),\,x^{(n)}(Tt)\rangle + e_{\varpi}(0;\kappa,\mu;N) = \sum_{\ell=0}^{q}\lambda_{\ell}\,\tilde{x}^{(n)}(0;\kappa+q-\ell,\mu+\ell), \qquad (30)$$

¹ For instance, it is known that introducing delayed signals in a control loop of a system tends to destabilize it [30], [33].
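Returning to (29) and (30), a short Taylor argument (our addition, offered only as an informal illustration and not as the paper's derivation of the λ_ℓ) makes the delay compensation plausible. By Corollary 1, each minimal estimator entering (30) behaves, bias-wise, like x^(n) evaluated at a delayed instant t − Tξ₁^(ℓ), with its own ratio ξ₁^(ℓ) given by (22). Hence

$$\sum_{\ell=0}^{q}\lambda_{\ell}\,x^{(n)}\bigl(t - T\xi_1^{(\ell)}\bigr) = \Bigl(\sum_{\ell}\lambda_{\ell}\Bigr)x^{(n)}(t) - T\Bigl(\sum_{\ell}\lambda_{\ell}\,\xi_1^{(\ell)}\Bigr)x^{(n+1)}(t) + O(T^{2}),$$

so the normalisation Σ_ℓ λ_ℓ = 1 of Theorem 1 preserves the derivative itself, while barycentric coordinates of mixed signs can cancel the first order delay term. This is consistent with the performance loss announced above: removing the delay calls for an extrapolating combination, which leaves more room for the noise contribution.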
