
Noise reduction in chaotic time series by a local projection with nonlinear constraints

Krzysztof Urbanowicz and Janusz A. Hołyst
Faculty of Physics and Center of Excellence Complex Systems Research, Warsaw University of Technology, Koszykowa 75, PL-00-662 Warsaw, Poland

Thomas Stemler and Hartmut Benner
Institute of Solid-State Physics, Darmstadt University of Technology, Hochschulstr. 6, D-64289 Darmstadt, Germany

(Dated: February 2, 2008)
arXiv:cond-mat/0308554v3 [cond-mat.stat-mech] 7 Jan 2004

On the basis of a local-projective (LP) approach we develop a method of noise reduction in time series that makes use of nonlinear constraints appearing due to the deterministic character of the underlying dynamical system. The Delaunay triangulation approach is used to find the optimal nearest neighboring points in time series. The efficiency of our method is comparable to standard LP methods, but our method is more robust to the input parameter estimation. The approach has been successfully applied for separating a signal from noise in the chaotic Henon and Lorenz models as well as for noisy experimental data obtained from an electronic Chua circuit. The method works properly for a mixture of additive and dynamical noise and can be used for noise-level detection.

PACS numbers: 05.45.Tp, 05.40.Ca

I. INTRODUCTION

It is common that observed data are contaminated by noise (for a review of methods of nonlinear time series analysis see [1, 2, 3]). The presence of noise can substantially affect such system parameters as dimension, entropy or Lyapunov exponents [4]. In fact, noise can completely obscure or even destroy the fractal structure of a chaotic attractor [5], and even 2% of noise can make a dimension calculation misleading [6]. It follows that both from the theoretical as well as from the practical point of view it is desirable to reduce the noise level. Thanks to noise reduction [5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] it is possible, e.g., to restore the hidden structure of an attractor which is smeared out by noise, as well as to improve the quality of predictions.

Every method of noise reduction assumes that it is possible to distinguish between noise and a clean signal on the basis of some objective criteria. Conventional methods such as linear filters use a power spectrum for this purpose. Low-pass filters assume that a clean signal has some typical low frequency; the converse holds for high-pass filters. It follows that these methods are convenient for a regular source which generates a periodic or a quasi-periodic signal. In the case of chaotic signals linear filters cannot be used for noise reduction without a substantial disturbance of the clean signal; the reason is the broad-band spectrum of chaotic signals. For chaotic systems we therefore make use of another generic feature of dissipative motion: trajectories are located on attractors that are smooth submanifolds of an admissible phase space. As a result, the corresponding state vectors reconstructed from time-delay variables are limited to geometric objects that can be locally linearized. This fact is a common background of all local-projective (LP) methods of noise reduction.

Besides the LP approach there are also noise reduction methods that approximate an unknown equation of motion and use it to find corrections to state vectors. Such methods make use of neural networks [11] or genetic programming [12], and one has to assume some basis functions, e.g. radial basis functions [19], to reconstruct the equation of motion. Another group of methods are modified linear filters, e.g. the Wiener filter [13], the Kalman filter [14], or methods based on wavelet analysis [15]. Applications of these methods are limited to systems with large sampling frequencies.

The method described in this paper can be considered as an extension of LP methods by taking into account constraints that occur due to the local linearization of the equation of motion of the system. We call our method the local projection with nonlinear constraints (LPNC).

The paper is organized as follows. In the following section we present the general background of LP methods. The LPNC method is introduced in Sec. III and compared with LP methods in Sec. IV. In Sec. V we present methods of finding the nearest neighborhood, and examples of noise reduction and noise-level estimation are given in Secs. VI and VII. In Appendix A one can find the multidimensional generalization of the solution presented in Sec. III.
II. LOCAL PROJECTIVE METHODS OF NOISE REDUCTION

Let us consider a scalar time series {x̃_n}, n = 1, 2, ..., N, corresponding to an experimentally accessible component of the system trajectory. We assume that in the presence of measurement noise, instead of the clean time series x̃_n we observe a noisy series x_n: x_n = x̃_n + η_n, where η_n is the noise variable. The aim of noise reduction methods is to estimate the set {x̃_n} from the observed noisy data set {x_n}, i.e. to find corrections δx_n such that x_n + δx_n ≈ x̃_n. The corrections δx_n can be estimated on the assumption that x̃_n belongs to a clean deterministic trajectory. Let us create vectors of the system state x̃_n using the Takens theorem [2]: x̃_n = {x̃_n, x̃_{n-τ}, ..., x̃_{n-(d-1)τ}}, where d is the embedding dimension and τ is the embedding delay, which in the following will be just 1. Now the simple approach is to use a linear approximation for the nearest neighborhood X̃_n^NN of a vector x̃_n and then to estimate an unknown equation of motion in the embedded space by a linear fit: x̃_{n+1} = A·x̃_n + b. The matrix A is the corresponding Jacobi matrix and b is a constant vector.

In LP methods the local linearity of the system dynamics plays the crucial role. The unknown equation of motion of a deterministic system, x̃_{n+1} = F(x̃_n, x̃_{n-1}, ..., x̃_{n-d+1}), is equivalent to the presence of a constraint H(x̃_{n+1}, x̃_n, ..., x̃_{n-d+1}) = 0. If the embedding dimension d is larger than the dimension of the attractor, then Q constraints appear:

    H_q^{(n)}(x̃_{n+1}, x̃_n, ..., x̃_{n-d+1}) = 0,   q = 1, ..., Q ≤ d,                        (1)

where Q depends on the rounded-up dimension d_a of the attractor, Q = d + 1 - d_a. Since we apply a linear approximation for vectors x̃_i ∈ X̃_n^NN, the constraints (1) can be written as

    H_q^{(n)}(x̃_{n+1}, x̃_n, ..., x̃_{n-d+1}) = Σ_{j=1}^{d-Q} a_j^{q,(n)} x̃_{i-j+1-q} + b^{q,(n)} - x̃_{i+1-q} = 0,     (2)

where a_j^{q,(n)} and b^{q,(n)} are elements of A and b, respectively. The main problem of LP methods is to find the tangent subspace determined by the linear constraints H_q^{(n)}(x̃_{n+1}, x̃_n, ..., x̃_{n-d+1}) = 0 and to perform an appropriate projection onto this subspace. Different LP approaches make use of different projections; the tangent subspaces, however, are found in the same manner by all methods, i.e. the subspace should fulfill the condition (2) and the condition ⟨|x_i - x̃_i|²⟩ = min.
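As an illustration of this linear-fit step, the following sketch (ours, not part of the paper; plain NumPy, with τ = 1 and an arbitrary neighbor count ν as assumptions) builds the Takens vectors from a scalar series and estimates A and b by least squares over the nearest neighbors of a chosen reference vector:

    import numpy as np

    def takens_vectors(x, d, tau=1):
        """Stack delay vectors x_n = (x_n, x_{n-tau}, ..., x_{n-(d-1)tau})."""
        n0 = (d - 1) * tau
        return np.column_stack([x[n0 - i * tau: len(x) - i * tau] for i in range(d)])

    def local_linear_fit(X, n, nu=20):
        """Least-squares estimate of the local map x_{n+1} = A.x_n + b
        from the nu nearest neighbors of the vector X[n]."""
        dist = np.linalg.norm(X[:-1] - X[n], axis=1)
        idx = np.argsort(dist)[:nu]                    # indices of near neighbors
        Y = np.hstack([X[idx], np.ones((nu, 1))])      # augment with 1 to fit b
        coef, *_ = np.linalg.lstsq(Y, X[idx + 1], rcond=None)
        return coef[:-1].T, coef[-1]                   # Jacobi matrix A, offset b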
A. Cawley-Hsu-Sauer method (CHS)

The method makes use of a perpendicular projection on a subspace corresponding to the constraints (2) [16, 17]. Since there are several constraints (2), and the same data will occur in several Takens vectors x_n, there are many possible corrections δx_{n,q} to the same observed data x_n. In the CHS method one makes a compromise between the different corrections by taking the average

    x̃_n = x_n + α Σ_{q=1}^{Q} δx_{n,q},                                        (3)

where

    δx_{n,q} = -H_q^{(n)}(x_{n+1}, x_n, ..., x_{n-d+1}) ∇_n H_q^{(n)} / ‖∇_n H_q^{(n)}‖²       (4)

is the correction of x_n obtained due to the constraint H_q^{(n)}(x̃_{n+1}, x̃_n, ..., x̃_{n-d+1}) = 0, α is some constant, 0 < α < 1, and ∇_n H_q^{(n)} = ∇H_q^{(n)}(x_{n+1}, x_n, ..., x_{n-d+1}) is the gradient of the constraint function.

B. Schreiber-Grassberger method (SG)

Instead of the perpendicular projection on the subspace defined by (2), one can perform a projection by correcting only one variable [5]. If we choose the corrected variable x_{n+1-r}, where r ≈ d/2, then the corrections are

    x̃_n = x_n - α H_s^{(n)}(x_{n+1+r}, x_{n+r}, ..., x_{n-d+1+r}) / [∂H_s^{(n)}(x_{n+1+r}, x_{n+r}, ..., x_{n-d+1+r})/∂x_n],   (5)

where s ≈ Q. The approach can be justified as follows. If the largest (unstable) Lyapunov exponent is λ_u > 0 and the smallest (stable) Lyapunov exponent is λ_s < 0, we can write ∂x_{n+r}/∂x_n ~ e^{λ_u r} and ∂x_{n-d+1+r}/∂x_n ~ e^{λ_s (r-d+1)}. If λ_u ≈ |λ_s|, then the highest precision for determining the denominator of the rhs of (5) is usually obtained for r = d/2:

    Σ_{l=0}^{d/2} (e^{λ_u l} + e^{|λ_s| l}) |Δx_n| = min_{r=0,...,d} ( Σ_{l=0}^{r} e^{λ_u l} |Δx_n| + Σ_{l=0}^{d-r} e^{|λ_s| l} |Δx_n| ),   (6)

where Δx_n is the error connected with the variable x_n.

C. The optimal method of local projection (GHKSS)

In the GHKSS method [18, 20] developed by Grassberger et al. one looks for a minimization functional that fulfills the linear constraints (2) by corresponding corrections received in a one-step procedure. The constraints (2) can be written in the equivalent form a^{q,(n)}·ỹ_n + b^{q,(n)} = 0, where a new vector ỹ_n = {x̃_{n+1}, x̃_n, ..., x̃_{n-d+1}} is introduced, whose dimension is larger by one than the dimension of the vector x_n. The vectors a^{q,(n)} should be linearly independent and appropriately normalized, so that multiple corrections of the variables are eliminated, i.e. a^{q,(n)}·P a^{q',(n)} = δ_{qq'}, where P is the matrix describing the metric of the system. Let Y_n^NN be the set corresponding to the nearest neighborhood of the vector y_n. Minimizing the functional Σ_k δx_k·P^{-1} δx_k for {k : y_k ∈ Y_n^NN} under the above conditions, we get a system of coupled equations. The next step is to consider all vectors of Y_n^NN and to calculate the averages ξ_i^{(n)} = (1/|Y_n^NN|) Σ_k x_{k+i}, i = 0, 1, ..., d, as well as the corresponding (d+1)×(d+1) covariance matrix

    C_{ij}^{(n)} = (1/|Y_n^NN|) Σ_k x_{k+i} x_{k+j} - ξ_i^{(n)} ξ_j^{(n)},                  (7)

where |Y_n^NN| means the number of elements in the set.
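The neighborhood statistics of Eq. (7) translate directly into code (a sketch under our own conventions: neighbors holds the indices k with y_k ∈ Y_n^NN, chosen so that k + d stays within the series):

    import numpy as np

    def neighborhood_stats(x, neighbors, d):
        """Average xi_i and covariance C_ij of Eq. (7) over a neighborhood."""
        k = np.asarray(neighbors)
        Y = np.column_stack([x[k + i] for i in range(d + 1)])  # rows (x_k, ..., x_{k+d})
        xi = Y.mean(axis=0)                                    # xi_i = <x_{k+i}>
        C = (Y.T @ Y) / len(k) - np.outer(xi, xi)              # <x_{k+i} x_{k+j}> - xi_i xi_j
        return xi, C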
Defining R_i = 1/√P_i and Γ_{ij}^{(n)} = R_i C_{ij}^{(n)} R_j, one can find the orthonormal eigenvectors e^{q,(n)}, q = 1, ..., Q, of the matrix Γ^{(n)} corresponding to its smallest eigenvalues. Let the matrix Π_{ij}^{(n)} = Σ_{q=1}^{Q} e_i^{q,(n)} e_j^{q,(n)} define the subspace spanned by the eigenvectors e^{q,(n)}. Now the corrections to the observed signal can be written as follows:

    δx_{n+i} = (1/R_i) Σ_{j=0}^{d} Π_{ij}^{(n)} R_j (ξ_j^{(n)} - x_{n+j}).                 (8)

We see that the GHKSS method does not employ multiple corrections resulting from the constraints (2), but only performs a smaller number of corrections following the multiple occurrence of the same variable x_n in various vectors y_i : x_n ∈ y_i.

The solution (8) is a generalization of the CHS and SG methods. The main difference between the CHS method and the GHKSS method is in the subspace of projection. While a perpendicular projection of points is used in the first case, in the second case the projection is on a tangent subspace defined by the matrix P. The matrix P should be diagonal and such that the first and the last components of the vector y have only small weights, e.g.

    P_i = 0.001 for i = 0, d;   P_i = 1 otherwise.                             (9)

The efficiency of noise reduction methods can be measured by the gain parameter, defined as

    G = 10 log_10 (σ²_noise / σ²_red),                                        (10)

where σ²_noise = ⟨(x_n - x̃_n)²⟩ is the variance of the added noise and σ²_red is the variance of the noise left after noise reduction. The latter value is calculated as the square of the distance between the vector of noise-reduced data and the vector of clean data, divided by the dimension of these vectors. The definition of the gain presumes the knowledge of the clean data X̃ = {x̃_n}.

The noise level parameter N can be defined as the ratio of the standard noise deviation σ_noise to the standard data deviation σ_data:

    N = σ_noise / σ_data.                                                    (11)

III. THE PRINCIPLE OF LPNC METHOD

The LP methods described in the previous section make use of linear constraints that appear due to a linear approximation of the system dynamics. Such a linear approximation has only a local character, and the corresponding coefficients depend, in fact, on the position in phase space. If we assume that the nearest neighborhood of every point x̃_n is characterized by the same coefficients, then nonlinear constraints appear that can be used for reconstruction of the unknown deterministic trajectory. The basic advantage of the local projection with nonlinear constraints (LPNC) method introduced here, as compared to LP methods, is its smaller sensitivity to the input parameter estimation. A weak point of the LPNC method is its slower convergence rate with respect to the standard LP approach. The LPNC algorithm can be accelerated, but at the cost of decreasing the gain parameter. Like other LP methods, the LPNC method belongs to the iterative approaches: a single iteration provides only a partial noise reduction, and the corrected data set serves as an input for the next iteration.

For the one-dimensional case the Jacobi matrix A and the additive vector b describing the locally linearized dynamics at the point x̃_n reduce to scalar coefficients A = a_1(x̃_n), b = b(x̃_n), and the linearized equation of motion at x̃_n reads x̃_{n+1} = a_1(x̃_n) x̃_n + b(x̃_n). Let us consider the nearest neighborhood X̃_n^NN of x̃_n. We assume that the set X̃_n^NN consists of three points x̃_n, x̃_k, x̃_j ∈ X̃_n^NN which are so close to each other that their locally linearized dynamics can be approximately described by the same pair of coefficients A = a_1(x̃_n), b = b(x̃_n). When we write down the three linear equations of motion for x̃_n, x̃_k, x̃_j,

    x̃_{n+1} = a_1(x̃_n) x̃_n + b(x̃_n)
    x̃_{k+1} = a_1(x̃_n) x̃_k + b(x̃_n)
    x̃_{j+1} = a_1(x̃_n) x̃_j + b(x̃_n),                                          (12)

the coefficients a_1(x̃_n) and b(x̃_n) can be eliminated. After the elimination we get a constraint that has to be fulfilled by the system variables for consistency reasons:

    G(X̃_n^NN) ≡ x̃_n (x̃_{k+1} - x̃_{j+1}) + x̃_k (x̃_{j+1} - x̃_{n+1}) + x̃_j (x̃_{n+1} - x̃_{k+1}) = 0.   (13)
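The constraint (13) is easy to check numerically: it vanishes (up to round-off) whenever the three points and their images are related by one and the same affine map. A small sketch of ours, with arbitrary illustrative numbers:

    def G(xn, xk, xj, xn1, xk1, xj1):
        """Nonlinear LPNC constraint of Eq. (13)."""
        return xn * (xk1 - xj1) + xk * (xj1 - xn1) + xj * (xn1 - xk1)

    # Images under the same affine map x -> a*x + b satisfy the constraint exactly:
    a, b = 1.3, -0.4
    xn, xk, xj = 0.50, 0.52, 0.55
    print(G(xn, xk, xj, a * xn + b, a * xk + b, a * xj + b))  # ~0.0 up to round-off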
In the case of a higher dimension d > 1 we have three equations of motion, but the number of unknown constants is larger than two, i.e.

    x̃_{n+1} = Σ_{i=1}^{d} a_i(x̃_n) x̃_{n-i+1} + b(x̃_n)
    x̃_{k+1} = Σ_{i=1}^{d} a_i(x̃_n) x̃_{k-i+1} + b(x̃_n)
    x̃_{j+1} = Σ_{i=1}^{d} a_i(x̃_n) x̃_{j-i+1} + b(x̃_n),                          (14)

where a_i(x̃_n) are the elements of the first row of the Jacobi matrix and b(x̃_n) is a constant. The corresponding constraint G^d for the higher-dimensional case is as follows:

    G^d(X̃_n^NN) ≡ (Σ_{i=1}^{d} a_i x_{n-i+1}) (x̃_{k+1} - x̃_{j+1})
                + (Σ_{i=1}^{d} a_i x_{k-i+1}) (x̃_{j+1} - x̃_{n+1})
                + (Σ_{i=1}^{d} a_i x_{j-i+1}) (x̃_{n+1} - x̃_{k+1}) = 0.            (15)

The extended constraints and the corresponding calculations that are valid for all rows of the Jacobi matrix A are presented in Appendix A. The conditions (13) and (15) should be fulfilled for every point x̃_n and its nearest neighborhood X̃_n^NN. Similarly as in LP methods, these constraints are enforced in the LPNC approach by application of the method of Lagrange multipliers to an appropriate cost function. Since we expect that the corrections to noisy data should be as small as possible, the cost function can be assumed to be the sum of squared corrections, S = Σ_{s=1}^{N} (δx_s)².

It follows that we are looking for the minimum of the functional

    S̃ = Σ_{n=1}^{N} (δx_n)² + Σ_{n=1}^{N} λ_n G^d(X̃_n^NN) = min.                   (16)

After finding the zero points of the 2N partial derivatives one gets 2N equations with 2N unknown variables δx_n and λ_n. However, in such a case the derivatives of the functional (16) are nonlinear functions of these variables. For simplicity of computing, we are interested in posing our problem in such a way that linear equations appear, which can be solved by standard matrix algebra. To understand the role of the nonlinearity, let us write the constraint G(X̃_n^NN) in such a way that the explicit dependence on the unknown variables is seen (the corresponding equations for G^d(X̃_n^NN) have a similar form):

    G(X̃_n^NN) ≅ G(X_n^NN, X_{n+1}) + G(δX_n, X_{n+1}) + G(X_n^NN, δX_{n+1}) + G(δX_n, δX_{n+1}).   (17)

Here we introduced the following notation:

    G(X_n^NN, X_{n+1})  ≡ x_n (x_{k+1} - x_{j+1}) + x_k (x_{j+1} - x_{n+1}) + x_j (x_{n+1} - x_{k+1})
    G(δX_n, X_{n+1})    ≡ δx_n (x_{k+1} - x_{j+1}) + δx_k (x_{j+1} - x_{n+1}) + δx_j (x_{n+1} - x_{k+1})
    G(X_n^NN, δX_{n+1}) ≡ x_n (δx_{k+1} - δx_{j+1}) + x_k (δx_{j+1} - δx_{n+1}) + x_j (δx_{n+1} - δx_{k+1})
    G(δX_n, δX_{n+1})   ≡ δx_n (δx_{k+1} - δx_{j+1}) + δx_k (δx_{j+1} - δx_{n+1}) + δx_j (δx_{n+1} - δx_{k+1}),   (18)

where X_n^NN = {x_n, x_k, x_j}, X_{n+1} = {x_{n+1}, x_{k+1}, x_{j+1}}, δX_n = {δx_n, δx_k, δx_j}, δX_{n+1} = {δx_{n+1}, δx_{k+1}, δx_{j+1}}, and x_k, x_j are the near neighbors of x_n. The indices are defined as {n, j, k : x_n, x_k, x_j ∈ X_n^NN}. Note that the elements of the set X_{n+1} are not necessarily near neighbors of each other.

The approximation used in (17) follows from the fact that in general the nearest neighborhood X̃_n^NN does not include the same indices as the nearest neighborhood X_n^NN, i.e.

    {k : x̃_k ∈ X̃_n^NN} ≠ {j : x_j ∈ X_n^NN}.                                  (19)

In the case of uncorrelated noise, and under the assumption that the introduced corrections completely reduce the noise effect, δx_s = -η_s (∀_{s=1,...,N}), one can neglect the nonlinear terms in Eqs. (18), i.e.

    G(δX_n, δX_{n+1}) ≅ 0   (∀_{n=1,...,N}).                                   (20)

In equation (20) we use the fact that ⟨η_i⟩ = 0 and ⟨η_i η_j⟩ ~ δ_ij.

Taking into account the approximation (20), one can write the following linear equation for the problem (16):

    M·δX = B,                                                                (21)

where M is a matrix containing constant elements, B is a constant vector, and δX^T = {δx_1, δx_2, ..., δx_N, λ_1, λ_2, ..., λ_N} is the vector of dependent variables (T denotes transposition).
In practice it is very difficult or even impossible to find the solution of equation (21) for large N. First, it is time consuming to solve a linear equation with a 2N×2N matrix for N > 1000. Second, when M becomes singular, the estimation error of the inverse matrix M^{-1} is very large. Third, we cannot always find the true near neighbors (the set X̃_n^NN) from the noisy data {x_i}. Taking into account the above reasons, it is useful to replace the global minimization problem (16) by N local minimization problems related to the nearest neighborhood X_n^NN. The corresponding local functionals to be minimized are

    S̃_n^NN = Σ_s (δx_s)² + λ_n G^d(X_n^NN) = min   (∀_{n=1,...,N}),
    where {s : x_s ∈ X_n^NN or x_s ∈ X_{n+1} or ... or x_s ∈ X_{n-d+1}}.          (22)

We can consider the minimization problem (22) as a certain approximation of (16). The functionals (22) are linked to each other due to the fact that the same variable δx_n appears in 6·d different minimization problems (22). The global problem (16) is equivalent to Eq. (21), with 2N unknown variables that should be found in a single step. The problem (22) is equivalent to a system of coupled equations that should be solved several times, and as a result one gets an approximate global solution. Writing Eq. (22) in the linear form, i.e. calculating the zero sites of the corresponding derivatives and using Eq. (20), one gets N linear equations as follows:

    M_n·δX_n^λ = B_n   (∀_{n=1,...,N}),                                         (23)

where (δX_n^λ)^T = {δx_n, δx_k, δx_j, δx_{n+1}, δx_{k+1}, δx_{j+1}, λ_n}. The matrices M_n corresponding to (22) avoid the disadvantages of (21), i.e. they are not singular, their dimension is smaller, and they do not substantially depend on the initial approximation of the near neighbors. The matrix M_n for the one-dimensional case is given by

    M_n = [ 2  0  0  0  0  0  x_{k+1}-x_{j+1} ]
          [ 0  2  0  0  0  0  x_{j+1}-x_{n+1} ]
          [ 0  0  2  0  0  0  x_{n+1}-x_{k+1} ]
          [ 0  0  0  2  0  0  x_j - x_k       ]
          [ 0  0  0  0  2  0  x_n - x_j       ]
          [ 0  0  0  0  0  2  x_k - x_n       ]
          [ x_{k+1}-x_{j+1}  x_{j+1}-x_{n+1}  x_{n+1}-x_{k+1}  x_j-x_k  x_n-x_j  x_k-x_n  0 ]   (24)

The vector B_n has the form B_n^T = {0, 0, 0, 0, 0, 0, -G(X_n^NN, X_{n+1})}.
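For each point this is a single 7×7 linear solve; a sketch of the local step (our transcription of Eqs. (23)-(24), not the authors' code):

    import numpy as np

    def lpnc_local_step(xn, xk, xj, xn1, xk1, xj1):
        """Solve M_n.(dx_n, dx_k, dx_j, dx_{n+1}, dx_{k+1}, dx_{j+1}, lambda_n) = B_n."""
        g = np.array([xk1 - xj1, xj1 - xn1, xn1 - xk1,   # dG w.r.t. x_n, x_k, x_j
                      xj - xk, xn - xj, xk - xn])        # dG w.r.t. x_{n+1}, x_{k+1}, x_{j+1}
        M = 2.0 * np.eye(7)
        M[6, 6] = 0.0
        M[:6, 6] = g
        M[6, :6] = g
        B = np.zeros(7)
        B[6] = -(xn * (xk1 - xj1) + xk * (xj1 - xn1) + xj * (xn1 - xk1))  # -G(X_n^NN, X_{n+1})
        return np.linalg.solve(M, B)[:6]                 # the six corrections delta x

Because the same correction δx enters several overlapping local problems, the corrections obtained from all n are combined (e.g. averaged), and the whole procedure is iterated, each pass removing part of the noise.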
IV. COMPARING LPNC METHOD TO LOCAL PROJECTION METHODS

Let us illustrate the LPNC method by taking into account the cost functional (22) (it will be written as S_LPNC):

    S_LPNC = Σ_i δx_i² + λ [x̃_n (x̃_{k+1} - x̃_{j+1}) + x̃_k (x̃_{j+1} - x̃_{n+1}) + x̃_j (x̃_{n+1} - x̃_{k+1})] = min.   (25)

The corresponding cost function S_GHKSS that is used in the standard local projection method, e.g. in the GHKSS method [18], is

    S_GHKSS = Σ_i δx_i² + λ_1 (x̃_n a + b - x̃_{n+1}) + λ_2 (x̃_j a + b - x̃_{j+1}) + λ_3 (x̃_k a + b - x̃_{k+1}) = min.   (26)

If we were in the position to find exact solutions for the minimization problems (25) and (26), then both results would be the same, since (25) can be obtained from (26) after elimination of the parameters a and b.

In both cases the variables {x̃_n, x̃_k, x̃_j ∈ X̃_n^NN} belong to the nearest neighborhood of the variable x̃_n. The index i = {k, k+p : x̃_k ∈ X̃_n^NN} runs through all indices of the variables appearing in (25) and (26), while the variable x̃_{k+p} = {x̃_l : x̃_{l-p} ∈ X̃_n^NN} corresponds to the p-iterate of x̃_k. The parameters a and b can be calculated from a linearized form of the equations of motion at the point x̃_n.

In practice the minimization problems S_LPNC and S_GHKSS are not equivalent, because in the two cases different approximations are used. These differences are: (i) Eq. (25) is nonlinear in the corrections δx_i; in this case the approximation consists in a linearization. (ii) For Eq. (26) the exact values of the parameters a and b are unknown; the approximation means that a and b are estimated from noisy data.

Figures 1 and 2 present a comparison between the results received by the GHKSS and LPNC methods. Fig. 1 shows that the gain parameter G depends on the number of neighbors, which is an input parameter of both methods. One can see that for the LPNC method the gain parameter is more robust to changes of the number of neighbors than for the GHKSS method. In Fig. 2 the dependence of the gain parameter on the number of iteration steps of the methods is shown. One can see that the LPNC method finished the reduction at its maximal efficiency, which is not the case for the GHKSS method; the former method is therefore easier to use, since it does not need an estimation of the iteration number.

[FIG. 1: The gain parameter G versus the number of neighbors for the GHKSS method (squares) and the LPNC method (triangles). Lorenz system, noise level N = 78%, N = 1000 points.]

[FIG. 2: The gain parameter G versus the number of iterations for the GHKSS method (squares) and the LPNC method (triangles). Lorenz system, noise level N = 78%, N = 1000 points.]

If we consider uniformly distributed stochastic variables (see Fig. 3), the LPNC method reduces the noise very well, and as a result all data are represented as a neighborhood of a point attractor (see Fig. 4), while a complete noise reduction would correspond to a phase portrait consisting of a single point. In fact, for the case considered we observed for the LPNC method a noise reduction of about 96% of the data variance.

[FIG. 3: Random data drawn from a uniform distribution (X_{n+1} versus X_n).]

[FIG. 4: The random data shown in Fig. 3 after noise reduction with the LPNC method.]

V. THE NEAREST NEIGHBORHOOD ASSESSMENT

The LP methods are local. It follows that the features of the nearest neighborhood X_n^NN of every point x_n in phase space play an important role. Usually the nearest neighborhood is estimated by the smallest distance approach, which makes use of the standard Euclidean geometry. We have found, however, that our LPNC method works much better when the Delaunay triangulation approach [21] is applied for the nearest neighborhood estimation.

A. The smallest distance approach (SD)

In the smallest distance approach the Euclidean metric is used, i.e. first the distance between every pair of points in the Takens embedded space is calculated as

    d_{i,j} = sqrt[ (x_i - x_j)² + ... + (x_{i-(d-1)τ} - x_{j-(d-1)τ})² ],

and then the nearest neighborhood X_n^NN of a point x_n is defined as the set of ν points fulfilling the relation

    {x_j ∈ X_n^NN, ∀_k x_k ∉ X_n^NN : d_{n,j} ≤ d_{n,k}}.                        (27)

Let us stress that this definition depends on the chosen value of the parameter ν, i.e. on the assumed number of near neighbors, ν = 2, 3, ....
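A sketch of the SD criterion (27) in NumPy (ours; excluding the point itself from its neighborhood is our convention here):

    import numpy as np

    def sd_neighbors(X, n, nu):
        """Indices of the nu nearest neighbors of X[n] in the Euclidean metric."""
        dist = np.linalg.norm(X - X[n], axis=1)  # d_{n,j} for all j
        dist[n] = np.inf                         # exclude the point itself
        return np.argsort(dist)[:nu]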
B. The Delaunay triangulation approach (DT)

To find the nearest neighborhood relations for the LPNC method we have used the Delaunay triangulation [21]. In general, a triangulation of a set of points X = {x_i} ∈ R^d is a collection of d-dimensional simplices with disjoint interiors and vertices chosen from X. There are many triangulations of the same set of points X; one of the best known is the Delaunay triangulation (see Fig. 5). Let T(x_n) be the part of the space R^d that contains all points that are closer to x_n than to any other point x_j from the set X:

    T(x_n) = {z ∈ R^d : ∀_{j≠n}, x_j ∈ X : ‖z - x_n‖ ≤ ‖z - x_j‖}.               (28)

If m_{i,j} = (x_i + x_j)/2 belongs to both sets T(x_i) and T(x_j), then by definition the point x_j is a nearest neighbor of x_i received due to the Delaunay triangulation. By the above definition, every point x_n belongs to its own nearest neighborhood, x_n ∈ X_n^NN.

[FIG. 5: Delaunay triangulation for a set of nine points in a two-dimensional space. The near neighbors are connected by bold lines. The sets T(x_i), i = 1, 2, ..., 9, are limited by thin lines.]

In practice the Delaunay approach can be performed as follows. A pair of points x_n and x_j are near neighbors provided that there are no other points x_k (k ≠ j, n) belonging to the hypersphere centered at the point m_{n,j} and of radius r_{n,j} = ‖x_n - x_j‖/2. In Fig. 6 two cases are presented where, in the two-dimensional space, a) the point x_j is not a nearest neighbor of x_n and b) the point x_j is a nearest neighbor of x_n.

[FIG. 6: Illustration of the nearest neighborhood search by the DT approach: a) x_j and x_n are not near neighbors; b) x_j and x_n are near neighbors.]

The DT method has the advantage that the triangles appearing due to the connections of near neighbors are almost equiangular (see Fig. 5). This property is the main reason for using the DT method in the search for near neighbors. The disadvantage of this method is a slowing down of the numerical calculations.
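The hypersphere criterion above can be coded directly; the sketch below (ours) tests a single pair, and for larger point sets a library triangulation such as scipy.spatial.Delaunay would be used instead:

    import numpy as np

    def dt_near_neighbors(X, n, j):
        """True if no third point lies inside the hypersphere spanned by X[n], X[j]."""
        m = 0.5 * (X[n] + X[j])                  # midpoint m_{n,j}
        r = 0.5 * np.linalg.norm(X[n] - X[j])    # radius r_{n,j}
        others = np.delete(np.arange(len(X)), [n, j])
        return bool(np.all(np.linalg.norm(X[others] - m, axis=1) >= r))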
VI. EXAMPLES OF NOISE REDUCTION

The LPNC method has been applied to three systems: the Henon map, the Lorenz model [22] and the Chua circuit [23, 24, 25]. Figures 7-9 present the chaotic Henon map in the absence and in the presence of measurement noise, as well as the result of the noise reduction. Table I presents the values of the gain parameter for the Henon map and for the Lorenz system.

[FIG. 7: Chaotic Henon map without noise (X_{n+1} versus X_n).]

[FIG. 8: Chaotic Henon map with measurement noise N = 69%. Note the difference in scale.]

[FIG. 9: Chaotic Henon map with measurement noise after noise reduction with the LPNC method, G = 4.3.]

TABLE I: Results of noise reduction by the LPNC method for the Henon map and the Lorenz model.

    System                   noise level N   G       percent. of eliminated noise
    Henon,  N = 1000 points  10%             9.58    89%
    Henon,  N = 3000 points  10%             10.04   91%
    Henon,  N = 1000 points  66%             5.08    69%
    Lorenz, N = 1000 points  78%             5.85    74%
    Lorenz, N = 3000 points  76%             6.02    75%
    Lorenz, N = 1000 points  34%             7.21    81%

To verify our method in a real experiment, we have performed an analysis of data generated by a nonlinear electronic circuit. The Chua circuit in the chaotic regime [23, 24] has been used, and we have added measurement noise to the output signal. The noise (white and Gaussian) came from an electronic noise generator. Figures 10-12 show a clean signal coming from this circuit, the signal generated by the Chua circuit with measurement noise (N = 96.5%), and the same signal after the noise reduction with the LPNC method (G = 6.38). Table II presents the values of the parameter G and the percentage of eliminated noise for several values of the noise level in the Chua circuit.

[FIG. 10: The stroboscopic map (V_{n+1} versus V_n) corresponding to a clean trajectory in the Chua circuit.]

[FIG. 11: The stroboscopic map received from the Chua circuit in the presence of measurement noise N = 96.5%. Note the difference in scale.]

[FIG. 12: The stroboscopic map received after noise reduction by the LPNC method applied to the data presented in Fig. 11.]

TABLE II: Results of noise reduction by the LPNC method for the Chua circuit with measurement noise (N = 3000 points).

    noise level N   G      percent. of eliminated noise
    24.9%           5.4    71%
    28.3%           4.9    68%
    46.1%           7.0    80%
    73.7%           4.81   67%
    90.6%           7.4    82%
    96.5%           6.4    77%

All the above applications of the LPNC method consider the case of measurement noise that has been added to the signal in numerical or electronic experiments. However, our LPNC method can also be applied to dynamical noise, i.e. to noise which in experiments is included in the equations of motion [26]. In such a case one cannot compare the noisy data with the clean trajectory, since the latter does not exist anymore, and there are only ε-shadowed trajectories [8] that can be approximated by means of the LPNC method. Figure 13 shows a measured signal generated by a Chua circuit where a mixture of measurement noise and dynamical noise occurs. Figure 14 shows the result of the noise reduction applied to such a signal.

[FIG. 13: Stroboscopic map received from the Chua circuit in the presence of a mixture of measurement and dynamical noise, N ≈ 22%.]

[FIG. 14: Stroboscopic map received after noise reduction by the LPNC method applied to the data presented in Fig. 13.]

VII. NOISE LEVEL ESTIMATION BY LPNC METHOD

The LPNC method introduced in the previous section can be used to quantify the noise level of data. The noise level, i.e. the standard deviation of the noise in a noisy time series, may be approximated by the Euclidean distance between the vectors {x_i} and {x̄_i} representing the time series before and after the noise reduction [16]:

    σ̃_noise ≈ sqrt[ (1/N) Σ_{i=1}^{N} (x_i - x̄_i)² ].                            (29)

The main disadvantage of the LPNC method used for noise level estimation is its small rate of convergence with respect to other known methods [4, 27, 28, 29], and the fact that the method can be used only for low-dimensional systems. On the other hand, the LPNC method can be applied for the estimation of any noise level, including a large one. In Table III we have presented the estimated noise level σ̃_noise for the Chua circuit.

TABLE III: Noise level estimated by the LPNC method for the Chua circuit with measurement noise (N = 3000 points).

    noise level N   σ_noise [mV]   σ̃_noise [mV]
    0%              0              5.5
    3.1%            30.4           28.9
    6.2%            60.8           53.7
    12.3%           121.7          110
    24.9%           243.4          235
    28.3%           304            305
    46.1%           486            454
    73.7%           973            938
    90.6%           1520           1375
    96.5%           2120           1844
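Equation (29) is a one-line estimator once a noise-reduced series is available (a sketch; x_bar stands for the series after the LPNC reduction, and the relative level of Eq. (11) follows by dividing by the data deviation):

    import numpy as np

    def noise_level_estimate(x, x_bar):
        """sigma_noise of Eq. (29): rms distance between raw and cleaned series."""
        x, x_bar = np.asarray(x), np.asarray(x_bar)
        return float(np.sqrt(np.mean((x - x_bar) ** 2)))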
VIII. CONCLUSIONS

In conclusion, we have developed a method of noise reduction that makes use of nonlinear constraints which occur in a natural way due to the linearization of a deterministic system trajectory in the nearest neighborhood of every point in phase space. This neighborhood has been determined by the Delaunay triangulation. The method has been applied to data from the Henon map, the Lorenz model and an electronic Chua circuit contaminated by measurement (additive) noise. The efficiency of our method is comparable to that of standard LP methods, but it is more robust to input parameter adjustment.

Acknowledgments

We gratefully acknowledge helpful discussions with Holger Kantz and Rainer Hegger. KU is thankful to the Organizers of the Summer School German-Polish Dialogue 2002 in Darmstadt. He has been partially supported by the KBN Grant 2P03B03224, and JAH has been supported by the special program Dynamics of Complex Systems of Warsaw University of Technology.

APPENDIX A: MULTIDIMENSIONAL VERSION OF LPNC METHOD

In Sec. III the LPNC method has been presented for one-dimensional systems. Here we show the generalization of this approach to d-dimensional dynamics. For one-dimensional problems the Jacobi matrix of the system does not appear explicitly in our method. For higher-dimensional models the corresponding Jacobian A has to be calculated, but we manage to minimize the errors occurring in its estimation. The linearized equation of motion for vectors from the nearest neighborhood X̃_n^NN of a vector x̃_n can be written in the form

    x̃_{n+1} = A·x̃_n + b.                                                      (A1)

In such a case one needs three vectors x̃_n, x̃_k, x̃_j ∈ X̃_n^NN to write the constraints corresponding to Eq. (13). In comparison with the one-dimensional case, the number of near neighbors, i.e. the number of points in the set X̃_n^NN, must be larger to allow a unique estimation of the Jacobian A. We assume that the Jacobi matrix can be approximately received by minimization of the following cost functional:

    Σ_s (a_{m1} x_s + a_{m2} x_{s-τ} + ... + a_{md} x_{s-(d-1)τ} - x_{s+1-mτ})² = min
    (∀_{m=1,...,d}) and {s : x_s ∈ X_n^NN},                                     (A2)

where a_{ij} = [A]_{ij}. By analogy with Eq. (13) we introduce the following constraints:

    G_m^d(X̃_n^NN) ≡ a_{m1} G(X̃_n^NN, X̃_{n+1-mτ}) + a_{m2} G(X̃_{n-τ}, X̃_{n+1-mτ}) + ...
                  + a_{md} G(X̃_{n-(d-1)τ}, X̃_{n+1-mτ}) = 0   (∀_{m=1,...,d}),     (A3)

where we used the notation corresponding to equation (17), i.e.

    G(X_n^NN, X_{n+l}) = x_n (x_{k+l} - x_{j+l}) + x_k (x_{j+l} - x_{n+l}) + x_j (x_{n+l} - x_{k+l})
    G(X_{n-s}, X_{n+l}) = x_{n-s} (x_{k+l} - x_{j+l}) + x_{k-s} (x_{j+l} - x_{n+l}) + x_{j-s} (x_{n+l} - x_{k+l}),   (A4)

with X_{n+s} = {x_{n+s}, x_{k+s}, x_{j+s}} (∀_{s=0,±1,±2,...}), where {n, j, k : x_n, x_k, x_j ∈ X_n^NN}. Since the clean trajectory is not known, in Eq. (A3) the observed variables X_n^NN, X_n, etc. are used.

In such a way the equation (20) can be written in a more general form as

    Σ_{l=1}^{d} a_{ml} G(δX_{n-(l-1)τ}, δX_{n+1-mτ}) ≅ 0   (∀_{n=1,...,N}, ∀_{m=1,...,d}),   (A5)

where we use

    G(δX_{n-s}, δX_{n+l}) = δx_{n-s} (δx_{k+l} - δx_{j+l}) + δx_{k-s} (δx_{j+l} - δx_{n+l}) + δx_{j-s} (δx_{n+l} - δx_{k+l}),   (A6)

with δX_{n+s} = {δx_{n+s}, δx_{k+s}, δx_{j+s}} (∀_{s=0,±1,±2,...}), {n, j, k : x_n, x_k, x_j ∈ X_n^NN}, δx_n = {δx_n, δx_{n-τ}, ..., δx_{n-(d-1)τ}} and {n : x_n ∈ X_n^NN}.

Now the cost problem (22) can be transformed to the form

    S̃_n^NN = Σ_s (δx_s)² + Σ_{m=1}^{d} λ_n^m G_m^d(X_n^NN) = min   (∀_{n=1,...,N}),
    where {s : x_s ∈ X_n^NN or x_s ∈ X_{n+1-mτ}, m = 1, ..., d}.                  (A7)

Finding the zeros of the partial derivatives of the functional (A7), one can linearize this problem and write it in a form similar to Eq. (23):

    M_n·δX_n^λ = B_n   (∀_{n=1,...,N}).                                          (A8)

The vectors δX_n^λ and B_n occurring in Eq. (A8) are equal to

    (δX_n^λ)^T = {δx_{n-(d-1)τ}, δx_{n-(d-2)τ}, ..., δx_n, δx_{n+1}, λ_n^1, λ_n^2, ..., λ_n^d},   (A9)

    B_n^T = {0, 0, ..., 0, -G_1^d(X_n^NN), -G_2^d(X_n^NN), ..., -G_d^d(X_n^NN)}.   (A10)
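The row-wise minimization (A2) is an ordinary least-squares problem. A closing sketch of ours for the first row (the one entering Eqs. (14)-(15)); the remaining rows follow analogously with the targets x_{s+1-mτ} of (A2), and, as in (A2), no offset term is fitted. We assume τ = 1 and neighbor indices chosen away from the ends of the series:

    import numpy as np

    def jacobian_first_row(x, neighbors, d, tau=1):
        """Least squares for the first Jacobian row: x_{s+1} ~ sum_i a_i x_{s-(i-1)tau}."""
        s = np.asarray(neighbors)
        P = np.column_stack([x[s - i * tau] for i in range(d)])  # x_s, ..., x_{s-(d-1)tau}
        return np.linalg.lstsq(P, x[s + 1], rcond=None)[0]       # row (a_1, ..., a_d)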
