A Unified Analysis Approach for LMS-based Variable Step-Size Algorithms

Muhammad O. Bin Saeed, Member, IEEE

M. O. Bin Saeed is with the Department of Electrical Engineering, Affiliated Colleges at Hafr Al Batin, King Fahd University of Petroleum & Minerals, Hafr Al Batin 31991, Saudi Arabia (e-mail: mobs@kfupm.edu.sa).

Abstract—The least-mean-squares (LMS) algorithm is the most popular algorithm in adaptive filtering. Several variable step-size strategies have been suggested to improve the performance of the LMS algorithm. These strategies enhance the performance of the algorithm, but a major drawback is the complexity of the theoretical analysis of the resultant algorithms. Researchers use several assumptions to find closed-form analytical solutions. This work presents a unified approach for the analysis of variable step-size LMS algorithms. The approach is then applied to several variable step-size strategies, and theoretical and simulation results are compared.

Index Terms – Variable step-size, least-mean-square algorithms.

I. INTRODUCTION

Many algorithms have been proposed for estimation/system identification, but the LMS algorithm has been the most popular [1] as it is simple and effective. However, a limiting factor of LMS is that if the step-size of the algorithm is kept high, the algorithm converges quickly but the resultant error floor is high. On the other hand, lowering the step-size improves the error performance but slows the algorithm down. To overcome this problem, various variable step-size (VSS) strategies have been suggested, which use a high step-size initially for fast convergence and then reduce the step-size over time to achieve a low error floor [2]-[25]. Some algorithms have been proposed in the literature for specific applications [12], [14], [15], [20]-[25]. Several algorithms are derived from a constraint on the cost function [2], [7], [8], [10], [14].

In general, all VSS algorithms aim to improve performance at the cost of computational complexity. This trade-off is generally acceptable due to the improvement in performance. However, the additional complexity also makes the resulting algorithms harder to analyze. Authors use several basic assumptions to find closed-form solutions for the analysis of these algorithms. Most of these assumptions are similar. However, each algorithm has to be dealt with separately in order to find the steady-state misadjustment, which leads to the steady-state excess mean-square error (EMSE). Similarly, the mean-square analysis for each algorithm has to be performed individually.

Based on the similarity of the assumptions used by the authors of all these VSS algorithms, this work presents a unified approach for the analysis of VSS LMS algorithms. The aim of this work is to perform a generalized analysis for any VSS strategy that is based on the LMS algorithm. This analysis can be applied to most existing as well as any forthcoming VSS algorithms.

The rest of the paper is organized as follows. Section II presents the system model and problem statement. Section III details the complete theoretical analysis for VSS LMS algorithms. Simulation results are presented in Section IV. Section V concludes this work.

II. SYSTEM MODEL

The unknown system is modeled as an FIR filter in the form of a vector, wᵒ, of size (M × 1). The input to the unknown system at any given time i is a (1 × M) complex-valued regressor vector, u(i). The observed output of the system is a noise-corrupted scalar, d(i). The variables of the system are related by

d(i) = u(i) wᵒ + v(i),   (1)

where v(i) is the complex-valued zero-mean additive noise. The LMS algorithm iteratively estimates the unknown system with an update equation given by

w(i+1) = w(i) + µ e(i) u*(i),   (2)

where w(i) is the estimate of the unknown system vector at time i, e(i) = d(i) − u(i) w(i) is the instantaneous error and (.)* is the complex conjugate transpose operator. The step-size for the update is defined by the variable µ, which is fixed for the LMS algorithm. In the case of a variable step-size algorithm, the step-size is also updated iteratively. The VSS LMS update equations are given by

w(i+1) = w(i) + µ(i) e(i) u*(i),   (3)
µ(i+1) = f{µ(i)},   (4)

where f{.} is a function that defines the update equation for the step-size and is different for every VSS algorithm.

While performing the analysis of the LMS algorithm, the input regressor vector is assumed to be independent of the estimated vector. For the VSS algorithms, it is generally assumed that the control parameters are chosen such that the step-size and the input regressor vector are asymptotically independent of each other, resulting in a closed-form steady-state solution that closely matches the simulation results. For some VSS algorithms the analytical and simulation results are closely matched during the transient stage as well, but this is not always the case. The results are still acceptable for all algorithms as a closed-form solution is obtained.

The main objective of this work is to provide a generalized analysis for VSS algorithms, in line with the assumptions mentioned above. The results of this analysis can be applied to VSS algorithms in general, as will be shown through specific examples.
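As a concrete illustration of the model (1) and the VSS update (3)-(4), the following is a minimal NumPy sketch. It uses real-valued data for brevity (the model above is complex-valued) and the Kwong-Johnston rule of Table I as one particular choice of f{.}; all numerical values here are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4                                 # filter length
N = 3000                              # number of iterations
sigma_v2 = 0.01                       # assumed additive noise variance
w_o = rng.standard_normal(M)          # unknown system vector w^o

# One concrete f{.}: the Kwong-Johnston rule, mu(i+1) = alpha*mu(i) + gamma*e^2(i).
alpha, gamma = 0.995, 1e-3            # illustrative control parameters

w = np.zeros(M)                       # estimate w(i), initialized to zero
mu = 0.01                             # assumed initial step-size mu(0)
for i in range(N):
    u = rng.standard_normal(M)        # regressor vector u(i)
    d = u @ w_o + np.sqrt(sigma_v2) * rng.standard_normal()  # eq. (1)
    e = d - u @ w                     # instantaneous error e(i)
    w = w + mu * e * u                # eq. (3); u*(i) reduces to u(i) for real data
    mu = alpha * mu + gamma * e**2    # eq. (4)
```

Any other VSS strategy is obtained by swapping only the last line, which is exactly the structure the unified analysis below exploits.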
III. PROPOSED ANALYSIS

The weight-error vector is given by

w̃(i) = wᵒ − w(i).   (5)

Using (5) in (3) results in

w̃(i+1) = [I_M − µ(i) u*(i) u(i)] w̃(i) − µ(i) u*(i) v(i),   (6)

where I_M is an identity matrix of size M. Before beginning the analysis, another assumption is made, without loss of generality: the input data is assumed to be circular Gaussian. As a result, the autocorrelation matrix of the input regressor vector, given by R_u = E[u*(i) u(i)], where E[.] is the expectation operator, can be decomposed into its component matrices of eigenvalues and eigenvectors, R_u = T Λ T*, where T is the matrix of eigenvectors such that T* T = I_M and Λ is a diagonal matrix containing the eigenvalues. Using the matrix T, the following transformations are made:

w̄(i) = T* w̃(i),   ū(i) = u(i) T.

The weight-error update equation thus becomes

w̄(i+1) = [I_M − µ(i) ū*(i) ū(i)] w̄(i) − µ(i) ū*(i) v(i).   (7)

A. Mean Analysis

Applying the expectation operator to (7) results in

E[w̄(i+1)] = E[{I_M − µ(i) ū*(i) ū(i)} w̄(i) − µ(i) ū*(i) v(i)]
           = {I_M − E[µ(i) ū*(i) ū(i)]} E[w̄(i)],   (8)

where the data independence assumption is used to separate E[w̄(i)] from the rest of the variables. The second term is 0 as the additive noise is independent and zero-mean. Using the assumption that the step-size control parameters are chosen in such a way that the step-size and the input regressor data are asymptotically independent, (8) is further simplified as

E[w̄(i+1)] = {I_M − E[µ(i)] Λ} E[w̄(i)],   (9)

where E[ū*(i) ū(i)] = Λ. The sufficient condition for stability is evaluated from (9) and is given by

0 < E[µ(i)] < 2/β_max,   (10)

where β_max is the maximum eigenvalue of Λ.
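The mean recursion (9) and the bound (10) are easy to check numerically. The sketch below is a toy example with an assumed 2×2 autocorrelation matrix and an assumed initial weight-error vector; it is not part of the paper's experiments.

```python
import numpy as np

# Mean recursion (9) and stability bound (10) for an assumed R_u.
Ru = np.array([[1.0, 0.3],
               [0.3, 1.0]])                 # illustrative input autocorrelation
lam, T = np.linalg.eigh(Ru)                 # eigenvalues of Lambda, eigenvectors T
mu_bound = 2.0 / lam.max()                  # upper bound on E[mu(i)], eq. (10)

w_bar = T.conj().T @ np.array([1.0, -0.5])  # E[w_bar(0)] for an assumed w~(0)
mu_mean = 0.5 * mu_bound                    # any fixed E[mu(i)] inside the bound
for i in range(200):
    w_bar = (1.0 - mu_mean * lam) * w_bar   # eq. (9), Lambda applied element-wise
print(np.allclose(w_bar, 0.0, atol=1e-8))   # True: convergence in the mean
```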
B. Mean-Square Analysis

Taking the expectation of the squared weighted l₂-norm of (7) yields

E[‖w̄(i+1)‖²_Σ] = E[w̄*(i) Σ′ w̄(i)] + E[µ²(i) v²(i) ū(i) Σ ū*(i)]
  − E[µ(i) v(i) ū(i) Σ {I_M − µ(i) ū*(i) ū(i)} w̄(i)]
  − E[w̄*(i) {I_M − µ(i) ū*(i) ū(i)} Σ µ(i) v(i) ū*(i)],   (11)

where ‖.‖ is the l₂-norm operator and Σ is a weighting matrix. The weighting matrix Σ′ is given by

Σ′ = {I_M − µ(i) ū*(i) ū(i)}* Σ {I_M − µ(i) ū*(i) ū(i)}
   = Σ − µ(i) ū*(i) ū(i) Σ − µ(i) Σ ū*(i) ū(i) + µ²(i) ū*(i) ū(i) Σ ū*(i) ū(i).   (12)

The last two terms in (11) are 0 due to the independence of the additive noise. Using the data independence assumption, the remaining two terms are simplified as

E[‖w̄(i+1)‖²_Σ] = E[‖w̄(i)‖²_Σ′] + σ_v² E[µ²(i)] Tr{ΣΛ},   (13)

where σ_v² is the additive noise variance, E[ū(i) Σ ū*(i)] = Tr{ΣΛ}, and Tr{.} is the trace operator. Once again invoking the data independence assumption, we write E[‖w̄(i)‖²_Σ′] = E[‖w̄(i)‖²_{E[Σ′]}]. Further, taking E[Σ′] = Σ′ and simplifying, (12) is rewritten as

Σ′ = Σ − 2 E[µ(i)] Λ Σ + E[µ²(i)] Λ Tr[ΣΛ] + E[µ²(i)] Λ Σ Λ.   (14)

Using the diag operator, (13) and (14) are simplified as

E[‖w̄(i+1)‖²_σ] = E[‖w̄(i)‖²_{F(i)σ}] + σ_v² E[µ²(i)] λᵀσ,   (15)

where σ = diag{Σ}, λ = diag{Λ}, and the weighting matrix Σ′ is replaced with diag{Σ′} = σ′ = F(i)σ, where F(i) is

F(i) = I_M − 2 E[µ(i)] Λ + E[µ²(i)] [Λ² + λλᵀ].   (16)

Now, using (15) and (16), the analysis iterates as

E[‖w̄(0)‖²_σ] = ‖w̄ᵒ‖²_σ,
F(0) = I_M − 2 µ(0) Λ + µ²(0) [Λ² + λλᵀ],

where w̄ᵒ = T* wᵒ is the transformed system vector (taking w(0) = 0), and E[µ(0)] = µ(0) and E[µ²(0)] = µ²(0) as this is the initial step-size value. The first iterative update is given by

E[‖w̄(1)‖²_σ] = E[‖w̄(0)‖²_{F(0)σ}] + σ_v² µ²(0) λᵀσ
             = ‖w̄ᵒ‖²_{F(0)σ} + σ_v² µ²(0) λᵀσ,
F(1) = I_M − 2 E[µ(1)] Λ + E[µ²(1)] [Λ² + λλᵀ],

where the updates E[µ(1)] and E[µ²(1)] are obtained from the particular step-size update equation of the VSS algorithm being used. Similarly, the second iterative update is given by

E[‖w̄(2)‖²_σ] = E[‖w̄(1)‖²_{F(1)σ}] + σ_v² E[µ²(1)] λᵀσ
             = ‖w̄ᵒ‖²_{F(1)F(0)σ} + σ_v² µ²(0) λᵀ F(1) σ + σ_v² E[µ²(1)] λᵀσ
             = ‖w̄ᵒ‖²_{F(1)F(0)σ} + σ_v² λᵀ {µ²(0) F(1) + E[µ²(1)] I_M} σ,
F(2) = I_M − 2 E[µ(2)] Λ + E[µ²(2)] [Λ² + λλᵀ].
Continuing, the third iterative update is given by

E[‖w̄(3)‖²_σ] = E[‖w̄(2)‖²_{F(2)σ}] + σ_v² E[µ²(2)] λᵀσ
             = ‖w̄ᵒ‖²_{F(2)A(2)σ} + σ_v² E[µ²(2)] λᵀσ + σ_v² λᵀ {Σ_{k=0}^{1} E[µ²(k)] Π_{m=2}^{k+1} F(m)} σ,
F(3) = I_M − 2 E[µ(3)] Λ + E[µ²(3)] [Λ² + λλᵀ],

where the weighting matrix A(2) = F(1)F(0) and the matrix products run downward in m, that is, Π_{m=2}^{k+1} F(m) = F(2)F(1)···F(k+1). The fourth iterative update is then given by

E[‖w̄(4)‖²_σ] = ‖w̄ᵒ‖²_{F(3)A(3)σ} + σ_v² E[µ²(3)] λᵀσ + σ_v² λᵀ {Σ_{k=0}^{2} E[µ²(k)] Π_{m=3}^{k+1} F(m)} σ,
F(4) = I_M − 2 E[µ(4)] Λ + E[µ²(4)] [Λ² + λλᵀ],

where the weighting matrix A(3) = F(2)A(2). Now, from the third and fourth iterative updates, we generalize the recursion for the ith update as

E[‖w̄(i)‖²_σ] = ‖w̄ᵒ‖²_{F(i−1)A(i−1)σ} + σ_v² E[µ²(i−1)] λᵀσ + σ_v² λᵀ {Σ_{k=0}^{i−2} E[µ²(k)] Π_{m=i−1}^{k+1} F(m)} σ,   (17)

F(i) = I_M − 2 E[µ(i)] Λ + E[µ²(i)] [Λ² + λλᵀ].   (18)

Similarly, the recursion for the (i+1)th update is given by

E[‖w̄(i+1)‖²_σ] = ‖w̄ᵒ‖²_{F(i)A(i)σ} + σ_v² E[µ²(i)] λᵀσ + σ_v² λᵀ {Σ_{k=0}^{i−1} E[µ²(k)] Π_{m=i}^{k+1} F(m)} σ,   (19)

F(i+1) = I_M − 2 E[µ(i+1)] Λ + E[µ²(i+1)] [Λ² + λλᵀ].   (20)

Subtracting (17) from (19) and simplifying the terms gives the final recursive update equation

E[‖w̄(i+1)‖²_σ] = E[‖w̄(i)‖²_σ] + ‖w̄ᵒ‖²_{{F(i)−I_M}A(i)σ} + σ_v² E[µ²(i)] λᵀσ + σ_v² λᵀ {F(i) − I_M} B(i) σ,   (21)

where

B(i) = E[µ²(i−1)] I_M + Σ_{k=0}^{i−2} E[µ²(k)] Π_{m=i−1}^{k+1} F(m).   (22)

The final set of iterative equations for the mean-square learning curve are given by (21), (18) and

A(i+1) = F(i) A(i),   (23)
B(i+1) = E[µ²(i)] I_M + F(i) B(i).   (24)

Taking the weighting matrix Σ = I_M results in the mean-square deviation (MSD), while taking the weighting matrix Σ = Λ gives the EMSE.

It should be noted here that, unlike the analysis given in [1] for the LMS algorithm, the weighting matrix F(i) is not constant. As a result, the Cayley-Hamilton theorem is not applicable. In this context, (18) and (21)-(24) are very significant contributions of this work.
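For implementation, note that (15) can equivalently be propagated forward on the element-wise second moment s(i) = E[|w̄(i)|²]: since E[‖w̄(i)‖²_σ] = s(i)ᵀσ for every σ, (15) is the same as s(i+1) = F(i)ᵀ s(i) + σ_v² E[µ²(i)] λ. The sketch below iterates this rearrangement; it adds no approximation beyond those already made. The moment sequences E[µ(i)] and E[µ²(i)] are assumed to be supplied by the step-size model of the particular VSS algorithm (see Table II in Section III-D), with E[µ²(i)] often approximated by (E[µ(i)])².

```python
import numpy as np

def theory_curves(lam, s0, sigma_v2, mu_mean, mu_sq):
    """Iterate (15)-(16) as s(i+1) = F(i)^T s(i) + sigma_v^2 E[mu^2(i)] lam,
    where s(i) = E[|w_bar(i)|^2] element-wise, so that
    MSD(i) = sum(s(i))   (Sigma = I_M) and
    EMSE(i) = lam^T s(i) (Sigma = Lambda).

    lam     : eigenvalues of R_u (the diagonal of Lambda)
    s0      : |w_bar^o|^2 element-wise, for w(0) = 0
    mu_mean : sequence of E[mu(i)]
    mu_sq   : sequence of E[mu^2(i)]
    """
    I = np.eye(lam.size)
    Lam = np.diag(lam)
    s = np.asarray(s0, dtype=float).copy()
    msd, emse = [], []
    for Emu, Emu2 in zip(mu_mean, mu_sq):
        msd.append(s.sum())
        emse.append(lam @ s)
        F = I - 2*Emu*Lam + Emu2*(Lam @ Lam + np.outer(lam, lam))  # eq. (16)
        s = F.T @ s + sigma_v2 * Emu2 * lam                        # eq. (15)
    return np.array(msd), np.array(emse)
```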
C. Steady-State Analysis

At steady-state, the recursions (15) and (16) become

E[‖w̄_ss‖²_σ] = E[‖w̄_ss‖²_{F_ss σ}] + σ_v² µ_ss² λᵀσ,   (25)
F_ss = I_M − 2 µ_ss Λ + µ_ss² [Λ² + λλᵀ],   (26)

where the subscript ss denotes steady-state. Simplifying (25) further gives

E[‖w̄_ss‖²_σ] = σ_v² µ_ss² λᵀ [I_M − F_ss]⁻¹ σ,   (27)

which defines the steady-state MSD if Σ = I_M and the steady-state EMSE if Σ = Λ.

D. Steady-State Step-Size Analysis

The analysis presented in the above section is generic for any VSS algorithm. In this section, 5 different VSS algorithms are chosen to present the steady-state analysis for the step-size. These steady-state step-size values are then directly inserted into (27) and (26). The 5 different VSS algorithms and their step-size update equations are given in Table I. The first algorithm, denoted KJ, is the work of Kwong and Johnston [4]. The second algorithm, denoted AM, likewise refers to its authors, Aboulnasr and Mayyas [6]. The NC algorithm refers to the noise-constrained LMS algorithm [10]. The VSQ algorithm is the variable step-size quotient LMS algorithm, based on the quotient form [18]. The last algorithm, denoted Sp, refers to the sparse VSS LMS algorithm of [22].

TABLE I
STEP-SIZE UPDATE EQUATIONS FOR THE VSS LMS ALGORITHMS.

KJ [4]:   µ(i+1) = α_kj µ(i) + γ_kj e²(i)
AM [6]:   p(i) = β_am p(i−1) + (1 − β_am) e(i) e(i−1)
          µ(i+1) = α_am µ(i) + γ_am p²(i)
NC [10]:  θ_nc(i+1) = (1 − α_nc) θ_nc(i) + (α_nc/2) (e²(i) − σ_v²)
          µ(i+1) = µ_0 (1 + γ_nc θ_nc(i+1))
VSQ [18]: A_q(i) = a_q A_q(i−1) + e²(i)
          B_q(i) = b_q B_q(i−1) + e²(i)
          θ_q(i) = A_q(i)/B_q(i)
          µ(i+1) = α_q µ(i) + γ_q θ_q(i)
Sp [22]:  µ(i+1) = α_sp µ(i) + γ_sp |e(i)|

Applying the expectation operator to the step-size update equations and simplifying gives the equations presented in Table II.

At steady-state, the expected step-size E[µ(i)] is replaced by µ_ss. The approximate steady-state step-size equations are given in Table III. The steady-state EMSE (denoted by ζ in the tables) is assumed to be small enough to be ignored.
TABLE II
EXPECTED VALUES FOR THE UPDATE EQUATIONS FROM TABLE I.

KJ [4]:   E[µ(i+1)] = α_kj E[µ(i)] + γ_kj (ζ(i) + σ_v²)
AM [6]:   E[p²(i)] = β_am² E[p²(i−1)] + (1 − β_am)² (ζ(i) + σ_v²)(ζ(i−1) + σ_v²)
          E[µ(i+1)] = α_am E[µ(i)] + γ_am E[p²(i)]
NC [10]:  E[θ_nc(i+1)] = (1 − α_nc) E[θ_nc(i)] + α_nc ζ(i)/2
          E[µ(i+1)] = µ_0 (1 + γ_nc E[θ_nc(i+1)])
VSQ [18]: E[µ(i+1)] = α_q E[µ(i)] + γ_q (a_q E[A_q(i−1)] + ζ(i) + σ_v²) / (b_q E[B_q(i−1)] + ζ(i) + σ_v²)
Sp [22]:  E[µ(i+1)] = α_sp E[µ(i)] + γ_sp √(2(ζ(i) + σ_v²)/π)

TABLE III
STEADY-STATE STEP-SIZE VALUES FOR THE EQUATIONS FROM TABLE I.

KJ [4]:   µ_ss ≈ γ_kj σ_v² / (1 − α_kj)
AM [6]:   µ_ss ≈ γ_am (1 − β_am) σ_v² / (1 − α_am)
NC [10]:  µ_ss ≈ µ_0
VSQ [18]: µ_ss ≈ γ_q (1 − b_q) / ((1 − α_q)(1 − a_q))
Sp [22]:  µ_ss ≈ γ_sp √(2σ_v²/π) / (1 − α_sp)
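As a usage sketch, once µ_ss is available from Table III, the steady-state MSD follows directly from (26) and (27). The fragment below evaluates it for the KJ algorithm; the white-input eigenvalues and the noise variance are assumed purely for illustration.

```python
import numpy as np

# Steady-state MSD via (26)-(27) for the KJ algorithm, with mu_ss from Table III.
lam = np.ones(4)                          # assumed white unit-variance input
sigma_v2 = 0.01                           # assumed noise variance
alpha_kj, gamma_kj = 0.995, 1e-3
mu_ss = gamma_kj * sigma_v2 / (1 - alpha_kj)   # Table III, KJ row

M = lam.size
Lam = np.diag(lam)
F_ss = np.eye(M) - 2*mu_ss*Lam + mu_ss**2*(Lam @ Lam + np.outer(lam, lam))  # (26)
sigma = np.ones(M)                        # Sigma = I_M -> MSD; use lam for EMSE
msd_ss = sigma_v2 * mu_ss**2 * (lam @ np.linalg.solve(np.eye(M) - F_ss, sigma))  # (27)
print(10*np.log10(msd_ss), "dB")
```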
IV. RESULTS AND DISCUSSION

In this section, the analysis presented above is tested on the 5 VSS algorithms listed in Table I. These algorithms are used in 2 different experiments to test the validity of the analysis. In the first experiment, the MSD is plotted using (21) and compared with simulation results. The second experiment compares the steady-state simulation results with the theoretical results obtained using (27).

For the first experiment, the length of the unknown vector is M = 4. The input regressor vector is a realization of a zero-mean Gaussian random variable with unit variance. The signal-to-noise ratio (SNR) is chosen to be 20 dB. The step-size control parameters chosen for this experiment are given in Table IV. The results are shown in Fig. 1. As can be seen from the figure, there is a close match between the simulation and analytical results for all the algorithms.

TABLE IV
STEP-SIZE CONTROL PARAMETERS FOR THE VSS LMS ALGORITHMS.

KJ [4]:   α_kj = 0.995, γ_kj = 1e−3
AM [6]:   β_am = 0.9, α_am = 0.995, γ_am = 1e−3
NC [10]:  γ_nc = 10, α_nc = 1e−3
VSQ [18]: a_q = 0.99, b_q = 1e−3, α_q = 0.995, γ_q = 1e−3
Sp [22]:  α_sp = 0.995, γ_sp = 1e−3

[Fig. 1. Theory (21) vs. simulation MSD comparison for the 5 different VSS algorithms (Kwong [4], Aboulnasr [6], NCLMS [10], Quotient VS [18], Sparse VS [22]); MSD (dB) versus iterations.]

The second experiment compares simulation results for the VSS algorithms with the theoretical steady-state MSD results obtained using (27). The step-size control parameters are chosen the same as in the previous experiment. The results for this experiment are given in Table V. It can be seen that there is an excellent match between the theory and simulation results.

TABLE V
THEORY VS. SIMULATION COMPARISON OF STEADY-STATE MSD FOR THE DIFFERENT VSS LMS ALGORITHMS.

Algorithm   MSD (dB), equation (27)   MSD (dB), simulation
KJ [4]      −43.96                    −43.94
AM [6]      −46.98                    −46.98
NC [10]     −29.42                    −29.26
VSQ [18]    −33.76                    −33.77
Sp [22]     −34.78                    −34.72
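A Monte-Carlo harness in the spirit of the first experiment is sketched below for the KJ algorithm; its ensemble-averaged MSD is what Fig. 1 overlays on the theory recursion (21) (or the equivalent forward recursion sketched in Section III). The initial step-size and the number of runs are assumptions, as the paper does not list them.

```python
import numpy as np

rng = np.random.default_rng(1)

M, N, runs = 4, 3000, 200                     # filter length, iterations, runs
alpha_kj, gamma_kj = 0.995, 1e-3              # Table IV, KJ row
w_o = rng.standard_normal(M)
w_o /= np.linalg.norm(w_o)                    # unit-norm system, so SNR = 1/sigma_v2
sigma_v2 = 10**(-20/10)                       # 20 dB SNR

msd = np.zeros(N)
for _ in range(runs):
    w, mu = np.zeros(M), 0.01                 # mu(0) = 0.01 assumed
    for i in range(N):
        u = rng.standard_normal(M)            # real-valued stand-in for u(i)
        d = u @ w_o + np.sqrt(sigma_v2) * rng.standard_normal()  # eq. (1)
        e = d - u @ w                         # e(i)
        msd[i] += np.sum((w_o - w)**2)        # ||w~(i)||^2 for this run
        w += mu * e * u                       # eq. (3)
        mu = alpha_kj * mu + gamma_kj * e**2  # Table I, KJ update
msd_db = 10*np.log10(msd / runs)              # ensemble-average MSD in dB
```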
V. CONCLUSION

This work presents a unified approach for the theoretical analysis of LMS-based VSS algorithms. The iterative recursions presented here differentiate this work from previous analyses in that this set of equations provides a generic treatment of the analysis for this class of algorithms for the first time. Simulation results confirm the generic behavior of the presented work, for both the transient state as well as the steady-state.

REFERENCES

[1] A. H. Sayed, Fundamentals of Adaptive Filtering. New York: Wiley, 2003.
[2] J. Nagumo and A. Noda, "A learning method for system identification," IEEE Trans. Autom. Control, vol. 12, no. 3, pp. 282-287, Jun. 1967.
[3] R. Harris, D. M. Chabries and F. Bishop, "A variable step (VS) adaptive filter algorithm," IEEE Trans. Acoust., Speech and Sig. Process., vol. 34, no. 2, pp. 309-316, Apr. 1986.
[4] R. H. Kwong and E. W. Johnston, "A variable step-size LMS algorithm," IEEE Trans. Signal Process., vol. 40, no. 7, pp. 1633-1642, Jul. 1992.
[5] V. J. Mathews and Z. Xie, "A stochastic gradient adaptive filter with gradient adaptive step size," IEEE Trans. Signal Process., vol. 41, no. 6, pp. 2075-2087, Jun. 1993.
[6] T. Aboulnasr and K. Mayyas, "A robust variable step size LMS-type algorithm: analysis and simulations," IEEE Trans. Signal Process., vol. 45, no. 3, pp. 631-639, Mar. 1997.
[7] S. Gollamudi, S. Nagaraj, S. Kapoor and Y.-F. Huang, "Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size," IEEE Sig. Process. Letts., vol. 5, no. 5, pp. 111-114, May 1998.
[8] D. I. Pazaitis and A. G. Constantinides, "A novel kurtosis driven variable step-size adaptive algorithm," IEEE Trans. Sig. Process., vol. 47, no. 3, pp. 864-872, Mar. 1999.
[9] W.-P. Ang and B. Farhang-Boroujeny, "A new class of gradient adaptive step-size LMS algorithms," IEEE Trans. Sig. Process., vol. 49, no. 4, pp. 805-810, Apr. 2001.
[10] Y. Wei, S. B. Gelfand and J. V. Krogmeier, "Noise-constrained least mean squares algorithm," IEEE Trans. Sig. Process., vol. 49, no. 9, pp. 1961-1970, Sep. 2001.
[11] A. I. Sulyman and A. Zerguine, "Convergence and steady-state analysis of a variable step-size NLMS algorithm," Sig. Process., vol. 83, no. 6, pp. 1255-1273, Jun. 2003.
[12] K. Egiazarian, P. Kuosmanen and R. C. Bilcu, "Variable step-size LMS adaptive filters for CDMA multiuser detection," in Proc. of TELSIKS '03, pp. 259-264, Oct. 2003.
[13] H.-C. Shin, A. H. Sayed and W.-J. Song, "Variable step-size NLMS and affine projection algorithms," IEEE Sig. Process. Letts., vol. 11, no. 2, pp. 132-135, Feb. 2004.
[14] J. Benesty, H. Rey, L. R. Vega and S. Tressens, "A nonparametric VSS NLMS algorithm," IEEE Sig. Process. Letts., vol. 13, no. 10, pp. 581-584, Oct. 2006.
[15] M. T. Akhtar, M. Abe and M. Kawamata, "A new variable step size LMS algorithm-based method for improved online secondary path modeling in active noise control systems," IEEE Trans. Audio, Speech and Lang. Process., vol. 14, no. 2, pp. 720-726, Mar. 2006.
[16] Y. Zhang, N. Li, J. A. Chambers and Y. Hao, "New gradient-based variable step size LMS algorithms," EURASIP J. Adv. Sig. Process., vol. 2008, Article ID 529480, pp. 1-9, Jan. 2008.
[17] M. H. Costa and J. C. M. Bermudez, "A noise resilient variable step-size LMS algorithm," Sig. Process., vol. 88, no. 3, pp. 733-748, Mar. 2008.
[18] S. Zhao, Z. Man, S. Khoo and H. R. Wu, "Variable step-size LMS algorithm with a quotient form," Sig. Process., vol. 89, no. 1, pp. 67-76, Jan. 2009.
[19] J.-K. Hwang and Y.-P. Li, "Variable step-size LMS algorithm with a gradient-based weighted average," IEEE Sig. Process. Letts., vol. 16, no. 12, pp. 1043-1046, Dec. 2009.
[20] A. M. A. Filho, E. L. Pinto and J. F. Galdino, "Simple and robust analytically derived variable step-size least mean squares algorithm for channel estimation," IET Comms., vol. 3, no. 12, pp. 1832-1842, Dec. 2009.
[21] S. V. Narasimhan and S. Veena, "New unbiased adaptive IIR filter: its robust and variable step-size versions and application to active noise control," Sig., Image and Vid. Process., vol. 7, no. 1, pp. 197-207, Jan. 2013.
[22] M. O. Bin Saeed and A. Zerguine, "A variable step-size strategy for sparse system identification," in Proc. of SSD '13, pp. 1-4, 2013.
[23] M. O. Bin Saeed, A. Zerguine and S. A. Zummo, "A variable step-size strategy for distributed estimation over adaptive networks," EURASIP J. Adv. Sig. Process., vol. 2013, 2013:135, Aug. 2013.
[24] M. O. Bin Saeed, A. Zerguine and S. A. Zummo, "A noise-constrained algorithm for estimation over distributed networks," Int'l J. Adap. Cont. & Sig. Process., vol. 27, no. 10, pp. 827-845, Oct. 2013.
[25] J. M. Gil-Cacho, T. v. Waterschoot, M. Moonen and S. H. Jensen, "Wiener variable step size and gradient spectral variance smoothing for double-talk-robust acoustic echo cancellation and acoustic feedback cancellation," Sig. Process., vol. 104, pp. 1-14, Nov. 2014.