Neural Signal Multiplexing via Compressed Sensing

arXiv:1601.03214v1 [cs.IT] 13 Jan 2016

Nithin Nagaraj* and K. R. Sahasranand†
*Consciousness Studies Programme, School of Humanities, National Institute of Advanced Studies, Indian Institute of Science Campus, Bangalore, INDIA.
†Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, INDIA.
(Email: [email protected], [email protected])

Abstract—Transport of neural signals in the brain is challenging, owing to neural interference and neural noise. There is experimental evidence of multiplexing of sensory information across populations of neurons, particularly in the vertebrate visual and olfactory systems. Recently, it has been discovered that in the lateral intraparietal cortex of the brain, decision signals are multiplexed with decision-irrelevant visual signals. Furthermore, it is well known that several cortical neurons exhibit chaotic spiking patterns. Multiplexing of chaotic neural signals and their successful demultiplexing in the neurons amidst interference and noise is difficult to explain. In this work, a novel compressed sensing model for efficient multiplexing of chaotic neural signals, constructed using the Hindmarsh-Rose spiking model, is proposed. The signals are multiplexed from a pre-synaptic neuron to its neighbouring post-synaptic neuron, and demultiplexed using compressed sensing techniques in the presence of 10^4 interfering noisy neural signals.

Index Terms—neural signal multiplexing, compressed sensing, chaotic signals.

I. INTRODUCTION

The number of neurons in the human brain is approximately 86 billion [1], which is roughly half the number of stars in our Milky Way galaxy. Each of these neurons is connected to anywhere between 10^2 and 10^4 other neighbouring neurons on average. It remains a mystery to a large extent how information encoded in action potentials (spikes) is transported from one neuron, in the middle of a labyrinth of neurons, to its immediate neighbour amidst a cacophony of interfering neural signals and neural noise. The brain not only employs very efficient compression of information with low consumption of energy, but is also known to robustly multiplex neural signals.

In a recent study involving adult rhesus monkeys, single-neuron spiking measurements in the lateral intraparietal area (LIP) showed that decision signals are multiplexed with decision-irrelevant visual signals [2]. Furthermore, the same study also revealed that neurons in LIP exhibit diverse dynamics, indicating a broad range of neural computations. Friedrich et al. [3] found that oscillatory field potentials in the zebrafish olfactory bulb multiplex information about complementary stimulus features by the same population of neurons; these can be retrieved selectively by biologically plausible mechanisms. Panzeri et al. [4] present a strong case for temporal multiplexing, where neural responses at different timescales encode different stimulus attributes. They argue that such multiplexing increases the encoding capacity of neural responses and offers advantages over single response timescales, thus enabling the brain to arrive at an information-rich and stable representation of an otherwise noisy and variable sensory environment.

In this paper, we propose a phenomenological model to study the feasibility of such efficient multiplexing of neural signals in the presence of neural interference from a very large number of neurons. We wish to point out the possibility of such mechanisms from a theoretical perspective while trying to use reasonable models which simulate the behaviour of actual neurons. To the best of our knowledge, there are no mathematical models in the neuroscience literature which can explain a mechanism of simultaneous compression, reliable multiplexing and robust demultiplexing of chaotic neural signals (amidst 10^4 noisy signals), which the brain seems to perform successfully. We hope to demonstrate, via theoretical arguments and simulation results, that our proposed compressed sensing model has all three benefits.

The paper is organized as follows. We briefly review existing methods of multiplexing of chaotic signals and point out their drawbacks in Section II; none of these existing methods seem feasible to model multiplexing of chaotic neural signals in the brain. We propose a novel model of multiplexing (and demultiplexing) of chaotic neural signals using the framework of compressed sensing in Section III. We conclude with future research directions in Section IV.

II. MULTIPLEXING OF CHAOTIC SIGNALS

Chaos as a possible neuronal code was proposed by researchers in neuroscience owing to enormous experimental evidence of chaos in nerve cells, neural assemblies, behavioural patterns as well as higher brain functions. There are several advantages of chaos and nonlinear response models over stochastic ones, especially the possibility of control of the former. Korn and Faure, in their excellent review article [5], trace the history of the search for chaos in the nervous system, providing pointers to numerous experimental studies. The dynamical approach to brain operations and cognition has been vindicated, and there is a large body of compelling evidence of chaos at all levels.

The problem that we wish to address is the ability of neurons to multiplex chaotic signals. To this end, we briefly review known methods for multiplexing of chaotic signals. Starting from 1996, continuous-time chaotic signals have been multiplexed using the notion of dual synchronization [6].
The idea of dual synchronization, proposed by Liu and Davis [7], is to use a scalar signal to simultaneously synchronize two different pairs of chaotic oscillators. Dual synchronization has been implemented in electronic circuits [8], but this has problems such as the difficulty of extending to more than two signals, sensitivity to noise, and the need to carry two scalar signals for multiplexing more than two signals (whereas action potentials are transmitted as just a single scalar signal). Even 1% of noise results in a synchronization error of 4%, as reported by Liu and Davis [7]. A symbolic dynamics-based approach to multiplex two pairs of low-frequency chaotic electronic circuits producing Rössler-like oscillations was demonstrated by Blakely and Corron [9]; however, even a small synchronization error leads to huge errors, making it unsuitable. Nagaraj and Vaidya [10] proposed multiplexing of discrete chaotic signals in the presence of noise using the idea of symbolic sequence invariance. While they demonstrate multiplexing of up to 20 chaotic signals, their method is not robust to large amounts of noise, and as the number of signals to be multiplexed increases, the noise tolerance comes down quite rapidly.

Even if we grant that some of the above methods could be modified to work with multiple chaotic signals, it is not at all obvious whether they can be extended to multiplex neural signals from neuronal models, as this has never been attempted before. Most importantly, these methods fail to address the problem we are interested in: multiplexing a small number of neural signals amidst 10^4 competing noisy neural signals from neighbouring neurons. These methods have never been tested for such a large number of signals, but this is precisely what the brain has to contend with.

III. PROPOSED APPROACH

The problem addressed in this work is stated as follows. Given a particular pre-synaptic neuron (Z) and its eastern post-synaptic neighbour (E), both of these being connected to 10^4 other neurons, how does neuron Z simultaneously transport 4 neural signals s1(n), s2(n), s3(n) and s4(n) (corresponding to its North, North-West, South-West and South neighbours respectively) to neuron E? Here n stands for the discrete time index (we assume a sampled neural signal, but in what follows our methods can be extended to continuous-time signals as well). The neuron Z can only transmit a single scalar signal across its synapse to neuron E. It is to be remembered that all neural signals are always corrupted with some amount of noise, which we shall assume to be additive Gaussian. [Footnote 1: More realistic noise models need to be tested in the future.]

A. Neural Signal Multiplexing

At neuron Z, up to a first-order approximation, we shall assume that it forms a weighted addition of all its input noisy neural signals. This can be mathematically represented as:

y(n) = \sum_{i=1}^{N} w_i \, s_i(n),    (1)

where n = 1, ..., M. Equivalently, the above equation can be expressed in matrix form:

\big( y(1), \ldots, y(M) \big)^T = \big( s_1 \ s_2 \ \cdots \ s_N \big) \big( w_1, w_2, \ldots, w_N \big)^T,    (2)

where s_i = (s_i(1), ..., s_i(M))^T is the column of M samples of the i-th input signal. This can be written succinctly as:

y = Ax.    (3)

Some points to note:

1) It is quite reasonable to assume linear processing of inputs as in the above equation. For example, the olfactory bulb linearly processes fluctuating odor inputs: Gupta et al. [11] have shown that individual mitral/tufted cells (in anesthetized rats) sum inputs linearly across odors and time.

2) In this work, we assume a value of N = 10^4. It may be the case that fewer than 10^4 neurons connect to Z (and E). We have assumed this huge number as a worst-case scenario, but our methods work with lower values of N as well.

3) We shall assume that only a small fraction (say 1.5%) of the N neural signals {s_i} reaching Z are from neighbouring neurons. These are chaotic outputs of the Hindmarsh-Rose neuron model (described below), corrupted with small amounts of additive noise. Hence, this small set of signals (1.5% of the columns of matrix A) is modeled as signal-dominant with low noise (additive Gaussian noise, approximately N(0, 0.01), where N(µ, σ²) denotes a Gaussian distribution with mean µ and variance σ²). The remaining 98.5% of the neural signals are completely dominated by additive Gaussian noise and are modeled as i.i.d. Gaussian samples N(0, 1). Even though this vast number of columns of A are also neural signals, they are effectively modeled as Gaussian noise, since by the time they arrive at neuron Z (as well as E — remember that E is the nearest neighbour of Z), they are dominated completely by noise, with no signal component whatsoever.

4) The weights {w_i}_{i=1}^{N} are scalars (real-valued) and correspond to each pre-synaptic neuron of Z (these weights could be thought of as contributing to either an inhibitory or an excitatory response). These weights are completely unknown to E. It is reasonable to assume that only a few neural signals (say k) among the 1.5% signal-dominant columns of A (in our study we have taken k = 4) need to be multiplexed from Z to E. Hence, we shall assume that only this k-sparse set of weights is significant (greater than some threshold T), though the actual number of significant weights (k) is itself unknown at E. (It is this sparsity, or rather compressibility, of x that comes to our rescue.)
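The weighted addition of Equations (1)-(3) can be sketched numerically. The block below is a minimal illustration, not the code used in the paper: dimensions are scaled down (N = 1000 rather than 10^4), and the signal-dominant columns are placeholder sinusoids rather than Hindmarsh-Rose outputs (the actual chaotic signals are described in the next subsection).

```python
import numpy as np

rng = np.random.default_rng(0)

# Scaled-down dimensions (the paper uses N = 10^4, M = 100).
N, M = 1000, 100
n_sig = int(0.015 * N)                 # 1.5% signal-dominant columns

# Signal-dominant columns: placeholder waveforms plus low noise N(0, 0.01);
# in the paper these are noisy Hindmarsh-Rose outputs.
t = np.arange(M)
phases = rng.uniform(0, 2 * np.pi, n_sig)
A = np.empty((M, N))
A[:, :n_sig] = np.sin(0.3 * t[:, None] + phases) + rng.normal(0, np.sqrt(0.01), (M, n_sig))
# Noise-dominant columns: i.i.d. N(0, 1) samples (98.5% of the columns).
A[:, n_sig:] = rng.normal(0, 1, (M, N - n_sig))

# Weights x: k = 4 significant weights on signal-dominant columns (point 4);
# the rest small, mirroring the paper's simulation settings.
x = np.zeros(N)
x[:4] = [1.0, 0.9, 0.8, 0.7]
x[4:n_sig] = rng.uniform(0, 0.02, n_sig - 4)
x[n_sig:] = rng.uniform(0, 0.001, N - n_sig)

# Equation (3): the single scalar signal y(n) sent across the synapse.
y = A @ x
print(y.shape)
```

With M = 100 samples, the vector y is the only quantity that crosses the synapse from Z to E; everything downstream must be recovered from it.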
Hindmarsh-Rose Neuron Model

The Hindmarsh-Rose neuron model [12] is a widely used model for the bursting-spiking dynamics of the membrane voltage S(t) of a single neuron. The equations of the model (in dimensionless form) are:

\dot{S} = P + 3S^2 - S^3 - Q + I,    (4)
\dot{P} = 1 - 5S^2 - P,    (5)
\dot{Q} = -r\,[Q - 4(S + 8/5)].    (6)

In this set of differential equations, P(t) and Q(t) are auxiliary variables corresponding to fast and slow transport processes across the membrane, respectively. The control parameters of the model are the applied external current I and the internal state of the neuron r. We use the value r = 0.0021, as it yields dynamics corresponding to a realistic description of the electrophysics of the axons of certain neurons [13]. The model is capable of displaying both regular spiking and bursting (chaotic) dynamics for suitable regimes of the parameter I. The bursting behaviour exhibits chaos: irregular spiking alternates with a non-oscillatory phase which suddenly ends in a burst of spikes. In our study, we have chosen I = 3.28, which is in the chaotic regime [14].

We sampled S(t) in steps of 1 time unit to yield the discrete-time neural signal s(n), and subsequently added i.i.d. Gaussian noise N(0, 0.01) to yield the final signal s1(n), as shown in Fig. 1. In Fig. 2, we plot the remaining three (noisy) signals s2(n), s3(n) and s4(n), obtained in a similar fashion from randomly chosen initial conditions of the neuron model (followed by sampling and addition of i.i.d. Gaussian noise N(0, 0.01)). These are all essentially chaotic in nature (the added noise, in fact, makes them stochastic). The goal is to multiplex all four signals from neuron Z to neuron E. In a similar fashion, the remaining 146 neural signals {s_i(n)}_{i=5}^{150} are also obtained from the same model with the same settings of r and I, but with randomly chosen initial conditions. As explained previously, the rest of the columns of A (9850 of them) are modeled as i.i.d. Gaussian N(0, 1) entries. Thus, we now have our ensemble of 10^4 neural signals and neural noise which makes up Equation (3). For particular chosen values of the weights {w_i}_{i=1}^{10^4}, we plot y(n) in Fig. 3 along with the neural interference and noise.

[Fig. 1. Noiseless neural signal s(n) simulated using the Hindmarsh-Rose neuron model (dotted). The noisy signal s1(n) (solid) is obtained by adding N(0, 0.1) to s(n). All such plots in this paper depict signal amplitude against the discrete time index n.]

[Fig. 2. The three noisy neural signals to be multiplexed along with s1(n). Top: s2(n), Middle: s3(n), Bottom: s4(n).]

[Fig. 3. Signal vs. Noise. The mixed scalar signal y(n) (computed using Equation (1)) transported from the pre-synaptic neuron Z to the post-synaptic neuron E is shown in solid. Neural interference and noise from 9996 neurons (excluding the effect of the 4 signals to be multiplexed) is shown in dotted.]

Under normal circumstances, Equation (3) is unsolvable unless M ≥ N. This would imply that the neural signals need to be observed for a longer duration (thereby making M at least 10^4). However, practical constraints such as immediate response to stimulus demand solving Equation (3) with shorter-duration signals, which leads to the under-determined case (M < N). Surprisingly, this admits a unique solution, which can be accurately determined via a recently developed signal processing paradigm known as compressed sensing (discussed in the next section), provided the w_i to be recovered form a sparse (or compressible) set and certain conditions are satisfied by the matrix composed of the noisy neural signals.
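A discrete-time signal of the kind shown in Fig. 1 can be generated by integrating Equations (4)-(6) directly. The sketch below uses forward Euler with an assumed step size and assumed initial conditions (the paper specifies neither), then samples S(t) once per time unit and adds N(0, 0.01) noise:

```python
import numpy as np

def hindmarsh_rose_signal(r=0.0021, I=3.28, t_end=100.0, dt=0.001,
                          init=(0.1, 0.0, 0.0), noise_var=0.01, seed=0):
    """Integrate Equations (4)-(6) by forward Euler (assumed scheme) and
    return the noisy sampled signal s1(n), one sample per time unit."""
    S, P, Q = init
    steps_per_sample = int(round(1.0 / dt))
    samples = []
    for step in range(int(t_end / dt)):
        dS = P + 3 * S**2 - S**3 - Q + I        # Eq. (4)
        dP = 1 - 5 * S**2 - P                   # Eq. (5)
        dQ = -r * (Q - 4 * (S + 8.0 / 5.0))     # Eq. (6)
        S, P, Q = S + dt * dS, P + dt * dP, Q + dt * dQ
        if (step + 1) % steps_per_sample == 0:
            samples.append(S)                   # sample S(t) in steps of 1 time unit
    s = np.array(samples)
    rng = np.random.default_rng(seed)
    return s + rng.normal(0, np.sqrt(noise_var), s.shape)

s1 = hindmarsh_rose_signal()    # 100 noisy samples of the membrane voltage
print(len(s1))
```

Varying `init` (or discarding a random transient) plays the role of the randomly chosen initial conditions used for s2(n), s3(n), s4(n) and the remaining 146 signal-dominant signals.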
B. Compressed Sensing

Compressed sensing (CS) is a recent signal detection framework which has received considerable attention in the signal processing community [15], [16]. The signal measurement process is linear: the observed vector is y = Ax. If the number of non-zero entries of x is less than or equal to k, we say x is k-sparse. The idea behind CS is that, if the received signal is sparse (in an appropriate domain), most of what is received is thrown away during compression or processing [17]. Thus, it is possible to reduce the number of measurements (as a function of the sparsity of x), and using random projections it is possible to recover the entire (sparse) signal exactly [18]. Further, it has been shown that perfect recovery is possible in the presence of noise as well [15].

Let x ∈ R^N be a k-sparse signal and let A ∈ R^{M×N} denote the measurement matrix (N = 10^4). The columns of the matrix correspond to the (vector-valued) signals generated by each of the 10^4 neurons. The received vector y ∈ R^M is a linear combination of the columns of A whose coefficients form the vector x, i.e., y = Ax. In our problem of interest, this means that the signals from different neurons (columns of A) are multiplexed together during transmission to the nearby neuron. The number of signals multiplexed is equal to the sparsity of x.

Decoding x given y can be carried out through a variety of techniques, such as a greedy approach (algorithms like Orthogonal Matching Pursuit [19]) or an optimization approach (l1 minimization [20]), provided the measurement matrix A satisfies a property commonly referred to as the Restricted Isometry Property (RIP) [21]. For example, when the entries of A are all drawn from a Gaussian distribution (say N(0, 1)) and appropriately normalized, the matrix satisfies RIP. In these cases, it is possible to recover x exactly. However, in our problem, we are only interested in identifying the columns that contribute to the signal and not their exact amplitudes; in CS parlance, this is called support recovery [22].

C. Neural Signal Demultiplexing: A Compressed Sensing Model

Consider the neuron E which has received the signal y = Ax from its neighbour Z. At E, it is only reasonable to assume that the signal set A is not perfectly available. We consider the case where the neuron E has to decode using a matrix Â (instead of A), in which noisy versions of 1.5% of the columns of A are present (these correspond to signals from nearby neurons, the signal-dominant set), while the remaining 98.5% of the columns are composed of independently sampled Gaussian noise (these correspond to signals from far-away neurons, the noise-dominant set). Mathematically: let a_i and â_i denote the i-th columns of A and Â respectively, and let the indices corresponding to the signal-dominant columns of A be labelled 1, 2, ..., 150. Then

â_i = a_i + ε_i,

where the ε_i are independent Gaussian noise vectors. The problem is now that of recovering x given Â and y, instead of A and y. However, this is equivalent to decoding x from y = Ax + ε, where ε = \sum_{i=1}^{150} x_i ε_i has variance, say, η. This can be achieved by solving the convex program

min ||x||_{l1} subject to ||Ax − y||_{l2} ≤ η,

which gives the sparse solution x*. In fact, it is shown in [15] that, under this setting,

||x − x*||_{l2} ≤ C1 η + C2 ||x − x_K||_{l1} / √k,

where C1, C2 are constants and x_K agrees with x on the k leading entries and is zero elsewhere. Surprisingly, even with 98.5% of the columns absent (and the remaining columns corrupt), l1 minimization is seen to perform well. This is evident from our simulation result depicted in Fig. 4: signals multiplexed and transported from Z are successfully demultiplexed at E, despite E having only a noisy, partial version of the mixing matrix A.
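Support recovery from such an under-determined system can be illustrated with Orthogonal Matching Pursuit [19], the greedy decoder mentioned above. The sketch below uses assumed reduced dimensions and a normalized Gaussian measurement matrix (which satisfies RIP with high probability); it is not the paper's CVX-based l1 solver.

```python
import numpy as np

def omp_support(A, y, k):
    """Greedy OMP decoder: repeatedly pick the column most correlated with
    the residual, then re-fit on the chosen columns by least squares.
    Returns the recovered support (indices of the active columns)."""
    residual = y.copy()
    support = []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

rng = np.random.default_rng(1)
M, N, k = 400, 2000, 4
A = rng.normal(0, 1, (M, N)) / np.sqrt(M)   # ~unit-norm Gaussian columns (RIP w.h.p.)

x = np.zeros(N)
true_support = [3, 40, 500, 1200]           # arbitrary example support
x[true_support] = [1.0, 0.9, 0.8, 0.7]      # the paper's weight values
y = A @ x + rng.normal(0, 0.01, M)          # noisy measurements

print(omp_support(A, y, k))                 # recovers true_support with high probability
```

Because only the support matters in this problem (no information is conveyed through amplitudes), a greedy decoder that returns the set of active columns suffices for illustration.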
D. Simulation settings

We used the following settings for our simulations: N = 10^4; M = 100; 150 signal-dominant neural signals from the Hindmarsh-Rose model (I = 3.28, r = 0.0021, randomly chosen initial conditions) with additive Gaussian noise N(0, 0.01); 9850 noise-dominant neural signals simulated as i.i.d. Gaussian noise N(0, 1); k = 4 signals to be multiplexed, with weights w1 = 1.0, w2 = 0.9, w3 = 0.8 and w4 = 0.7. The weights of the remaining signal-dominant columns {w_i}_{i=5}^{150} were drawn from the uniform distribution on (0, 0.02), and the weights of the noise-dominant columns {w_i}_{i=151}^{10^4} from the uniform distribution on (0, 0.001). CVX, a package for specifying and solving convex programs [23], was used for the l1 minimization. The threshold for choosing the k significant weights (in x*) was set at T = 0.4.

[Fig. 4. Demultiplexing: Original signal w3·s3 (dotted) at neuron Z and reconstructed signal ŵ3·ŝ3 (bold) at neuron E obtained by means of l1 minimization. Similar recovery was obtained for ŵ1·ŝ1, ŵ2·ŝ2 and ŵ4·ŝ4, but is not shown here due to space constraints.]
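The settings above can be exercised end to end at reduced scale. The following sketch makes several substitutions that should be kept in mind: dimensions are scaled down (N = 2000 with 30 signal-dominant columns rather than N = 10^4 with 150), logistic-map sequences stand in for Hindmarsh-Rose outputs as the chaotic source, and greedy support recovery (OMP) on the noisy dictionary Â replaces the CVX l1 program.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scaled-down settings: N = 10^4 -> 2000, 150 signal-dominant columns -> 30.
M, N, n_sig, k = 400, 2000, 30, 4

def chaotic_column(M, x0, a=3.99):
    """Zero-mean logistic-map sequence (stand-in for a Hindmarsh-Rose signal)."""
    out, x = np.empty(M), x0
    for i in range(M):
        x = a * x * (1 - x)
        out[i] = x
    return out - out.mean()

A = np.empty((M, N))
for j in range(n_sig):   # signal-dominant columns with low additive noise
    A[:, j] = chaotic_column(M, rng.uniform(0.1, 0.9)) + rng.normal(0, 0.1, M)
A[:, n_sig:] = rng.normal(0, 1, (M, N - n_sig))   # noise-dominant columns

x = np.zeros(N)
x[:k] = [1.0, 0.9, 0.8, 0.7]                  # the k significant weights
x[k:n_sig] = rng.uniform(0, 0.02, n_sig - k)  # small signal-dominant weights
y = A @ x                                     # multiplexed signal received at E

# Neuron E only has a noisy dictionary: a_hat_i = a_i + eps_i on the signal set.
A_hat = A.copy()
A_hat[:, :n_sig] += rng.normal(0, 0.1, (M, n_sig))

# Greedy support recovery on the column-normalized noisy dictionary
# (OMP stand-in for the paper's l1/CVX decoder; only the support is needed).
An = A_hat / np.linalg.norm(A_hat, axis=0)
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(An.T @ residual))))
    coef, *_ = np.linalg.lstsq(A_hat[:, support], y, rcond=None)
    residual = y - A_hat[:, support] @ coef
print(sorted(support))
```

Under these assumed settings, the four active columns are identified despite E holding only a noisy version of the signal-dominant columns, mirroring the recovery shown in Fig. 4.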
E. Discussion

It seems imperative to study the neural signal detection problem using the compressed sensing framework, because it is one approach in which the number of measurements (the size of the signal vector) is greatly reduced depending on how many signals are multiplexed. In the context of neural signal transmission, this means that the time to respond to a stimulus is reduced when fewer signals are multiplexed.

Usual decoding algorithms in CS aim at exact recovery of the vector x. In our problem, however, two aspects make decoding easier. Firstly, the neuron does not need the entire x, but only a few leading coefficients, which correspond to the signals of interest (s1(n), s2(n), s3(n) and s4(n)). Secondly, the neuron does not need to extract the exact amplitudes of these coefficients either, since it only needs to identify the "active" columns of A, and no information is conveyed through spike amplitudes whatsoever. This corresponds to a support recovery problem, and for certain classes of noise models (l∞-bounded, Gaussian), exact support recovery is possible when the minimum amplitude is greater than √k times the noise level, where k is the sparsity of x [22]. Besides, since CS delivers perfect recovery under noisy conditions as well [15], the recovery is robust even when the exact signal set is not known at the receiver neuron (E in our example) but only a noisy version like Â.

Another point of biological relevance is the connection between neural complexity (N), the number of neural signals multiplexed (k) and the time to respond (M). Organisms with lower neural complexity, although they have good response times, are likely to multiplex fewer signals. On the other hand, organisms with higher neural complexity (large N) can multiplex more signals and also have a higher throughput of information transfer between adjacent neurons, albeit at the cost of a longer response time. This relationship is very much in agreement with our proposed model, since compressed sensing theory demands that the number of measurements M obey M ≈ C·k·log(N/k) [24] for good recovery of x.

IV. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

The transport model between adjacent neurons in the brain that we have proposed has some unique features. The mixing matrix A at the pre-synaptic neuron Z and the corresponding matrix Â at the post-synaptic neuron E need not contain the same neural signals in the same order (the number and position of columns can differ, which is most likely the case in the brain). As a next step, it will be interesting to study the theoretical properties of the matrices A and Â and the limits of such multiplexing. Experiments with real neural signals in the brain, to test the proposed model, are necessary.
REFERENCES

[1] F. A. Azevedo, L. R. Carvalho, L. T. Grinberg, J. M. Farfel, R. E. Ferretti, R. E. Leite, W. Jacob Filho, R. Lent, S. Herculano-Houzel, "Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain," J. Comp. Neurol., vol. 513, no. 5, pp. 532-541, Apr. 2009.
[2] M. L. R. Meister, J. A. Hennig, A. C. Huk, "Signal multiplexing and single-neuron computations in lateral intraparietal area during decision-making," The J. of Neurosci., vol. 33, no. 6, pp. 2254-2267, Feb. 2013.
[3] R. W. Friedrich, C. J. Habermann, G. Laurent, "Multiplexing using synchrony in the zebrafish olfactory bulb," Nat. Neurosci., vol. 7, no. 8, pp. 862-871, Aug. 2004.
[4] S. Panzeri, N. Brunel, N. K. Logothetis, C. Kayser, "Sensory neural codes using multiplexed temporal scales," Trends Neurosci., vol. 33, no. 3, pp. 111-120, Mar. 2010.
[5] H. Korn, P. Faure, "Is there chaos in the brain? II. Experimental evidence and related models," C. R. Biologies, vol. 326, pp. 787-840, Sep. 2003.
[6] L. S. Tsimring, M. M. Sushchik, "Multiplexing chaotic signals using synchronization," Phys. Lett. A, vol. 213, pp. 155-166, 1996.
[7] Y. Liu, P. Davis, "Dual synchronization of chaos," Phys. Rev. E, vol. 61, pp. R2176-R2179, Mar. 2000.
[8] S. Sano, A. Uchida, S. Yoshimori, R. Roy, "Dual synchronization of chaos in Mackey-Glass electronic circuits with time-delayed feedback," Phys. Rev. E, vol. 75, no. 1, pp. 016207-1-016207-6, 2007.
[9] J. N. Blakely, N. J. Corron, "Multiplexing symbolic dynamics-based chaos communications using synchronization," J. Phys.: Conf. Ser., vol. 23, pp. 259-266, 2005.
[10] N. Nagaraj, P. G. Vaidya, "Multiplexing of discrete chaotic signals in presence of noise," Chaos, vol. 19, pp. 033102-1-033102-9, Jul. 2009.
[11] P. Gupta, D. F. Albeanu, U. S. Bhalla, "Olfactory bulb coding of odors, mixtures and sniffs is a linear sum of odor time profiles," Nat. Neurosci., vol. 18, no. 2, pp. 272-281, Feb. 2015.
[12] J. L. Hindmarsh, R. M. Rose, "A model of neuronal bursting using three coupled first order differential equations," Proc. R. Soc. London, Ser. B, Bio. Sci., vol. 221, no. 1222, pp. 87-102, Mar. 1984.
[13] M. I. Rabinovich, H. D. I. Abarbanel, R. Huerta, R. Elson, A. I. Selverston, "Self-regularization of chaos in neural systems: experimental and theoretical results," IEEE Trans. Cir. Sys. I, vol. 44, no. 10, pp. 997-1005, Oct. 1997.
[14] J. M. González-Miranda, "Observation of a continuous interior crisis in the Hindmarsh-Rose neuron model," Chaos, vol. 13, no. 3, pp. 845-852, Aug. 2003.
[15] E. J. Candes, J. Romberg, T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Comm. Pure Appl. Math., vol. 59, pp. 1207-1223, 2006.
[16] M. F. Duarte, Y. C. Eldar, "Structured compressed sensing: from theory to applications," IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4053-4085, Sep. 2011.
[17] E. J. Candes, M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, Mar. 2008.
[18] R. Baraniuk, M. Davenport, R. DeVore, M. B. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, pp. 253-263, 2008.
[19] J. A. Tropp, A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.
[20] E. J. Candes, J. Romberg, T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[21] E. J. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589-592, May 2008.
[22] R. Wu, W. Huang, D.-R. Chen, "The exact support recovery of sparse signals with noise via orthogonal matching pursuit," IEEE Signal Processing Letters, vol. 20, no. 4, pp. 403-406, Apr. 2013.
[23] M. Grant, S. Boyd, "CVX: Matlab software for disciplined convex programming," version 2.0 beta, http://cvxr.com/cvx, Sep. 2013.
[24] K. D. Ba, P. Indyk, E. Price, D. P. Woodruff, "Lower bounds for sparse recovery," Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'10), pp. 1190-1197, 2010.
