Memory Recall and Spike Frequency Adaptation

James P. Roach*
Neuroscience Graduate Program, University of Michigan

Leonard M. Sander
Department of Physics, University of Michigan and Center for the Study of Complex Systems, University of Michigan

Michal R. Zochowski
Department of Physics, University of Michigan; Center for the Study of Complex Systems, University of Michigan; and Biophysics Program, University of Michigan

(Dated: March 15, 2016)

The brain can reproduce memories from partial data; this ability is critical for memory recall. The process of memory recall has been studied using auto-associative networks such as the Hopfield model. This kind of model reliably converges to stored patterns which contain the memory. However, it is unclear how the behavior is controlled by the brain so that after convergence to one configuration, it can proceed with recognition of another one. In the Hopfield model this happens only through unrealistic changes of an effective global temperature that destabilizes all stored configurations. Here we show that spike frequency adaptation (SFA), a common mechanism affecting neuron activation in the brain, can provide state-dependent control of pattern retrieval. We demonstrate this in a Hopfield network modified to include SFA, and also in a model network of biophysical neurons. In both cases SFA allows for selective stabilization of attractors with different basins of attraction, and also for temporal dynamics of attractor switching that is not possible in standard auto-associative schemes. The dynamics of our models give a plausible account of different sorts of memory retrieval.

The brain stores memories as patterns of synaptic strengths in the network of neurons. It can store multiple memories and retrieve them in a reliable way, and can change from one to another as attention wanders. However, there is no agreement in the neuroscience community on how this occurs. This paper offers a partial solution to understanding the mechanism for retrieval and switching based on a known physiological effect, spike frequency adaptation (SFA).

Decades of work on understanding storage and retrieval have focussed on versions of the Hopfield model (a special form of the Ising model) [1-3]. Hopfield networks have many attractive features: they are auto-associative; that is, memories are recalled from a fragment of their data because the memories are stored in attractors, i.e., metastable states.

However, as in any statistical model at zero temperature there is no mechanism for escaping an attractor: a single memory pattern would exist for all time. To overcome this problem an artificial 'temperature' is introduced in Hopfield models to allow switching. This 'temperature' (i.e., fast random noise in synaptic current) has no obvious biological origin. Thus, despite the elegance of the model, and its utility in computer science, its application to the brain is problematic. Previous efforts to overcome this limitation have used feedback input [4], synaptic depression [5], and adaptive mechanisms [6]. As we will see, SFA allows escape from attractors and, in some cases, acts in the same way as the Hopfield temperature. In addition, it turns off particular memories rather than globally smearing out all of them, as temperature does.

SFA is an activity-induced reduction in neural firing rate caused by a hyperpolarizing current that grows as a neuron fires: neurons that fire a good deal tend to stop firing after a delay. Thus SFA is a natural mechanism to turn off activity. Further, SFA can be controlled by neuromodulators such as Acetylcholine (ACh), an important regulator of neural excitability. ACh causes a reduction in SFA and provides for its dynamic regulation [7, 8]. In previous work [9] we have presented a network model of Hodgkin-Huxley (HH) neurons with SFA and 'Mexican Hat' coupling which reproduces many features of cortical activity as ACh levels change between sleep and waking states. In the present work (below) we use this model to concentrate on memory retrieval. However, we claim that the essentials of our results are quite robust and independent of the details of the neuron model. To show this, we first consider a version of the Hopfield model [1-3] which has SFA.

In our model we consider networks composed of N = 1000 spins, S = {s_i} where s_i = ±1. The network is fully connected with weights σ_{i,j}. As usual, spin up corresponds to a neuron that fires, and spin down to a silent one. Each spin gets an input:

    h_i(t) = Σ_{j=1}^{N} σ_{i,j} s_j − θ_i(t),    (1)

where θ_i(t) is a local offset field at site i which changes slowly in time. The first term is the usual Hopfield-Ising term and the second represents SFA.

The dynamics of the spins are as follows: at each time step a random spin is flipped with probability:

    P(s_i) = 1 / (1 + e^{−2 s_i h_i / T}),    (2)

where T is the noise. In much of what follows we take T to be very small so that P is essentially a step function. The dynamics of θ_i is:

    θ_i(s_i) = A / (1 + e^{−s_i (t̂ − τ_1)/τ_2}).    (3)

Here, t̂ is the time since the last state change of the spin, and τ_{1,2} are time constants which govern the dynamics of attractors. The field θ_i increases to A for up spins and decreases to zero for down spins. The time constant τ_1 is the time to the half-maximum value of θ_i. We take τ_1 = 5 (time steps/N), except for the data in Figure 1 where τ_1 = 1.5 (time steps/N). The rate at which SFA activates/deactivates is controlled by τ_2, which is set to 0.2, except for the data in Figure 4 where τ_2 = 0.6 (time steps/N). This implementation of adaptation is different than others in the Hopfield model [6] because it integrates over a longer time (i.e., considers more than the activity at the previous time step). This more closely resembles adaptation in biophysical models.

In the Hopfield scheme memories are stored as attractors, i.e.
metastable configurations, Ξ^µ = {ξ_i^µ}. We encode attractors using a modified Hebb's rule [1]:

    σ_{i,j} = (1/(NW)) Σ_{µ=1}^{p} w_µ ξ_i^µ ξ_j^µ.    (4)

Each attractor is given a weight, w_µ, and W = Σ_{µ=1}^{p} w_µ. Thus σ_{i,j} = 1 for two spins with correlated activity across all attractors, Ξ^µ, and −1 for spins with anti-correlated activity. We set w_{1,…,p−1} = 0.5 and w_p = 1, except for the data reported in Figure 4D, where all the weights w_i = 1.0. The saturation is defined as α = p/N. Attractors encoded with lower w_µ are weaker attractors.

In order to determine if the dynamics has settled into the various Ξ^µ we measure the overlap between the stored memory and the current state:

    m_µ = (1/N) Σ_{i=1}^{N} s_i ξ_i^µ,    (5)

which is ±1 when S = ±Ξ and 0 when S ⊥ Ξ. In each simulation S was always initialized to a random weak attractor, Ξ_weak.

In the usual Hopfield model the preference for local, global, or no attractors changes as the noise, T, increases [1, 10]. This transition depends on the storage capacity α [1]. We find very similar behavior as we increase the magnitude of SFA, i.e., A. To show this we compare the standard T versus α plot with a plot of A versus α in Figure 1. The right panel shows how T and α interact to affect the stability of the strong and weak attractors and, eventually, to destabilize all attractors. For small T the dynamics keeps the system in a weak attractor (black on the colormap); for larger T the system enters a regime of stability of stronger attractors (white). For large T no attractors are stable (gray).

FIG. 1. SFA and T control attractor stability in Hopfield networks. With noise Hopfield networks have three functional states: stability of local attractors, stability of global attractors, and stability of no attractor. The stability of local versus global versus no attractors is shown by the ability of the system to move from a weak attractor to a strong one, which is quantified by the difference of the m_µ of the strong attractor and the weak attractor in which the system was initialized. (Top) The saturation of memories (α), noise (T), and adaptation (A) affect stability in a similar manner. For low levels of noise local attractors are stable (black). For a given α either increasing A (left) or T (right) leads to a strong attractor being stable (white). Further increase destabilizes all attractors (gray). (Bottom) Example dynamics of memory overlap, m_i, for strongly (black) and weakly (gray) weighted memories. In each case α = 0.01; adaptation levels are (A) A = 0.01, (B) A = 0.05, (C) A = 0.3.

FIG. 2. Attractor stability varies as a function of A. For SFA to induce the network to leave an attractor it must be large enough to overcome the energy barrier of the attractor. Mean field calculations predict a linear relationship between the strength of an attractor and the amount of adaptation, A, needed to destabilize it (A). This is best seen in the Hopfield model (B). The threshold value of A increases linearly as the attractor strength, w_µ, increases. A similar effect is seen in the spiking network model (C).

FIG. 3. SFA and T destabilize attractors of different strengths in Hopfield networks. Solutions to mean field equations illustrate how adaptation and temperature destabilize weak attractors. When A is low many memories are stable (panel A; A = 0.1). Increasing A destabilizes weak memories while preserving strong memories (panel B; A = 0.25). When adaptation is absent temperature has a similar effect, where all memories are stable for low T (panel C; T = 0.2), while only strong memories are stable for high T (panel D; T = 0.5). The dashed line shows m = m; the solid black line shows the mean field equation for w_υ = 0.45; the solid gray line shows the mean field equation for w_υ = 0.75.

SFA has a very similar effect; see Figure 1, left. To see how this comes about, consider the Hamiltonian:
    E = −(1/2) Σ_{i,j} σ_{i,j} s_i s_j + Σ_i θ_i(t) s_i,    (6)

which defines a slowly varying energy landscape. The first term is the ordinary Ising energy, and the second can be thought of as a time-dependent magnetic field that increases (for up spins) to A when t ≫ τ_2. Increasing A destabilizes minima in E; an attractor becomes unstable for t → ∞ if A is large enough. In Figure 2 we show the magnitude of SFA required to cause the system to leave an attractor of a given strength. It increases linearly as attractor strength increases. A similar effect occurs in the HH model, below.

The stability of an attractor of a given weight can also be investigated by mean field theory [10, 11]. The mean field equations for the system are:

    ⟨s_i⟩ = tanh( β [ (1/(NW)) Σ_{j,µ} w_µ ξ_i^µ ξ_j^µ ⟨s_j⟩ − 2θ_i ] ),    (7)

where β = 1/T. By exploiting the fact that ⟨s_i⟩ = m ξ_i^υ the mean field equations can be rewritten as:

    m ξ_i^υ = tanh( β [ (1/(NW)) Σ_{j,µ} w_µ ξ_i^µ ξ_j^µ m ξ_j^υ − 2θ_i ] ).    (8)

If p ≪ N any overlap between memories is negligible, so when the system has been in memory υ for a time ≫ τ_1 the mean field equation becomes:

    m ξ_i^υ = tanh( β (w_υ m ξ_i^υ − 2A) ),    (9)

which can be simplified to m = tanh(β w_υ m − 2βA). When this equation has solutions beyond m = 0, a memory with strength w_υ is stable for a given T or A. Figure 3 shows mean field solutions for memories with strengths w_υ = 0.45 (solid black line) and w_υ = 0.75 (solid gray line). As in the numerical results, adaptation (Figure 3, top panels) and temperature (bottom panels) have similar effects on the stability of memories. For low levels (A = 0.1, T = 0.2; left panels) both strong and weak memories are stable (i.e., both have solutions beyond m = 0), but moderate increases in A or T destabilize weaker memories (A = 0.25, T = 0.5; right panels).

Thus, changes in the strength of SFA can play the same role as changes in T by destabilizing attractors of varying strength as A increases. Interesting time-dependent effects occur for intermediate values of A when the τ's are not too large.
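The stability claims of the mean-field analysis can be checked numerically by iterating the reduced relation m = tanh(β(w_υ m − 2A)). The sketch below is illustrative only: the function names, the temperature T = 0.1 assumed for the adaptation comparison, and the overlap cutoff of 0.5 are our assumptions, not parameters taken from the figures.

```python
import math

def largest_overlap(w, A, T):
    """Iterate the reduced mean-field relation m <- tanh(beta*(w*m - 2*A))
    starting from m = 1. Because tanh is monotone the iteration converges
    to the largest fixed point of the map."""
    beta = 1.0 / T
    m = 1.0
    for _ in range(500):
        m = math.tanh(beta * (w * m - 2.0 * A))
    return m

def memory_is_stable(w, A, T=0.1, threshold=0.5):
    """Call a memory of strength w 'stable' if a large-overlap solution
    survives; the threshold is an illustrative cutoff, not from the paper."""
    return largest_overlap(w, A, T) > threshold
```

With these assumed values the iteration reproduces the qualitative picture above: a weak memory (w_υ = 0.45) survives A = 0.1 but not A = 0.25, while a strong one (w_υ = 0.75) survives both; with A = 0 the same weak memory is stable at T = 0.2 but not at T = 0.5.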
Because θ_i is a function of t we can generate chains of attractor preferences, as opposed to stability in a deep attractor or a random walk (as in the standard Hopfield model for large T). These results are shown in Figure 4. For small A local, weak attractors are stable. A moderate increase leads to strong attractors being stable (Figure 4 A, B). Further increase of A leads to oscillations of period ∼4τ_1; Figure 4 C. This is similar to the latching dynamics found in [6].

FIG. 4. Examples of network dynamics in the modified Hopfield model. For small A even a weak attractor is stable (panel A; A = 0.01). A moderate increase leads to the strong attractor becoming stable (B, A = 0.1). For larger A damped oscillations with period ∼4τ_1 emerge: C, A = 0.4. In panels A and B the gray line corresponds to a weak attractor and the black corresponds to a strong one. In panel C, the lightest gray is the weakest attractor. All other lines are strong attractors of equal weight.

We next turn to a more realistic neuron model to compare the effects of SFA for the two cases. The spiking network model introduced previously [9] considers N_E = 1225 excitatory and N_I = 324 inhibitory HH neurons arrayed on two square lattices of size L_{E/I}. The coupling was of lateral inhibition (Mexican Hat) type, where short range excitation is balanced with global inhibition. All excitatory neurons were connected to neighbors within radius R_xx = sqrt( L_{E/I}² k_xx / (π N_{E/I}) ), where k_ee = 16 is the degree of excitatory to excitatory connections and k_ei = 4 is the degree of excitatory to inhibitory connections. Neural dynamics were modeled by the current balance equation [12]:

    c_m dV_i/dt = −g_Na m_∞³ h (V_i − E_Na) − g_Kdr n⁴ (V_i − E_K) − g_Ks s (V_i − E_K) − g_L (V_i − E_L) − I_syn,i + I_ext.    (10)

In this equation, as we will see, g_Ks sets the magnitude of the SFA; it corresponds to A in the model above. The dynamics of the gating variables h, n, and s is of the form dx/dt = (x_∞(V) − x)/τ_x(V), with the specific evolution of each gating variable set by the two voltage dependent parameters x_∞ and τ_x. The slow potassium current conductance, g_Ks, controls the level of SFA (i.e., lower values of g_Ks correspond to low SFA). The level of ACh modulates g_Ks: the maximum (g_Ks = 1.5 mS/cm²) and minimum (g_Ks = 0) correspond to the absence or maximum of ACh, respectively. For more details see [9].

The synaptic current to neuron i is I_syn,i = g_E(t)(V_i − E_E) + g_I(t)(V_i − E_I) and the dynamics of g_{E/I}(t) is:

    g_{E/I}(t) = K Σ_{j∈E/I} σ_{i,j} ( e^{−(t̃_j − τ_D)/τ_S} − e^{−(t̃_j − τ_D)/τ_F} ),    (11)

where σ_{i,j} is the synaptic weight between neuron i and neuron j, and τ_{S,F} are time constants equal to 3.0 and 0.3 ms, respectively. σ_{i,j} is set to 0.02 mS/cm² unless otherwise stated, t̃_j is the time since the last spike of neuron j, and K is a normalization constant. I_ext is set so that all neurons fire at 10 Hz in the absence of synaptic input and g_Ks. The equations were integrated using the 4th order Runge-Kutta method at a 0.05 ms time step to 20 s. Data points are averages of 20 sets of initial conditions.

We have shown [9] that the nature of the dynamics is that for small g_Ks (large concentrations of ACh) there is a stationary, localized region (a 'bump') of spiking activity. For larger g_Ks the bump travels through the lattice.

To consider memory we introduce spatial attractors by increasing synaptic strength in certain locations [9, 13]. These attractors fix the location of the bump when the dynamics is in the stationary regime. For a single attractor, preference for the attractor falls as g_Ks increases [9]. This model is quite different from the one discussed above: the excitatory coupling is short-ranged, in contrast to the Hopfield σ's which are long-ranged. The attractors here are defined by local geometry. Nevertheless, SFA gives common results for the two cases.

FIG. 5. Attractor preference and g_Ks. The quantity φ is the fraction of time that activity is located within an attractor. Black dashed line, control value. (Top) For initial locations outside any attractor no clear preference emerges for small g_Ks. For moderate g_Ks there is clear preference for the strong attractor. (Bottom) For initial conditions within the weak attractor activity never leaves for small g_Ks. There is significant preference for the strong attractor for moderate g_Ks.

FIG. 6. Strong attractor preference is controlled by inhibition strength in spiking networks. We measure the differential attractor preference for the two attractors by (φ_strong − φ_weak), which is 1 when all activity is located within the strong attractor and −1 when all activity is within the weak. The preference for the strong attractor at moderate SFA disappears when inhibition is decreased.

To consider multiple attractors of variable strength we strengthened synaptic connections in two network regions at opposite ends of the lattice. The strong attractor had 100% stronger excitatory connections and the weak attractor had a 50% increase. To examine how network preference changed as a function of g_Ks in multi-attractor networks we ran simulations where activity was initialized by injecting a 0.25 µA/cm² current into a region outside either of the attractors for the first 0.5 s of the simulation. Preference for a given attractor was quantified by the measure φ, which is the fraction of time that the center of the bump, calculated according to [14], is located within the attractor. For low levels of SFA there is no clear preference, indicating that activity localizes to attractors randomly; Figure 5, top. Increasing g_Ks leads to a clear preference for the strong attractor.
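The bookkeeping behind φ can be sketched in a few lines. This is a minimal sketch under stated assumptions: a periodic L × L sheet, a circular (population-vector) mean in place of the bump-center method of [14], and hypothetical names and test values throughout.

```python
import math

def bump_center(positions, rates, L):
    """Estimate the center of a localized firing 'bump' on an L-by-L
    periodic lattice: each coordinate is mapped to an angle and the
    rate-weighted circular mean is mapped back to lattice units."""
    cx = sx = cy = sy = 0.0
    for (x, y), r in zip(positions, rates):
        ax, ay = 2 * math.pi * x / L, 2 * math.pi * y / L
        cx += r * math.cos(ax); sx += r * math.sin(ax)
        cy += r * math.cos(ay); sy += r * math.sin(ay)
    mx = (math.atan2(sx, cx) % (2 * math.pi)) * L / (2 * math.pi)
    my = (math.atan2(sy, cy) % (2 * math.pi)) * L / (2 * math.pi)
    return mx, my

def phi(centers, attractor, radius, L):
    """Fraction of recorded bump centers lying within `radius`
    (periodic distance) of the attractor location."""
    def wrap(a, b):
        d = abs(a - b) % L
        return min(d, L - d)
    ax, ay = attractor
    inside = sum(
        1 for (x, y) in centers
        if math.hypot(wrap(x, ax), wrap(y, ay)) <= radius
    )
    return inside / len(centers)
```

The periodic wrap matters here: without it, a bump sitting near one lattice edge would be scored as far from an attractor just across the boundary.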
The preference for any attractor disappears for large g_Ks as the network enters the regime of traveling bump dynamics.

To further test the stability of the two attractors we put activity initially on the weak attractor. For small g_Ks activity remained localized there: Figure 5, bottom. For larger g_Ks activity moved to the stronger attractor. This confirms that the strength of SFA can control the stability of attractors having different depths.

We also considered how the relative stability of the attractors depends on the ratio of inhibitory to excitatory coupling, w_{e/i}. Figure 6 is a phase plot of final attractor preference for different w_{e/i} and g_Ks. The relative preference for the two attractors was measured by φ_strong − φ_weak, which ranges between 1 (preference for the strong attractor) and −1 (preference for the weak). Interestingly, weakening inhibition abolishes any preference for the strong attractor at intermediate levels of g_Ks.

Thus, for both models SFA can selectively destabilize attractors, effectively controlling attractor preference. With SFA we can have long-term preferential activation of attractors of different strengths and also non-trivial time dependence of attractor sequences. This provides a biologically plausible mechanism for switching between encoded patterns [10, 15].

In the biophysical model SFA depends on ACh [8, 12, 16]. Our results imply that ACh controls memory retrieval dynamics. Note the relevance of our results to context dependent release of ACh and its role in attention [17]. During tasks requiring a high degree of focus, low SFA allows the brain to fix on the memory that closely fits the current sensory input. On the other hand, with high ACh the attractor can be reinforced by synaptic plasticity. As attention requirements are relaxed, and ACh levels fall, moderate levels of SFA allow for sampling of the memory space; see [18].

The largest variation in cortical ACh levels occurs between sleep/wake states. In this case the highest levels occur during rapid eye movement (REM) sleep and the lowest during slow wave sleep [19]. We argue that intermediate levels of ACh during wake states allow for memory recall when externally driven network states are allowed to wander to find the optimal state. REM sleep is thought to be important for memory consolidation, where retrieval of weakly stored attractors of previous experience is essential to their consolidation. NREM sleep, associated with low ACh levels, is characterized by slow waves and may play a role in synaptic rescaling [20].

JPR was supported by an NSF Graduate Research Fellowship Program under Grant No. DGE 1256260 and a UM Rackham Merit Fellowship. MRZ and LMS were supported by NSF PoLS 1058034.

* [email protected]

[1] D. J. Amit, H. Gutfreund, and H. Sompolinsky, Annals of Physics 173, 30 (1987).
[2] M. E. Hasselmo, B. P. Anderson, and J. M. Bower, Journal of Neurophysiology 67, 1230 (1992).
[3] J. J. Hopfield, Proceedings of the National Academy of Sciences 79, 2554 (1982).
[4] S. Recanatesi, M. Katkov, S. Romani, and M. Tsodyks, Frontiers in Computational Neuroscience 9, 275 (2015).
[5] I. Lerner and O. Shriki, Frontiers in Psychology 5, 314 (2014).
[6] A. Akrami, E. Russo, and A. Treves, Brain Research 1434, 4 (2012).
[7] S. P. Aiken, B. J. Lampe, P. A. Murphy, and B. S. Brown, British Journal of Pharmacology 115, 1163 (1995).
[8] A. C. Tang, A. M. Bartels, and T. J. Sejnowski, Cerebral Cortex 7, 502 (1997).
[9] J. P. Roach, E. Ben-Jacob, L. M. Sander, and M. Zochowski, PLoS Computational Biology 11, e1004449 (2015).
[10] M. Lewenstein and A. Nowak, Physical Review Letters 62, 225 (1989).
[11] D. J. Amit, H. Gutfreund, and H. Sompolinsky, Physical Review A 32, 1007 (1985).
[12] K. M. Stiefel, B. S. Gutkin, and T. J. Sejnowski, Journal of Computational Neuroscience 26, 289 (2008).
[13] A. Renart, P. Song, and X.-J. Wang, Neuron 38, 473 (2003).
[14] L. Bai and D. Breen, Journal of Graphics, GPU, and Game Tools 13, 53 (2008).
[15] R. Monasson and S. Rosay, Physical Review Letters 115, 098101 (2015).
[16] Y. Tsuno, N. W. Schultheiss, and M. E. Hasselmo, The Journal of Physiology 591, 2611 (2013).
[17] M. E. Hasselmo and M. Sarter, Neuropsychopharmacology 36, 52 (2011).
[18] J. J. Hopfield, Proceedings of the National Academy of Sciences 107, 1648 (2010).
[19] J. Vazquez and H. A. Baghdoyan, American Journal of Physiology. Regulatory, Integrative and Comparative Physiology 280, R598 (2001).
[20] G. Tononi and C. Cirelli, Brain Research Bulletin 62, 143 (2003).