List of Contributors

J.A. Guest, Biology and Biochemistry, University of Houston, Houston, TX, USA
C. Hart, Neurobiology and Anatomy, Drexel University College of Medicine, 2900 Queen Lane, Philadelphia, PA 19129, USA
M. Hawken, Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
M.R. Hinder, Perception and Motor Systems Laboratory, School of Human Movement Studies, University of Queensland, Brisbane, Queensland 4072, Australia
G.E. Hinton, Department of Computer Science, University of Toronto, 10 Kings College Road, Toronto, M5S 3G4, Canada
A. Ijspeert, School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Station 14, CH-1015 Lausanne, Switzerland
J.F. Kalaska, GRSNC, Département de Physiologie, Faculté de Médecine, Pavillon Paul-G. Desmarais, Université de Montréal, C.P. 6128, Succursale Centre-ville, Montréal, QC H3C 3J7, Canada
M. Kerszberg, Université Pierre et Marie Curie, Modélisation Dynamique des Systèmes Intégrés, UMR CNRS 7138 (Systématique, Adaptation, Évolution), 7 Quai Saint Bernard, 75252 Paris Cedex 05, France
U. Knoblich, Center for Biological and Computational Learning, McGovern Institute for Brain Research, Computer Science and Artificial Intelligence Laboratory, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, 43 Vassar Street #46-5155B, Cambridge, MA 02139, USA
C. Koch, Division of Biology, California Institute of Technology, MC 216-76, Pasadena, CA 91125, USA
J.H. Kotaleski, Computational Biology and Neurocomputing, School of Computer Science and Communication, Royal Institute of Technology, SE 10044 Stockholm, Sweden
M. Kouh, Center for Biological and Computational Learning, McGovern Institute for Brain Research, Computer Science and Artificial Intelligence Laboratory, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, 43 Vassar Street #46-5155B, Cambridge, MA 02139, USA
A. Kozlov, Computational Biology and Neurocomputing, School of Computer Science and Communication, Royal Institute of Technology, SE 10044 Stockholm, Sweden
J.W. Krakauer, The Motor Performance Laboratory, Department of Neurology, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA
G. Kreiman, Department of Ophthalmology and Neuroscience, Children's Hospital Boston, Harvard Medical School and Center for Brain Science, Harvard University
N.I. Krouchev, GRSNC, Département de Physiologie, Faculté de Médecine, Pavillon Paul-G. Desmarais, Université de Montréal, C.P. 6128, Succursale Centre-ville, Montréal, QC H3C 3J7, Canada
I. Kurtzer, Centre for Neuroscience Studies, Queen's University, Kingston, ON K7L 3N6, Canada
A. Lansner, Computational Biology and Neurocomputing, School of Computer Science and Communication, Royal Institute of Technology, SE 10044 Stockholm, Sweden
P.E. Latham, Gatsby Computational Neuroscience Unit, London WC1N 3AR, UK
M.F. Levin, Center for Interdisciplinary Research in Rehabilitation, Rehabilitation Institute of Montreal and Jewish Rehabilitation Hospital, Laval, QC, Canada
J. Lewi, Georgia Institute of Technology, Atlanta, GA, USA
Y. Liang, Biology and Biochemistry, University of Houston, Houston, TX, USA
W.J. Ma, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
K.M. MacLeod, Department of Biology, University of Maryland, College Park, MD 20742, USA
L. Maler, Department of Cell and Molecular Medicine and Center for Neural Dynamics, University of Ottawa, 451 Smyth Rd, Ottawa, ON K1H 8M5, Canada
E. Marder, Volen Center MS 013, Brandeis University, 415 South St., Waltham, MA 02454-9110, USA
S.N. Markin, Department of Neurobiology and Anatomy, Drexel University College of Medicine, 2900 Queen Lane, Philadelphia, PA 19129, USA
N.Y. Masse, Department of Physiology, McGill University, 3655 Sir William Osler, Montreal, QC H3G 1Y6, Canada
D.A. McCrea, Spinal Cord Research Centre and Department of Physiology, University of Manitoba, 730 William Avenue, Winnipeg, MB R3E 3J7, Canada
A. Menciassi, CRIM Laboratory, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy
T. Mergner, Neurological University Clinic, Neurocenter, Breisacher Street 64, 79106 Freiburg, Germany
T.E. Milner, School of Kinesiology, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
P. Mohajerian, Computer Science and Neuroscience, University of Southern California, Los Angeles, CA 90089-2905, USA
L. Paninski, Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
V. Patil, Neurobiology and Anatomy, Drexel University College of Medicine, 2900 Queen Lane, Philadelphia, PA 19129, USA
J.F.R. Paton, Department of Physiology, School of Medical Sciences, University of Bristol, Bristol BS8 1TD, UK
J. Pillow, Gatsby Computational Neuroscience Unit, University College London, Alexandra House, 17 Queen Square, London WC1N 3AR, UK
T. Poggio, Center for Biological and Computational Learning, McGovern Institute for Brain Research, Computer Science and Artificial Intelligence Laboratory, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, 43 Vassar Street #46-5155B, Cambridge, MA 02139, USA
A. Pouget, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
A. Prochazka, Centre for Neuroscience, 507 HMRC, University of Alberta, Edmonton, AB T6G 2S2, Canada
R. Rohrkemper, Physics Department, Institute of Neuroinformatics, Swiss Federal Institute of Technology, Zürich CH-8057, Switzerland
I.A. Rybak, Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, PA 19129, USA
A. Sangole, Center for Interdisciplinary Research in Rehabilitation, Rehabilitation Institute of Montreal and Jewish Rehabilitation Hospital, Laval, QC, Canada
S. Schaal, Computer Science and Neuroscience, University of Southern California, Los Angeles, CA 90089-2905, USA
S.H. Scott, Centre for Neuroscience Studies, Department of Anatomy and Cell Biology, Queen's University, Botterell Hall, Kingston, ON K7L 3N6, Canada
T. Serre, Center for Biological and Computational Learning, McGovern Institute for Brain Research, Computer Science and Artificial Intelligence Laboratory, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, 43 Vassar Street #46-5155B, Cambridge, MA 02139, USA
R. Shadmehr, Laboratory for Computational Motor Control, Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA
R. Shapley, Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
J.C. Smith, Cellular and Systems Neurobiology Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892-4455, USA
C. Stefanini, CRIM Laboratory, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy
J.A. Taylor, Department of Biomedical Engineering, Washington University, 1 Brookings Dr., St Louis, MO 63130, USA
K.A. Thoroughman, Department of Biomedical Engineering, Washington University, 1 Brookings Dr., St Louis, MO 63130, USA
L.H. Ting, The Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University, 313 Ferst Drive, Atlanta, GA 30332-0535, USA
A.-E. Tobin, Volen Center MS 013, Brandeis University, 415 South St., Waltham, MA 02454-9110, USA
E. Torres, Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA
J.Z. Tsien, Center for Systems Neurobiology, Departments of Pharmacology and Biomedical Engineering, Boston University, Boston, MA 02118, USA
D. Tweed, Departments of Physiology and Medicine, University of Toronto, 1 King's College Circle, Toronto, ON M5S 1A8, Canada
D.B. Walther, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Ave., Urbana, IL 61801, USA
A.C. Wilhelm, Department of Physiology, McGill University, 3655 Sir William Osler, Montreal, QC H3G 1Y6, Canada
D. Xing, Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
S. Yakovenko, Département de Physiologie, Université de Montréal, Pavillon Paul-G. Desmarais, C.P. 6128, Succ. Centre-ville, Montréal, QC H3C 3J7, Canada
D. Zipser, Department of Cognitive Science, UCSD 0515, 9500 Gilman Drive, San Diego, CA 92093, USA

Preface

In recent years, computational approaches have become an increasingly prominent and influential part of neuroscience research. From the cellular mechanisms of synaptic transmission and the generation of action potentials, to interactions among networks of neurons, to the high-level processes of perception and memory, computational models provide new sources of insight into the complex machinery which underlies our behaviour. These models are not merely mathematical surrogates for experimental data. More importantly, they help us to clarify our understanding of a particular nervous system process or function, and to guide the design of our experiments by obliging us to express our hypotheses in a language of mathematical formalisms. A mathematical model is an explicit hypothesis, in which we must incorporate all of our beliefs and assumptions in a rigorous and coherent conceptual framework that is subject to falsification and modification. Furthermore, a successful computational model is a rich source of predictions for future experiments. Even a simplified computational model can offer insights that unify phenomena across different levels of analysis, linking cells to networks and networks to behaviour.
Over the last few decades, more and more experimental data have been interpreted from computational perspectives, new courses and graduate programs have been developed to teach computational neuroscience methods, and a multitude of interdisciplinary conferences and symposia have been organized to bring mathematical theorists and experimental neuroscientists together.

This book is the result of one such symposium, held at the Université de Montréal on May 8–9, 2006 (see: http://www.grsnc.umontreal.ca/XXVIIIs). It was organized by the Groupe de Recherche sur le Système Nerveux Central (GRSNC) as one of a series of annual international symposia held on a different topic each year. This was the first symposium in that annual series that focused on computational neuroscience, and it included presentations by some of the pioneers of computational neuroscience as well as prominent experimental neuroscientists whose research is increasingly integrated with computational modelling. The symposium was a resounding success, and it made clear to us that computational models have become a major and very exciting aspect of neuroscience research. Many of the participants at that meeting have contributed chapters to this book, including symposium speakers and poster presenters. In addition, we invited a number of other well-known computational neuroscientists, who could not participate in the symposium itself, to also submit chapters. Of course, a collection of 34 chapters cannot cover more than a fraction of the vast range of computational approaches which exist.
We have done our best to include work pertaining to a variety of neural systems, at many different levels of analysis, from the cellular to the behavioural, from approaches intimately tied with neural data to more abstract algorithms of machine learning. The result is a collection which includes models of signal transduction along dendrites, circuit models of visual processing, computational analyses of vestibular processing, theories of motor control and learning, machine algorithms for pattern recognition, as well as many other topics. We asked all of our contributors to address their chapters to a broad audience of neuroscientists, psychologists, and mathematicians, and to focus on the broad theoretical issues which tie these fields together.

The conference, and this book, would not have been possible without the generous support of the GRSNC, the Canadian Institute of Advanced Research (CIAR), the Institute of Neuroscience, Mental Health and Addiction (INMHA) of the Canadian Institutes of Health Research (CIHR), the Fonds de la Recherche en Santé Québec (FRSQ), and the Université de Montréal. We gratefully acknowledge these sponsors as well as our contributing authors who dedicated their time to present their perspectives on the computational principles which underlie our sensations, thoughts, and actions.

Paul Cisek
Trevor Drew
John F. Kalaska

P. Cisek, T. Drew & J.F. Kalaska (Eds.), Progress in Brain Research, Vol. 165, ISSN 0079-6123, Copyright © 2007 Elsevier B.V. All rights reserved

CHAPTER 1

The neuronal transfer function: contributions from voltage- and time-dependent mechanisms

Erik P. Cook1,*, Aude C. Wilhelm1, Jennifer A. Guest2, Yong Liang2, Nicolas Y. Masse1 and Costa M. Colbert2

1 Department of Physiology, McGill University, 3655 Sir William Osler, Montreal, QC H3G 1Y6, Canada
2 Biology and Biochemistry, University of Houston, Houston, TX, USA

* Corresponding author. Tel.: +1 514 398 7691; Fax: +1 514 398 8241; E-mail: [email protected]
DOI: 10.1016/S0079-6123(06)65001-2

Abstract: The discovery that an array of voltage- and time-dependent channels is present in both the dendrites and soma of neurons has led to a variety of models for single-neuron computation. Most of these models, however, are based on experimental techniques that use simplified inputs of either single synaptic events or brief current injections. In this study, we used a more complex time-varying input to mimic the continuous barrage of synaptic input that neurons are likely to receive in vivo. Using dual whole-cell recordings of CA1 pyramidal neurons, we injected long-duration white-noise current into the dendrites. The amplitude variance of this stimulus was adjusted to produce either low subthreshold or high suprathreshold fluctuations of the somatic membrane potential. Somatic action potentials were produced in the high-variance input condition. Applying a rigorous system-identification approach, we discovered that the neuronal input/output function was extremely well described by a model containing a linear bandpass filter followed by a nonlinear static-gain. Using computer models, we found that a range of voltage-dependent channel properties can readily account for the experimentally observed filtering in the neuronal input/output function. In addition, the bandpass signal processing of the neuronal input/output function was determined by the time-dependence of the channels. A simple active channel, however, could not account for the experimentally observed change in gain. These results suggest that nonlinear voltage- and time-dependent channels contribute to the linear filtering of the neuronal input/output function and that channel kinetics shape temporal signal processing in dendrites.

Keywords: dendrite; integration; hippocampus; CA1; channel; system-identification; white noise

The neuronal input/output function

What are the rules that single neurons use to process synaptic input? Put another way, what is the neuronal input/output function? Revealing the answer to this question is central to the larger task of understanding information processing in the brain. The past two decades of research have significantly increased our knowledge of how neurons integrate synaptic input, including the finding that dendrites contain nonlinear voltage- and time-dependent mechanisms (for review, see Johnston et al., 1996). However, there is still no consensus on the precise structure of the rules for synaptic integration.

Early theoretical models of neuronal computation described the neuronal input/output function as a static summation of the synaptic inputs (McCulloch and Pitts, 1943). Rall later proposed that cable theory could account for the passive electrotonic properties of dendritic processing (Rall, 1959). This passive theory of dendritic integration has been extremely useful because it encompasses both the spatial and temporal aspects of the neuronal input/output function using a single quantitative framework. For example, the passive model predicts that the temporal characteristics of dendrites are described by a lowpass filter with a cutoff frequency that is inversely related to the distance from the soma.

The recent discovery that dendrites contain a rich collection of time- and voltage-dependent channels has renewed and intensified the study of dendritic signal processing at the electrophysiological level (for reviews, see Hausser et al., 2000; Magee, 2000; Segev and London, 2000; Reyes, 2001; London and Hausser, 2005). The central goal of this effort has been to understand how these active mechanisms augment the passive properties of dendrites. These studies, however, have produced somewhat conflicting results as to whether dendrites integrate synaptic inputs in a linear or nonlinear fashion (Urban and Barrionuevo, 1998; Cash and Yuste, 1999; Nettleton and Spain, 2000; Larkum et al., 2001; Wei et al., 2001; Tamas et al., 2002; Williams and Stuart, 2002). The focus of past electrophysiological studies has also been to identify the conditions in which dendrites initiate action potentials (Stuart et al., 1997; Golding and Spruston, 1998; Larkum and Zhu, 2002; Ariav et al., 2003; Gasparini et al., 2004; Womack and Khodakhah, 2004), to understand how dendrites spatially and temporally integrate inputs (Magee, 1999; Polsky et al., 2004; Williams, 2004; Gasparini and Magee, 2006; Nevian et al., 2007), and to reveal the extent of local dendritic computation (Mel, 1993; Hausser and Mel, 2003; Williams and Stuart, 2003).

Although these past studies have shed light on many aspects of single-neuron computation, most studies have focused on quiescent neurons in vitro. A common experimental technique is to observe how dendrites process brief "single-shock" inputs, either a single EPSP or the equivalent dendritic current injection, applied with no background activity present (but see Larkum et al., 2001; Oviedo and Reyes, 2002; Ulrich, 2002; Oviedo and Reyes, 2005; Gasparini and Magee, 2006). Based on the average spike rate of central neurons, it is unlikely that dendrites receive single synaptic inputs in isolation. A more likely scenario is that dendrites receive constant time-varying excitatory and inhibitory synaptic input that together produces random fluctuations in the membrane potential (Ferster and Jagadeesh, 1992; Destexhe and Pare, 1999; Chance et al., 2002; Destexhe et al., 2003; Williams, 2004). The challenge is to incorporate this type of temporally varying input into our study of the neuronal input/output function. Fortunately, system-identification theory provides us with several useful tools for addressing this question.

Using a white-noise input to reveal the neuronal input/output function

The field of system-identification theory has developed rigorous methods for describing the input/output relationships of unknown systems (for reviews, see Marmarelis and Marmarelis, 1978; Sakai, 1992; Westwick and Kearney, 2003) and has been used to describe the relationship between external sensory inputs and neuronal responses in a variety of brain areas (for reviews, see Chichilnisky, 2001; Wu et al., 2006). A prominent tool in system-identification is the use of a "white-noise" stimulus to characterize the system. Such an input theoretically contains all temporal correlations and power at all frequencies. If the unknown system is linear, or slightly nonlinear, it is a straightforward process to extract a description of the system by correlating the output with the random input stimulus. If the unknown system is highly nonlinear, however, this approach is much more difficult.

One difficulty of describing the input/output function of a single neuron is that we lack precise statistical descriptions of the inputs neurons receive over time. Given that a typical pyramidal neuron has over ten thousand synaptic contacts, one might reasonably estimate that an input arrives on the dendrites every millisecond or less, producing membrane fluctuations that are constantly varying in time. Thus, using a white-noise input has two advantages: (1) it affords the use of quantitative methods for identifying the dendrite input/output function and (2) it may represent a stimulus that is statistically closer to the type of input dendrites receive in vivo.
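The correlation-based identification described above can be sketched in a few lines. The code below is an illustrative toy, not the authors' analysis: it drives a made-up exponential low-pass filter (standing in for a passive dendrite-to-soma pathway; time constant and all other parameters are invented) with zero-mean Gaussian white noise and then recovers the filter's impulse response by cross-correlating the output with the input.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3                        # 1 ms time step
n = 50_000                       # 50 s of samples, matching the recording length
sigma = 1.0                      # standard deviation of the injected "current"
x = rng.normal(0.0, sigma, n)    # zero-mean Gaussian white-noise input

# A known test system: an exponential low-pass filter with a 20 ms time
# constant (values invented for illustration).
taps = 200
h_true = np.exp(-np.arange(taps) * dt / 0.02)
h_true /= h_true.sum()

y = np.convolve(x, h_true)[:n]   # system output (the "somatic potential")

# For white noise, E[y(t) x(t - k)] = sigma^2 * h(k), so cross-correlating
# the output with the input recovers the impulse response.
h_est = np.array([np.dot(y[k:], x[:n - k]) for k in range(taps)])
h_est /= n * sigma**2
```

With 50 s of data the estimate matches the true filter up to a small amount of estimation noise; the same cross-correlation applied to recorded responses gives a first estimate of a neuron's linear filtering.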
We applied a system-identification approach to reveal the input/output function of hippocampal CA1 pyramidal neurons in vitro (Fig. 1). We used standard techniques to perform dual whole-cell patch clamp recordings in brain slices (Colbert and Pan, 2002). More specifically, we injected 50 s of white-noise current (I_d) into the dendrites with one electrode and measured the membrane potential at the soma (V_s) with a second electrode. The amplitude distribution of the injected current was Gaussian with zero mean. Electrode separation ranged from 125 to 210 μm with the dendrite electrode placed on the main proximal apical dendritic branch. Figure 1 illustrates a short segment of the white-noise stimulus and the corresponding somatic membrane potentials.

To examine how the input/output function changed with different input conditions, we alternately changed the variance of the input current between low and high values. The low-variance input produced small subthreshold fluctuations in the somatic membrane potential. In contrast, the high-variance input produced large fluctuations that caused the neurons to fire action potentials with an average rate of 0.9 spikes/s. This rate of firing was chosen because it is similar to the average firing rate of CA1 hippocampal neurons in vivo (Markus et al., 1995; Yoganarasimha et al., 2006). Thus, we examined the dendrite-to-soma input/output function under physiologically reasonable subthreshold and suprathreshold operating regimes.

Fig. 1. Using a system-identification approach to characterize the dendrite-to-soma input/output function. (A) Fifty seconds of zero-mean Gaussian distributed random current (I_d) was injected into the proximal apical dendrites of CA1 pyramidal neurons and the membrane potential (V_s) was recorded at the soma. The variance of the injected current was switched between low (bottom traces) and high (top traces) on alternate trials. Action potentials were produced with the high-variance input. (B) An LN model was fit to the somatic potential. The input to the model was the injected current and the output of the model was the predicted soma potential (V̂_S). The LN model was composed of a linear filter that was convolved with the input current followed by a static-gain function. The output of the linear filter, F (arbitrary units), was scaled by the static-gain function to produce the predicted somatic potential. The static-gain function was modeled as a quadratic function of F.

The LN model

Figure 1 illustrates our approach for describing the input/output function of the neuron using an LN model (Hunter and Korenberg, 1986). This is a functional model that provides an intuitive description of the system under study and has been particularly useful for capturing temporal processing in the retina in response to random visual inputs (for reviews, see Meister and Berry, 1999; Chichilnisky, 2001) and the processing of current injected at the soma of neurons (Bryant and Segundo, 1976; Poliakov et al., 1997; Binder et al., 1999; Slee et al., 2005). The LN model is a cascade of two processing stages: the first stage is a filter (the "L" stage) that linearly convolves the input current I_d. The output of the linear filter, F, is the input to the nonlinear second stage (the "N" stage) that converts the output of the linear filter into the predicted somatic potentials (V̂_S). This second stage is static and can be viewed as capturing the gain of the system. The two stages of the LN model are represented mathematically as

    F = H * I_d
    V̂_S = G(F)          (1)

where H is a linear filter, * the convolution operator, and G a quadratic static-gain function.

Having two stages of processing is an important aspect of the model because it allows us to separate temporal processing from gain control. The linear filter describes the temporal processing while the nonlinear static-gain captures amplitude-dependent changes in gain. Thus, this functional model permits us to describe the neuronal input/output function using quantitatively precise terms such as filtering and gain control. In contrast, highly detailed biophysical models of single neurons, with their large number of nonlinear free parameters, are less likely to provide such a functionally clear description of single-neuron computation.

It is important to note that we did not seek to describe the production of action potentials in the dendrite-to-soma input/output function. Action potentials are extremely nonlinear events and would not be captured by the LN model. We instead focused on explaining the subthreshold fluctuations of the somatic voltage potential. Thus, action potentials were removed from the somatic potential before the data were analyzed. This was accomplished by linearly interpolating the somatic potential from 1 ms before the occurrence of the action potential to either 5 or 10 ms after the action potential. Because action potentials make up a very small part of the 50 s of data (typically less than 2%), our results were not qualitatively affected when the spikes were left in place during the analysis.

The LN model accounts for the dendrite-to-soma input/output function

Using standard techniques, we fit the LN model to reproduce the recorded somatic potential in response to the injected dendritic current (Hunter and Korenberg, 1986). We wanted to know how the low and high variance input conditions affected the components of the LN model. Therefore, these conditions were fit separately. An example of the LN model's ability to account for the neuronal input/output function is shown in Fig. 2. For this neuron, the LN model's predicted somatic membrane voltage (V̂_S; dashed line) almost perfectly overlapped the neuron's actual somatic potential (V_s, thick gray line) for both input conditions (Fig. 2A and B). The LN model was able to fully describe the somatic potentials in response to the random input current with very little error. Computing the Pearson's correlation coefficient over the entire 50 s of data, the LN model accounted for greater than 97% of the variance of this neuron's somatic potential.

Repeating this experiment in 11 CA1 neurons, the LN model accounted for practically all of the somatic membrane potential (average R² > 0.97). Both the low and high variance input conditions were captured equally well by the LN model. Thus, the LN model is a functional model that describes the neuronal input/output function over a range of input regimes from low-variance subthreshold to high-variance suprathreshold stimulation.

Gain but not filtering adapts to the input variance

The LN model's linear filters and nonlinear static-gain functions are shown for our example neuron in Fig. 2C and D. The impulse-response function of the linear filters (Fig. 2C) for both the low (solid line) and high (dashed line) variance inputs had pronounced negativities corresponding to a bandpass in the 1–10 Hz frequency range (inset). Although the two input conditions were significantly different, the filters for the low- and high-variance inputs were very similar. Across our population of neurons, we found no systematic change in the linear filters as the input variance was varied between low and high levels. Therefore, the temporal processing performed by CA1 pyramidal neurons on inputs arriving at the proximal apical dendrites does not change with the input variance.

In contrast to the filtering properties of CA1 neurons, the static-gain function changed as a function of input variance. Figure 2D illustrates the static-gain function for both input conditions. In this plot, the resting membrane potential corresponds to 0 mV and the units for the output of the linear filter (F) are arbitrary.

Fig. 2. The dendrite-to-soma input/output function of a CA1 neuron is well described by the LN model. (A) Example of 500 ms of the input current and somatic potential for the low-variance input. The predicted somatic membrane potential of the LN model (V̂_S; dashed line) overlaps the recorded somatic potential (V_s, thick gray line). (B) Example of the LN model's fit to the high-variance input. Action potentials were removed from the recorded somatic potential before fitting the LN model to the data. (C) The impulse-response function of the linear filters for the optimized LN model corresponding to the low (solid line) and high (dashed line) variance inputs. Inset is the frequency response of the filters. (D) Static-gain function for the optimized LN model plotted for the low (solid line) and high (dashed line) variance inputs. The axes for the high variance input were appropriately scaled so that the slope of both static-gain functions could be compared.
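Equation (1) can be turned into a few lines of code. The sketch below is a toy reconstruction under invented parameters (the filter shape and quadratic coefficients are not from the paper): it builds an LN "neuron" from a bandpass-like filter and a compressive quadratic static-gain, then refits the static-gain stage by least squares and checks the variance accounted for.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
n = 50_000
i_d = rng.normal(0.0, 1.0, n)        # zero-mean Gaussian white-noise current

# "L" stage: a bandpass-like impulse response (difference of exponentials);
# the shape and time constants are invented for illustration.
t = np.arange(300) * dt
h = np.exp(-t / 0.015) - np.exp(-t / 0.050)

f = np.convolve(i_d, h)[:n]          # filter output F (arbitrary units)

# "N" stage: a compressive quadratic static-gain, V_S = G(F); coefficients
# are made up, but mimic reduced gain at depolarized values.
v_s = 1.0 * f - 0.02 * f**2          # "recorded" somatic potential (rest = 0 mV)

# Given F, the static-gain stage is recovered by a least-squares quadratic fit.
a, b, c = np.polyfit(f, v_s, 2)
v_hat = np.polyval([a, b, c], f)     # LN-model prediction of the somatic potential

# Fraction of variance accounted for (the paper reports R^2 > 0.97).
r2 = 1.0 - np.var(v_s - v_hat) / np.var(v_s)
```

In the experiments the filter H and gain G were of course fit jointly to the recorded V_s (Hunter and Korenberg, 1986); here the filter output is assumed known to keep the sketch short.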
The static-gain function for the low-variance input was a straight line, indicating that the neuronal input/output function was linear. For the high-variance input, however, the static-gain function demonstrated two important nonlinearities. First, the static-gain function showed a compressive nonlinearity at depolarized potentials. Thus, at large depolarizing potentials, there was a reduction in the gain of the input/output relationship. Second, there was a general reduction in slope of the static-gain function for the high-variance input compared with the low-variance slope, indicating an overall reduction in gain. Thus, for this neuron, increasing the variance of the input reduced the gain of the input/output function at rest, and the gain was reduced further at depolarizing potentials.

Across our population of 11 neurons, we found that increasing the variance of the input reduced the gain of CA1 neurons by an average of 16% at the resting membrane potential. This reduction in gain also increased with both hyperpolarized and depolarized potentials. Adapting to the variance of an input is an important form of gain control because it ensures that the input stays within the operating range of the neuron. Although a 16% reduction may seem small in comparison to the large change in the input variance, there are many instances where small changes in neuronal activity are related to significant changes in behavior. For visual cortical neurons, it has been shown that small changes in spike activity (<5%) are correlated with pronounced changes in perceptual abilities (Britten et al., 1996; Dodd et al., 2001; Cook and Maunsell, 2002; Uka and DeAngelis, 2004; Purushothaman and Bradley, 2005). Thus, even small modulations of neuronal activity can have large effects on behavior.
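The gain comparison in this section reduces to comparing slopes of the fitted quadratics. A minimal sketch, with coefficients invented purely to reproduce a 16% drop at rest (the paper reports only the average reduction, not the fitted values):

```python
import numpy as np

# Hypothetical quadratic static-gain fits G(F) = a*F^2 + b*F + c for the two
# input conditions; all coefficients here are made up for illustration.
g_low = np.array([-0.02, 1.00, 0.0])    # a, b, c for the low-variance input
g_high = np.array([-0.05, 0.84, 0.0])   # a, b, c for the high-variance input

def gain(coeffs, f):
    """Local gain dG/dF = 2*a*F + b, evaluated at filter output F."""
    a, b, _ = coeffs
    return 2.0 * a * f + b

# At rest F = 0, so the gain is simply the linear coefficient b.
reduction = 1.0 - gain(g_high, 0.0) / gain(g_low, 0.0)
print(f"gain reduction at rest: {100 * reduction:.0f}%")
```

Because the quadratic term makes the slope fall as F grows, the same comparison at depolarized F values yields an even larger reduction, matching the compressive nonlinearity described above.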