
Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design PDF

297 pages · 2019 · 8.573 MB · English

Preview Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design

Nan Zheng
The University of Michigan
Department of Electrical Engineering and Computer Science
Michigan, USA

Pinaki Mazumder
The University of Michigan
Department of Electrical Engineering and Computer Science
Michigan, USA

This edition first published 2020
© 2020 John Wiley & Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Nan Zheng and Pinaki Mazumder to be identified as the authors of this work has been asserted in accordance with law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data
Names: Zheng, Nan, 1989- author. | Mazumder, Pinaki, author.
Title: Learning in energy-efficient neuromorphic computing : algorithm and architecture co-design / Nan Zheng, Pinaki Mazumder.
Description: Hoboken, NJ : Wiley-IEEE Press, [2020] | Includes bibliographical references and index.
Identifiers: LCCN 2019029946 (print) | LCCN 2019029947 (ebook) | ISBN 9781119507383 (cloth) | ISBN 9781119507390 (adobe pdf) | ISBN 9781119507406 (epub)
Subjects: LCSH: Neural networks (Computer science)
Classification: LCC QA76.87 .Z475 2019 (print) | LCC QA76.87 (ebook) | DDC 006.3/2–dc23
LC record available at https://lccn.loc.gov/2019029946
LC ebook record available at https://lccn.loc.gov/2019029947

Cover design: Wiley
Cover image: © monsitj/Getty Images

Set in 10/12pt Warnock Pro by SPi Global, Chennai, India

Learning is the beginning of wealth.
Learning is the beginning of health.
Learning is the beginning of spirituality.
Searching and learning is where the miracle process all begins.
– Emanuel James Rohn

In loving memory of my father Animesh Chandra Mazumder (1918–2011).

Contents

Preface xi
Acknowledgment xix

1 Overview 1
1.1 History of Neural Networks 1
1.2 Neural Networks in Software 2
1.2.1 Artificial Neural Network 2
1.2.2 Spiking Neural Network 3
1.3 Need for Neuromorphic Hardware 3
1.4 Objectives and Outlines of the Book 5
References 8

2 Fundamentals and Learning of Artificial Neural Networks 11
2.1 Operational Principles of Artificial Neural Networks 11
2.1.1 Inference 11
2.1.2 Learning 13
2.2 Neural Network Based Machine Learning 16
2.2.1 Supervised Learning 17
2.2.2 Reinforcement Learning 20
2.2.3 Unsupervised Learning 22
2.2.4 Case Study: Action-Dependent Heuristic Dynamic Programming 23
2.2.4.1 Actor-Critic Networks 24
2.2.4.2 On-Line Learning Algorithm 25
2.2.4.3 Virtual Update Technique 27
2.3 Network Topologies 31
2.3.1 Fully Connected Neural Networks 31
2.3.2 Convolutional Neural Networks 32
2.3.3 Recurrent Neural Networks 35
2.4 Dataset and Benchmarks 38
2.5 Deep Learning 41
2.5.1 Pre-Deep-Learning Era 41
2.5.2 The Rise of Deep Learning 41
2.5.3 Deep Learning Techniques 42
2.5.3.1 Performance-Improving Techniques 42
2.5.3.2 Energy-Efficiency-Improving Techniques 46
2.5.4 Deep Neural Network Examples 50
References 53

3 Artificial Neural Networks in Hardware 61
3.1 Overview 61
3.2 General-Purpose Processors 62
3.3 Digital Accelerators 63
3.3.1 A Digital ASIC Approach 63
3.3.1.1 Optimization on Data Movement and Memory Access 63
3.3.1.2 Scaling Precision 71
3.3.1.3 Leveraging Sparsity 76
3.3.2 FPGA-Based Accelerators 80
3.4 Analog/Mixed-Signal Accelerators 82
3.4.1 Neural Networks in Conventional Integrated Technology 82
3.4.1.1 In/Near-Memory Computing 82
3.4.1.2 Near-Sensor Computing 85
3.4.2 Neural Network Based on Emerging Non-volatile Memory 88
3.4.2.1 Crossbar as a Massively Parallel Engine 89
3.4.2.2 Learning in a Crossbar 91
3.4.3 Optical Accelerator 93
3.5 Case Study: An Energy-Efficient Accelerator for Adaptive Dynamic Programming 94
3.5.1 Hardware Architecture 95
3.5.1.1 On-Chip Memory 95
3.5.1.2 Datapath 97
3.5.1.3 Controller 99
3.5.2 Design Examples 101
References 108

4 Operational Principles and Learning in Spiking Neural Networks 119
4.1 Spiking Neural Networks 119
4.1.1 Popular Spiking Neuron Models 120
4.1.1.1 Hodgkin-Huxley Model 120
4.1.1.2 Leaky Integrate-and-Fire Model 121
4.1.1.3 Izhikevich Model 121
4.1.2 Information Encoding 122
4.1.3 Spiking Neuron versus Non-Spiking Neuron 123
4.2 Learning in Shallow SNNs 124
4.2.1 ReSuMe 124
4.2.2 Tempotron 125
4.2.3 Spike-Timing-Dependent Plasticity 127
4.2.4 Learning Through Modulating Weight-Dependent STDP in Two-Layer Neural Networks 131
4.2.4.1 Motivations 131
4.2.4.2 Estimating Gradients with Spike Timings 131
4.2.4.3 Reinforcement Learning Example 135
4.3 Learning in Deep SNNs 146
4.3.1 SpikeProp 146
4.3.2 Stack of Shallow Networks 147
4.3.3 Conversion from ANNs 148
4.3.4 Recent Advances in Backpropagation for Deep SNNs 150
4.3.5 Learning Through Modulating Weight-Dependent STDP in Multilayer Neural Networks 151
4.3.5.1 Motivations 151
4.3.5.2 Learning Through Modulating Weight-Dependent STDP 151
4.3.5.3 Simulation Results 158
References 167

5 Hardware Implementations of Spiking Neural Networks 173
5.1 The Need for Specialized Hardware 173
5.1.1 Address-Event Representation 173
5.1.2 Event-Driven Computation 174
5.1.3 Inference with a Progressive Precision 175
5.1.4 Hardware Considerations for Implementing the Weight-Dependent STDP Learning Rule 181
5.1.4.1 Centralized Memory Architecture 182
5.1.4.2 Distributed Memory Architecture 183
5.2 Digital SNNs 186
5.2.1 Large-Scale SNN ASICs 186
5.2.1.1 SpiNNaker 186
5.2.1.2 TrueNorth 187
5.2.1.3 Loihi 191
5.2.2 Small/Moderate-Scale Digital SNNs 192
5.2.2.1 Bottom-Up Approach 192
5.2.2.2 Top-Down Approach 193
5.2.3 Hardware-Friendly Reinforcement Learning in SNNs 194
5.2.4 Hardware-Friendly Supervised Learning in Multilayer SNNs 199
5.2.4.1 Hardware Architecture 199
5.2.4.2 CMOS Implementation Results 205
5.3 Analog/Mixed-Signal SNNs 210
5.3.1 Basic Building Blocks 210
5.3.2 Large-Scale Analog/Mixed-Signal CMOS SNNs 211
5.3.2.1 CAVIAR 211
5.3.2.2 BrainScaleS 214
5.3.2.3 Neurogrid 215
5.3.3 Other Analog/Mixed-Signal CMOS SNN ASICs 216
5.3.4 SNNs Based on Emerging Nanotechnologies 216
5.3.4.1 Energy-Efficient Solutions 217
5.3.4.2 Synaptic Plasticity 218
5.3.5 Case Study: Memristor Crossbar Based Learning in SNNs 220
5.3.5.1 Motivations 220
5.3.5.2 Algorithm Adaptations 222
