VLSI FOR NEURAL NETWORKS AND ARTIFICIAL INTELLIGENCE

Edited by

JOSE G. DELGADO-FRIAS
State University of New York at Binghamton
Binghamton, New York

and

WILLIAM R. MOORE
Oxford University
Oxford, United Kingdom

Springer Science+Business Media, LLC

Library of Congress Cataloging-in-Publication Data

VLSI for neural networks and artificial intelligence / edited by Jose G. Delgado-Frias and William R. Moore.
    p. cm.
  "Proceedings of an International Workshop on Artificial Intelligence and Neural Networks, held September 2-4, 1992, in Oxford, United Kingdom"--T.p. verso.
  Includes bibliographical references and index.
  1. Neural networks (Computer science)--Congresses. 2. Artificial intelligence--Congresses. 3. Integrated circuits--Large scale integration--Congresses. I. Delgado-Frias, Jose. II. Moore, William R. III. International Workshop on Artificial Intelligence and Neural Networks (1992, Oxford, England)
  QA76.87.V58 1994
  006.3--dc20  94-31811
  CIP

ISBN 978-1-4899-1333-3
ISBN 978-1-4899-1331-9 (eBook)
DOI 10.1007/978-1-4899-1331-9

Proceedings of an International Workshop on Artificial Intelligence and Neural Networks, held September 2-4, 1992, in Oxford, United Kingdom

© 1994 Springer Science+Business Media New York
Originally published by Plenum Press, New York in 1994
Softcover reprint of the hardcover 1st edition 1994

All rights reserved

No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording, or otherwise, without written permission from the Publisher

PROGRAMME COMMITTEE

Nikolaos Bourbakis, State University of New York at Binghamton, USA
Howard Card, University of Manitoba, Canada
Jose Delgado-Frias, State University of New York at Binghamton, USA
Simon Lavington, University of Essex, UK
Will Moore, University of Oxford, UK
Alan Murray, University of Edinburgh, UK
Lionel Tarassenko, University of Oxford, UK
Stamatis Vassiliadis, IBM, USA
Michel Weinfeld, EP Paris, France

PREFACE

Neural network and artificial intelligence algorithms and computing have increased not only in complexity but also in the number of applications. This in turn has created a tremendous need for computational power that conventional scalar processors, which are oriented towards numeric and data manipulation, may not be able to deliver efficiently. The requirements of neurocomputing (such as non-programmed operation and learning) and of artificial intelligence (such as symbolic manipulation and knowledge representation) impose a different set of constraints and demands on the computer architectures and organizations for these applications. Research and development of new computer architectures and VLSI circuits for neural networks and artificial intelligence has therefore increased in order to meet these performance requirements.

This book presents novel approaches and trends in VLSI implementations of machines for these applications. Papers have been drawn from a number of research communities; the subjects span analog and digital VLSI design, computer design, computer architecture, neurocomputing, and artificial intelligence techniques.

The book has been organized into four subject areas that span its two major themes: analog circuits for neural networks, digital implementations of neural networks, neural networks on multiprocessor systems and applications, and VLSI machines for artificial intelligence. The topics covered in each area are briefly introduced below.
Analog Circuits for Neural Networks

Using analog circuits as a means to compute neural networks offers a number of advantages, such as smaller circuit size, higher computing speed, and lower power dissipation. Neural network learning algorithms may need to be implemented directly in analog hardware in order to reduce their computing time. Card presents a number of analog circuits for on-chip supervised and unsupervised learning. Landolt has developed and implemented in analog CMOS a full Kohonen network with learning capability. Valle et al report an implementation of a back-propagation learning algorithm along with some simulation results. Lafargue et al address the problem of designing a mixed analog-digital circuit for the Boltzmann machine learning rules. Brause introduces the building blocks of a linear neuron that uses an anti-Hebb rule and restricted weights. Raffo et al present a set of functional primitives that perform synaptic and neural functions and are implemented in an analog circuit.

Digital Implementations of Neural Networks

Digital implementations of neural networks provide advantages of their own, such as noise immunity, programmability, higher precision, and reliable storage devices. Delgado-Frias et al present a pipelined bit-serial digital organization that is able to implement a back-propagation learning algorithm. Fornaciari and Salice propose a tree-structured architecture based on a pseudo-neuron approach. Viredaz et al introduce a multi-model neural network computer based on the systolic computing paradigm. Asanović et al present a fully programmable single-chip microprocessor that is intended to be used within a Sun SPARCstation. Ae et al extend the self-organizing system (or Kohonen network) for semantic applications. Hui et al present a probabilistic RAM-based model that incorporates a reward-penalty learning algorithm. Hurdle et al propose an asynchronous design approach for neurocomputing; the CMAC neural model is used to demonstrate the capabilities of the approach.

Neural Networks on Multiprocessor Systems and Applications

König and Glesner present an architecture and implementation of a scalable associative memory system. Ryan et al describe how neural networks can be mapped onto a dataflow computing approach. A custom associative chip that has 64 fully interconnected binary neurons with on-chip learning is presented by Gascuel et al. The use of RISC processors for the implementation of neural network simulations is studied by Rückert et al; they report results from eight different processors. Kolcz and Allinson have studied the implementation of CMAC, a perceptron-like computational structure, using conventional RAMs. Wang and Tsang have developed GENET, a competitive neural network model for solving constraint satisfaction problems. Luk et al introduce a declarative language, called Ruby, that could be used in the development of neural network hardware. Palm et al present a study of the integration of connectionist models and symbolic knowledge processing. Styblinski and Minick present two Tank/Hopfield-like neural network methods for solving linear equations.

VLSI Machines for Artificial Intelligence

Eight papers address a number of current concerns in the hardware support of artificial intelligence processing. Lavington et al present a SIMD approach to the pattern matching that is often used in production systems. Howe and Asanović introduce a system that incorporates 148×36-bit content-addressable parallel processors. Cannataro et al explain a message-passing parallel logic machine that can exploit AND/OR parallelism in logic programs.
Rodohan and Glover outline an alternative search mechanism that can be implemented on the Associative String Processor. De Gloria et al present a performance analysis of a VLIW architecture in which global compaction techniques are used. Civera et al discuss the application of a VLSI Prolog system to a real-time navigation system; requirements, such as processes, memory bandwidth, and inferences per second, are presented. Demarchi et al present the design and a performance evaluation of a parallel architecture for OR-parallel logic programs. Yokota and Seo present a RISC Prolog processor which includes compound instructions and dynamic switch mechanisms.

ACKNOWLEDGMENTS

This book is an edited selection of the papers presented at the International Workshop on VLSI for Neural Networks and Artificial Intelligence which was held at the University of Oxford in September 1992. Our thanks go to all the contributors and especially to the programme committee for all their hard work. Thanks are also due to the ACM-SIGARCH, the IEEE Computer Society, and the IEE for publicizing the event, and to the University of Oxford and SUNY-Binghamton for their active support. We are particularly grateful to Laura Duffy, Maureen Doherty and Anna Morris for coping with the administrative problems.

CONTENTS

ANALOG CIRCUITS FOR NEURAL NETWORKS

Analog VLSI Neural Learning Circuits - A Tutorial  1
  Howard C. Card

An Analog CMOS Implementation of a Kohonen Network with Learning Capability  25
  Oliver Landolt

Back-Propagation Learning Algorithms for Analog VLSI Implementation  35
  Maurizio Valle, Daniele D. Caviglia and Giacomo M. Bisio

An Analog Implementation of the Boltzmann Machine with Programmable Learning Algorithms  45
  V. Lafargue, P. Garda and E. Belhaire

A VLSI Design of the Minimum Entropy Neuron  53
  Rüdiger W. Brause

A Multi-Layer Analog VLSI Architecture for Texture Analysis Isomorphic to Cortical Cells in Mammalian Visual System  61
  Luigi Raffo, Giacomo M. Bisio, Daniele D. Caviglia, Giacomo Indiveri and Silvio P. Sabatini

DIGITAL IMPLEMENTATIONS OF NEURAL NETWORKS

A VLSI Pipelined Neuroemulator  71
  Jose G. Delgado-Frias, Stamatis Vassiliadis, Gerald G. Pechanek, Wei Lin, Steven M. Berber and Hui Ding

A Low Latency Digital Neural Network Architecture  81
  William Fornaciari and Fabio Salice

MANTRA: A Multi-Model Neural-Network Computer  93
  Marc A. Viredaz, Christian Lehmann, François Blayo and Paolo Ienne

SPERT: A Neuro-Microprocessor  103
  Krste Asanović, James Beck, Brian E. D. Kingsbury, Phil Kohn, Nelson Morgan and John Wawrzynek

Design of Neural Self-Organization Chips for Semantic Applications  109
  Tadashi Ae, Reiji Aibara and Kazumasa Kioi

VLSI Implementation of a Digital Neural Network with Reward-Penalty Learning  119
  Terence Hui, Paul Morgan, Hamid Bolouri and Kevin Gurney

Asynchronous VLSI Design for Neural System Implementation  129
  John F. Hurdle, Erik L. Brunvand and Lüli Josephson

NEURAL NETWORKS ON MULTIPROCESSOR SYSTEMS AND APPLICATIONS

VLSI-Implementation of Associative Memory Systems for Neural Information Processing  141
  Andreas König and Manfred Glesner

A Dataflow Approach for Neural Networks  151
  Thomas F. Ryan, Jose G. Delgado-Frias, Stamatis Vassiliadis, Gerald G. Pechanek and Douglas M. Green

A Custom Associative Chip Used as a Building Block for a Software Reconfigurable Multi-Network Simulator  159
  Jean-Dominique Gascuel, Eric Delaunay, Lionel Montoliu, Bahram Moobed and Michel Weinfeld

Parallel Implementation of Neural Associative Memories on RISC Processors  167
  U. Rückert, S. Rüping and E. Naroska

Reconfigurable Logic Implementation of Memory-Based Neural Networks: A Case Study of the CMAC Network  177
  Aleksander R. Kolcz and Nigel M. Allinson
A Cascadable VLSI Design for GENET  187
  Chang J. Wang and Edward P. K. Tsang

Parametrised Neural Network Design and Compilation into Hardware  197
  Wayne Luk, Adrian Lawrence, Vincent Lok, Ian Page and Richard Stamper

Knowledge Processing in Neural Architecture  207
  G. Palm, A. Ultsch, K. Goser and U. Rückert

Two Methods for Solving Linear Equations Using Neural Networks  217
  M. A. Styblinski and Jill R. Minick

VLSI MACHINES FOR ARTIFICIAL INTELLIGENCE

Hardware Support for Data Parallelism in Production Systems  231
  S. H. Lavington, C. J. Wang, N. Kasabov and S. Lin

SPACE: Symbolic Processing in Associative Computing Elements  243
  Denis B. Howe and Krste Asanović

PALM: A Logic Programming System on a Highly Parallel Architecture  253
  Mario Cannataro, Giandomenico Spezzano and Domenico Talia

A Distributed Parallel Associative Processor (DPAP) for the Execution of Logic Programs  265
  Darren P. Rodohan and Raymond J. Glover

Performance Analysis of a Parallel VLSI Architecture for Prolog  275
  Alessandro De Gloria, Paolo Faraboschi and Mauro Olivieri

A Prolog VLSI System for Real Time Applications  285
  Pier Luigi Civera, Guido Masera and Massimo Ruo Roch

An Extended WAM Based Architecture for OR-Parallel Prolog Execution  297
  Danilo Demarchi, Gianluca Piccinini and Maurizio Zamboni

Architecture and VLSI Implementation of a Pegasus-II Prolog Processor  307
  Takashi Yokota and Kazuo Seo

CONTRIBUTORS  317

INDEX  319

ANALOG VLSI NEURAL LEARNING CIRCUITS - A TUTORIAL

Howard C. Card

INTRODUCTION

This paper explains the various models of learning in artificial neural networks which are appropriate for implementation as analog VLSI circuits and systems. We do not cover the wider topic of analog VLSI neural networks in general, but restrict the presentation to circuits which perform in situ learning. Both supervised and unsupervised learning models are included.

There are two principal classes of artificial neural network (ANN) models, as shown in Fig. 1. These may be described as feedforward and feedback models. In the feedforward case, during the classification of an input pattern, information flows unidirectionally from input to output through the various network layers. The best known example of a feedforward ANN is the multilayer perceptron employing the backpropagation of errors, otherwise known as the generalized delta learning rule (Rumelhart et al 1986). Similarly, the best known feedback model is the Hopfield network (Hopfield 1984), which generally employs some version of a Hebbian (Hebb 1949) learning rule. A variation on the feedforward case allows for crosstalk connections within a given layer. This intermediate layer may itself be regarded as a feedback network which is embedded in the otherwise feedforward structure. Networks of this type include competitive learning models (Von der Malsburg 1973) and Kohonen maps (Kohonen 1990). Feedback or recurrent networks exhibit dynamics, and there is a relaxation period after the presentation of a new input pattern. These ANN models and their variations are described in detail in two excellent ...

Figure 1. Two classes of artificial neural network (ANN) models are feedforward (left) and feedback (middle) models. The feedforward case with crosstalk connections within a layer (right) is employed in competitive learning models. The crosstalk layer may be regarded as a feedback network within the larger network.
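For readers who want the two rules named in this introduction in symbols, they can be summarized as follows. This is a minimal sketch in conventional textbook notation; the symbols (the learning rate eta, the error term delta, and so on) are standard usage, not notation drawn from this chapter. For a weight $w_{ij}$ connecting unit $i$ to unit $j$, the generalized delta rule (Rumelhart et al 1986) is

\[
\Delta w_{ij} = \eta\,\delta_j\,x_i, \qquad
\delta_j =
\begin{cases}
(t_j - y_j)\,f'(\mathrm{net}_j) & \text{for output units,}\\
f'(\mathrm{net}_j)\sum_k \delta_k\,w_{jk} & \text{for hidden units,}
\end{cases}
\]

where $x_i$ is the activation of the source unit, $y_j$ the output of unit $j$, $\mathrm{net}_j$ its net input, $t_j$ the supervised target, and $f$ the neuron activation function; the hidden-unit case backpropagates the errors $\delta_k$ of the following layer through the weights $w_{jk}$. The basic Hebbian rule (Hebb 1949) needs no target and simply strengthens a weight in proportion to the correlated activity of the two units it connects:

\[
\Delta w_{ij} = \eta\,x_i\,y_j.
\]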