VLSI for Artificial Intelligence and Neural Networks
410 Pages · 1991 · 31.331 MB · English
VLSI FOR ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS

Edited by
JOSE G. DELGADO-FRIAS
State University of New York at Binghamton, Binghamton, New York
and
WILLIAM R. MOORE
Oxford University, Oxford, United Kingdom

SPRINGER SCIENCE+BUSINESS MEDIA, LLC

Library of Congress Cataloging-in-Publication Data
International Workshop on VLSI for Artificial Intelligence and Neural Networks (1990: Oxford, England). VLSI for artificial intelligence and neural networks / edited by Jose G. Delgado-Frias and William R. Moore. p. cm. "Proceedings of the International Workshop on VLSI for Artificial Intelligence and Neural Networks, held September 5-7, 1990, in Oxford, United Kingdom"--T.p. verso. Includes bibliographical references and index.
ISBN 978-1-4613-6671-3
ISBN 978-1-4615-3752-6 (eBook)
DOI 10.1007/978-1-4615-3752-6
1. Artificial intelligence--Congresses. 2. Neural networks (Computer science)--Congresses. 3. Integrated circuits--Very large scale integration--Congresses. I. Delgado-Frias, Jose G. II. Moore, Will R. III. Title.
Q335.I575 1990 006.3--dc20 91-24010 CIP

Proceedings of the International Workshop on VLSI for Artificial Intelligence and Neural Networks, held September 5-7, 1990, in Oxford, United Kingdom

ISBN 978-1-4613-6671-3
© 1991 Springer Science+Business Media New York
Originally published by Plenum Press in 1991
Softcover reprint of the hardcover 1st edition 1991
All rights reserved
No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording, or otherwise, without written permission from the Publisher

PROGRAMME COMMITTEE
Igor Aleksander, Imperial College, UK
Howard Card, University of Manitoba, Canada
Jose Delgado-Frias, SUNY-Binghamton, USA
Richard Frost, University of Windsor, Canada
Peter Kogge, IBM, USA
Will Moore, University of Oxford, UK
Alan Murray, University of Edinburgh, UK
John Oldfield, Syracuse University, USA
Lionel Tarassenko, University of Oxford, UK
Philip Treleaven, University College London, UK
Benjamin Wah, University of Illinois, USA
Michel Weinfeld, EP Paris, France

PREFACE

This book is an edited selection of the papers presented at the International Workshop on VLSI for Artificial Intelligence and Neural Networks, which was held at the University of Oxford in September 1990. Our thanks go to all the contributors and especially to the programme committee for all their hard work. Thanks are also due to the ACM-SIGARCH, the IEEE Computer Society, and the IEE for publicizing the event, and to the University of Oxford and SUNY-Binghamton for their active support. We are particularly grateful to Anna Morris, Maureen Doherty and Laura Duffy for coping with the administrative problems.

Jose Delgado-Frias
Will Moore
April 1991

PROLOGUE

Artificial intelligence and neural network algorithms/computing have increased in complexity as well as in the number of applications.
This in turn has created a need for far greater computational power than conventional scalar processors, which are oriented towards numeric and data manipulation, can provide. The requirements of artificial intelligence (symbolic manipulation, knowledge representation, non-deterministic computation and dynamic resource allocation) and of the neural network computing approach (non-programmed operation and learning) impose a different set of constraints and demands on computer architectures for these applications. Research and development of novel machines for artificial intelligence and neural networks has therefore increased in order to meet the new performance requirements. This book presents novel approaches and future trends in VLSI implementations of machines for these applications. For the time being these architectures have to be implemented in VLSI technology, with all the benefits and constraints that this implies. Papers have been drawn from a number of research communities; the subjects span VLSI design, computer design, computer architectures, artificial intelligence techniques and neural computing.

This book has five chapters that have been grouped into two major categories: computer architectures for artificial intelligence and hardware support for neural computing. The topics covered here range from symbolic manipulation to connectionism and from programmed systems to learning systems.

Computer architectures for artificial intelligence

In this category there are two chapters: Chapter 1 deals with the impact of artificial intelligence on computer architecture issues, and Chapter 2 covers novel implementations of machines for Prolog processing. Architectural approaches for artificial intelligence processing are covered first.
There is a wide variety of approaches, ranging from a co-processor (§1.5) that alleviates the load on the main processor to multiple-processor architectures such as single instruction multiple data stream (§1.7), multiple instruction multiple data stream (§1.1, 1.3, 1.8) and hybrid MIMD/SIMD machines (§1.2). Other issues that are usually found in hardware implementations of AI are also addressed; these topics include garbage collection (§1.4), parallel matching (§1.6) and path planning (§1.9).

The use of Prolog as a language for artificial intelligence has increased in recent years. As AI applications grow in complexity, high-performance Prolog machines become an urgent requirement. Chapter 2 provides a number of alternatives for Prolog implementations, ranging from extensions to processors (§2.1, 2.2) through specialized VLSI hardware for unification (§2.3) to parallel implementations (§2.4, 2.5, 2.6, 2.7).

Hardware support for neural computing

There are three chapters under this heading; each reflects a different trend in the implementation of neural networks. Chapter 3 deals with pulse-stream neural networks, which tend to mimic more closely the behaviour of biological neurons (§3.1, 3.2, 3.5, 3.6, 3.7), and with analogue approaches, which may reduce the size of the circuitry involved in neural network realizations (§3.3, 3.4, 3.8).

Alternatively, digital implementations of neural networks hold the promise of higher precision. The approaches presented in Chapter 4 include associative methods (§4.2, 4.3), systolic implementations (§4.8, 4.10), bit-serial streams (§4.4), binary networks (§4.5), sparse connections (§4.6) and a neuron-parallel, layer-serial organization (§4.7).

Neural network computing is frequently modelled as a matrix/vector computation. Processor arrays for this type of manipulation may be appropriate, and five different array implementations are presented in Chapter 5.
These implementations include a highly parallel processor architecture (§5.1), a toroidal ring (§5.2), a cellular array (§5.3) and associative processing (§5.4, 5.5).

CONTENTS

1 ARCHITECTURE AND HARDWARE SUPPORT FOR AI PROCESSING
1.1 VLSI Design of a 3-D Highly Parallel Message-Passing Architecture 1
    Jean-Luc Bechennec, Christophe Chanussot, Vincent Neri and Daniel Etiemble
1.2 Architectural Design of the Rewrite Rule Machine Ensemble 11
    Hitoshi Aida, Sany Leinwand and Jose Meseguer
1.3 A Dataflow Architecture for AI 23
    Jose Delgado-Frias, Ardsher Ahmed and Robert Payne
1.4 Incremental Garbage Collection Scheme in KL1 and Its Architectural Support of PIM 33
    Yasunori Kimura, Takashi Chikayama, Tsuyoshi Shinogi and Atsuhiro Goto
1.5 COLIBRI: A Coprocessor for LISP based on RISC 47
    Christian Hafer, Josef Plankl and Franz Josef Schmitt
1.6 A CAM Based Architecture for Production System Machines 57
    Pratibha and P. Dasiewicz
1.7 SIMD Parallelism for Symbol Mapping 67
    Chang Jun Wang and Simon H. Lavington
1.8 Logic Flow in Active Data 79
    Peter S. Sapaty
1.9 Parallel Analogue Computation for Real-Time Path Planning 93
    Lionel Tarassenko, Gillian Marshall, Felipe Gomez-Castaneda and Alan Murray

2 MACHINES FOR PROLOG
2.1 An Extended Prolog Instruction Set for RISC Processors 101
    Andreas Krall
2.2 A VLSI Engine for Structured Logic Programming 109
    Pierluigi Civera, Evelina Lamma, Paola Mello, Antonio Natali, Gianluca Piccinini and Maurizio Zamboni
2.3 Performance Evaluation of a VLSI Associative Unifier in a WAM Based Environment 121
    P. L. Civera, G. Masera, G. L. Piccinini, M. Ruo Roch and M. Zamboni
2.4 A Parallel Incremental Architecture for Prolog Program Execution 133
    Alessandro De Gloria, Paolo Faraboschi and Elio Guidetti
2.5 An Architectural Characterization of Prolog Execution 143
    Mark A. Friedman and Gurindar S. Sohi
2.6 A Prolog Abstract Machine for Content-Addressable Memory 153
    Hamid Bacha
2.7 A Multi-Transputer Architecture for a Parallel Logic Machine 165
    Mario Cannataro, Giandomenico Spezzano and Domenico Talia

3 ANALOGUE AND PULSE STREAM NEURAL NETWORKS
3.1 Computational Capabilities of Biologically-Realistic Analog Processing Elements 175
    Chris Fields, Mark De Yong and Randall Findley
3.2 Analog VLSI Models of Mean Field Networks 185
    Christian Schneider and Howard Card
3.3 An Analogue Neuron Suitable for a Data Frame Architecture 195
    W.A.J. Waller, D.L. Bisset and P.M. Daniell
3.4 Fully Cascadable Analogue Synapses Using Distributed Feedback 205
    Donald J. Baxter, Alan F. Murray and H. Martin Reekie
3.5 Results from Pulse-Stream VLSI Neural Network Devices 215
    Michael Brownlow, Lionel Tarassenko and Alan Murray
3.6 Working Analogue Pulse-Firing Neural Network Chips 225
    Alister Hamilton, Alan F. Murray, H. Martin Reekie and Lionel Tarassenko
3.7 Pulse-Firing VLSI Neural Circuits for Fast Image Pattern Recognition 235
    Stephen Churcher, Alan F. Murray and H. Martin Reekie
3.8 An Analog Circuit with Digital I/O for Synchronous Boltzmann Machines 245
    Patrick Garda and Eric Belhaire

4 DIGITAL IMPLEMENTATIONS OF NEURAL NETWORKS
4.1 The VLSI Implementation of the E Architecture 255
    S. R. Williams and J. G. Cleary
4.2 A Cascadable VLSI Architecture for the Realization of Large Binary Associative Networks 265
    Werner Poechmueller and Manfred Glesner
4.3 Digital VLSI Implementations of an Associative Memory Based on Neural Networks 275
    Ulrich Ruckert, Christian Kleerbaum and Karl Goser
4.4 Probabilistic Bit Stream Neural Chip: Implementation 285
    Max van Daalen, Pete Jeavons and John Shawe-Taylor
4.5 Binary Neural Network with Delayed Synapses 295
    Tadashi Ae, Yasuhiro Mitsui, Satoshi Fujita and Reiji Aibara
4.6 Syntactic Neural Networks in VLSI 305
    Simon Lucas and Bob Damper
4.7 A New Architectural Approach to Flexible Digital Neural Network Chip Systems 315
    Torben Markussen
4.8 A VLSI Implementation of a Generic Systolic Synaptic Building Block for Neural Networks 325
    Christian Lehmann and François Blayo
4.9 A Learning Circuit That Operates by Discrete Means 335
    Paul Cockshott and George Milne
4.10 A Compact and Fast Silicon Implementation for Layered Neural Nets 345
    F. Distante, M.G. Sami, R. Stefanelli and G. Storti-Gajani

5 ARRAYS FOR NEURAL NETWORKS
5.1 A Highly Parallel Digital Architecture for Neural Network Emulation 357
    Dan Hammerstrom
5.2 A Delay-Insensitive Neural Network Engine 367
    C. D. Nielsen, Jorgen Staunstrup and S. R. Jones
5.3 A VLSI Implementation of Multi-Layered Neural Networks: 2-Performance 377
    Bernard Faure and Guy Mazare
5.4 Efficient Implementation of Massive Neural Networks 387
    James Austin, Tom Jackson and Alan Wood
5.5 Implementing Neural Networks with the Associative String Processor 399
    Anargyros Krikelis and Michael Grozinger

CONTRIBUTORS 409
INDEX 411
