Plausible Neural Networks for Biological Modelling
263 Pages · 2001 · 8.601 MB · English
Plausible Neural Networks for Biological Modelling

MATHEMATICAL MODELLING: Theory and Applications, Volume 13

This series is aimed at publishing work dealing with the definition, development and application of fundamental theory and methodology, computational and algorithmic implementations and comprehensive empirical studies in mathematical modelling. Work on new mathematics inspired by the construction of mathematical models, combining theory and experiment and furthering the understanding of the systems being modelled, is particularly welcomed. Manuscripts to be considered for publication lie within the following, non-exhaustive list of areas: mathematical modelling in engineering, industrial mathematics, control theory, operations research, decision theory, economic modelling, mathematical programming, mathematical system theory, geophysical sciences, climate modelling, environmental processes, mathematical modelling in psychology, political science, sociology and behavioural sciences, mathematical biology, mathematical ecology, image processing, computer vision, artificial intelligence, fuzzy systems and approximate reasoning, genetic algorithms, neural networks, expert systems, pattern recognition, clustering, chaos and fractals. Original monographs and comprehensive surveys as well as edited collections will be considered for publication.

Editor: R. Lowen (Antwerp, Belgium)

Editorial Board: J.-P. Aubin (Université de Paris IX, France), E. Jouini (University of Paris 1 and ENSAE, France), G.J. Klir (New York, U.S.A.), J.-L. Lions (Paris, France), P.G. Mezey (Saskatchewan, Canada), F. Pfeiffer (München, Germany), A. Stevens (Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany), H.-J. Zimmermann (Aachen, Germany)

The titles published in this series are listed at the end of this volume.

Plausible Neural Networks for Biological Modelling

Edited by

Henk A.K. Mastebroek
Department of Neurobiophysics and Biomedical Engineering, University of Groningen, The Netherlands

Johan E. Vos
Department of Developmental Neurology, Medical Physiology, University of Groningen, The Netherlands

SPRINGER-SCIENCE+BUSINESS MEDIA, B.V.

A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN 978-94-010-3864-5
ISBN 978-94-010-0674-3 (eBook)
DOI 10.1007/978-94-010-0674-3
Printed on acid-free paper

All Rights Reserved
© 2001 Springer Science+Business Media Dordrecht
Originally published by Kluwer Academic Publishers in 2001
Softcover reprint of the hardcover 1st edition 2001
No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.

CONTENTS

Preface 1

PART I: Fundamentals

1  Biological Evidence for Synapse Modification Relevant for Neural Network Modelling (J.E. Vos)
   1. Introduction 7
   2. The Synapse 11
   3. Long Term Potentiation 13
   4. Two Characteristic Types of Experiment 15
      4.1 Food Discrimination Learning in Chicks 15
      4.2 Electrical Stimulation of Nervous Cell Cultures 18
   5. Conclusion 19
   References and Further Reading 20

2  What is Different with Spiking Neurons? (Wulfram Gerstner)
   1. Spikes and Rates 23
      1.1 Temporal Average: Spike Count 24
      1.2 Spatial Average: Population Activity 26
      1.3 Pulse Coding: Correlations and Synchrony 27
   2. 'Integrate and Fire' Model 28
   3. Spike Response Model 30
   4. Rapid Transients 33
   5. Perfect Synchrony 36
   6. Coincidence Detection 38
   7. Spike Time Dependent Hebbian Learning 39
   8. Temporal Coding in the Auditory System 42
   9. Conclusion 43
   References 45

3  Recurrent Neural Networks: Properties and Models (Jean-Philippe Draye)
   1. Introduction 49
   2. Universality of Recurrent Networks 52
      2.1 Discrete Time Dynamics 52
      2.2 Continuous Time Dynamics 54
   3. Recurrent Learning Algorithms for Static Tasks 56
      3.1 Hopfield Network 56
      3.2 Boltzmann Machines 58
      3.3 Recurrent Backpropagation Proposed by Fernando Pineda 60
   4. Recurrent Learning Algorithms for Dynamical Tasks 63
      4.1 Backpropagation Through Time 63
      4.2 Jordan and Elman Networks 64
      4.3 Real Time Recurrent Learning (RTRL) 65
         4.3.1 Continuous Time RTRL 65
         4.3.2 Discrete Time RTRL 66
         4.3.3 Teacher Forced RTRL 67
         4.3.4 Considerations about the Memory Requirements 67
      4.4 Time Dependent Recurrent Backpropagation (TDRBP) 68
   5. Other Recurrent Algorithms 69
   6. Conclusion 70
   References 72

4  A Derivation of the Learning Rules for Dynamic Recurrent Neural Networks (Henk A.K. Mastebroek)
   1. A Look into the Calculus of Variations 75
   2. Conditions of Constraint 77
   3. Applications in Physics: Lagrangian and Hamiltonian Dynamics 78
   4. Generalized Coordinates 80
   5. Application to Optimal Control Systems 82
   6. Time Dependent Recurrent Backpropagation: Learning Rules 85
   References 88

PART II: Applications to Biology

5  Simulation of the Human Oculomotor Integrator Using a Dynamic Recurrent Neural Network (Jean-Philippe Draye and Guy Cheron)
   1. Introduction 92
   2. The Different Neural Integrator Models 95
   3. The Biologically Plausible Improvements 99
      3.1 Fixed Sign Connection Weights 100
      3.2 Artificial Distance between Inter-Neurons 101
      3.3 Numerical Discretization of the Continuous Time Model 101
      3.4 The General Supervisor 102
      3.5 The Modified Network 103
   4. Emergence of Clusters 104
      4.1 Definition 105
      4.2 Mathematical Identification of Clusters 106
      4.3 Characterization of the Clustered Structure 106
      4.4 Particular Locations 110
   5. Discussion and Conclusion 110
   References 112

6  Pattern Segmentation in an Associative Network of Spiking Neurons (Raphael Ritz)
   1. The Binding Problem 117
   2. Spike Response Model 118
   3. Simulation Results 121
      3.1 Pattern Retrieval and Synchronization 123
      3.2 Pattern Segmentation 124
      3.3 Context Sensitive Binding in a Layered Network with Feedback 126
   4. Related Work 129
      4.1 Segmentation with LEGION 129
      4.2 How about Real Brains? 130
   References 131

7  Cortical Models for Movement Control (Daniel Bullock)
   1. Introduction: Constraints on Modeling Biological Neural Networks 135
   2. Cellular Firing Patterns in Monkey Cortical Areas 4 and 5 137
   3. Anatomical Links between Areas 4 and 5, Spinal Motoneurons, and Sensory Systems 140
   4. How Insertion of a Time Delay can Create a Niche for Deliberation 141
   5. A Volition-Deliberation Nexus and Voluntary Trajectory Generation 142
   6. Cortical-Subcortical Cooperation for Deliberation and Task-Dependent Configuration 146
   7. Cortical Layers, Neural Population Codes, and Posture-Dependent Recruitment of Muscle Synergies 150
   8. Trajectory Generation in Handwriting and Viapoint Movements 151
   9. Satisfying Constraints of Reaching to Intercept or Grasp 155
   10. Conclusions: Online Action Composition by Cortical Circuits 156
   References 157

8  Implications of Activity Dependent Processes in Spinal Cord Circuits for the Development of Motor Control: a Neural Network Model (J.J. van Heijst and J.E. Vos)
   1. Introduction 164
   2. Sensorimotor Development 165
   3. Reflex Contributions to Joint Stiffness 166
   4. The Model 167
      4.1 Neural Model 168
      4.2 Musculo-Skeletal Model 170
      4.3 Muscle Model 172
      4.4 Sensory Model 173
      4.5 Model Dynamics 174
   5. Experiments 174
      5.1 Training 176
      5.2 Neural Control Properties 177
      5.3 Perturbation Experiments 179
   6. Discussion 182
   References 185

9  Cortical Maps as Topology-Representing Neural Networks Applied to Motor Control: Articulatory Speech Synthesis (Pietro Morasso, Vittorio Sanguineti and Francesco Frisone)
   1. Lateral Connections in Cortical Maps 190
   2. A Neural Network Model 191
   3. Spatial Maps as Internal Representations for Motor Planning 193
      3.1 Dynamical Behavior of Spatial Maps 194
      3.2 Function Approximation by Interconnected Maps 196
      3.3 Dynamical Inversion 199
   4. Application of Cortical Maps to Articulatory Speech Synthesis 200
      4.1 Cortical Control of Speech Movements 202
      4.2 An Experimental Study 203
         4.2.1 The Training Procedure 204
         4.2.2 Field Representation of Phonemic Targets 208
         4.2.3 Non-Audible Gestures and Compensation 211
         4.2.4 Generation of VVV... Sequences 211
   5. Conclusions 215
   References 216

10 Line and Edge Detection by Curvature-Adaptive Neural Networks (Jacobus H. van Deemter and Johannes M.H. du Buf)
   1. Introduction 220
   2. Biological Constraints 223
   3. Construction of the Gabor Filters 224
   4. The One-Dimensional Case 224
   5. The Two-Dimensional Case 225
   6. Simple Detection Scheme 225
   7. An Extended Detection Scheme 226
   8. Intermezzo: A Multi-Scale Approach 230
   9. Advanced Detection Scheme 231
   10. Biological Plausibility of the Adaptive Algorithm 233
   11. Conclusion and Discussion 235
   References 238

11 Path Planning and Obstacle Avoidance Using a Recurrent Neural Network (Erwin Mulder and Henk A.K. Mastebroek)
   1. Introduction 241
   2. Problem Description 242
   3. Task Descriptions 243
      3.1 Representations 243
      3.2 Fusing the Representations into a Neuronal Map 245
      3.3 Path Planning and Heading Decision 246
   4. Results 248
   5. Conclusions 251
   References 253

Index 255

Preface

The expression 'Neural Networks' refers traditionally to a class of mathematical algorithms that obtain their proper performance while they 'learn' from examples or from experience. As a consequence, they are suitable for performing straightforward and relatively simple tasks such as classification, pattern recognition and prediction, as well as more sophisticated tasks such as the processing of temporal sequences and the context dependent processing of complex problems. A wide variety of control tasks can also be executed by them, and the suggestion is then fairly obvious that neural networks perform adequately in such cases because they are thought to mimic the biological nervous system, which is also devoted to such tasks.
As we shall see, this suggestion is false, but it does no harm as long as it is only the final performance of the algorithm that counts. Neural networks are also used in modelling the functioning of (subsystems in) the biological nervous system. It will be clear that in such cases it is certainly not irrelevant how similar their algorithm is to what is actually going on in the nervous system.

Standard artificial neural networks are constructed from 'units' (roughly similar to neurons) that transmit their 'activity' (similar to membrane potentials or to mean firing rates) to other units via 'weight factors' (similar to synaptic coupling efficacies). Often the units are connected only in a forward manner; recurrent connections are avoided in these cases because they would make the algorithm much more complicated. The weight factors are modified during the learning process according to some overall optimisation criterion that takes the whole network's performance into account at once, as if each weight factor could be aware of what all the other units are doing at that moment.

Biological networks (which we prefer to call neuronal networks), on the other hand, cannot function precisely like this. Their connectivity is extremely complex and certainly not only feedforward, but can be in part, or even completely, recurrent. The processing of information between neurons in a neuronal network will not be instantaneous: transmission time delays will become larger with increasing distances between neurons in the

H.A.K. Mastebroek and J.E. Vos (eds.), Plausible Neural Networks for Biological Modelling, 1-6. © 2001 Kluwer Academic Publishers.
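The units / activities / weight factors picture described in the preface can be made concrete in a few lines. The following is a minimal illustrative sketch, not taken from the book: a small feedforward network of sigmoid units trained by gradient descent on a squared-error criterion (here on the XOR task), in which, exactly as the preface notes, every weight update is driven by the network-wide error signal rather than by anything locally available at the 'synapse'.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 'Units' hold activities; 'weight factors' couple them.
# 2 input units -> 3 hidden units -> 1 output unit, feedforward only.
W1 = rng.normal(scale=0.5, size=(3, 2))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 3))   # hidden -> output weights

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

def forward(X):
    h = sigmoid(X @ W1.T)         # hidden activities, shape (4, 3)
    return h, sigmoid(h @ W2.T)   # output activities, shape (4, 1)

_, out = forward(X)
mse_before = float(np.mean((out - y) ** 2))

eta = 1.0  # learning rate
for _ in range(5000):
    h, out = forward(X)
    # Overall optimisation criterion: squared error of the whole network.
    d_out = (out - y) * out * (1 - out)   # output-layer error signal
    d_h = (d_out @ W2) * h * (1 - h)      # error propagated backwards
    # Each weight update uses the global error, not a local signal.
    W2 -= eta * (d_out.T @ h)
    W1 -= eta * (d_h.T @ X)

_, out = forward(X)
mse_after = float(np.mean((out - y) ** 2))
```

After training, the squared error is lower than at the start; the point here is not performance but the contrast the preface draws: in this standard scheme each synapse-like weight is adjusted with full knowledge of the whole network's behaviour, which a biological synapse cannot have.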
