
Computing with T.Node Parallel Architecture

263 pages · 1991 · English


EUROCOURSES

A series devoted to the publication of courses and educational seminars organized by the Joint Research Centre Ispra, as part of its education and training program. Published for the Commission of the European Communities, Directorate-General Telecommunications, Information Industries and Innovation, Scientific and Technical Communications Service.

The EUROCOURSES consist of the following subseries:
- Advanced Scientific Techniques
- Chemical and Environmental Science
- Energy Systems and Technology
- Environmental Impact Assessment
- Health Physics and Radiation Protection
- Computer and Information Science
- Mechanical and Materials Science
- Nuclear Science and Technology
- Reliability and Risk Analysis
- Remote Sensing
- Technological Innovation

COMPUTER AND INFORMATION SCIENCE, Volume 3

The publisher will accept continuation orders for this series, which may be cancelled at any time and which provide for automatic billing and shipping of each title in the series upon publication. Please write for details.

Computing with T.Node Parallel Architecture
Edited by D. Heidrich and J. C. Grossetie
Commission of the European Communities, Joint Research Centre, Institute for Systems Engineering and Informatics, Ispra, Italy
SPRINGER-SCIENCE+BUSINESS MEDIA, B.V.
Based on the lectures given during the Eurocourse on 'Architecture, Programming Environment and Application of the Supernode Network of Transputers' held at the Joint Research Centre, Ispra, Italy, November 4-8, 1991.

Library of Congress Cataloging-in-Publication Data
ISBN 978-94-010-5546-8
ISBN 978-94-011-3496-5 (eBook)
DOI 10.1007/978-94-011-3496-5

Publication arrangements by Commission of the European Communities, Directorate-General Telecommunications, Information Industries and Innovation, Scientific and Technical Communication Unit, Luxembourg

EUR 13975

© 1991 Springer Science+Business Media Dordrecht
Originally published by Kluwer Academic Publishers in 1991
Softcover reprint of the hardcover 1st edition 1991

LEGAL NOTICE
Neither the Commission of the European Communities nor any person acting on behalf of the Commission is responsible for the use which might be made of the following information.

Printed on acid-free paper

All Rights Reserved
No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.

TABLE OF CONTENTS

Preface (vii)

Parallel Programming
Jean Cholley: Architecture, Programming Environment and Application of the Supernode Network of Transputers (1)
Daniele Marini: A Survey of Parallel Architecture (13)
Y. Langue, N. Gonzalez, T. Muntean and I. Sakho: An Introduction to Parallel Operating Systems (23)
Keld Kondrup Jensen: Decoupling of Computation and Coordination in Linda (43)
J.M.A. Powell: Helios - A Distributed Operating System for MIMD Computers (63)

Image Synthesis
Christian Schormann, Ulrich Domdorf and Hugo Burm: Porting a Large 3D Graphics System onto Transputers - Experiences from Implementing Mirashading on a Parallel Computer (73)
O. Guye and K. Mouton: Recursive Parallel Computing with Hierarchical Structured Data on T.Node Computer (87)

Transputer Applications
H.C. Webber: Terrain Modelling Tools on the Supernode Architecture (101)
A. Pinti: Real Time Acquisition and Signal Processing on Transputers - Application to Electroencephalography (115)
V. Mastrangelo, D. Gassilloud, D. Heidrich and F. Simon: Stochastic Modelisation and Parallel Computing (135)
I.St. Doltsinis and S. Nolting: Finite Element Simulations on Parallel Computer Architectures - Nonlinear Deformation Processes of Solids (163)

Neural Computing
A. Yarfis: An Introduction to Neural Networks (197)
Ph. Grandguillaume, E. Guigon, L. Boukthil, I. Otto and Y. Burnod: Implementation of a General Model of Cooperation between Cortical Areas on a Parallel System based on Transputers (213)
M. Duranton, F. Aglan and N. Mauduit: Hardware Accelerators for Neural Networks: Simulations in Parallel Machines (235)

PREFACE

Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained.

Parallel processing is the centre of research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low-cost, high-performance multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field.

This book presents course papers of the Eurocourse given at the Joint Research Centre in Ispra (Italy) from the 4th to the 8th of November 1991.
First, we present an overview of various trends in the design of parallel architectures, and especially of the T.Node with its software development environments, new distributed system aspects and new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real-case applications in the fields of image synthesis, image processing, signal processing, terrain modelling, particle physics simulation, as well as enhanced parallel and distributed numerical methods on T.Node. Finally, a special section is dedicated to neural networks. We show here how neural nets can be simulated and put to work on a transputer machine, especially with a dedicated hardware accelerator.

D. HEIDRICH
J.C. GROSSETIE

ARCHITECTURE, PROGRAMMING ENVIRONMENT AND APPLICATION OF THE SUPERNODE NETWORK OF TRANSPUTERS

JEAN CHOLLEY
TELMAT INFORMATIQUE
6 rue de l'Industrie, 68360 SOULTZ, FRANCE

ABSTRACT. The SUPERNODE range of parallel computers is a reconfigurable network of transputers, scalable from 8 to 1,024 processors. We give a description of its architecture, with the various modules included. The software environment provided (operating system, languages, development tools) is described, as well as the range of applications for which these machines are used.

INTRODUCTION

The SUPERNODE is a massively parallel computer based on the transputer, a microprocessor from INMOS, a subsidiary of SGS-Thomson Microelectronics. Three products belong to this family:
- T.Node (from 8 to 32 transputers)
- T.Node tandem (from 40 to 64 transputers)
- Mega-Node (from 96 to 1,024 transputers)
all manufactured by TELMAT INFORMATIQUE.

The Supernode family is a fully reconfigurable, modular and scalable transputer network. Dynamic switching devices, integrated in all systems, allow full reconfiguration of any network from 8 to 1,024 transputers. The graph of the network can be dynamically modified by the program, according to the nature of the calculation to be performed.
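The introduction's point that the program itself can redefine the network graph can be illustrated with a toy model. The sketch below (in Go, with illustrative names and port counts not taken from the book) represents a requested topology as a set of link-port pairs and accepts it only if every port is used at most once, which is the basic constraint a re-arrangeable, non-blocking switch must enforce.

```go
package main

import (
	"errors"
	"fmt"
)

// pair names two switch ports to be joined into one circuit.
type pair struct{ a, b int }

// setTopology is a toy model of programming a crossbar switch:
// it builds a bidirectional port-to-port map, rejecting requests
// in which any port would belong to more than one circuit.
func setTopology(nPorts int, pairs []pair) (map[int]int, error) {
	conn := make(map[int]int)
	for _, p := range pairs {
		if p.a < 0 || p.a >= nPorts || p.b < 0 || p.b >= nPorts {
			return nil, errors.New("port out of range")
		}
		if p.a == p.b {
			return nil, errors.New("port connected to itself")
		}
		if _, used := conn[p.a]; used {
			return nil, errors.New("port already connected")
		}
		if _, used := conn[p.b]; used {
			return nil, errors.New("port already connected")
		}
		conn[p.a], conn[p.b] = p.b, p.a // circuits are bidirectional
	}
	return conn, nil
}

func main() {
	// Join ports (0,1) and (2,3); leave the remaining ports free.
	conn, err := setTopology(8, []pair{{0, 1}, {2, 3}})
	fmt.Println(conn, err)
}
```

A program could call `setTopology` with a different pair list for each phase of a calculation, which is the spirit of the reconfiguration described above.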
Results can be stored on disks, transferred in memory for graphic systems, or used by the host computer, disks and graphic devices.

D. Heidrich and J. C. Grossetie (eds.), Computing with T.Node Parallel Architecture, 1-11.
© 1991 ECSC, EEC, EAEC, Brussels and Luxembourg.

ARCHITECTURE

a) The transputer:
The T800 transputer from INMOS is a 32-bit microprocessor with 4 Kb of on-chip memory and an FPU (IEEE 754 standard), delivering a peak performance of 25 Mips and 2.25 Mflops. It has been designed to provide concurrency, fully exploited by the occam language, and therefore has an integrated micro-coded scheduler which shares the processor time between concurrent processes. When it is used as a building block in multiprocessor systems, communication between transputers is supported by 4 links: bi-directional, serial, asynchronous, point-to-point 20 Mbit/s connections. Using such a component, a parallel system can easily be designed. Nevertheless, in large arrays, message-routing communication introduces significant overheads, so the alternative solution, circuit switching, has been adopted for the T.Node architecture.

b) The switching device (figure 1):
All "worker transputer" links are connected to a specific VLSI device, an electronic switch. This switch is organized as a pair of 72 x 72 asynchronous switches, each implemented in 2 components functionally equivalent to a 72 x 36 crossbar. The switch is controlled by a further transputer: the control transputer. It is able to set any network topology between transputers in a re-arrangeable, non-blocking way. It works in 3 modes: static, pseudo-dynamic and dynamic. In static mode, the network topology is set before runtime, without modification during program execution. In pseudo-dynamic mode, the global network topology (i.e. all the transputers) is modified during run-time, requiring the links to be quiescent.
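The occam process model described in section a) above, with processes exchanging messages over blocking point-to-point links, maps naturally onto CSP-style channels in a modern language. The following Go sketch (names and the squaring "computation" are purely illustrative) mimics one worker transputer: unbuffered channels behave like transputer links in that a send blocks until the partner is ready to receive.

```go
package main

import "fmt"

// squareWorker stands in for an occam process running on one transputer:
// it receives values on one point-to-point "link" and sends results
// back on another.
func squareWorker(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * v
	}
	close(out)
}

// runPipeline plays the host: it feeds inputs down one link and
// collects results from the other.
func runPipeline(xs []int) []int {
	in := make(chan int)  // host -> worker link
	out := make(chan int) // worker -> host link
	go squareWorker(in, out)
	go func() {
		for _, x := range xs {
			in <- x
		}
		close(in)
	}()
	var res []int
	for r := range out {
		res = append(res, r)
	}
	return res
}

func main() {
	fmt.Println(runPipeline([]int{1, 2, 3})) // [1 4 9]
}
```

The same rendezvous semantics is also why, as the text notes next, an asynchronous switch needs a separate mechanism to synchronize the two ends before a link can be re-routed.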
In dynamic switching, an ad hoc connection is established in a part of the network, without altering the remaining communications. Such an asynchronous device needs system communication in order to synchronize the transputers to be connected. This system communication could be multiplexed with user communication on the links, but that would introduce overheads. To avoid such multiplexing, a specific feature has been implemented: the control bus system.

c) The control bus system:
All the transputers are connected to this bus through a specific component, a memory-mapped gate array, which frees the links from carrying system messages. The bus has a master/slave protocol, the master being the control transputer, and allows fast synchronization between transputers. Additional features such as selective reset and message broadcast are also supported by the control bus system. Further, this bus allows the entire network to be brought to a rapid halt, and debugging information to be extracted, without disturbing the state of the transputer links. This is exploited to provide a debugger with breakpoints.

d) A hierarchical system:
A T.Node system with 16 transputers is characterized by its computing power, its internal and external communication facilities and its supervision system. It can be seen, in a recursive manner, as a building block for a new network at a higher level. So the T.Node is one "node" of a larger reconfigurable network, the Mega-Node. Another switch between T.Node tandems, called the "Inter-Node switch" and controlled by an outer-level control transputer, enables the user to modify the topology of the whole Mega-Node network. This outer-level switch has the same role as the lower-level switch in the T.Node, thus allowing full reconfigurability of the 1,024-transputer network. At this level too, a supervision bus, connected to all control transputers, allows synchronization and interactive debugging.
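The control bus's role in synchronization can be sketched as a barrier: the master collects a "ready" signal from every worker over a side channel, then broadcasts "go", so no user-data link carries system messages. The Go sketch below is an illustrative analogy under that assumption, not TELMAT's actual protocol; all names are invented.

```go
package main

import (
	"fmt"
	"sync"
)

// controlBusBarrier models the master/slave exchange on the control
// bus: every worker reports ready on a shared channel, and the master
// releases them all at once by closing the release channel (a
// broadcast, since a receive on a closed channel never blocks).
func controlBusBarrier(nWorkers int, work func(id int)) {
	ready := make(chan int)        // workers -> master
	release := make(chan struct{}) // master -> all workers (broadcast)
	var wg sync.WaitGroup

	for id := 0; id < nWorkers; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			ready <- id // report ready to the master
			<-release   // wait for the master's broadcast
			work(id)    // now safe to use the newly switched links
		}(id)
	}

	for i := 0; i < nWorkers; i++ {
		<-ready // master: wait until every worker has checked in
	}
	close(release) // master: broadcast "go"
	wg.Wait()
}

func main() {
	controlBusBarrier(4, func(id int) { fmt.Println("worker", id, "running") })
}
```

Keeping this traffic on a dedicated path is exactly the design choice the text describes: synchronization and debugging never disturb the state of the transputer links themselves.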
e) Basic modules:

- Worker Module
This board is the basic computation element of the T.Node system. A T.Node tandem can include 8 of these boards to provide a 64-transputer system. Different types of boards are available, according to the memory. Every board is equipped with 8 T800 transputers (25 or 30 MHz), each with its own local memory (from 1 to 8 Mbytes of dynamic memory) and the specific component (CGA) for access to the Control Bus System (CBS).

- Controller Board
This module has a master/slave type interface to the CBS, which is used in the case of the T.Node tandem and Mega-Node. The master control board manages the control bus. It also sets the topology of the network by programming the switch. The transputer on these boards is associated with 512 Kbytes of memory, a real-time clock and two RS232C ports. This transputer is also able to partition the network into independent sub-networks. Several users thus have access to the resources of the T.Node, and each user can define and modify the topology of its sub-network without disturbing the other users.

- Memory Server Module (MSM)
This optional one-transputer module has 16 or 64 Mbytes of dynamic memory to provide a common data storage capability for the network. Access to this memory is through the links of the transputer.

- Disk Server Module (DSM)
On this board, the transputer controls a SCSI (Small Computer System Interface) bus to connect a 300 Mbyte, 574 Mbyte or 1 GB Winchester disk (5 1/4 inch) and a 150 Mbyte streamer. A Unix-like file system provides access to these peripherals. The 16 Mbytes of memory associated with the transputer provide a common data storage capability for the whole network.

- Graphical Module
This module has two transputers (25 MHz), one to manage the I/O resources with 1 to 8 Mb of memory and the other to manage graphics and video with 1 to 16 Mb. The display
