
Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information

413 Pages·1984·18.59 MB·English


Algorithm-Structured Computer Arrays and Networks
Architectures and Processes for Images, Percepts, Models, Information

LEONARD UHR
Department of Computer Sciences
University of Wisconsin
Madison, Wisconsin

1984
ACADEMIC PRESS (Harcourt Brace Jovanovich, Publishers)
Orlando San Diego San Francisco New York London Toronto Montreal Sydney Tokyo São Paulo

COPYRIGHT © 1984, BY ACADEMIC PRESS, INC.
ALL RIGHTS RESERVED. NO PART OF THIS PUBLICATION MAY BE REPRODUCED OR TRANSMITTED IN ANY FORM OR BY ANY MEANS, ELECTRONIC OR MECHANICAL, INCLUDING PHOTOCOPY, RECORDING, OR ANY INFORMATION STORAGE AND RETRIEVAL SYSTEM, WITHOUT PERMISSION IN WRITING FROM THE PUBLISHER.

ACADEMIC PRESS, INC.
Orlando, Florida 32887

United Kingdom Edition published by
ACADEMIC PRESS, INC. (LONDON) LTD.
24/28 Oval Road, London NW1 7DX

Library of Congress Cataloging in Publication Data
Uhr, Leonard Merrick, Date.
Algorithm-Structured Computer Arrays and Networks.
(Computer science and applied mathematics)
Bibliography: p. Includes index.
1. Parallel processing (Electronic computers) 2. Computer networks. 3. Computer architecture. I. Title. II. Series.
QA76.6.U37 1984 001.64 82-8878
ISBN 0-12-706960-7 AACR2

PRINTED IN THE UNITED STATES OF AMERICA
84 85 86 87 9 8 7 6 5 4 3 2 1

Preface

The traditional single central processing unit (1-CPU) serial computer is only one of a potentially infinite number of possible computers (those with 1, 2, 3, . . . , n processors, configured in all possible ways). It is the simplest, and also the slowest. It will soon reach its absolute "speed-of-light-bound" limit. Yet there will still be many very important problems far too large for it to handle. An amazing variety of new parallel array and network multicomputer structures can now be built.
Very large scale integration (VLSI) of many thousands, or millions, of devices on a single chip that costs only a few dollars will soon make feasible many completely new computer architectures that promise to be extremely powerful and very cheap. This book examines the new parallel-array, pipeline, and other network multicomputers that today are just beginning to be designed and built, but promise, during the next 10-50 years, to give enormous increases in processing power and to successively transform and broaden our understanding of what a computer is.

This book describes and explores these new arrays and networks, both those built and those now being designed or proposed. The problems of developing higher-level languages for such systems and of designing algorithm, program, data flow, and computer structure all together, in an integrated way, are considered. The book surveys suggestions people have made for future networks with many thousands, or millions, of computers. It also describes several sequences of successively more general attempts to combine the power of arrays with the flexibility of networks into structures that reflect and embody the flow of information through their processors. The book thus gives a picture of what has been and is being done, along with interesting possible directions for the future.

The Necessity of Parallel Computers for Large Real-Time Problems

There are many very large problems that no single-processor serial computer can possibly handle. This is especially true for those problems where the computer must solve some problem in "real time" (that is, fast enough to interact with the really changing outside world, rather than fast enough that somebody can pick up the results whenever they happen to get done). These problems include (among many others) the perception of moving objects, the modeling of the atmosphere and the earth's crust, and the modeling of the brain's large network of neurons.
Only very large networks of computers, all working together, have any hope of succeeding at such tasks.

I approach arrays and networks from a very definite perspective, one that has probably not been the dominant perspective among those who have been exploring networks (usually loosely distributed networks like ETHERNET). My basic premise is that very large networks (that is, ones with at least thousands of computers, and probably, when we learn how to use them—whether in 10 or in 50 years—millions of computers) are needed to handle a number of key problems, and that the network's structure should reflect the problem's structure. I focus on large networks of relatively closely coupled computers whose overall architecture has been designed so that many processors can work together, efficiently, on the same problem.

In particular, my personal interests are the modeling of intelligence, with special emphasis on image processing, the perception of patterned objects, and perceptual learning (because I feel that only by having them learn through perceptual-behavioral interactions with an environment rich in information do we have any hope of achieving computers with real intelligence, and of developing powerful models of intelligent thinking). These problem areas are emphasized throughout this book, because they are the most familiar to me and because they are of tremendous potential importance. But there are a number of other problems, some of greater practical and immediate interest, including the processing of a continuing stream of large images (such as those transmitted by satellites) and the modeling of three-dimensional masses of matter (as for meteorological, wind-tunnel, or oceanographic research) that will benefit tremendously from large networks.
Computers have been so successful, in the way they have blossomed almost everywhere in a bewildering variety of applications and in the phenomenal way in which they have increased in power (at the same time decreasing in price) because of a steady and continuing stream of technological advances, that most people are reasonably happy with computers in their present, single-processor-serial guise. This may well mean that parallel multicomputers will have an especially difficult time getting started. A standard rejoinder to the suggestion that a parallel computer would better handle a particular type of problem has been that a suitably bigger serial computer would be better still.

Marvin Minsky and Seymour Papert (1971), Eugene Amdahl (1967), and a number of other distinguished computer scientists have long argued against the value of parallel computing. An ironic voice was that of John von Neumann, who, in his seminal (1945) paper sketching out the design of the modern serial single-CPU computer, argued that only one processor should be used, as the best interim compromise, given the current state of failure-prone vacuum-tube technology, between programming ease (in cumbersome machine languages), design of these brand-new computer architectures, and mean time to failure needed to handle the war-motivated numerical problems contemplated. But von Neumann's ultimate goal (1959) was highly parallel nerve-network-like structures. The single-CPU "von Neumann machine" (with what Backus, 1978, named the von Neumann bottleneck) was suggested simply as a beginner's first step.

Only time, and much long-avoided hard work, will show whether we are able to uncover the secrets of very fast, very powerful parallel processing that natural evolution achieved millions of years ago and that our largely unconscious thought processes handle with mastery!

Computer Architecture := {Becomes} Algorithm Structure

The purpose of computers is to solve problems.
Computers may well be the most powerful tool human beings have ever invented. In terms of their profound importance, they rank with fire, wheels, numbers, alphabets, and motors. Books can be written on this issue, even though we are barely beginning to appreciate the importance of computers. But their basic purpose is to serve as a tool; our basic problem is to discover how to use and exploit this unboundedly general, versatile, and powerful tool.

The first computer, built in 1945, was a marvel of technology. It used roughly 50,000 vacuum tubes, cost millions of dollars, filled a large room with far more electronic equipment than had ever before been combined into one system, and accurately computed the equations people programmed for it many thousands of times faster than any human being. Today, substantially more powerful computers are built on a single 4-mm-square silicon chip that costs a few dollars. This means we can design computers many thousands of times faster and more powerful, by structuring large numbers of chips into networks.

Today's large computers are still basically like the first. They have clearly found their niche. But there are many kinds of problems for which they are quite poorly structured, and intolerably slow. To solve these problems, a variety of parallel architectures are needed. This book explores the following approach to the development of multicomputer architectures:

(1) Consider a problem domain and try to find or to develop techniques for solving the problems it poses.
(2a) Try to develop precise computational algorithms.
(2b) At the same time, think in terms of the structures of physical computer processors that would best carry out those algorithms.
(3) Design, simulate, and build as general and flexible as possible a version of the architecture that seems indicated.
(4) Similarly build good programming languages and operating and support systems.
(5) Code and run programs.
(6) Evaluate, benchmark, and compare.
(7) Try to develop tools and theory for conceptualizing, analyzing, and examining these systems.
(8) Continue to cycle through all these steps, to improve algorithms and architectures (toward more powerful specialized systems and also more generally usable systems) and our understanding of how to design, build, and use these systems.

The Flow of This Book

This book should be useful as a textbook or auxiliary textbook in courses on computer architecture, parallel computers, arrays and networks, and image processing and pattern recognition; but it should be especially appropriate for independent reading. It is the only book of which I am aware that examines:

the present arrays and networks (Chapters 2-7, 14);
the problems of developing basic algorithms and complex programs, and of programming and using these architectures (Chapters 9-12, 16-18);
comparisons among different networks using a variety of different formal, empirical, and structural criteria (Chapters 4, 7, 14);
new graph theory results that suggest good new interconnection patterns for networks (Chapters 4, 7, 14, Appendix A);
network flow modeling techniques for evaluating architectures, programs, and the way programs are mapped onto and executed by hardware (Chapters 2, 7, 8, Appendix A); and
architectures that combine arrays and networks into very large, potentially very powerful, three-dimensional pipelines that transform information flowing through them (Chapters 15-19).

It attempts to do this in an integrated way, from the perspective that a network's structure should embody the structure of processes effected by the algorithms being executed, and that algorithm, program, language, operating system, and physical network should all be designed in a continuing cycle that attempts to maximize the correspondence between program structure and multicomputer structure.
Background and review chapters and appendixes cover much of the material on computers, graphs, logic design, VLSI components and their costs, and other topics needed to follow the book's exploration of arrays and networks, languages and techniques for using them, and some of their future possibilities.

The introduction examines some of the problems that can be handled only with large networks and gives a brief idea of the astonishing progress we can expect to see during the next 20 or so years in the fabrication of more and more sophisticated chips with millions of devices packed on each.

Chapter 1 describes traditional single-CPU computers, the general-purpose Turing machine that they embody in much more powerful form, several formalisms for describing and analyzing computers, and some of the supercomputers and distributed networks of today. Appendix A briefly looks at graph theory as applied to computer networks, and Appendix B describes how computers are built from switches that execute logical operations.

Chapter 2 explores network flow models for representing and evaluating the flow of information through a multicomputer network as that network executes programs mapped onto it. It surveys key arrays, networks, logic-in-memory computers, and supercomputer systems designed or built in the pre-LSI era that ended in the early 1970s. And it describes how computers are coordinated by clocks, controllers, and operating systems.

Chapter 3 examines the very large arrays of literally thousands of small computers that have already been built or designed, and also several powerful pipelines and more or less specialized and special-purpose systems. Chapter 4 surveys the great variety of network architectures being explored today (mostly proposed, a few small ones actually built). Chapters 5 and 6 examine several designs that begin to combine promising array and network structures.
Chapter 5 suggests the possibility of combining several different kinds of structures, including arrays, pipelines, and other types of networks, into a system where each can be used, as appropriate, and from which we can learn how to use each, and how useful each is—this as a step toward designing better integrated and more useful systems. Chapter 6 begins to examine some possibilities for combining arrays and networks into multilayered converging systems, moving toward a pyramidal shape. Chapter 7 explores some of the similarities among the very large number of different network structures that have been proposed, and are now beginning to be built.

The next several chapters focus on issues of software development in conjunction with hardware architecture. These include the development of languages, interactive tools, and more parallel ways of thinking about the flow of data through processes, to encourage the integrated design of parallel algorithms, programs, data flow, and architectures.

Chapter 8 explores some of the issues involved in integrating algorithm, program, data flow, and network structure. It examines issues of formulating parallel algorithms, several alternative possibilities for future programming languages, and a variety of specific algorithms. Chapter 9 examines some of the ways in which we might design, program, and use networks. It focuses on the development of highly structured architecture most appropriate for a particular type of complex algorithm. Chapter 10 examines a wider variety of problem areas and possible algorithms.

Chapter 11 surveys the "higher-level" languages that have been developed for networks—almost all for arrays—and describes one example language in some detail. Chapter 12 gives a taste of how to program an array, both in a somewhat lower-level language and in machine language, along with more detail about how one array (CLIP) works. Appendix E gives examples of code and programs in several different languages.
We then shift perspective back to architecture, looking into the more distant future (but only 5, 10, or 20 years away), starting, in Chapter 13, with interesting speculations by a number of people who are actually building arrays and networks today.

Chapter 14 examines the various possible sources of parallelism in a multicomputer, and of speedups from parallelism and from hardware and software design. Then it continues the comparison of different network architectures. Chapter 15 explores a variety of issues related to the future design possibilities of VLSI, when the design of formulation, program, and specific algorithms may well become the design of silicon chips.

Chapter 16 begins the exploration of possible future designs that is the major theme of the rest of the book. Chapters 16 and 17 attempt to develop a variety of array, pyramid, lattice, torus, sphere, and "whorl" designs for networks, with Appendixes C and D examining the types of components and modules such networks might use. Chapter 18 suggests several especially interesting designs, ones that have the kind of regular micromodular structure suggested by VLSI technologies, and offer the possibility of pipeline flow of information through local network structures designed to execute programs efficiently. Chapter 19 recapitulates some of the major possible architectures.

Also included: Appendix F on getting groups to work together (whether groups of computers, or groups of neurons, animals, or human beings); Appendix G on explorations of what we should mean by "messages"; and Appendix H on what "real time" "really" means. The glossary defines and describes a number of terms, and the suggested readings and bibliography complete the book and point to more.
Acknowledgments

This book has benefited immeasurably from continuing contacts and discussions (both in person and by phone and mail) with many of the people involved in designing, building, developing programming languages for, and using the very large parallel arrays, and also the closely coupled networks being developed to use many processors on a single program (with special emphasis on image processing, pattern recognition, and scene description).

I am especially grateful to Michael Duff, the architect of CLIP, University College London, for the opportunity to work in his laboratory during my sabbatical in 1978. There I got my first taste of programming parallel arrays (in CLIP3 assembly language), and first began to think seriously about parallel architectures and higher-level languages for parallel computers. (I have since formulated and coded a specific language that compiles to either CLIP3 or CLIP4 code, and two versions of a more general parallel language, one that currently compiles to PASCAL code for a simulation of a parallel machine.) Frequent discussions with Dr. Duff, and also with Terry Fountain, Alan Wood, and other members of his group, were, and continue to be, a great source of knowledge and insight.

During my period in London, and by correspondence since that time, I have been fortunate to have had a number of very interesting interactions with Stewart Reddaway, the architect of the distributed array processor (DAP), and with Peter Flanders, David Hunt, Robin Gostick, and Colin Aldredge of ICL, and also with Philip Marks, of Queen Mary College, and
