Parallel Evolution of Parallel Processors
ASSOCIATIVE COMPUTING: A Programming Paradigm for Massively Parallel Computers
Jerry L. Potter
INTRODUCTION TO PARALLEL AND VECTOR SOLUTION OF
LINEAR SYSTEMS
James M. Ortega
PARALLEL EVOLUTION OF PARALLEL PROCESSORS
(A book in the Surveys in Computer Science series,
Edited by Larry Rudolph)
Gil Lerman and Larry Rudolph
A Continuation Order Plan is available for this series. A continuation order will bring delivery of each
new volume immediately upon publication. Volumes are billed only upon actual shipment. For further
information please contact the publisher.
Parallel Evolution of Parallel Processors
Gil Lerman and Larry Rudolph
The Hebrew University of Jerusalem
Jerusalem, Israel
PLENUM PRESS • NEW YORK AND LONDON
Library of Congress Cataloging-in-Publication Data
Lerman, Gil.
Parallel evolution of parallel processors / Gil Lerman and Larry Rudolph.
p. cm. -- (Frontiers of computer science. Surveys in computer science)
Includes bibliographical references and index.
ISBN 0-306-44537-9
1. Parallel processing (Electronic computers) I. Rudolph, Larry. II. Title. III. Series.
QA76.58.L47 1993
004'.35--dc20    93-33111
CIP
ISBN 0-306-44537-9
© 1993 Plenum Press, New York
A Division of Plenum Publishing Corporation
233 Spring Street, New York, N.Y. 10013
All rights reserved
No part of this book may be reproduced, stored in a retrieval system, or transmitted
in any form or by any means, electronic, mechanical, photocopying, microfilming,
recording, or otherwise, without written permission from the Publisher
To Ayelet, Our Parents,
Ainat, Hilla, and Noga
Preface
Study the past, if you would divine the future.
-CONFUCIUS
A well-written, organized, and concise survey is an important tool in any newly
emerging field of study. The present text is the first in a new series that has
been established to promote the publication of such survey books.
A survey serves several needs. Virtually every new research area has its
roots in several diverse areas and many of the initial fundamental results are
dispersed across a wide range of journals, books, and conferences in many different subfields. A good survey should bring together these results. But just a
collection of articles is not enough. Since terminology and notation take many
years to become standardized, it is often difficult to master the early papers.
In addition, when a new research field has its foundations outside of computer
science, all the papers may be difficult to read. Each field has its own view of elegance and its own method of presenting results. A good survey overcomes such
difficulties by presenting results in a notation and terminology that is familiar
to most computer scientists. A good survey can give a feel for the whole field. It
helps identify trends, both successful and unsuccessful, and it should point new
researchers in the right direction.
There are two candidates for authorship of a survey - the expert and the
novice. The expert contributes a deep understanding, wide knowledge, motivation, and intuition of the field. The novice, on the other hand, interprets the
results in a fresh, unbiased fashion. Many critical notions are often trivially
obvious to the expert but initially puzzling to the novice. The novice may be
better able to explain these notions to the reader.
The ideal solution is a combined effort in the production of a good survey.
Our academic system, in fact, encourages such a collaboration with the Professor
acting as the expert and the graduate student as the novice. Our series of survey
books expects to capitalize on such a collaborative effort.
There is no limit to what might be relevant to computer scientists, and so we
set no limit on the subject matter. We do, however, emphasize newly emerging
fields of research. While many new research fields include the word "computing"
in their title, e.g., optical computing and neurocomputing, there are many others
that are relevant to computer scientists. Some deal in a fundamental way with
information processing, e.g., physics and economics, while others deal with new
technologies or algorithms.
We are launching the series with a textbook surveying three decades of
parallel processors. The survey covers machines that have been built in academia and research labs as well as in industry. The initial research was
for a Master's thesis at Hebrew University. We were surprised that this newly
emerging field has had such a long and wide history. Needless to say, even with
the steady progress of technology, it is still possible to repeat the same mistakes
if one ignores history. Many times we were surprised by the trends that became
obvious when taking a broad view, and we were encouraged to see that there has
been progress and there has been convergence in the field.
But, even with the data organized, the trends and relationships between the
many aspects of parallel computers emerged only once the different categories
were correlated. Thus, the main body of the text focuses on these correlations.
As each machine is classified according to eight categories, there are 28 pairwise correlations. We present and attempt to explain each major correlation and hope
that our analysis will be instructive. In addition, the Appendix contains the raw
data, in the form of a brief description of each machine in the survey. We know
that this will be a useful resource.
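The count of 28 is simply the number of distinct pairs that can be formed from the eight classification categories; as a quick check,

\[
\binom{8}{2} = \frac{8 \cdot 7}{2} = 28 .
\]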
After receiving many requests for the work, we decided to expand and publish it, all the while fearing that we would inadvertently cause offense by either
leaving out or misrepresenting several projects. We hope the good will outweigh
the bad. As a final note, we hope that you find this work interesting and useful,
and are encouraged to produce works of a similar nature.
Gil Lerman
Larry Rudolph
Jerusalem
Contents
1. Introduction 1
2. Classification of Parallel Processors 5
2.1. A Brief History of Classification Schemes 6
2.2. The Classification Scheme Used in This Work 8
2.3. A Look at the Classification Characteristics 10
2.3.1. Applications 10
2.3.2. Control 11
2.3.3. Data Exchange and Synchronization 12
2.3.4. Number and Type of Processors 12
2.3.5. Interconnection Network 13
2.3.6. Memory Organization and Addressing 14
2.3.7. Type of Constructing Institution 15
2.3.8. Period of Construction 15
2.4. Information-Gathering Details 16
2.4.1. Classification Choices 16
2.4.2. Qualifications for Inclusion 17
2.4.3. Extent 18
2.4.4. Sources 18
2.5. An Apology 19
3. Emergent Trends 21
3.1. Applications 31
3.1.1. Correlation with Period of Construction 33
3.1.2. Correlation with Constructing Institution 35
3.1.3. Correlation with the Control Mechanism 37
3.1.4. Correlation with the Data Exchange and Synchronization Mechanism 39
3.1.5. Correlation with the Number and Type of Processors 41
3.1.6. Correlation with the Interconnection Network 43
3.1.7. Correlation with the Memory Organization 45
3.2. Mode of Control 46
3.2.1. Correlation with the Period of Construction 48
3.2.2. Correlation with the Type of Constructing Institution 50
3.2.3. Correlation with the Data Exchange and Synchronization Mechanism 53
3.2.4. Correlation with the Number and Type of Processors 55
3.2.5. Correlation with the Interconnection Network 57
3.2.6. Correlation with the Memory Organization 59
3.3. Data Exchange and Synchronization 61
3.3.1. Correlation with the Period of Construction 63
3.3.2. Correlation with the Type of Constructing Institution 65
3.3.3. Correlation with the Number and Type of PEs 66
3.3.4. Correlation with the Interconnection Network 67
3.3.5. Correlation with the Memory Organization 69
3.4. The Number and Type of PEs 69
3.4.1. Correlation with the Period of Construction 72
3.4.2. Correlation with the Constructing Institution 73
3.4.3. Correlation with the Interconnection Network 75
3.4.4. Correlation with the Memory Organization 77
3.5. Interconnection Network 78
3.5.1. Correlation with the Period of Construction 80
3.5.2. Correlation with the Type of Constructing Institution 82
3.5.3. Correlation with the Memory Organization 84
3.6. Memory Organization 86
3.6.1. Correlation with the Period of Construction 87
3.6.2. Correlation with the Type of Constructing Institution 89
3.7. Type of Constructing Institution 90
3.7.1. Correlation with the Construction Period 91
3.8. Period of Construction 93
3.9. Summary of the Correlations 94
4. Popular Machine Models 99
4.1. Exposing the Complex Patterns 99
4.2. General-Purpose Machines 100
4.2.1. Model I - MIMD, Shared Memory 101
4.2.2. Model I, the High-End, Numeric Variant 101
4.2.3. Model II - MIMD, Message Passing 102
4.2.4. Model II, the High End 103
4.2.5. Model III - General-Purpose SIMD Machines 104
4.3. Model IV - Image (and Signal) Processing SIMD Machines 105
4.4. Model V - Database MIMD Machines, Two Variants 107
4.5. Trends in Commercialization 107
4.5.1. The Number Crunchers 109
4.5.2. The Multiprocessor Midrange 110
4.5.3. The Hypercube 111
5. The Shape of Things to Come? 115
5.1. Underlying Assumptions 115
5.2. Applications 116
5.3. Control 117
5.4. Data Exchange and Synchronization 118
5.5. Number and Type of PEs 119
5.6. Interconnection Networks 120
5.7. Memory Organization 121
5.8. Sources 121
5.9. Classification of Parallel Computers 121
5.10. Summary 122
Bibliography 123
Appendix: Information about the Systems 145
Index 261