
Programming Environments for Massively Parallel Distributed Systems: Working Conference of the IFIP WG 10.3, April 25–29, 1994 PDF

417 Pages·1994·12.019 MB·English

Preview Programming Environments for Massively Parallel Distributed Systems: Working Conference of the IFIP WG 10.3, April 25–29, 1994

Monte Verità
Proceedings of the Centro Stefano Franscini, Ascona
Edited by K. Osterwalder, ETH Zürich

Programming Environments for Massively Parallel Distributed Systems
Working Conference of the IFIP WG 10.3, April 25-29, 1994
Edited by K. M. Decker and R. M. Rehmann
1994, Springer Basel AG

Editors:
PD Dr. Karsten M. Decker, Director of Research and Development, Swiss Scientific Computing Center CSCS ETH-Zürich, Via Cantonale, CH-6928 Manno, e-mail: [email protected]
Dr. Rene M. Rehmann, Research Scientist, Swiss Scientific Computing Center CSCS ETH-Zürich, Via Cantonale, CH-6928 Manno, e-mail: [email protected]

A CIP catalogue record for this book is available from the Library of Congress, Washington D.C., USA.

Deutsche Bibliothek Cataloging-in-Publication Data
Programming environments for massively parallel distributed systems: working conference of the IFIP WG 10.3, April 25-29, 1994 / ed. by K. M. Decker; R. M. Rehmann. - Basel; Boston; Berlin: Birkhäuser, 1994 (Monte Verità)
NE: Decker, Karsten [Hrsg.]; International Federation for Information Processing / Working Group on Software Hardware Interrelation
ISBN 978-3-0348-9668-9
ISBN 978-3-0348-8534-8 (eBook)
DOI 10.1007/978-3-0348-8534-8

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in other ways, and storage in data banks. For any kind of use, permission of the copyright owner must be obtained.

© 1994 Springer Basel AG
Originally published by Birkhäuser Verlag Basel, Switzerland in 1994
Camera-ready copy prepared by the editors.
Printed on acid-free paper produced from chlorine-free pulp.

Preface

The 1994 working conference on Programming Environments for Massively Parallel Systems was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP). It succeeded the 1992 conference in Edinburgh on Programming Environments for Parallel Computing. The purpose of the conference was to bring together researchers working on ways to help programmers exploit the full potential of massively parallel systems, and to discuss the state of the art of software for massively parallel systems, with special attention to programming tools and environments.

The conference was held from April 25 to April 29, 1994, at Centro Stefano Franscini (CSF), Monte Verità, located in the hills above Ascona on the banks of Lago Maggiore, in the southern part of Switzerland. It was jointly organized by the Swiss Scientific Computing Center Centro Svizzero di Calcolo Scientifico (CSCS) and CSF. The more than 60 participants at this conference came from both academia and industry in Europe, the USA and Japan. The conference was sponsored by the Swiss Federal Institute of Technology Zurich (ETHZ), the Swiss National Science Foundation (SNF), NEC Corporation, and Sun Microsystems, Switzerland.

During the five days of the conference, more than 40 scientific papers were presented in 14 sessions on topics covering all aspects of software for massively parallel systems. During a demonstration session, the audience was able to get the 'touch and feel' of several different recently developed tool environments. The technical program was supplemented by a public talk in Italian by Prof. Marco Vanneschi, University of Pisa, Italy.
Each day of the conference was devoted to a specific theme in the programming of massively parallel systems. Two sessions in the morning and one in the afternoon consisted of in-depth technical presentations. The remaining time was organized in an innovative fashion to serve the purpose of a working conference. First, one of the participants gave a critical review of the state of the art of the day's theme and summarized open questions. After that, small working groups were formed, each focusing on one of the most pressing problems of that day's theme. An assessment of possible solutions to that problem was made, and the findings were presented informally in the late afternoon. Both the review and the working groups were generally considered useful by the participants. This forum not only brought the participants closer together to discuss matters informally, but also led to the presentation of interesting perspectives.

Last, but surely not least, I want to express my gratitude to Rene Rehmann and Klara Mafli (CSCS), Katia Bastianelli (CSF/ETHZ), and the other staff of SeRD-CSCS and CSF, who all did an exceptional job in organizing the conference. Through their dedicated work, they ensured that the conference ran smoothly and created a very pleasant and stimulating atmosphere at Monte Verità. Thanks to them, the conference will surely remain an unforgettable experience for all of us.

Manno, May 18, 1994
Karsten M. Decker

Table of Contents

Introduction (Karsten M. Decker), p. XI
The Cray Research MPP Fortran Programming Model (Tom MacDonald and Zdenek Sekera), p. 1
Resource Optimisation via Structured Parallel Programming (Bruno Bacci, Marco Danelutto and Susanna Pelagatti), p. 13
SYNAPS/3 - An Extension of C for Scientific Computations (V. A. Serebriakov, A. N. Bezdushny and C. G. Belov), p. 27
The Pyramid Programming System (Zheng Lin, Songnian Zhou and Wenfeng Li), p. 37
Intelligent Algorithm Decomposition for Parallelism with Alfer (S. N. McIntosh-Smith, B. M. Brown and S. Hurley), p. 47
Symbolic Array Data Flow Analysis and Pattern Recognition in Numerical Codes (Christoph W. Keßler), p. 57
A GUI for Parallel Code Generation (Mark R. Gilder, Mukkai S. Krishnamoorthy and John R. Punin), p. 69
Formal Techniques Based on Nets, Object Orientation and Reusability for Rapid Prototyping of Complex Systems (Fabrice Kordon), p. 81
Adaptor - A Transformation Tool for HPF Programs (Thomas Brandes and Falk Zimmermann), p. 91
A Parallel Framework for Unstructured Grid Solvers (D. A. Burgess, P. I. Crumpton and M. B. Giles), p. 97
A Study of Software Development for High Performance Computing (Manish Parashar, Salim Hariri, Tomasz Haupt and Geoffrey Fox), p. 107
Parallel Computational Frames: An Approach to Parallel Application Development based on Message Passing Systems (M. Pruscione, P. Flocchini, E. Giudici, S. Punzi and P. Stofella), p. 117
A Knowledge-Based Scientific Parallel Programming Environment (Karsten M. Decker, Jiri J. Dvorak and Rene M. Rehmann), p. 127
Parallel Distributed Algorithm Design Through Specification Transformation: The Asynchronous Vision System (Didier Buchs, Daniel Monteiro, Fabrice Mourlin and Denis Brunet), p. 139
Steps Towards Reusability and Portability in Parallel Programming (Helmar Burkhart and Stephan Gutzwiller), p. 147
An Environment for Portable Distributed Memory Parallel Programming (Christian Clemençon, Akiyoshi Endo, Josef Fritscher, Andreas Müller, Roland Rühl and Brian J. N. Wylie), p. 159
Reuse, Portability and Parallel Libraries (Lyndon J. Clarke, Robert A. Fletcher, Shari M. Trewin, R. Alasdair A. Bruce, A. Gordon Smith and Simon R. Chapple), p. 171
Assessing the Usability of Parallel Programming Systems: The Cowichan Problems (Gregory V. Wilson), p. 183
Experimentally Assessing the Usability of Parallel Programming Systems (Duane Szafron and Jonathan Schaeffer), p. 195
Experiences with Parallel Programming Tools (Fritz G. Wollenweber, Saulo Barros, David Dent, Lars Isaksen and Guy Robinson), p. 203
The MPI Message Passing Interface Standard (Lyndon Clarke, Ian Glendinning and Rolf Hempel), p. 213
An Efficient Implementation of MPI (Hubertus Franke, Peter Hochschild, Pratap Pattnaik and Marc Snir), p. 219
Post: A New Postal Delivery Model (Marc Aguilar and Beat Hirsbrunner), p. 231
Asynchronous Backtrackable Communications in the SLOOP Object-Oriented Language (N. Signes, J.-P. Bodeveix, D. Plaindoux, F. Cabestre and C. Percebois), p. 239
A Parallel I/O System for High-Performance Distributed Computing (Steven A. Moyer and V. S. Sunderam), p. 245
Language and Compiler Support for Parallel I/O (Rajesh Bordawekar and Alok Choudhary), p. 257
Locality in Scheduling Models of Parallel Computation (Peter Thanisch, Michael G. Norman, Cristina Boeres and Susanna Pelagatti), p. 265
A Load Balancing Algorithm for Massively Parallel Systems (Mario Cannataro, Giandomenico Spezzano and Domenico Talia), p. 275
Static Performance Prediction in PCASE: A Programming Environment for Parallel Supercomputers (Yoshiki Seo, Tsunehiko Kamachi, Yukimitsu Watanabe, Kazuhiro Kusano, Kenji Suehiro and Yukimasa Shindo), p. 287
A Performance Tool for High-Level Parallel Programming Languages (R. Bruce Irvin and Barton P. Miller), p. 299
Implementation of a Scalable Trace Analysis Tool (Xavier-François Vigouroux), p. 315
The Design of a Tool for Parallel Program Performance Analysis and Tuning (Anna Hondroudakis and Rob Procter), p. 321
The MPP Apprentice Performance Tool: Delivering the Performance of the Cray T3D (Winifred Williams, Timothy Hoel and Douglas Pase), p. 333
Optimized Record-Replay Mechanism for RPC-based Parallel Programming (Alain Fagot and Jacques Chassin de Kergommeaux), p. 347
Abstract Debugging of Distributed Applications (Thomas Kunz and James P. Black), p. 353
Design of a Parallel Object-Oriented Linear Algebra Library (F. Guidec and J.-M. Jezequel), p. 359
A Library for Coarse Grain Macro-Pipelining in Distributed Memory Architectures (F. Desprez), p. 365
An Improved Massively Parallel Implementation of Colored Petri-Net Specifications (François Bréant and Jean-François Pradat-Peyre), p. 373
A Tool for Parallel System Configuration and Program Mapping based on Genetic Algorithms (F. Baiardi, D. Ciuffolini, A. M. Lomartire, D. Montanari and G. Pesce), p. 379
Emulating a Paragon XP/S on a Network of Workstations (Georg Stellner, Arndt Bode, Stefan Lamberts and Thomas Ludwig), p. 385
Evaluating VLIW-in-the-large (B. Bacci, E. Chiti, M. Danelutto and M. Vanneschi), p. 393
Implementing a N-Mixed Memory Model on a Distributed Memory System (Vicente Cholvi-Juan and Jose M. Bernabeu-Auban), p. 401
Working Group Report: Reducing the Complexity of Parallel Software Development (Jonathan Schaeffer), p. 409
Working Group Report: Usability of Parallel Programming Systems (Gregory V. Wilson), p. 411
Working Group Report: Skeletons/Templates (Marco Danelutto), p. 415

Introduction
Karsten M. Decker

The promise of scalable computation and storage space provided by Massively Parallel Systems (MPSs) is becoming more and more important for high-performance computing. Growing acceptance of MPSs in academia is clearly visible now. In spite of this fact, and the widespread view that MPSs represent an important technology for progress in science and engineering and, more generally, for commercial competitiveness, the usage of MPSs in industry is still minimal. One obstacle to higher usage is the fact that the programming of MPSs is still a complex task. Alleviating this software problem is sometimes referred to as one of the biggest challenges of the 1990's.

The strategic importance of MPSs for the progress of high-performance computing is recognized in national and international information technology programs in Europe, the USA and Japan. While previous activities in Europe focused on the development of the methodical foundations of massively parallel computing, there is now a strong orientation towards porting real applications to MPSs. In the USA, there is a concentration on the so-called Grand Challenges of science. Japan's recently launched Real World Computing program has a wider scope and investigates the general application of massively parallel systems to soft information processing.

The acceptance of MPSs on the application user level has grown significantly. But, in spite of all the efforts and achievements, their usage still lags far behind expectations. This is especially true for the pragmatic industrial users. From their perspective, ease of MPS use has not yet been achieved, even for applications believed to be well-suited for these architectures.

Analyzing the current situation, one of the major reasons for this failure is the still insufficient level of abstraction provided for programming MPSs. In particular, still missing are high-level programming methods and corresponding tools supporting the demanding design phase of parallel applications. Most of the available programming tools still do not operate on a sufficiently high level of abstraction; often they provide inadequate means to handle the natural level of granularity of an application. Standard high-level programming languages with language extensions or run-time libraries supporting programming of the distributed address space of MPSs require considerable effort and experience to develop efficient programs. To a lesser extent, the latter is also true for higher-level application-oriented programming languages like High Performance Fortran (HPF). Although the underlying programming paradigm of HPF frees the user of programming the …
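As a concrete illustration of the abstraction gap described in the introduction, the sketch below shows the kind of explicit message-passing code that the text says requires "considerable effort and experience": even a trivial global reduction over a block-distributed array forces the programmer to manage decomposition and communication by hand. This is an editorial sketch against the MPI library (covered by two papers in the proceedings), not code from the book; the problem size and workload are hypothetical.

/* Illustrative sketch (not from the proceedings): explicit message passing
 * for a global sum over a block-distributed array. Assumes an MPI
 * implementation; the problem size N and the per-element work are made up. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000L  /* hypothetical global problem size */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* The programmer decides the decomposition: each process owns one block
     * of the distributed address space. */
    long lo = (long)rank * N / size;
    long hi = (long)(rank + 1) * N / size;

    double local = 0.0;
    for (long i = lo; i < hi; ++i)
        local += 1.0 / (double)(i + 1);   /* stand-in for real per-element work */

    /* Explicit communication: combine the partial sums on rank 0. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global);

    MPI_Finalize();
    return 0;
}

In an HPF-style formulation, the same computation would be written on the whole array with a distribution directive, leaving decomposition and communication to the compiler; that is the higher level of abstraction the introduction argues is still largely missing from the available tools.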
