
Parallel Processing for Artificial Intelligence

419 Pages·1994·18.897 MB·English


Machine Intelligence and Pattern Recognition, Volume 14

Series Editors: L.N. KANAL and A. ROSENFELD, University of Maryland, College Park, Maryland, U.S.A.

Parallel Processing for Artificial Intelligence 1

Edited by

Laveen N. KANAL, University of Maryland, College Park, Maryland, U.S.A.
Vipin KUMAR, University of Minnesota, Minneapolis, Minnesota, U.S.A.
Hiroaki KITANO, Sony Computer Science Laboratory, Japan, and Carnegie Mellon University, Pittsburgh, Pennsylvania, U.S.A.
Christian B. SUTTNER, Technical University of Munich, Munich, Germany

1994

NORTH-HOLLAND, AMSTERDAM · LONDON · NEW YORK · TOKYO

ELSEVIER SCIENCE B.V., Sara Burgerhartstraat 25, P.O. Box 211, 1000 AE Amsterdam, The Netherlands

Library of Congress Cataloging-in-Publication Data: Parallel processing for artificial intelligence / edited by L.N. Kanal ... [et al.]. (Machine intelligence and pattern recognition; v. 14-15.) Collection of papers from IJCAI-93, held in Chambéry, France. Includes bibliographical references. ISBN 0-444-81704-2 (v. 1); ISBN 0-444-81837-5 (v. 2). 1. Parallel processing (Electronic computers). 2. Artificial intelligence. I. Kanal, Laveen N. II. International Joint Conference on Artificial Intelligence (1993: Chambéry, France). III. Series. QA76.58.P37775 1994 006.3—dc20 94-15133 CIP.

ISBN: 0 444 81704 2

© 1994 Elsevier Science B.V. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher, Elsevier Science B.V., Copyright & Permissions Department, P.O. Box 521, 1000 AM Amsterdam, The Netherlands.

Special regulations for readers in the U.S.A. - This publication has been registered with the Copyright Clearance Center Inc. (CCC), Salem, Massachusetts.
Information can be obtained from the CCC about conditions under which photocopies of parts of this publication may be made in the U.S.A. All other copyright questions, including photocopying outside of the U.S.A., should be referred to the copyright owner, Elsevier Science B.V., unless otherwise specified.

No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.

This book is printed on acid-free paper.

Printed in The Netherlands

PREFACE

Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The articles are grouped into six sections.

The first four papers address parallel computing for processing and understanding images. Choudhary and Ranka set the stage for this section, titled Image Processing, by examining, as an example, the computationally intensive nature of feature extraction in image processing. They note that vision systems involve low-level, medium-level and high-level operations and require integrating algorithms for image processing, numerical analysis, graph theory, AI and databases. They discuss issues of mesh and pyramid architectures, spatial and temporal parallelism, local and global communication, data decomposition and load balancing, and the status of architectures, programming models and software tools in parallel computing for computer vision and image understanding.
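Data decomposition of the kind Choudhary and Ranka discuss is often introduced through a static block partition of image rows across processors. The sketch below is a generic illustration of that idea, not code from their article; the function name and interface are chosen here for exposition.

```python
# Generic sketch of static row-block data decomposition for an image:
# H rows are split across P processors so that per-processor loads
# differ by at most one row.  (Illustrative only; not from the article.)

def row_blocks(height, nprocs):
    """Return one (start, stop) row range per processor."""
    base, extra = divmod(height, nprocs)
    blocks, start = [], 0
    for p in range(nprocs):
        # The first `extra` processors each take one additional row.
        stop = start + base + (1 if p < extra else 0)
        blocks.append((start, stop))
        start = stop
    return blocks

# A 10-row image on 4 processors:
print(row_blocks(10, 4))  # [(0, 3), (3, 6), (6, 8), (8, 10)]
```

A regular low-level operation such as filtering can then run independently on each block, exchanging only boundary rows between neighbors; it is the irregular medium- and high-level operations that make load balancing difficult, which is the point the article develops.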
The second article, by Chu, Ghosh, and Aggarwal, focuses on high-level operations for image understanding and reports on a parallel implementation of a rule-based image interpretation system on a distributed-memory machine with message passing. The authors discuss the results of their experimental investigation of this implementation from the perspectives of data access locality and task granularity.

The next article, by Gusciora and Webb, presents results of their investigation of several methods for parallel affine image warping on a linear processor array. Evaluating the efficiency, capability, and memory use of the various methods, the authors describe and recommend a method which they call "sweep-based" warping.

In the final article of Section I, Jenq and Sahni examine a class of parallel computers called reconfigurable mesh with bus (RMB). They present fast algorithms for template matching, clustering, erosion and dilation, and area and perimeter computation of image components for some members of the RMB family of architectures.

The articles in Section II discuss parallel processing for semantic networks. Semantic networks are a widely used means for representing knowledge; methods which enable efficient and flexible processing of semantic networks are expected to have high utility for building large-scale knowledge-based systems. Geller presents a method to speed up search in an inheritance tree. The method is based on a parallel class tree representation that uses a list in combination with a preorder number scheme. In addition to standard inheritance, a newly defined operation, upward inductive inheritance, is introduced. The proposed methods exhibit high performance which is largely independent of tree depth and branching factor.

The second article in this section, by Evett, Andersen, and Hendler, describes PARKA, a frame-based knowledge representation system implemented on the Connection Machine.
PARKA performs recognition queries, finding all objects with a set of specified properties, in O(d+m) time, proportional primarily to the depth d of the knowledge base. They have tested PARKA's performance using knowledge from MCC's Cyc commonsense knowledge base, and have obtained very high performance.

Section III deals with the automatic parallel execution of production systems. Production systems are used extensively in building rule-based expert systems. Systems containing large numbers of rules are slow to execute, and can significantly benefit from automatic parallel execution. The article by Amaral and Ghosh provides a survey of the research on parallel processing of production systems. It covers the early systems in which all productions are matched in parallel but only one rule is fired at a time, as well as more recent research in which different rules are fired concurrently. The article concludes with observations on the overall effectiveness of these different approaches and suggestions for further work.

The next article, by Schmölze, deals extensively with the problems involved in concurrent firing of rules. The original production systems were designed for executing one rule at a time. If multiple rules are executed simultaneously, the parallel execution may in some cases lead to undesirable results. Schmölze presents a framework for guaranteeing serializable behavior in the parallel execution of productions.

Section IV deals with the exploitation of parallelism for the mechanization of logic. While sequential control aspects pose problems for the parallelization of production systems (see Section III), logic has a purely declarative interpretation which does not demand a particular evaluation strategy. Therefore, in this area, very large search spaces provide a significant potential for parallelism. In particular, this is true for automated theorem proving.
The three articles in Section IV deal with parallel deduction at various levels, ranging from coarse-grained parallel deduction for predicate logic to connectionist processing of propositional logic.

Suttner and Schumann's article gives a comprehensive survey of the approaches to parallel automated theorem proving for first-order logic. It includes both implemented and proposed systems, and briefly describes for each approach the underlying logic calculus, the model of computation, and available experimental results, together with an assessment. In addition, the authors present a classification scheme for parallel search-based systems which leads to an adequate grouping of the approaches investigated. The orthogonal distinctions proposed are partitioning versus competition parallelization (with further subclasses), and cooperative versus uncooperative parallelization. Together with an extensive list of references, the article provides a classification and overview of the state of the art and the history of parallel theorem proving.

The article by Kurfess primarily provides an overview of the potential application of parallelism in logic. The author first describes various levels at which parallelism can be utilized, ranging from the formula, clause, literal, term, and symbol level down to the subsymbolic level. He then focuses on the prospects of massive parallelism at these levels, and discusses their potential regarding computation, communication, synchronization, and memory. In this regard, the author proposes the "term level" as the most promising. Finally, a fine-grained deductive system is described in a connectionist setting, in which parallel unification is based on a matrix representation of terms.

Pinkas describes a connectionist approach to the mechanization of logic. His method is based on a linear-time transformation of propositional formulas into symmetric connectionist networks representing quadratic energy functions.
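The flavor of such an encoding can be seen on a toy example. This is a hedged sketch of the general idea only, not Pinkas's actual construction: for the single clause (x OR y) over 0/1-valued units, the quadratic energy E(x, y) = (1 - x)(1 - y) is zero exactly at the satisfying assignments.

```python
# Toy illustration of encoding the propositional clause (x OR y) as a
# quadratic energy function.  A sketch of the general idea only, not
# the construction used in the article.

def energy(x, y):
    # E(x, y) = (1 - x)(1 - y) = 1 - x - y + x*y, quadratic in x and y.
    return (1 - x) * (1 - y)

# Enumerate the 0/1 assignments; the global minima (E == 0) are
# precisely the satisfying models of (x OR y).
minima = [(x, y) for x in (0, 1) for y in (0, 1) if energy(x, y) == 0]
print(minima)  # [(0, 1), (1, 0), (1, 1)]
```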
The global minima of such a function are equivalent to the satisfying models of the propositional formula. The author presents simulation results for randomly generated satisfiability problems using three algorithms based on Hopfield networks, Boltzmann machines, and mean-field networks, respectively. He also describes the application of the approach to propositional inference and the incremental adaptation of energy functions to represent changes in the knowledge base.

In the first article in Section V, Zhang and Mackworth consider the problem of constraint satisfaction, which is a useful abstraction of a number of important problems in AI and other fields of computer science. They present parallel formulations of some well-known constraint satisfaction algorithms and analyze their performance theoretically and experimentally. In particular, they show that certain classes of constraint satisfaction problems can be solved in polylog time using a polynomial number of processors. In the second article in this section, Lin and Prasanna discuss the technique of consistent labeling as a preprocessing step in the constraint satisfaction problem and present several parallel implementations of the technique.

Section VI consists of two articles, each on a different, important topic. Palis and Wei discuss parallel formulations for the Tree Adjoining Grammar (TAG), which is a powerful formalism for describing natural languages. The serial complexity of TAG parsing algorithms is much higher than that of simpler grammars such as context-free grammars. Hence it is important to be able to reduce the overall run time using parallel computing. Palis and Wei present two parallel algorithms for TAG parsing, with theoretical and experimental results.

In the final article, Factor, Fertig, and Gelernter discuss the suitability of a parallel programming paradigm called Linda for solving problems in artificial intelligence.
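Linda coordinates processes through a shared tuple space accessed with a few primitives, conventionally out (deposit a tuple), rd (read a matching tuple), and in (withdraw a matching tuple). The single-process sketch below illustrates only the matching idea; in real Linda these operations run concurrently, with in and rd blocking until a match appears, and the class and method names here are chosen purely for illustration.

```python
# Minimal single-process sketch of a Linda-style tuple space.
# Illustrative only: real in/rd block until a matching tuple exists.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, *tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        # None in a pattern position acts as a wildcard ("formal").
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def rd(self, *pattern):
        """Read a matching tuple without removing it."""
        return next(t for t in self.tuples if self._match(pattern, t))

    def in_(self, *pattern):
        """Withdraw a matching tuple from the space."""
        t = self.rd(*pattern)
        self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out("task", 1)
ts.out("task", 2)
first = ts.in_("task", None)  # withdraws ("task", 1)
```

Workers that repeatedly in task tuples and out result tuples give Linda its characteristic uncoupled, master-worker style of coordination.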
They introduce two software architectures, the FGP machine and the Process Trellis. The FGP machine is a software architecture for a database-driven expert system that learns; the Process Trellis is a software architecture for real-time heuristic monitors. Both of these systems have been implemented in the language Linda.

As is noted in the first article in the book, while some of the individual components of parallelism in vision systems may be understood, the overall process of vision remains very much an open problem. A similar remark applies to knowledge representation, reasoning, problem solving, natural language processing and other capabilities desired for machine intelligence. We think parallel processing will be a key ingredient of (partial) solutions in all the above areas. We thank the authors for the articles included here and hope their work inspires some readers to take on these challenging areas of inquiry.

Laveen N. Kanal, College Park, MD
Vipin Kumar, Minneapolis, MN
Hiroaki Kitano, Tokyo, Japan
Christian Suttner, Munich, Germany

EDITORS

Laveen N. Kanal is a Professor of Computer Science at the University of Maryland, College Park, and Managing Director of LNK Corporation, Inc. In 1972 he was elected a Fellow of the IEEE and a Fellow of the American Association for the Advancement of Science. In 1992 he was elected a Fellow of the American Association for Artificial Intelligence and received the King-Sun Fu award of the International Association for Pattern Recognition.

Vipin Kumar is currently an Associate Professor in the Department of Computer Science at the University of Minnesota. His co-authored textbook, Introduction to Parallel Computing, was published in 1993 by Benjamin Cummings. He is on the editorial board of IEEE Transactions on Data and Knowledge Engineering.

Hiroaki Kitano, Ph.D., is with Sony Computer Science Laboratory and Carnegie Mellon University.
In 1993 he received the Computers and Thought Award of the International Joint Conference on Artificial Intelligence.

Christian Suttner is currently writing his doctoral dissertation and working on tools and methods for the utilization of parallel computers at the TU München. With Laveen Kanal he co-edited the Proceedings of a Workshop on Parallel Processing held in Sydney, Australia in August 1991.

AUTHORS

J.K. Aggarwal is Cullen Professor of Electrical and Computer Engineering at The University of Texas at Austin and Director of the Computer and Vision Research Center. An IEEE Fellow, he is an Editor of IEEE Transactions on Parallel and Distributed Systems. In 1992, Dr. Aggarwal received the Senior Research Award of the American Society of Engineering Education.

José Nelson Amaral is a professor at Pontificia Universidade Catolica do Rio Grande do Sul (PUCRS), Brazil. He is currently working towards his Ph.D. at the Univ. of Texas at Austin.

Alok Choudhary is an associate professor at Syracuse University. His research interests include high-performance parallel and distributed computing, software environments and applications. He received an NSF Young Investigator Award in 1993.

Chen-Chau Chu received the Ph.D. degree in electrical and computer engineering from The University of Texas at Austin in 1991. He is currently employed by the Schlumberger Austin Systems Center. His research interests include machine intelligence, computer vision, and data modeling.

Matt Evett received his Ph.D. in computer science from the University of Maryland in 1994, and joined the faculty at Florida Atlantic University in Boca Raton.

James Geller is an associate professor at the Computer and Information Sciences Department of the New Jersey Institute of Technology. He has worked in artificial intelligence and object-oriented databases and is currently involved in a large research project on distance learning.
Joydeep Ghosh is currently an associate professor of electrical and computer engineering at the University of Texas at Austin, where he conducts research on parallel, intelligent computer architectures and artificial neural systems. He received the 1992 Darlington Award for best journal paper from the IEEE Circuits and Systems Society.

George Gusciora expects to receive the Ph.D. in computer engineering from Carnegie Mellon University in February 1994. He plans to work at the Maui High Performance Computing Center in Hawaii. His research interests include parallel programming tools, parallel computer architectures and parallel algorithms.

Jim Hendler is an associate professor and head of the Autonomous Mobile Robots Laboratory at the University of Maryland. He serves as the Artificial Intelligence area editor for the journal Connection Science, is an associate editor of the Journal of Experimental and Theoretical AI, and is on the editorial board of Autonomous Robots.

Jing-Fu Jenq, Ph.D., University of Minnesota, 1991, is currently an assistant professor of physics, mathematics, and computer science at Tennessee State University in Nashville, Tennessee. His research interests include parallel and distributed processing algorithms, computer architectures and systems, and computer vision.

Franz J. Kurfess joined the Neural Information Processing Department at the University of Ulm in 1992 after a stay as a postdoctoral fellow with the International Computer Science Institute in Berkeley. His main interest lies in combining artificial intelligence, neural networks and parallel processing techniques, in particular for logic and reasoning.

Wei-Ming Lin, Ph.D., University of Southern California, 1991, is currently an assistant professor at the University of Texas at San Antonio. His research interests include parallel and distributed computing, parallel processing for artificial intelligence, image processing and computer vision.
Alan Mackworth is a professor of computer science at the University of British Columbia and the Shell Canada Fellow of the AI and Robotics Program of the Canadian Institute for Advanced Research. He currently serves as the Director of the UBC Laboratory for Computational Intelligence. He is known for his work on constraint satisfaction and its applications in perception, reasoning and situated robots.

Michael A. Palis is currently an associate professor of electrical and computer engineering at the New Jersey Institute of Technology. He is an editorial board member of the IEEE Transactions on Computers, and a subject area editor of the Journal of Parallel and Distributed Computing.

Gadi Pinkas is currently director of research and development at Amdocs Inc., a senior fellow of the Center for Optimization and Semantic Control, and a senior research associate at the Department of Computer Science at Washington University, St. Louis, Missouri.
