
Parallel Algorithms for Machine Intelligence and Vision

444 pages · 1990 · English

Preview Parallel Algorithms for Machine Intelligence and Vision

SYMBOLIC COMPUTATION - Artificial Intelligence

Managing Editor: D.W. Loveland
Editors: S. Amarel, A. Biermann, L. Bolc, A. Bundy, H. Gallaire, P. Hayes, A. Joshi, D. Lenat, A. Mackworth, E. Sandewall, J. Siekmann, W. Wahlster, R. Reiter

Springer Series SYMBOLIC COMPUTATION - Artificial Intelligence

N.J. Nilsson: Principles of Artificial Intelligence. XV, 476 pages, 139 figs., 1982
J.H. Siekmann, G. Wrightson (Eds.): Automation of Reasoning 1. Classical Papers on Computational Logic 1957-1966. XII, 525 pages, 1983
J.H. Siekmann, G. Wrightson (Eds.): Automation of Reasoning 2. Classical Papers on Computational Logic 1967-1970. XII, 637 pages, 1983
L. Bolc (Ed.): The Design of Interpreters, Compilers, and Editors for Augmented Transition Networks. XI, 214 pages, 72 figs., 1983
M.M. Botvinnik: Computers in Chess. Solving Inexact Search Problems. XIV, 158 pages, 48 figs., 1984
L. Bolc (Ed.): Natural Language Communication with Pictorial Information Systems. VII, 327 pages, 67 figs., 1984
R.S. Michalski, J.G. Carbonell, T.M. Mitchell (Eds.): Machine Learning. An Artificial Intelligence Approach. XI, 572 pages, 1984
A. Bundy (Ed.): Catalogue of Artificial Intelligence Tools. Second, Revised Edition. XVII, 168 pages, 1986
C. Blume, W. Jakob: Programming Languages for Industrial Robots. XIII, 376 pages, 145 figs., 1986
J.W. Lloyd: Foundations of Logic Programming. Second, Extended Edition. XII, 212 pages, 1987
L. Bolc (Ed.): Computational Models of Learning. IX, 208 pages, 34 figs., 1987
L. Bolc (Ed.): Natural Language Parsing Systems. XVIII, 367 pages, 151 figs., 1987
N. Cercone, G. McCalla (Eds.): The Knowledge Frontier. Essays in the Representation of Knowledge. XXXV, 512 pages, 93 figs., 1987
(continued after index)

Vipin Kumar, P.S. Gopalakrishnan, Laveen N. Kanal (Editors)

Parallel Algorithms for Machine Intelligence and Vision

With 148 Illustrations

Springer-Verlag: New York, Berlin, Heidelberg, London, Paris, Tokyo, Hong Kong

Vipin Kumar, Computer Science Department, University of Minnesota, Minneapolis, MN 55455, USA
P.S. Gopalakrishnan, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA
Laveen N. Kanal, LNK Corporation, Riverdale, MD 20737, USA

Library of Congress Cataloging-in-Publication Data
Parallel algorithms for machine intelligence and vision / Vipin Kumar, P.S. Gopalakrishnan, Laveen N. Kanal, editors.
p. cm. - (Symbolic Computation. Artificial Intelligence)
Includes bibliographical references.
ISBN-13: 978-1-4612-7994-5
e-ISBN-13: 978-1-4612-3390-9
DOI: 10.1007/978-1-4612-3390-9
1. Parallel processing (Electronic computers). 2. Artificial intelligence. 3. Computer vision. I. Kumar, Vipin. II. Gopalakrishnan, P.S. III. Kanal, Laveen N. IV. Series.
QA76.5.P31457 1990   004'.35-dc20   89-77830

Printed on acid-free paper.

© 1990 by Springer-Verlag New York Inc.
Softcover reprint of the hardcover 1st edition 1990

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag, 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.

The use of general descriptive names, trade names, trademarks, etc. in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.

Camera-ready text supplied by the editors using TeX.

9 8 7 6 5 4 3 2 1

ISBN-13: 978-1-4612-7994-5

Preface

Many algorithms for solving machine intelligence and vision problems are computationally very demanding. Algorithms used for decision making, path planning, machine vision, speech recognition, and pattern recognition require substantially more power than is available today from commercially feasible sequential computers. Although the speed of sequential computers has been increasing over time, there are indications that solid-state physics will impose limits that cannot be circumvented except through parallel processing. Parallel processing can also be very cost-effective, as advances in VLSI technology have made it easy and inexpensive to construct large parallel processing systems. Hence, there has been great interest in the development of parallel algorithms for these problems.

This volume brings together some of the recent research on parallel algorithms for machine intelligence and vision. It includes papers on subjects such as combinatorial search, problem solving, logic programming, and computer vision.

The book begins with several papers that deal with parallel algorithms for state-space search and game tree search. Search permeates all aspects of artificial intelligence (AI), including problem solving, planning, learning, decision making, and natural language understanding. Even though knowledge is often used to reduce search, the complexity of many AI programs can be attributed to large potential solution spaces that have to be searched. Hence there is a great need for implementing search on parallel hardware. Search problems contain control-level parallelism as opposed to data-level parallelism, and hence are more difficult to parallelize. This has led many researchers to believe, incorrectly, that search problems have only a limited amount (less than one order of magnitude) of parallelism. The research reported in the next four papers indicates that it is feasible to exploit large-scale parallelism in search problems.

Kumar and Rao present a parallel formulation of depth-first search which retains the storage efficiency of sequential depth-first search and can be implemented on any MIMD parallel processor. The authors provide a thorough experimental evaluation of the technique in the context of the 15-puzzle problem. At the heart of the formulation is a work-distribution method that divides the work dynamically among different processors. The authors investigate a number of different work-distribution methods and evaluate them experimentally and analytically on different architectures. Using a metric called the isoefficiency function, the authors determine that many of the work-distribution techniques they introduce are highly scalable and also nearly optimal for many interesting parallel architectures.
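The flavor of such dynamic work distribution can be conveyed with a small sketch. The following is not the authors' formulation but a minimal Python illustration, assuming a shared-memory machine: each worker runs an ordinary depth-first search on its own stack and, whenever a shared pool of work looks empty, donates one unexpanded node for idle workers to pick up. The uniform tree, the donation policy, and names such as dfs_worker are invented for the example, and CPython threads give no real speedup because of the global interpreter lock, so only the control structure is of interest.

```python
# Minimal sketch of parallel depth-first search with dynamic work distribution.
# Illustrative only; not the formulation from the Kumar-Rao chapter.
import threading
import queue

BRANCH, DEPTH, WORKERS = 3, 10, 4

shared_work = queue.Queue()        # donated (node, depth) pairs wait here
nodes_seen = [0] * WORKERS         # per-worker expansion counts

def dfs_worker(wid):
    stack = []
    while True:
        if not stack:
            try:                               # idle: wait briefly for donated work
                stack.append(shared_work.get(timeout=0.2))
            except queue.Empty:
                return                         # assume the search is finished (simplification)
        node, depth = stack.pop()
        nodes_seen[wid] += 1
        if depth == DEPTH:
            continue                           # leaf of the implicit tree
        children = [(node * BRANCH + i, depth + 1) for i in range(BRANCH)]
        # Donation policy (illustrative): if the shared pool looks empty, give away one
        # child for some idle worker to expand instead of expanding it locally.
        if shared_work.empty() and len(children) > 1:
            shared_work.put(children.pop())
        stack.extend(children)

shared_work.put((0, 0))                        # root of the implicit search tree
threads = [threading.Thread(target=dfs_worker, args=(w,)) for w in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("nodes expanded per worker:", nodes_seen, "total:", sum(nodes_seen))
```

Real implementations differ mainly in the donation policy (which nodes are given away, and to whom) and in how termination is detected; it is precisely such choices that the work-distribution and isoefficiency analysis above is concerned with.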
Powley, Ferguson, and Korf present two approaches to parallel heuristic search. The first approach studied in this chapter is tree decomposition, in which different processors explore different parts of the search space. The scheme is applicable to both single-agent tree search and two-player game tree search. The authors discuss several processor-allocation strategies that are useful for different search spaces. A parallel alpha-beta search algorithm is developed using this scheme and is evaluated experimentally and analytically. The second approach discussed in this chapter is parallel window search, in which each processor searches the whole tree but with different cost bounds. The authors also discuss certain node-ordering strategies that can be combined with parallel window search to enhance its effectiveness. The overall speedup that can be obtained with this approach is limited if one is looking for an optimal solution, but the approach can be used to find a good suboptimal solution quickly on a parallel processor.
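The parallel window idea can be illustrated with a toy sketch. The following Python fragment is not taken from the chapter: it searches a small random game tree with ordinary fail-hard alpha-beta, giving every worker the same tree but a different (alpha, beta) window. The worker whose window brackets the true minimax value returns it exactly, while the others fail high or low. The leaf values, the window boundaries, and the function names are all assumptions made for the example.

```python
# Toy illustration of parallel window search: same tree, different windows per worker.
from concurrent.futures import ProcessPoolExecutor
import random

random.seed(0)
DEPTH, BRANCH = 5, 3
LEAVES = [random.randint(-10, 10) for _ in range(BRANCH ** DEPTH)]   # implicit uniform game tree

def alphabeta(node, depth, alpha, beta, maximizing):
    """Alpha-beta search; the result is exact only if it lies strictly inside (alpha, beta)."""
    if depth == 0:
        return LEAVES[node]
    if maximizing:
        value = alpha
        for i in range(BRANCH):
            value = max(value, alphabeta(node * BRANCH + i, depth - 1, value, beta, False))
            if value >= beta:
                return value                   # cutoff: true value is at least beta
        return value
    else:
        value = beta
        for i in range(BRANCH):
            value = min(value, alphabeta(node * BRANCH + i, depth - 1, alpha, value, True))
            if value <= alpha:
                return value                   # cutoff: true value is at most alpha
        return value

def search_window(window):
    alpha, beta = window
    return alpha, beta, alphabeta(0, DEPTH, alpha, beta, True)

if __name__ == "__main__":
    # Disjoint windows, one per worker; boundaries at half-integers so the integer
    # minimax value can never fall exactly on a boundary.
    bounds = [-10.5, -5.5, -0.5, 4.5, 10.5]
    windows = list(zip(bounds, bounds[1:]))
    with ProcessPoolExecutor() as pool:
        for alpha, beta, value in pool.map(search_window, windows):
            if alpha < value < beta:
                print(f"window ({alpha}, {beta}): minimax value is exactly {value}")
            else:
                print(f"window ({alpha}, {beta}): failed {'high' if value >= beta else 'low'}")
```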
Another approach to parallel game tree search is presented by Feldmann, Monien, Mysliwietz, and Vornberger in the next chapter. Parallel implementations of the alpha-beta pruning algorithms used for searching game trees suffer from search overheads and communication overheads. Search overhead is the extra work done by the parallel algorithm because bound information is unavailable; communication overhead is the time spent sharing information between processors. The authors introduce two new concepts designed to minimize these overheads. They present experimental evidence showing impressive speedups using their algorithms for searching chess game trees.

The chapter by Wah, Li, and Yu ties together several approaches to parallel combinatorial search. The authors attempt to identify the functional requirements of various search algorithms, with the objective of assessing whether a general-purpose architecture is suitable for a given search problem and of developing efficient mappings of the algorithms to architectures. They also discuss special-purpose architectures for combinatorial search problems. Three different representations for search problems are studied: AND trees, OR trees, and AND/OR trees. The authors describe a multiprocessor architecture for solving branch-and-bound search problems (OR tree representations). They discuss certain anomalies that arise in parallel search algorithms of this type and present necessary and sufficient conditions for eliminating such anomalies.

Developing a parallel algorithm and appropriately mapping it to a parallel architecture seems intrinsically harder than writing a sequential program for most practical problems. Ideally, a programmer should be able to write a program in a high-level language and the system should exploit the inherent parallelism in the program automatically. A possible option, especially in the context of AI applications, is to write the program in a logic programming language such as Prolog and exploit the parallelism automatically. Prolog-type languages are especially suited for AI problems, as they can embody parallelism due to problem reduction (AND-parallelism) as well as nondeterminism (OR-parallelism). The next three papers deal with this topic.

The paper by Kale presents an overview of the author's research on parallel problem solving. The author discusses the strong relationships among problem solving, theorem proving, and logic programming, and describes a parallel execution scheme for logic programs. The author also provides an overview of a runtime support system called the Chare Kernel that runs on shared-memory as well as distributed-memory systems.

Giuliano, Kohli, Minker, and Durand provide an overview of research on parallel logic programming with the PRISM system. The PRISM system provides an experimental tool for designing and evaluating control strategies and program structures to be used in parallel problem solvers. The system has been implemented and experimentally evaluated on a 100-processor BBN Butterfly shared-memory multiprocessor as well as on a 16-processor McMob (a ring-connected message-passing multicomputer). The authors provide details of the design philosophy, the experimental setup, and the various experiments performed on the Butterfly and the McMob.

The chapter by Hopkins, Hirschman, and Smith reports the results of a series of simulation experiments aimed at automatically exploiting parallelism in natural language parsing. The parsing program is written in Prolog without considering the fact that it may be executed on a parallel processor. The execution system exploits OR-parallelism in the Prolog program automatically. In the context of parsing, this means that whenever more than one grammar rule is applicable, the alternatives are pursued simultaneously. The results of the experiments indicate that it is possible to obtain substantial speedups in realistic settings. Moreover, if a sufficient number of processors is available, the parse time increases only linearly rather than as O(N^3).
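A rough sense of what OR-parallel execution means for parsing is given by the sketch below. It is not the Prolog system described in the chapter: a tiny hand-written grammar over pre-tagged word categories is parsed by a naive derivation check, and only the alternatives at the outermost choice point are handed to separate workers, whereas a real OR-parallel system would spawn work at every choice point. The grammar, the sentence, and all names are illustrative.

```python
# Toy sketch of OR-parallelism in parsing: alternative rules explored simultaneously.
from concurrent.futures import ProcessPoolExecutor

GRAMMAR = {                        # a tiny grammar over pre-tagged word categories
    "S":  [["NP", "VP"], ["VP"]],              # declarative or imperative sentence
    "NP": [["det", "noun"], ["det", "noun", "PP"]],
    "VP": [["verb", "NP"], ["verb", "NP", "PP"]],
    "PP": [["prep", "NP"]],
}

def derive(symbols, tokens):
    """Sequentially test whether `symbols` derives exactly `tokens`, trying OR alternatives one by one."""
    if not symbols:
        return len(tokens) == 0
    head, rest = symbols[0], symbols[1:]
    if head not in GRAMMAR:                    # terminal symbol: must match the next token
        return bool(tokens) and tokens[0] == head and derive(rest, tokens[1:])
    # Nonterminal: an OR choice point; at this inner level the alternatives are tried sequentially.
    return any(derive(alt + rest, tokens) for alt in GRAMMAR[head])

def try_alternative(args):
    alternative, tokens = args
    return derive(alternative, tokens)

if __name__ == "__main__":
    sentence = ["det", "noun", "verb", "det", "noun", "prep", "det", "noun"]
    # OR-parallel step: the alternatives at the outermost choice point (the rules for S)
    # are explored by separate workers instead of one after another.
    tasks = [(alternative, sentence) for alternative in GRAMMAR["S"]]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(try_alternative, tasks))
    print("sentence parses:", any(results))
```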
The next set of papers deals with parallel algorithms for problems in computer vision. A good overview of this entire area is provided by Chaudhary and Aggarwal, who present a survey of current research results on parallel implementations of computer vision algorithms. Vision algorithms are usually classified into three levels: low, intermediate, and high. Low-level vision tasks include clustering, smoothing, convolution, histogram generation, thinning, and template matching. Intermediate-level tasks include region labeling, stereo, motion, and relaxation, and tasks such as object recognition are classified as high-level vision problems. A high degree of data parallelism is evident in low-level tasks, and many researchers have developed novel parallel algorithms to solve such problems on a variety of architectures. Intermediate- and high-level problems are harder to parallelize. This chapter presents a comprehensive survey of parallel algorithms for problems in these three areas. The relative merits and shortcomings of various algorithms are discussed, and an extensive list of references is provided for the reader interested in further details.

Verghese, Gale, and Dyer address an important problem in image analysis: the tracking of three-dimensional motion from the motion of two-dimensional features in a sequence of images. In order to reconstruct a signal that is continuous in both space and time, it is assumed that the sampling rates in space and time are high. Thus images arrive at a high rate, and since memory buffer space is usually limited, processing them requires high throughput. The authors address this by designing a parallel algorithm and implementing it on two tightly coupled multiprocessors, the Aspex PIPE and the Sequent Symmetry. Two general solution paradigms are implemented, and the performance of each is analyzed.

In the next chapter, Stewart and Dyer present a parallel simulation of a connectionist stereo algorithm on a shared-memory multiprocessor. The connectionist model of computation has been found to be suitable for a number of vision problems, including the matching of a pair of stereo images. In the absence of neural hardware in which a very large number of simple computing units are interconnected, the neural computation has to be simulated on a von Neumann computer. In such a simulation, the computation to be performed by each neural unit has to be done serially on the sequential computer. Since there is a great deal of regularity in these computations, and many of them are independent, they can also be done on parallel hardware. Stewart and Dyer present results of an implementation of such a system on a commercially available shared-memory parallel computer, and discuss possible implementations on distributed-memory multiprocessors.

The next two chapters present analyses of the complexity of several parallel algorithms based on asymptotic arguments, a very powerful tool for understanding the limits of the speedup achievable with parallel algorithms. Ranka and Sahni review some efficient algorithms for image template matching on various architectures. They present algorithms for systolic arrays, meshes, pyramids, and hypercube machines, assuming that each processor has only a small, fixed amount of memory. They also present algorithms for medium-grain machines, assuming that each processor's memory is proportional to the number of pixels in the image. Besides presenting novel algorithms and theoretical bounds on performance, this chapter contains extensive experimental data from implementations on an NCUBE parallel computer and a Cray-2 supercomputer.
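The data parallelism available in a low-level task such as template matching is easy to picture with a small sketch. The following Python fragment is not from the Ranka and Sahni chapter: it simply computes a sum-of-squared-differences score at every candidate position of a synthetic image, dividing the candidate rows among a few worker processes. The image, the template, and all names are invented for the illustration.

```python
# Data-parallel sketch of sum-of-squared-differences template matching.
# Illustrative only; rows of the search area are divided among worker processes.
from concurrent.futures import ProcessPoolExecutor
import random

random.seed(1)
H, W, TH, TW = 64, 64, 8, 8
IMAGE = [[random.randint(0, 255) for _ in range(W)] for _ in range(H)]
TEMPLATE = [row[20:20 + TW] for row in IMAGE[30:30 + TH]]   # plant the template at (30, 20)

def ssd(r, c):
    """Sum of squared differences between the template and the image window at (r, c)."""
    return sum((IMAGE[r + i][c + j] - TEMPLATE[i][j]) ** 2
               for i in range(TH) for j in range(TW))

def best_in_rows(rows):
    """Each worker scans its strip of candidate rows and reports its best match."""
    return min((ssd(r, c), r, c) for r in rows for c in range(W - TW + 1))

if __name__ == "__main__":
    candidate_rows = list(range(H - TH + 1))
    strips = [candidate_rows[k::4] for k in range(4)]        # 4 workers, interleaved rows
    with ProcessPoolExecutor(max_workers=4) as pool:
        score, r, c = min(pool.map(best_in_rows, strips))
    print(f"best match at row {r}, col {c} with SSD {score}")  # expect (30, 20) with SSD 0
```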
The chapter by Eshaghian and Prasanna Kumar is of a theoretical and exploratory nature. They present a novel architecture for image processing problems. Several parallel machine architectures have been studied for computer vision tasks; many of them suffer from communication delays resulting from the limited connectivity between processors. The authors propose a new architecture based on free-space optics that provides unit-time interconnections between processors, and they show efficient parallel solutions to several problems in image processing. They present the optical machine model, introduce possible physical realizations, and present algorithms for finding connected components, determining convex hulls, and finding nearest neighboring figures.

The research reported in this volume demonstrates that substantial parallelism can be exploited in various machine intelligence and vision problems. Some of these papers, as well as other recent research, indicate that the early pessimism about the usefulness of parallel processing for AI was unfounded. This pessimism was partly due to Minsky's conjecture that the speedup obtained using a parallel computer increases as the logarithm of the number of processing elements. It appears that substantial parallelism in AI problems can be exploited even for higher-level knowledge representations and structural relationships. Progress is also being made in designing fast parallel algorithms for more problems in lower-level analysis of data, such as in machine vision and pattern recognition. But, indeed, much remains to be done. We hope that the work reported here will help stimulate additional research on these topics.

The editors thank the authors for their cooperation in preparing and revising their chapters. Each chapter was reviewed by the editors as well as by other anonymous referees. We are grateful to the reviewers, Jake Aggarwal, F. Warren Burton, Vipin Chaudhary, Chris Ferguson, Joydeep Ghosh, Sanjay Kale, Richard Korf, Burkhard Monien, Peter Mysliwietz, V.N. Rao, Boaz Super, and Benjamin Wah, for their invaluable assistance in our attempt to make this book of value to the community of researchers, students, and teachers interested in parallel algorithms for machine intelligence and computer vision. We would like to thank the management at IBM Research for providing the opportunity and encouragement for the second editor to work on this book. We also thank L.N.K. Corporation for administrative assistance in the initial stages of planning and correspondence for the book.

Vipin Kumar
P.S. Gopalakrishnan
Laveen N. Kanal

Contents

Preface ............................................................................ v

Scalable Parallel Formulations of Depth-First Search ....................... 1
VIPIN KUMAR and V. NAGESHWARA RAO

Parallel Heuristic Search: Two Approaches ................................. 42
CURT POWLEY, CHRIS FERGUSON, and RICHARD E. KORF

Distributed Game Tree Search .............................................. 66
R. FELDMANN, B. MONIEN, P. MYSLIWIETZ, and O. VORNBERGER

Multiprocessing of Combinatorial Search Problems ......................... 102
BENJAMIN W. WAH, GUO-JIE LI, and CHEE-FEN YU

Parallel Problem Solving ................................................. 146
L.V. KALE

PRISM: A Testbed for Parallel Control .................................... 182
MARK E. GIULIANO, MADHUR KOHLI, JACK MINKER, and IRENE DURAND

Or-Parallelism in Natural Language Parsing ............................... 232
WILLIAM C. HOPKINS, LYNETTE HIRSCHMAN, and ROBERT C. SMITH

Parallelism in Computer Vision: A Review ................................. 271
VIPIN CHAUDHARY and J.K. AGGARWAL

Real-Time, Parallel Motion Tracking of Three Dimensional Objects from Spatiotemporal Sequences ............ 310
GILBERT VERGHESE, KAREY LYNCH GALE, and CHARLES R. DYER

Parallel Simulation of a Connectionist Stereo Algorithm on a Shared-Memory Multiprocessor ............ 340
CHARLES V. STEWART and CHARLES R. DYER

Parallel Algorithms for Image Template Matching .......................... 360
SANJAY RANKA and SARTAJ SAHNI
