Distributed Artificial Intelligence

Edited by Michael N. Huhns
Microelectronics and Computer Technology Corporation, Austin, Texas

Pitman, London
Morgan Kaufmann Publishers, Inc., Los Altos, California

PITMAN PUBLISHING, 128 Long Acre, London WC2E 9AN

© Michael N. Huhns (Editor) 1987. Copyright of the volume as a whole is the Editor's, although copyright of an individual paper within the volume is retained by its author(s).

First published 1987, reprinted 1988.

Available in the Western Hemisphere from Morgan Kaufmann Publishers, Inc., 2929 Campus Drive, San Mateo, California 94403.

ISSN 0268-7526

British Library Cataloguing in Publication Data
Distributed artificial intelligence. (Research notes in artificial intelligence, ISSN 0268-7526)
1. Artificial intelligence—Data processing  2. Electronic data processing—Distributed processing
I. Huhns, Michael N.  II. Series
006.3  Q336
ISBN 0-273-08778-9

Library of Congress Cataloging in Publication Data
Huhns, Michael N. Distributed artificial intelligence. (Research notes in artificial intelligence)
Bibliography: p. Includes index.
1. Artificial intelligence.  2. Electronic data processing—Distributed processing.  3. Problem solving—Data processing.
I. Title.  II. Series: Research notes in artificial intelligence (London, England)
Q335.H77 1987  006.3  86-33259
ISBN 0-934613-38-9

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording and/or otherwise, without either the prior written permission of the publishers or a licence permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, 33-34 Alfred Place, London WC1E 7DP. This book may not be lent, resold, hired out or otherwise disposed of by way of trade in any form of binding or cover other than that in which it is published, without the prior consent of the publishers.
Produced by Longman Group (FE) Ltd. Printed in Hong Kong.

Foreword

This monograph presents a collection of papers describing the current state of research in distributed artificial intelligence (DAI). DAI is concerned with the cooperative solution of problems by a decentralized group of agents. The agents may range from simple processing elements to complex entities exhibiting rational behavior. The problem solving is cooperative in that mutual sharing of information is necessary to allow the group as a whole to produce a solution. The group of agents is decentralized in that both control and data are logically and often geographically distributed. The papers describe architectures and languages for achieving cooperative problem solving in a distributed environment, and include several successful applications of distributed artificial intelligence in manufacturing, information retrieval, and distributed sensing.

Appropriateness of DAI

As the first available text on distributed artificial intelligence, this book is intended to fulfill several needs. For researchers, the research papers and descriptions of applications are representative of contemporary trends in distributed artificial intelligence, and should spur further research. For system developers, the applications delineate some of the possibilities for utilizing DAI. For students in artificial intelligence and related disciplines, this book may serve as the primary text for a course on DAI or as a supplementary text for a course in artificial intelligence. A preliminary version of the book has already been used successfully as a text for an industrial short course and for a graduate seminar on DAI. Finally, this book is intended as a reference for students and researchers in other disciplines, such as psychology, philosophy, robotics, and distributed computing, who wish to understand the issues of DAI.

There are five primary reasons why one would want to utilize and study DAI.

1. DAI can provide insights and understanding about interactions among humans, who organize themselves into various groups, committees, and societies in order to solve problems.

2. DAI can provide a means for interconnecting multiple expert systems that have different, but possibly overlapping, expertise, thereby enabling the solution of problems whose domains are outside that of any one expert system.

3. DAI can potentially solve problems that are too large for a centralized system, because of resource limitations induced by a given level of technology. Limiting factors such as communication bandwidths, computing speed, and reliability result in classes of problems that can be solved only by a distributed system.

4. DAI can potentially provide a solution to a current limitation of knowledge engineering: the use of only one expert. If there are several experts, or several nonexperts that together have the ability of an expert, there is no established way to use them to engineer a successful system.

5. DAI is the most appropriate solution when the problem itself is inherently distributed, such as in distributed sensor nets and distributed information retrieval.

Distributed artificial intelligence also provides the next step beyond current expert systems. It suggests the following approach to their development: build a separate subsystem for each problem domain and based on the ability of each expert, and then make these subsystems cooperate. This approach potentially has the following additional advantages:

1. Modularity: The complexity of an expert system increases rapidly as the size of its knowledge base increases. Partitioning the system into N subsystems reduces the complexity by significantly more than a factor of N. The resultant system is easier to develop, test, and maintain.

2. Speed: The subsystems can operate in parallel.

3. Reliability: The system can continue to operate even if part of it fails.

4. Knowledge acquisition: It is easier to find experts in narrow domains. Also, many problem domains are already partitioned or hierarchical—why not take advantage of this?

5. Reusability: A small, independent expert system could be a part of many distributed expert systems—its expertise would not have to be reimplemented for each.

Research Directions in DAI

Current research in DAI can be characterized and understood in the context of the following paradigm: there is a collection of agents (logically distinct processing elements) which are attempting to solve a problem. Although there are identifiable global states and goals of the problem solving, each agent has only a partial and inexact view of these.¹ Each agent attempts to recognize, predict, and influence the global state such that its local view of the goals is satisfied. The methods it has available are to compute solutions to subproblems, which are in general interdependent, and to communicate results, tasks, goals, estimates of the problem-solving state, etc. Research is underway on all aspects of this paradigm, and a cross-section of it—organized into the three parts Theoretical Issues, Architectures and Languages, and Applications and Examples—is presented in this book.

The first part, Theoretical Issues, addresses ways to develop 1) control abstractions that efficiently guide problem solving, 2) communication abstractions that yield cooperation, and 3) description abstractions that result in effective organizational structure. In Chapter 1, Ginsberg describes principles of rationality by which agents solving interdependent subproblems can achieve their goals without communicating explicit control information among themselves. Another way in which an agent can achieve its goals is to influence a future global state by reorganizing and managing other agents: these then work to achieve the global state desired by the first.
This is sometimes done by transmitting tasks, and possibly the knowledge needed to solve them, to the other agents. These organizational issues are addressed in the paper by Durfee et al. in Chapter 2; further, Chapter 3 presents a language for describing, forming, and controlling organizations of agents.

The second part of this book describes architectures for developing and testing DAI systems. Bisiani et al., Gasser et al., and Green have each constructed generic environments within which DAI systems can be developed (Chapters 4-6). These are medium-scale systems which could involve up to several hundred agents. By contrast, the system described by Shastri in Chapter 7 potentially could utilize the processing power of thousands of agents in a connectionist encoding of the inheritance and categorization features of semantic nets. In Chapter 8, Sridharan presents a way in which various algorithms can be executed efficiently and naturally on large collections of processors.

The applications described in the third part of the book, Chapters 9-11, comprise manufacturing, office automation, and man-machine interactions. In addition, the annotated bibliography in Chapter 12 should provide a useful guide to the remainder of the field.

¹ The global state may not be determinate if the distributed system is asynchronous.

To enable individual research efforts (like those described in this book) to be related to the field as a whole, participants at the Sixth Workshop on DAI identified eight dimensions by which DAI systems could be classified.² A version of these dimensions and the attributes that define a spectrum of possible values for each dimension are shown in the following table:

Table 1: Dimensions for Categorizing DAI Systems

    Dimension           Spectrum of Values
    System Model        Individual ... Committee ... Society
    Grain               Fine ... Medium ... Coarse
    System Scale        Small ... Medium ... Large
    Agent Dynamism      Fixed ... Programmable ... Teachable ... Autodidactic
    Agent Autonomy      Controlled ... Interdependent ... Independent
    Agent Resources     Restricted ... Ample
    Agent Interactions  Simple ... Complex
    Result Formation    By Synthesis ... By Decomposition

In this table, grain is the level of decomposition of the problem, system model refers to the number of agents used to solve the problem, and system scale refers to the number of computing elements used to implement the agents. Some of the attributes apply to an entire DAI system, while the rest are used to characterize the individual agents in the system. Classifying the papers in this volume according to these dimensions yields the following results:

Table 2: Categorizing the DAI Systems in this Book

    Dimension           Spectrum of Values
    System Model        7             1           11         2,3,4,5,6,9,10
    Grain               7             2,3,8,9     10,4,5,6,1,11
    System Scale        1             2,3,4,5,10  6,8,9,11   7
    Agent Dynamism      8             7,11        1          2,9,10
    Agent Autonomy      8             2,3,11      1,7,9,10
    Agent Resources     7,10,11       2,3         1,9        1
    Agent Interactions  7             1,9,10      2,3,4,5,6,11
    Result Formation    2,3,6,7,9,11  1           10

The numbers in the table refer to chapters in this book where the DAI systems are described. Not all dimensions are relevant to all systems. In particular, some chapters describe general-purpose architectures and languages for DAI that would be characterized by a range of possible attribute values for some of the dimensions. However, the table shows the broad coverage of DAI which this book presents.

² N. S. Sridharan, "Report on the 1986 Workshop on Distributed Artificial Intelligence," to appear in AI Magazine, 1987.

Acknowledgements

I would like to thank Michael Genesereth and Matthew Ginsberg for organizing the Fifth Workshop on Distributed Artificial Intelligence, held at Sea Ranch, California in December 1985. The papers in this monograph were first presented there, and the format of this book is based on their collection and categorization of these papers. The papers themselves represent the contributions of some of the leading researchers in the field.
I am grateful to Microelectronics and Computer Technology Corporation for providing the environment and support for artificial intelligence research which made this compilation possible. I would also like to thank Dr. N. S. Sridharan for encouraging and inspiring me to edit this monograph.

—Michael N. Huhns

Chapter 1
Decision Procedures
Matthew L. Ginsberg

Abstract

Distributed artificial intelligence is the study of how a group of individual intelligent agents can combine to solve a difficult global problem; the usual approach is to split the original problem into simpler ones and to attack each of these independently. This paper discusses in very general terms the problems which arise if the subproblems are not independent, but instead interrelate in some way. We are led to a single assumption, which we call common rationality, that is provably optimal (in a formal sense) and which enables us to characterize precisely the communication needs of the participants in multiagent interactions. An example of a distributed computation using these ideas is presented.

1.1 Introduction

The thrust of research in distributed artificial intelligence (DAI) is the investigation of the possibility of solving a difficult problem by presenting each of a variety of machines with simpler parts of it. The approach that has been taken has been to consider the problem of dividing the original problem: what subtasks should be pursued at any given time? To which available machine should a given subtask be assigned? The question of how the individual machines should go about solving their subproblems has been left to the non-distributed AI community (or perhaps to a recursive application of DAI techniques). The assumption underlying this approach—that each of the agents involved in the solution of the subproblems can proceed independently of the others—has recently been called into question [2,3,6,7,10].
It has been realized that, in a world of limited resources, it is inappropriate to dedicate a substantial fraction of those resources to each processor. The increasing attractiveness of parallel architectures in which processors share memory is an example of this: memory is a scarce resource.

Automated factories must inevitably encounter similar difficulties. Are the robots working in such factories to be given distinct bins of component parts, and nonoverlapping regions in which to work or to travel from one area of the factory to another? It seems unlikely.

My intention in this paper is to discuss these issues at a very basic (i.e., formal) level. I will be interested in situations where:

1. A global goal has been replaced by a variety of local ones, each pursued by an individual agent or process, and

2. The actions of the individual agents may interact, in that success or failure for one such agent may be partially or wholly contingent upon an action taken by another.

The first of these is a massive disclaimer. I will not be concerned with the usual problem of dividing the global goal, but will assume that this has already been done. My intention is merely to remove the constraint that the subproblems or subagents cannot interact; the problem of subdividing problems in the absence of this constraint is outside the intended scope of this paper.

The second remark above explicitly allows for "partial" success or failure—corresponding, for example, to the speed with which a given subgoal is achieved.

In the case where the agents do not interact, we will assume that each knows enough to evaluate the results of its possible actions. For agent i, this evaluation will be incorporated in a payoff function p which assigns to any alternative m the value of that course of action. Thus if M represents the set of all of i's possible courses of action, p is simply a function

    p : M → ℝ.    (1.1)

That the range of p is ℝ as opposed to {0,1} reflects our allowing for a varying degree of success or failure. Determining the function p is a task that lies squarely within the province of non-distributed AI research. Given such a function, selecting the alternative m which maximizes p(m) is straightforward. In the absence of interaction, the problems of the individual agents would now be solved. If the success or failure of some agent depends on actions taken by another, however, this is not the case: the function p in Equation 1.1 will have as its domain the set of actions available to all of the agents, as opposed to a single one.

This sort of problem has been discussed extensively by game theorists. They do, however, generally make the assumption that the agents have common knowledge of each others' payoff functions. This need not be valid in an AI setting. We will address this problem in due course; our basic view is that the fundamental role of communication between interacting intelligent agents is to establish an agreed payoff function as described in the last paragraph. Before turning to this, let us examine in an informal way some situations in which interaction is important.

The first is one which we will refer to as coordination. Suppose that two robots, one in Boston and the other in Palo Alto, decide to meet in Pittsburgh to build a widget. Now, building a widget is a complicated procedure unless you happen to have both a zazzyfrax and a borogove, although either in isolation is useless. Zazzyfraxen are available only in New York, and borogoves only in San Francisco; should the robots stop on their travels to acquire their respective components? The answer is clearly that they should; note, however, that for either robot to do so involves a great many assumptions about the other robot.
The Palo Alto robot needs not only to know about the availability of zazzyfraxen in New York—he must also assume that the Boston robot knows about the borogoves in San Francisco, and that the Boston robot knows the Palo Alto robot knows about the zazzyfraxen, and so on. Halpern and Moses [8] have described this sort of situation as common knowledge. As we remarked earlier, common knowledge of this sort is presumed by the game theorists in their assumption of common knowledge of the payoff function.

But even this is not enough to ensure coordination of the robots' actions: the Palo Alto robot must assume that the Boston robot is sensible enough to stop in New York, which implies that the Boston robot knows he is sensible enough to stop in San Francisco, and so on. So even common knowledge of the payoff function is not enough: some sort of common knowledge of the methods by which the robots select their actions is also required. The game theorists deal with this last requirement by fiat, assuming [13] that any "rational" agents will coordinate their actions in such a situation. This seems unnecessarily ad hoc, and also appears to conflict with the philosophy that each agent strives to achieve a purely local goal.

Our robots somehow overcome these difficulties, starting a successful widget business in Pittsburgh. At some point, however, their zazzyfrax breaks, and they arrange with the New York factory to exchange five widgets for a replacement. Since the robots need the zazzyfrax as quickly as possible and the New York factory is equally anxious to get its widgets, it is decided that each group will ship their contribution to the exchange immediately, trusting the other to act similarly.
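The contrast between the two settings can be sketched in a few lines of code. This is purely an illustrative reconstruction, not code from the paper: the action names and payoff values are invented. In the non-interacting case an agent maximizes its private payoff p over its own alternatives M, exactly as in Equation 1.1; in the interacting case the payoff is defined over joint actions, so no party can rank its own choices in isolation.

```python
# Non-interacting case: agent i has a payoff function p : M -> R over its
# own alternatives M (Equation 1.1) and simply picks the maximizing m.
def best_action(p):
    """Return the alternative m that maximizes p(m)."""
    return max(p, key=p.get)

# Hypothetical payoffs for one robot's route to Pittsburgh.
p_robot = {
    "travel directly": 0.0,
    "detour to pick up a component": 5.0,
}
print(best_action(p_robot))  # -> detour to pick up a component

# Interacting case: the domain of the payoff function is the set of *joint*
# actions, so an agent cannot evaluate a choice in isolation.  The values
# loosely mirror the widgets-for-zazzyfrax exchange (again invented):
# mutual trust pays, while shipping alone is costly.
joint_payoff = {
    ("ship", "ship"): (4, 4),   # both ship immediately; exchange succeeds
    ("ship", "hold"): (-5, 6),  # robots ship widgets, factory holds back
    ("hold", "ship"): (6, -5),  # factory ships zazzyfrax, robots hold back
    ("hold", "hold"): (0, 0),   # no exchange at all
}

# The robots' payoff for "ship" now depends entirely on the factory's move.
for factory_move in ("ship", "hold"):
    print(joint_payoff[("ship", factory_move)][0])
```

Note that in the second table no single column of the payoff can be maximized independently, which is precisely the situation the remainder of the chapter addresses.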
