Cognitive Carpentry: A Blueprint for How to Build a Person

John L. Pollock

A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England

© 1995 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Pollock, John L.
Cognitive carpentry: a blueprint for how to build a person / John L. Pollock.
p. cm.
"A Bradford book."
Includes bibliographical references and index.
ISBN 0-262-16152-4 (alk. paper)
1. Reasoning. 2. Cognition. 3. OSCAR (Computer file) 4. Pollock, John L. How to build a person. 5. Artificial intelligence—Philosophy. 6. Machine learning. I. Title.
BC177.P599 1995
006.3'3—dc20 94-48106 CIP

For Cynthia—
the mother of OSCAR

Contents

Preface

1 Rational Agents
1. Two Concepts of Rationality
2. The Goal of Rationality
3. Precursors of Rationality
4. Mechanisms of Cognition
5. The Logical Structure of Practical Rationality
6. An Overview of Practical Rationality
7. Interfacing Epistemic and Practical Cognition
8. Reasons, Arguments, and Defeasibility
9. Reflexive Cognition
10. Interest-Driven Reasoning
11. The Project

2 Epistemology from the Design Stance
1. Epistemology and Cognition
2. Perception
3. Justification and Warrant
4. The Statistical Syllogism
5. Generalizations of the Statistical Syllogism
6. Direct Inference and Definite Probabilities
7. Induction
8. Other Topics

3 The Structure of Defeasible Reasoning
1. Reasons and Defeaters
2. Arguments and Inference Graphs
3. Defeat among Inferences—Uniform Reasons
4. Taking Strength Seriously
5. Other Nonmonotonic Formalisms
6. Computing Defeat Status
7. Self-defeating Arguments
8. A New Approach
9. The Paradox of the Preface
10. Justification and Warrant
11. Logical Properties of Ideal Warrant
12. Conclusions

4 An Architecture for Epistemic Cognition
1. Criteria of Adequacy for a Defeasible Reasoner
2. Building a Defeasible Reasoner
3. The Possibility of a D.E.-Adequate Reasoner
4. An Interest-Driven Monotonic Reasoner
5. An Interest-Driven Defeasible Reasoner
6. An Architecture for Epistemic Cognition
7. Conclusions

5 Plan-Based Practical Reasoning
1. Practical Cognition
2. The Inadequacy of the Decision-Theoretic Model
3. Integrating Plans into the Decision-Theoretic Model
4. Warrant versus Reasoning
5. Resurrecting Classical Decision Theory
6. Simplifying the Decision Problem
7. Conclusions

6 The Logical Structure of Plans
1. Plans, Desires, and Operations
2. Plans as Programs
3. Plans as Graphs
4. The Expected Value of a Plan

7 An Architecture for Planning
1. Practical Reasoning
2. Planning
3. Scheduling
4. The PLANNING-INITIATOR
5. Complex Desires
6. The PLAN-SYNTHESIZER
7. The PLAN-UPDATER
8. An Architecture for Planning
9. The Doxastification of Planning

8 Acting
1. Reactive Agents and Planning Agents
2. The PLAN-EXECUTOR
3. The ACTION-INITIATOR

9 OSCAR
1. Constructing an Artilect
2. The General Architecture
3. Examples
4. The Status of the OSCAR Project

References
Index
Preface

OSCAR is an architecture for an autonomous rational agent; it is the first such architecture capable of sophisticated rational thought. This book presents the theory underlying OSCAR and gives a brief description of the actual construction of OSCAR. That construction is presented in full detail in The OSCAR Manual, available, with the LISP code for OSCAR, by anonymous FTP from aruba.ccit.arizona.edu, in the directory /pub/oscar. Help in obtaining the files or running OSCAR can be secured by sending an email message to

The task of providing reason-schemas sufficient to allow OSCAR to perform various kinds of reasoning—for instance, perceptual reasoning, inductive reasoning, probabilistic reasoning, or inference to the best explanation—is essentially the same task as the traditional one of giving an epistemological analysis of such reasoning. One of the exciting features of OSCAR is that a user can experiment with accounts of such reasoning without being a programmer. OSCAR is designed with a simple user interface that allows a would-be cognitive carpenter to type in proposed reason-schemas in a simple format and then investigate how OSCAR treats problems using those reason-schemas. Thus, if one accepts the general picture of rationality underlying OSCAR, one can use OSCAR as an aid in investigating more specific aspects of rationality.
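To give a feel for what such an experiment involves, here is a minimal Common Lisp sketch of how a reason-schema might be represented. The structure, the slot names, and the strength value below are invented for this illustration; the actual input format is the one documented in The OSCAR Manual, and the treatment of perceptual reasons is developed in chapter 2.

    ;; Purely illustrative sketch of a reason-schema; the structure and
    ;; slot names are invented here and are NOT OSCAR's actual syntax,
    ;; which is documented in The OSCAR Manual.
    (defstruct reason-schema
      name         ; e.g. PERCEPTION
      premises     ; what the reasoner must already believe or perceive
      conclusion   ; what may be inferred from the premises
      defeasible-p ; T if the conclusion can later be defeated
      strength)    ; defeasible reasons come in degrees

    ;; The perceptual reason: having a percept with content P is a
    ;; defeasible reason to believe P. The strength value is invented.
    (defparameter *perception*
      (make-reason-schema
       :name 'perception
       :premises '((percept p))
       :conclusion 'p
       :defeasible-p t
       :strength 0.98))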
In a somewhat different vein, there is also interdisciplinary work underway at the University of Arizona to use OSCAR as the basis for a system of language processing. This work is being performed jointly by cognitive psychologists, computational linguists, and philosophers.

OSCAR is based upon simple and familiar philosophical principles. Cognition is divided loosely into practical cognition and epistemic cognition. Practical cognition aims at choosing plans for acting, and plans are compared in terms of their expected values. Practical cognition presupposes epistemic cognition, which supplies the beliefs used by practical cognition in its search for good plans. Epistemic cognition begins with perception (broadly construed) and forms further beliefs through various forms of defeasible reasoning. The details of both the theory of defeasible reasoning and the theory of plan search and plan selection are novel, but the general outline of the theory will be familiar to all philosophers.
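The decision-theoretic core of that comparison can be made concrete. As a minimal sketch in Common Lisp (the language OSCAR itself is written in), suppose a plan's possible outcomes are summarized as (probability . value) pairs; the plan's expected value is then the probability-weighted sum of the values. The full account of the expected value of a plan, developed in chapter 6, refines this considerably; the following shows only the familiar core, with invented data.

    ;; Minimal sketch: the expected value of a plan whose possible
    ;; outcomes are summarized as (probability . value) pairs.
    (defun expected-value (outcomes)
      (reduce #'+ outcomes
              :key (lambda (outcome)
                     (* (car outcome) (cdr outcome)))))

    ;; Example: a 0.8 chance of a payoff of 10 and a 0.2 chance of a
    ;; loss of 5 gives 0.8*10 + 0.2*(-5) = 7.
    (expected-value '((0.8 . 10) (0.2 . -5)))   ; => 7.0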
OSCAR is not incompatible with most of the work that has been done in AI on problems like planning, automated theorem proving, and probabilistic reasoning via Bayesian networks. Most of that work can be incorporated into OSCAR in the form of Q&I modules or reason-schemas. OSCAR supplies a framework for integrating such work into a single architecture.

Much of the material presented in this book has been published, in preliminary form, in a number of other places, and I thank the publishers of that material for allowing it to be reprinted here in modified form. Chapter 1 is based upon material originally published in Cognitive Science ("The phylogeny of rationality") and The Journal of Experimental and Theoretical AI ("OSCAR: a general theory of rationality"). Chapter 2 summarizes material from a number of my books on epistemology, the most noteworthy being Contemporary Theories of Knowledge (Princeton) and Nomic Probability and the Foundations of Induction (Oxford). Chapter 3 presents the theory of defeasible reasoning developed in a series of earlier articles ("Defeasible reasoning" in Cognitive Science, "A theory of defeasible reasoning" in The International Journal of Intelligent Systems, "Self-defeating arguments" in Minds and Machines, "How to reason defeasibly" in Artificial Intelligence, and "Justification and defeat" in Artificial Intelligence). Chapter 4 contains material from "How to reason defeasibly" and "Interest-driven suppositional reasoning" (The Journal of Automated Reasoning). Chapter 5 is based upon "New foundations for practical reasoning" (Minds and Machines). Some of the material in chapters 7 and 8 was published in "Practical Reasoning in OSCAR" (Philosophical Perspectives). Some of chapter 9 is taken from "OSCAR—a general purpose defeasible reasoner" (Journal of Applied Non-Classical Logics). The rest of the material is seeing the light of day for the first time in this book.

I also thank the University of Arizona for its steadfast support of my research, and Merrill Garrett in particular for his continued enthusiasm for my work and the help he has provided in his role as Director of Cognitive Science at the University of Arizona. I am indebted to numerous graduate students for their unstinting constructive criticism, and to my colleagues for their interactions over the years. In the latter connection, I have profited more than I can say from years of discussion with Keith Lehrer.

Chapter 1
Rational Agents

1. Two Concepts of Rationality

There are two ways of approaching the topic of rationality. The philosophical tradition has focused on human rationality. As human beings, we assess our own thought and that of our compatriots as rational to varying degrees. In doing this, we apply "standards of rationality", and our judgments appear to be normative. We compare how people think with how they should think. Many philosophers have thus concluded that epistemology and the theory of practical rationality are normative disciplines—they concern how people should reason rather than how they do reason. But the source of such normativity becomes problematic. What could make it the case that we should reason in a certain way?

I have proposed a naturalistic theory of epistemic norms [1986] that I now want to apply more generally to all of rationality. This account begins with the observation that rules are often formulated in normative language, even when the rules are purely descriptive. For example, in describing a physical system that has been built according to a certain design, we may observe that whenever one state of the system occurs, another state "should" follow it. Thus, I might observe that when I turn the steering wheel of my truck, it should change direction. The use of normative language reflects the fact that we are formulating rules governing the operation of the system (functional descriptions, in the sense of my [1990]), but such functional descriptions do not formulate invariant generalizations. These rules describe how the system "normally" functions, but it is also possible for the system to behave abnormally. For instance, my truck may skid on an icy road, in which case it may fail to change direction despite my turning the steering wheel. In a sense made precise in my [1990], functional descriptions are descriptions of how a system will function under certain conditions, where these conditions are both common and stable.

Applying these observations to rationality, my proposal was that we know how to reason. In other words, we have procedural knowledge for how to do this. As with any other procedural knowledge, we must make a competence/performance distinction. Although we know how to do something, we may fail to do it in a way conforming to the rules describing our procedural knowledge. The most familiar example in cognitive science is linguistic knowledge, but the same point applies to knowledge of how to reason. So my proposal is that rules for rationality are descriptive of our procedural knowledge for how to reason (or perhaps more generally, for how to think). The use of normative language in the formulation of these rules reflects no more than the fact that we do not always act in accordance with our procedural knowledge.

An important fact about procedural knowledge is that although we are rarely in a position to articulate it precisely, we do have the ability to judge whether, in a particular case, we are conforming to the rules describing that knowledge. Thus, in language processing, although I cannot articulate the rules of my grammar, I can tell whether a particular utterance is grammatical. Similarly, in swinging a golf club, although I may be unable to describe how to do it properly ("properly" meaning "in accordance with the rules comprising my procedural knowledge"), I can nevertheless judge in particular cases that I am or am not doing it properly. It just "feels right" or "feels wrong". And similarly, in reasoning, although we may be unable to articulate the rules for reasoning with any precision, we can still recognize cases of good or bad reasoning. These "intuitive" or "introspective" judgments provide the data for constructing a theory about the content of the rules governing our reasoning. In this respect, the construction of philosophical theories of reasoning is precisely parallel to the construction of linguistic theories of grammar. In each case, our data are "intuitive", but the process of theory construction and confirmation is inductive, differing in no way from theory construction and confirmation in any other science.

This account has the consequence that when we describe human rationality, we are describing contingent features of human beings and so are really doing psychology by non-traditional means. The fact that we are employing non-traditional means rather than conventional experimental methods does not seem to be particularly problematic. Again, compare linguistics. In principle, standard psychological experimental methods are applicable in either case, but at this stage of investigation it is difficult to see how to apply them. The only reasonably direct access we have to our norms is through our introspection of conformity or non-conformity to the rules. We cannot simply investigate what people do when they reason, because people do not always reason correctly. The current methodology of cognitive psychology is too blunt an instrument to be applied to this problem with much success. No doubt, this will eventually change, and when it does, philosophers must be prepared to have one more field of inquiry pulled out from under their feet, but in the meantime standard philosophical methodology is the best methodology we have for addressing these problems.[1]

[1] Some psychological investigations of reasoning provide convincing evidence for conclusions about the rules governing human reasoning competence. For example, I find the evidence that humans do not employ modus tollens as a primitive rule quite convincing [see Wason 1966 and Cheng and Holyoak 1985]. But most psychological work that aims to elicit the content of these rules is less convincing. For example, most uses of protocol analysis assume that humans have access to what rules govern their reasoning, an assumption that seems obviously wrong in the light of the difficulties philosophers have had in articulating those rules. See the discussion in Smith, Langston, and Nisbett [1992]. Furthermore, the results of protocol analysis often seem crude when compared to familiar philosophical theories about the structure of particular kinds of reasoning.

I have been urging that the task of eliciting the human norms for rationality is essentially a psychological one. However, this leads to a perplexing observation.
Some features of human rationality may be strongly motivated by general constraints imposed by the design problem rationality is intended to solve, but other features may be quite idiosyncratic, reflecting rather arbitrary design choices. This leads to an anthropocentric view of rationality. These idiosyncratic design features may prove to be of considerable psychological interest, but they may be of less interest in understanding rationality per se.

These considerations suggest a second way of approaching the topic of rationality, which may ultimately prove more illuminating. This is to approach it from the "design stance", to use Dennett's [1987] felicitous term. We can think of rationality as the solution to certain design problems. I will argue that quite general features of these problems suffice to generate much of the structure of rationality. General logical and feasibility constraints have the consequence that there is often only one obvious way of solving certain design problems, and this is the course taken by human rationality. Approaching rationality from the design stance is more common in artificial intelligence than it is in philosophy. This methodology will suffice to generate much of the general architecture of rational thought, but as the account becomes increasingly detailed, we will find ourselves making arbitrary design decisions or decisions based upon marginal comparisons of efficiency. At that stage it becomes somewhat problematic just what our topic is. We can either take the traditional philosophical/psychological course and try to give an accurate description of human cognition, or we can take the engineering approach of trying to describe some system or other that will solve the design problem, without requiring that it be an exact model of human cognition. I find both approaches interesting.

My overarching goal in this book is to give a general description of rational thought applicable to all rational agents. But a constraint on this enterprise must be that it is possible for a rational agent to function in the way described. The only way to be certain that this constraint is satisfied is to provide a sufficiently precise and detailed account that it becomes possible to build an agent implementing it. This book will attempt to provide large parts of such an account, although by no means all of it. No claim will be made that the rational agent described is in every respect an accurate model of human rationality.
On the other hand, I will be guided in my theory construction by the general desire not to stray too far from the human exemplar, partly because I want this theory to throw some light on why human cognition works in the way it does and partly because this is a good engineering strategy. In trying to solve a design problem, it can be very useful to look at a functioning system that already solves the problem, and in the case of rationality the only such system to which we currently have access is human beings.

To ensure that a rational agent could actually function in the way described, I will impose the general requirement on my theory that it be implementable. The result of such implementation will be part of an artificial rational agent. In this connection, it is to be emphasized that a rational agent consists of more than an intellect. It must also have a perceptual system, a system of extensors for manipulating its environment, and, for many purposes, linguistic abilities. It is possible that linguistic abilities will come free given the rest of the system, but the received view in contemporary linguistics is that language requires special subsystems dedicated to language processing. All of these topics must be addressed before a complete artificial rational agent can be produced. However, this investigation will focus exclusively on the intellect. The objective is to construct a theory of rationality of sufficient precision to produce an artificial intellect—an artilect for short.[2]

[2] The term "artilect" was coined by de Garis [1989].
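As a purely schematic illustration of this decomposition (the structure and slot names below are invented for the example and are not part of OSCAR), the components of a complete agent might be pictured as follows:

    ;; Schematic only: the components of a complete rational agent as
    ;; described above. Invented for illustration; not part of OSCAR.
    (defstruct rational-agent
      intellect   ; epistemic and practical cognition: this book's topic
      perception  ; perceptual input system
      extensors   ; effectors for manipulating the environment
      language)   ; dedicated language-processing subsystems, if needed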
Not all of artificial intelligence has so grand an objective as that of building an artilect. Most work in AI is directed at more mundane problems of data processing and decision making. But even such projects have an intimate connection with the theory of rationality. A general constraint on any such system is that the conclusions drawn and decisions made be "reasonable" ones. To a certain extent, we can judge the reasonableness of conclusions just by using our intuitions. But as the systems get more complex and the questions harder, we outstrip our ability to make such judgments without the backing of a general theory of rationality. Ultimately, all work in AI is going to have to presuppose a theory of rationality. My aim is to construct a sufficiently precise theory of rationality that it can be used to provide the basis for constructing an artilect. That is the objective of the OSCAR project, and OSCAR is the computer system that results.

Implementing a theory of cognition on a computer can be regarded as either an exercise in computer modeling or an attempt to build a machine that thinks. This is the difference between weak and strong AI. I favor the latter construal of the enterprise and have defended it at length in my book How to Build a Person. The present book is the sequel to that book. In How to Build a Person, I defended what might be called the "metaphysical foundations" of the project. These consist of three theses: agent materialism, according to which persons are physical objects; token physicalism, according to which mental states are physical states; and strong AI, in the particular form claiming that a system that adequately models human rationality will be a person and have the same claim to thoughts and feelings as a human being. This book takes its title from the last chapter of How to Build a Person, entitled "Cognitive Carpentry" and comprising an initial sketch of the theory of rationality envisioned by the OSCAR project. This book aims to redeem the promissory notes proffered in the earlier book.

Although this book does not complete the OSCAR project, I believe that it does establish that it can be completed. This has important implications throughout philosophy and cognitive science. First, it should lay to rest the numerous claims that "symbol-processing AI" is impossible. It must be possible, because this book does it. Second, the theory underlying OSCAR is based upon my own somewhat old-fashioned epistemological theories. Although I prefer not to call them "foundations theories", they are the foundations-like theories defended in my [1974] and [1986] under the label "direct realism". This book establishes that these theories can provide the epistemological basis for a rational agent. Providing such a basis becomes a severe challenge
