What Computers Still Can't Do: A Critique of Artificial Reason

Hubert L. Dreyfus

MIT Press

To my parents

Sixth printing, 1999
© 1972, 1979, 1992 Hubert L. Dreyfus

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Dreyfus, Hubert L.
What computers still can't do : a critique of artificial reason / Hubert L. Dreyfus.
p. cm.
Rev. ed. of: What computers can't do, 1979.
Includes bibliographical references and index.
ISBN 0-262-04134-0. ISBN 0-262-54067-3 (pbk.)
1. Artificial intelligence. I. Title.
Q335.D74 1992
006.3 dc20 92-27715
CIP

The difference between the mathematical mind (esprit de géométrie) and the perceptive mind (esprit de finesse): the reason that mathematicians are not perceptive is that they do not see what is before them, and that, accustomed to the exact and plain principles of mathematics, and not reasoning till they have well inspected and arranged their principles, they are lost in matters of perception where the principles do not allow for such arrangement. . . .
These principles are so fine and so numerous that a very delicate and very clear sense is needed to perceive them, and to judge rightly and justly when they are perceived, without for the most part being able to demonstrate them in order as in mathematics; because the principles are not known to us in the same way, and because it would be an endless matter to undertake it. We must see the matter at once, at one glance, and not by a process of reasoning, at least to a certain degree. . . . Mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous . . . the mind . . . does it tacitly, naturally, and without technical rules.

PASCAL, Pensées

CONTENTS

Introduction to the MIT Press Edition  ix
Acknowledgments  liii
Introduction to the Revised Edition (1979)  1
Introduction  67

Part I. Ten Years of Research in Artificial Intelligence (1957–1967)
1. Phase I (1957–1962) Cognitive Simulation  91
   I. Analysis of Work in Language Translation, Problem Solving, and Pattern Recognition
   II. The Underlying Significance of Failure to Achieve Predicted Results
2. Phase II (1962–1967) Semantic Information Processing  130
   I. Analysis of Semantic Information Processing Programs
   II. Significance of Current Difficulties
Conclusion  149

Part II. Assumptions Underlying Persistent Optimism
Introduction  155
3. The Biological Assumption  159
4. The Psychological Assumption  163
   I. Empirical Evidence for the Psychological Assumption: Critique of the Scientific Methodology of Cognitive Simulation
   II. A Priori Arguments for the Psychological Assumption
5. The Epistemological Assumption  189
   I. A Mistaken Argument from the Success of Physics
   II. A Mistaken Argument from the Success of Modern Linguistics
6. The Ontological Assumption  206
Conclusion  225

Part III. Alternatives to the Traditional Assumptions
Introduction  231
7. The Role of the Body in Intelligent Behavior  235
8. The Situation: Orderly Behavior Without Recourse to Rules  256
9. The Situation as a Function of Human Needs  272
Conclusion  281

Conclusion: The Scope and Limits of Artificial Reason
The Limits of Artificial Intelligence  285
The Future of Artificial Intelligence
Notes  307
Index  346

Introduction to the MIT Press Edition

This edition of What Computers Can't Do marks not only a change of publisher and a slight change of title; it also marks a change of status. The book now offers not a controversial position in an ongoing debate but a view of a bygone period of history. For now that the twentieth century is drawing to a close, it is becoming clear that one of the great dreams of the century is ending too. Almost half a century ago computer pioneer Alan Turing suggested that a high-speed digital computer, programmed with rules and facts, might exhibit intelligent behavior. Thus was born the field later called artificial intelligence (AI). After fifty years of effort, however, it is now clear to all but a few diehards that this attempt to produce general intelligence has failed. This failure does not mean that this sort of AI is impossible; no one has been able to come up with such a negative proof. Rather, it has turned out that, for the time being at least, the research program based on the assumption that human beings produce intelligence using facts and rules has reached a dead end, and there is no reason to think it could ever succeed.

Indeed, what John Haugeland has called Good Old-Fashioned AI (GOFAI) is a paradigm case of what philosophers of science call a degenerating research program. A degenerating research program, as defined by Imre Lakatos, is a scientific enterprise that starts out with great promise, offering a new approach that leads to impressive results in a limited domain. Almost inevitably researchers will want to try to apply the approach more broadly, starting with problems that are in some way similar to the original one.
As long as it succeeds, the research program expands and attracts followers. If, however, researchers start encountering unexpected but important phenomena that consistently resist the new techniques, the program will stagnate, and researchers will abandon it as soon as a progressive alternative approach becomes available.

We can see this very pattern in the history of GOFAI. The program began auspiciously with Allen Newell and Herbert Simon's work at RAND. In the late 1950s Newell and Simon proved that computers could do more than calculate. They demonstrated that a computer's strings of bits could be made to stand for anything, including features of the real world, and that its programs could be used as rules for relating these features. The structure of an expression in the computer, then, could represent a state of affairs in the world whose features had the same structure, and the computer could serve as a physical symbol system storing and manipulating such representations. In this way, Newell and Simon claimed, computers could be used to simulate important aspects of intelligence. Thus the information-processing model of the mind was born.

Newell and Simon's early work was impressive, and by the late 1960s, thanks to a series of micro-world successes such as Terry Winograd's SHRDLU, a program that could respond to English-like commands by moving simulated, idealized blocks (see pp. 12–13), AI had become a flourishing research program. The field had its Ph.D. programs, professional societies, international meetings, and even its gurus. It looked like all one had to do was extend, combine, and render more realistic the micro-worlds and one would soon have genuine artificial intelligence. Marvin Minsky, head of the M.I.T. AI project, announced: "Within a generation the problem of creating 'artificial intelligence' will be substantially solved."