COMPUTERS AND COGNITION:
WHY MINDS ARE NOT MACHINES
STUDIES IN COGNITIVE SYSTEMS
VOLUME 25
EDITOR
James H. Fetzer, University of Minnesota, Duluth
ADVISORY EDITORIAL BOARD
Fred Dretske, Stanford University
Charles E. M. Dunlop, University of Michigan, Flint
Ellery Eells, University of Wisconsin, Madison
Alick Elithorn, Royal Free Hospital, London
Jerry Fodor, Rutgers University
Alvin Goldman, University of Arizona
Jaakko Hintikka, Boston University
Frank Keil, Cornell University
William Rapaport, State University of New York at Buffalo
Barry Richards, Imperial College, London
Stephen Stich, Rutgers University
Lucia Vaina, Boston University
Terry Winograd, Stanford University
COMPUTERS AND COGNITION:
WHY MINDS ARE NOT MACHINES
by
JAMES H. FETZER
University of Minnesota, Duluth, U.S.A.
SPRINGER-SCIENCE+BUSINESS MEDIA, B.V.
A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN 978-1-4020-0243-4 ISBN 978-94-010-0973-7 (eBook)
DOI 10.1007/978-94-010-0973-7
Transferred to Digital Print 2001
Printed on acid-free paper
All Rights Reserved
© 2001 Springer Science+Business Media Dordrecht
Originally published by Kluwer Academic Publishers in 2001
No part of the material protected by this copyright notice may be reproduced or
utilized in any form or by any means, electronic or mechanical,
including photocopying, recording or by any information storage and
retrieval system, without written permission from the copyright owner.
To
Janice E. Morgan
CONTENTS
Series Preface ix
Foreword xi
Acknowledgments xxi
PROLOGUE
1. Minds and Machines: Behaviorism, Dualism and Beyond 3
PART I: SEMIOTIC SYSTEMS
2. Primitive Concepts: Habits, Conventions, and Laws 25
3. Signs and Minds: An Introduction to the Theory of Semiotic Systems 43
4. Language and Mentality:
Computational, Representational, and Dispositional Conceptions 73
PART II: COMPUTERS AND COGNITION
5. Mental Algorithms: Are Minds Computational Systems? 101
6. What Makes Connectionism Different?
A Critical Review of Philosophy and Connectionist Theory 131
7. People Are Not Computers:
(Most) Thought Processes Are Not Computational Procedures 153
PART III: COMPUTER EPISTEMOLOGY
8. Program Verification: The Very Idea 183
9. Philosophical Aspects of Program Verification 221
10. Philosophy and Computer Science:
Reflections on the Program Verification Debate 247
EPILOGUE
11. Computer Reliability and Public Policy:
Limits of Knowledge of Computer-Based Systems 271
Index of Names 309
Index of Subjects 313
SERIES PREFACE
This series will include monographs and collections of studies devoted to the
investigation and exploration of knowledge, information, and data-processing
systems of all kinds, no matter whether human, (other) animal, or machine. Its
scope is intended to span the full range of interests from classical problems in
the philosophy of mind and philosophical psychology through issues in cognitive
psychology and sociobiology (concerning the mental capabilities of other species)
to ideas related to artificial intelligence and to computer science. While primary
emphasis will be placed upon theoretical, conceptual, and epistemological aspects
of these problems and domains, empirical, experimental, and methodological studies
will also appear from time to time.
The domination of cognitive science by the computational conception of mind
has been so complete that some theoreticians have even gone so far as to define
"cognitive science" as the study of computational models of mind. The arguments
presented in this volume clearly demonstrate, definitively in my judgment, that
that paradigm is indefensible, not just in its details, but in its general conception.
Minds are not digital machines and people are not computers. The conception of
minds as semiotic systems appears to explain the phenomena of consciousness,
cognition, and thought far more adequately than its predecessor ever could. The
scope and limits of digital machines are delineated here. Others must judge if the
time has come for a new paradigm of the mind.
J. H. F.
FOREWORD
Contemporary research in artificial intelligence and cognitive science has been
dominated by the conception that minds either are computers or at least operate on
the basis of the same principles that govern digital machines. This view has been
elaborated and defended by many familiar figures within these fields, including
Allen Newell and Herbert
Simon, John Haugeland, Jerry Fodor and Zenon Pylyshyn, Margaret Boden, Daniel
Dennett, David J. Chalmers, Philip Johnson-Laird, Steven Pinker and scores unnamed,
who have committed themselves to its defense. If general agreement were the measure
of truth, it would be difficult to deny that computationalism, thus understood, appears
to be true.
Indeed, even most of those who disagree with the computational conception tend to
accept some of its elements. The alternative known as connectionism, for example,
which has been endorsed by David Rumelhart, Paul Churchland, and Paul Smolensky,
among others, retains the conception of cognition as computation across
representations, even while it rejects the contention that computationalism has
properly understood the nature of representations themselves. The difference
between them turns out to be that connectionists hold that representations are
distributed as patterns of activation across sets of neural nodes, where cognition
is now taken as computation across distributed representations.
Even granting the difference between nodular and distributed forms of
representations, computationalists and connectionists can agree on many aspects of
cognition. The classic formulation of the computational conception, for example,
has been advanced by John Haugeland, who has suggested that thinking is reasoning,
that reasoning is reckoning, that reckoning is computation, that computation is
cognition, and that the boundaries of computability define the boundaries of
thought. As long as the differences between nodular and distributed representations
are acknowledged, there are no reasons - no evident reasons, at least - why a
connectionist should not adopt these computationalist contentions.
In order to establish whether minds are or operate on the basis of the same
principles that govern computing machines, however, it is necessary to accomplish
three tasks. First, discover the principles that govern computing machines. Second,
discover the principles that govern human minds. And, third, compare them to as
certain whether they are similar or the same. That much should be obvious. But
while leading computationalists have shown considerable ingenuity in elaborating
and defending the conception of minds as computers, they have not always been
attentive to the study of thought processes themselves. Their underlying attitude
has been that no theoretical alternative is possible.
Part I: Semiotic Systems
The essays collected here are intended to demonstrate that this attitude is no
longer justified. The conception of minds as semiotic (or "sign-using") systems
should lay this myth to rest, once and for all. According to this approach, which
builds on the theory of signs advanced by Charles S. Peirce (1839-1914), minds are
the kinds of things that are able to use signs, where signs are things that stand for
other things (in some respect or other). Since there are at least three different ways
in which things can stand for other things, there turn out to be at least three different
kinds of minds, which use different kinds of signs.
"Semiotic systems" as systems that can use signs ("minds"), of course, differ
from "semiotic systems" as systems of signs of kinds that semiotic systems can use
("signs"). The meaning of a sign for a system is understood on the basis of its
effects on (actual or potential) behavior, which requires a dispositional conception
of meaning that is non-behavioristic, non-extensional, and non-reductionistic. The
semiotic conception thus satisfies Fodor's condition that a theory of cognition ought
to connect the intensional properties of mental states with their causal influence on
behavior. It would be a blunder to dismiss the semiotic account merely on the basis
of mistaken preconceptions about dispositions.
Thus, the three essays that appear in Part I provide an introduction to the theory
of semiotic systems that includes a discussion of the theory of signs and of the
kinds of minds that are distinguishable on that basis. Since there are at least three
kinds of signs - namely, iconic, indexical, and symbolic - which have the ability to
use things as signs on the basis of their relations of resemblance, of cause-or-effect,
or of habitual association to other things, there also appear to be at least three kinds
of minds - namely, iconic, indexical, and symbolic - which have the ability to use