The Neural Basis of Human Belief Systems

Is the everyday understanding of belief susceptible to scientific investigation? Belief is one of the most commonly used, yet unexplained, terms in neuroscience. Beliefs can be seen as forms of mental representations, and belief is one of the building blocks of our conscious thoughts. This book provides an interdisciplinary overview of what we currently know about the neural basis of human belief systems and how different belief systems are implemented in the human brain. The chapters in this volume explain how the neural correlates of beliefs mediate a range of explicit and implicit behaviors, ranging from moral decision making to the practice of religion. Drawing inferences from philosophy, psychology, psychiatry, religion, and cognitive neuroscience, the book has important implications for understanding how different belief systems are implemented in the human brain, and outlines the directions that research on the cognitive neuroscience of beliefs should take in the future. The Neural Basis of Human Belief Systems will be of great interest to researchers in the fields of psychology, philosophy, psychiatry, and cognitive neuroscience.

Frank Krueger is Assistant Professor of Cognitive Neuroscience in the Molecular Neuroscience Department and the Department of Psychology at George Mason University. As Chief of the Evolutionary Cognitive Neuroscience Laboratory and Co-Director of the Center for the Study of Neuroeconomics, Dr. Krueger studies human social cognition and brain function by applying structural and functional neuroimaging, neuropsychological testing, and molecular neurogenetics.

Jordan Grafman, Ph.D., is Director of the Traumatic Brain Injury Research Laboratory at the Kessler Foundation in West Orange, New Jersey, USA. Dr. Grafman conducts patient and neuroimaging studies to examine the functions of the human prefrontal cortex and the rules governing neuroplasticity in the human brain.
He has a particular interest in the abilities that differentiate humans from other animals.

First published in 2013 by Psychology Press, 27 Church Road, Hove, East Sussex, BN3 2FA

Simultaneously published in the USA and Canada by Routledge, 711 Third Avenue, New York, NY 10017

Psychology Press is an imprint of the Taylor & Francis Group, an informa business

© 2013 Psychology Press

The right of the editor to be identified as the author of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
The neural basis of human belief systems / edited by Frank Krueger and Jordan Grafman.
p. cm.
Includes bibliographical references and index.
1. Belief and doubt. 2. Cognitive neuroscience. I. Krueger, Frank. II. Grafman, Jordan.
BF773.N477 2012
153.4--dc23
2012006929

ISBN: 978-1-84169-881-6 (hbk)
ISBN: 978-0-20310-140-7 (ebk)

CONTENTS

List of contributors
List of figures and tables
Preface
1. What are beliefs? (Patricia S. Churchland and Paul M. Churchland)
2. The neuropsychology of belief formation (Robyn Langdon and Emily Connaughton)
3. A multiple systems approach to causal reasoning (Richard Patterson and Aron K. Barbey)
4. The neural bases of attitudes, evaluation, and behavior change (Emily B. Falk and Matthew D. Lieberman)
5. Interpersonal trust as a dynamic belief (Ewart de Visser and Frank Krueger)
6. The neural bases of moral belief systems (Ricardo de Oliveira-Souza, Roland Zahn, and Jorge Moll)
7. Neuroscientific approaches to 'mens rea' assessment (Ullrich Wagner and Henrik Walter)
8. The neural structure of political belief (Laura Moretti, Irene Cristofori, Giovanna Zamboni, and Angela Sirigu)
9. The neural basis of religion (Joseph Bulbulia and Uffe Schjoedt)
10. The neural basis of abnormal personal belief (Vaughan Bell and Peter W. Halligan)
11. I believe to my soul (Frank Krueger and Jordan Grafman)
Index

Preface
Frank Krueger and Jordan Grafman

Belief is one of the most commonly used, yet consistently unexplained, terms in neuroscience. Beliefs can be seen as forms of mental representations, and they are one of the building blocks of our conscious thoughts. This volume investigates whether the everyday understanding of beliefs is valid, such that it can be used for scientific investigations. If a neuropsychological explanation of this phenomenon exists, then functional neuroimaging, the lesion method, and examination of the neurophysiological correlates of belief states provide valid approaches for studying the neural basis of human belief systems. This volume gives an overview of how the neural signatures of beliefs mediate a range of explicit and implicit behaviors, ranging from moral decision making to the practice of religion. We hope that this volume will have important implications for understanding how different belief systems are implemented in the human brain. The volume comprises eleven chapters and draws inferences from philosophy, religion, neuropsychology, cognitive neuroscience, and psychiatry.

Agreement on a proper definition of belief is one of the most debated issues in philosophy and has obvious implications for any cognitive neuroscience approach to a neural basis of human belief systems. In the chapter "What are beliefs?" Patricia S. Churchland and Paul M.
Churchland give an overview of philosophical theories of beliefs. Beliefs, according to the tradition in philosophy, are states of mind that have the property of being about things—things in the world, things in the mind, as well as abstract things, events in the past, and things only imagined. A central problem is to explain how physical states of the brain can be about things; that is, what it is for brain states to represent. This is a puzzle not only for beliefs, but for mental states more generally, such as fears, desires, and goals. In analytic philosophy, the main focus has been on language as the medium for beliefs, and on the differences between various kinds of beliefs. Although this approach produced some useful logical distinctions, it made little progress in solving the central problem. A newer approach starts from the perspective of the brain and its capacity for adaptive behavior. The basic aim is to address aboutness in terms of complex causal and mapping relations between the brain and the world, as well as among brain states themselves, which result in brains' capacities to represent things.

Belief formation is a complex process and is likely to be supported by a number of psychological processes in the brain. The chapter "The neuropsychology of belief formation" by Robyn Langdon and Emily Connaughton provides a model of belief formation, treating the study of individuals with delusions as an informative approach to understanding normal belief processing. Based on a cognitive neuropsychiatric perspective, two distinct factors contribute in combination to the explanation of a delusion: the first factor explains why a patient generates a particular implausible thought which seeds a delusion, whereas the second factor explains why the patient accepts the thought as true rather than rejecting it as implausible.
Neuroimaging studies of belief processing in healthy individuals and clinical studies suggest that a specific region in the most evolved part of the brain, the prefrontal cortex, may mediate three components of normal belief processing: a deliberative process of "doxastic inhibition" to reason about a belief as if it might not be true; an intuitive "feeling of rightness" about the truth of a belief; and an intuitive "feeling of wrongness" (or warning) about out-of-the-ordinary belief content.

Causal knowledge enables the formation of belief systems, representing dependency relations that structure and organize elements of human thought. In the chapter "A multiple systems approach to causal reasoning" Richard Patterson and Aron K. Barbey provide a multidisciplinary framework for understanding causal knowledge (the semantics of cause, enable, and prevent) and inference (drawing new conclusions from existing causal knowledge) based on converging evidence from philosophy, psychology, artificial intelligence, and cognitive neuroscience. Evidence from philosophy and artificial intelligence is reviewed, establishing normative models of causal reasoning on the basis of modal logic, probability theory, and physics. Continuity between these normative domains and current psychological theories of causal reasoning is illustrated by reviewing cognitive theories based on mental models, causal models, and force dynamics. The neurobiological predictions of each framework are assessed and evaluated in light of emerging neuroscience research investigating the perceptual, social, and moral foundations of causal knowledge. Conclusions concerning the cognitive and neural representation of causal knowledge are drawn, assessing their role in contemporary ethical and legal theories of causal attribution and integrating these findings with emerging research exploring the evolutionary origins of human belief systems.
Our understanding of the neural basis of belief systems can be enriched by incorporating neuropsychological theories of attitudes: complex mental states that involve beliefs, values, and dispositions to behave in certain ways. The chapter "The neural bases of attitudes, evaluation, and behavior change" by Emily B. Falk and Matthew D. Lieberman explores how implicit and explicit attitude mechanisms are processed in the brain and how these mechanisms interact with one another, giving rise to different types of belief systems. Attitudes encompass our evaluations of people, places, and ideas, and may influence a range of behaviors, including those that directly impact health, intergroup relations, and other important phenomena. The study of attitudes and attitude change has captivated thinkers for centuries and scientists for decades, on the premise that understanding attitudes would not only allow us to understand the preferences and behaviors of individuals, but would also provide broader insight into the actions of groups and cultures. The recent advance of neuroimaging technologies has opened new possibilities for examining the multiple psychological processes involved in the formation and change of attitudes as well as their neural underpinnings, including the neural signatures of persuasion, dissonance-induced attitude change, and attitude induction as compared to attitude change.

Interpersonal trust as a dynamic belief pervades nearly every social aspect of our daily lives, from personal relationships to organizational interactions encompassing social, economic, and political exchange. It permits reciprocal behavior, fostering mutual advantages for cooperators, and maximizes their evolutionary fitness.
In the chapter "Interpersonal trust as a dynamic belief" Ewart de Visser and Frank Krueger propose an integrative cognitive neuroscience framework to understand how interpersonal trust emerges from the interplay of three systems: a cognitive system acting as an evaluation system that enables inferences about the psychological perspective of others (e.g., desires, feelings, or intentions); a motivational system acting as a reinforcement learning system that helps to produce states associated with rewards and to avoid states associated with punishments; and an affective system acting as a social approach and withdrawal system encompassing both basic and moral emotions. By drawing together recent findings from cognitive social neuroscience into a coherent picture, one might gain a better understanding of the underlying dynamic neural architecture of trust, which operates within the immediate spheres of nature and nurture and determines which forms of social, economic, and political institutions develop within social groups.

Moral beliefs are central motivational forces for moral perceptions and decisions in everyday life. Although neuroscience cannot answer philosophical questions about what is morally right or wrong, it can address the question of how our brains support actions in agreement with or counter to what society regards as morally right or wrong under given circumstances. The chapter "The neural bases of moral belief systems" by Ricardo de Oliveira-Souza, Roland Zahn, and Jorge Moll aims to elucidate the psychological and neural underpinnings of moral belief systems, drawing on recent functional imaging studies in healthy individuals and clinical evidence from patients with brain dysfunction. The chapter organizes the neurobiological and psychological bases of moral motivations, social semantics, and moral actions into a coherent model, and points out how those components contribute to human moral and "immoral" nature.
The uniqueness of the human brain from both a phylogenetic and a cultural point of view is emphasized, stressing the idea of culture as the main element separating human and non-human morality. Also emphasized is the practical and theoretical relevance of studying moral belief systems, and how societies can potentially profit from this knowledge.

In the criminal law of modern Western countries, "mens rea" ("guilty mind") is a necessary element of a crime. Therefore, legal blame for a criminal act ("actus reus") is not possible if it was not committed deliberately. A critical aspect of criminal proceedings is therefore the correct evaluation of the beliefs and intentions of the defendant, in order to determine whether his or her mind was "guilty." The chapter "Neuroscientific approaches to 'mens rea' assessment" by Ullrich Wagner and Henrik Walter examines what neuroscience can contribute to this legal process of "mens rea" assessment, based on current empirical findings from social cognitive neuroscience studies. Two aspects are considered: first, how neuroscientific tools can be used to directly find indicators of "mens rea" in the brain of a culprit (including neuro-diagnostic tools to reveal brain abnormalities as evidence in "insanity defenses" and the use of fMRI for lie detection); and second, how functional imaging is used to reveal the neural underpinnings of cognitive processes that are critical when judges or jurors assess "mens rea" in a culprit (including belief attribution in moral judgments and assignment of punishment). This research, belonging to the new field of "neurolaw," is still in its infancy, but courts are now beginning to take neuroscientific evidence into account in their decisions.

Political beliefs can be powerful forces for influencing perception and motivating action, such as voting behavior.
Politics, as a social phenomenon, refers to the set of beliefs, behaviors, and rules through which humans cooperate and debate to reach a consensus on actions affecting social relationships and hierarchical organizations over long durations of time. The chapter "The neural structure of political belief" by Laura Moretti, Irene Cristofori, Giovanna Zamboni, and Angela Sirigu reveals how political belief systems modulate neural activity by facilitating the interplay between implicit emotional and explicit cognitive processes. The integration of neuroscience and political psychology has fostered a new field of research known as neuropolitics. From a social cognitive neuroscience perspective, aspects of complex political beliefs are discussed, focusing on the association between brain regions and specific political behaviors by adopting party or ideological affiliation as a criterion to classify either experimental stimuli or subjects. The existence of a multidimensional political belief system is stressed, one that evolved from more basic social phenomena and engages an extended neural network for social cognition known to be important in self-other processing, reward prediction, and social decision making in ambivalent situations.

Neuroscience has largely avoided dealing directly with aspects of religious beliefs. Religious experiences are brain-based phenomena similar to other human experiences. At the core of these experiences is a belief system that relies on a variety of contextual and developmental factors. The chapter "The neural basis of religion" by Joseph Bulbulia and Uffe Schjoedt gives an overview of the neuroscience of unusual and extraordinary religious experiences, such as pathological aspects of religion in schizophrenia or temporal lobe epilepsy, but also the processing of religious context in the brains of people with regular religiosity or those who do not explicitly claim to be religious.
Overall, recent research has mainly focused on subjective experiences of the supernatural in various forms of meditation, mystical experience, glossolalia, and prayer, phenomena that are highly specific forms of religious practice and widespread only in some religions. The authors advocate a social cognitive and affective neuroscience of religion, in which careful applications of evolutionary theory combined with modern techniques of neuroimaging are most likely to bring the next wave of refinement in understanding how religion relates to neuroscience and how religious practices modify ordinary states of awareness.

The study of belief pathology, such as delusions (false beliefs), is likely to be a useful and productive approach to understanding the neural correlates of "normal" belief. The chapter "The neural basis of abnormal personal belief" by Vaughan Bell and Peter W. Halligan describes how cognitive neuropsychiatry attempts to understand psychiatric disorders such as delusions as disturbances in normal cognitive functioning and seeks to find possible links to relevant brain structures and their pathology. Considerable evidence exists that reasoning, attention, meta-cognition, and attribution biases contribute to what are typically considered abnormal beliefs. Findings from this growing scientific study of psychopathology have informed a number of cognitive models (e.g., multi-factor models, motivational and "defense" models, self-monitoring models, and hemispheric asymmetry models) aiming to explain delusion formation, maintenance, and content. Although delusions are commonly conceptualized as beliefs, not all accounts reference models of normal belief formation; the chapter considers only those models that explain delusions as a breakdown of normal belief formation, together with approaches that treat delusions as one end of a distribution of anomalous mental phenomena (the continuum view).
The final chapter "I believe to my soul" by Frank Krueger and Jordan Grafman gives the reader both a brief summary of the book and some suggestions regarding the directions (both promising and perilous) the newly emerging field of the cognitive neuroscience of beliefs should take in the future. There is much fascinating work ahead. We hope this volume provides a representative overview of what we currently know about the neural basis of human belief systems. We believe that this contemporary, interdisciplinary collection will appeal to researchers in the fields of psychology, philosophy, psychiatry, and cognitive neuroscience, as well as to a wider audience, since we have made a special effort to ensure that this volume avoids the conspicuous use of jargon.

1. What are Beliefs?
Patricia S. Churchland and Paul M. Churchland

Introduction

Beliefs, according to the tradition in philosophy, are states of mind that have the property of being about things – things in the world, as well as abstract things, events in the past and things only imagined. A central problem is to explain how physical states of the brain can be about things; that is, what it is for brain states to represent. This is a puzzle not only for beliefs, but also for mental states more generally, such as fears, desires, and goals. In analytic philosophy, the main focus has been on language as the model for beliefs and for the relations among various kinds of beliefs. Although the linguistic approach produced some useful logical distinctions, little progress was made in solving the central representational problem. A newer approach starts from the perspective of the brain and its capacity for adaptive behavior. The basic aim is to address aboutness in terms of complex causal and mapping relations between the brain and world, as well as among brain states themselves, which result in the brain's capacity to represent things.
Beliefs: The Philosophical Background

According to the conventional wisdom in philosophy, beliefs are states of the mind that can be true or false of the world, and whose content is specified by a proposition, such as the belief that the moon is a sphere or that ravens are black. The sentence in italics is the proposition that specifies what the belief is about, and conveniently also specifies what would make it true – the moon's being a sphere, for example. Because specificity concerning what is believed requires picking out a proposition, beliefs are called propositional attitudes. The "attitude" part concerns one of many "attitudes" a person might have in relation to the proposition: believing it, or doubting it, or hoping that it is true, and so on. The class of propositional attitudes generally, therefore, includes any mental state normally identified via a proposition, complete with subject and predicate, perhaps negation and quantifiers. Included in this class are some thoughts (Smith thinks that the moon is a sphere, but not Smith thought about life), some desires (Smith wants that he visits Miami, but not Smith wants love), some intentions (Smith intends that he makes amends, but not Smith intends to play golf), some fears (Smith fears that the tornado will tear apart his house, but not Smith fears spiders), perhaps even some sensory perceptions, such as seeing that the tree fell on the roof, but not seeing a palm tree. These contrasts are worthy of notice because by and large philosophers have focused almost exclusively on the propositional attitudes, and have neglected closely related states that are not propositionally specified. This neglect turns out to be a symptom of a fixation with language-like (linguaform) structures as the essence of beliefs, and of many cognitive functions more generally.
A useful and uncontroversial background distinction contrasts beliefs that one is currently entertaining (e.g., the mailman is approaching) and beliefs that are part of background knowledge and are not part of the current processing (e.g., wasp stings hurt). The latter is stored information about the way the world is, and can be retrieved, perhaps implicitly, when the need arises. Some philosophers have been puzzled about whether we can also be said to believe something that is inferable from other propositions we do believe, but which has not been explicitly inferred.1 For example, until this moment, I have not considered the proposition that wasps do not weigh more than 100 pounds, but it does follow from other things I do believe about wasps. Do I count it as among the beliefs I held yesterday? While the status of obviously inferable propositions (are they really in my belief set or not?) is perhaps a curiosity, it is not particularly pressing. A more pressing question concerns how beliefs are stored as background information, and what is represented as we learn a skill by repeated practice, such as golfing or hunting or farming. Like other propositional attitudes, beliefs have the property that philosophers refer to as intentionality. Appearances notwithstanding, intentionality has nothing special to do with intending. Franz Brentano, in his 1874 book, Psychology from an Empirical Standpoint, adopted the expression and characterized three core features of intentionality: (1) the object of the propositional attitude need not exist (e.g., Smith believes that the Abominable Snowman is ten feet tall). In contrast, a person cannot kick a nonexistent thing such as the Abominable Snowman. (2) A person can believe a false proposition, such as that the moon has a diameter of about twenty feet. Finally, (3) a person can believe a proposition P but yet not believe a proposition Q to which P is actually equivalent.
Hence Smith may believe that Horace is his neighbor, yet not believe that The Night Stalker is his neighbor, even though Horace is one and the same man as The Night Stalker. This is because Smith might not know that Horace is The Night Stalker. By contrast, if Smith shot The Night Stalker he ipso facto shot Horace, whether he knew it or not. To take a different example, this time involving the proposition's predicate, Jones might believe the proposition Sam smokes marijuana without believing the proposition Sam smokes cannabis sativa. But if Sam smokes marijuana, he ipso facto smokes cannabis sativa. Brentano's choice of the term "intentional," though it may seem perverse to contemporary ears, was inspired by the Latin word tendere, meaning to point at, to direct toward. Brentano chose a word that would reflect his preoccupation, namely, that representations are about things; they point beyond themselves. Needless to say, his word choice has been troublesome owing to its phonetic similarity to intentions to do something, which may or may not be intentional in the Brentano sense. More recent research has sometimes avoided the inevitable confusion by simply abandoning the word "intentional" in favor of "aboutness" or "directedness." Brentano was convinced that these three features of intentionality were the mark of the mental, meaning that these features demarcated an unbridgeable gulf between purely physical states, such as kicking a ball, and mental states such as wanting to kick the Abominable Snowman. As Brentano summed it up (1874, p. 89):

This intentional inexistence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves.

For Brentano, the solution to the problem of how states can represent is that they are mental, not physical, and the mental is just like that.
He forthrightly accepted the Cartesian hypothesis according to which the world of the mental is completely different from the world of the physical. Unable to ignore developments in the biological sciences, philosophers in the second half of the twentieth century found themselves in a dilemma. On the one hand, they accepted Brentano's threefold characterization of intentionality, but on the other hand, science had rendered unacceptable the idea of a mental stuff that supposedly confers the "magic" of intentionality. If mental states are in fact states of the physical brain, then a core thesis of Brentano was wrong: some physical phenomena – some patterns of brain activity, for example – do exhibit intentionality. So philosophers sought a coherent story according to which intentionality is the mark of beliefs, but beliefs are not states of spooky stuff.

Beliefs as Linguaform Structures

How can neuronal activity be about anything? Roughly, one popular answer is to say that neuronal activity per se cannot be about anything. Nevertheless, thoughts, formulated in sentences of a language, can be. How does this work? Fodor (1975) provided an elaborate defense of the widely accepted idea that beliefs are linguaform structures in the language of thought. Since many cognitive scientists in the 1980s assumed a symbol-processing model of cognition, and since language-use was assumed to be essentially symbol manipulation, the language-of-thought hypothesis was an appealing platform to support the prevailing assumptions. Whence the intentionality of thoughts? Semantics, and representational properties in general, were supposed to be the outcome, somehow, of the complex syntax governing the formal symbols. As some philosophers summed up the idea, if the syntax of a formal system is set up right, the semantics (the aboutness) will take care of itself (Haugeland 1985). How would that syntax come to be set up just right?
As Fodor saw it, Mother Nature provided the innate language-of-thought, and thus intentionality came along with the rest of our genetically specified capacities. Does this not mean that at some level of brain organization intentionality is explained in terms of neural properties? Surprisingly, philosophers here chorused a resounding “No.” The grounds for blocking any possible explanation were many and various, but they all basically boiled down to a firm conviction that cognitive operations are analogous to running software on a computer. Just as no one explains in terms of hardware how a mail application works, language-use cannot be explained in terms of biological properties of the nervous system. Software, the story went, can be run on many different kinds of hardware,2 so what we want to know is the nature of the software. The hardware – the details of how the brain implements the software – is largely irrelevant because hardware and software levels are independent. Still a popular analogy, the hardware/software story encouraged philosophers to say that neural properties are sheerly causal mechanisms that run intentional software; they are not themselves intentional devices. From another angle, the point was that neurobiological explanations cannot be sensitive to the logical relations between cognitive states or to meaning or “aboutness.” They capture only causal properties. (For criticism of this perspective, see Rumelhart, Hinton, and McClelland 1986; P.M. Churchland 1989; Churchland and Sejnowski 1989.) Dualism, pushed out of one place, in effect resurfaced in another. The Cartesian dualism of substances was replaced by a dualism of “properties” – the idea that propositional attitudes are at “the software level” and as such they cannot be explained by neurobiology. Property dualism, resting on a dubious hardware/software analogy, was shopped as scientifically more respectable than substance dualism. 
In sum, on Brentano’s view, the “magic” of intentionality was explained by the hypothesis that the nonphysical mind is in the “aboutness” business. For the property dualists, the magic of intentionality was passed on to the “aboutness” of sentences, either in the language-of-thought, or, failing that, in some learned language. Postulating an essential dependency between beliefs and language-use spawned its own range of intractable puzzles (P.S. Churchland 1986). One obvious problem concerns nonverbal humans and other animals. If having beliefs requires having a language, then preverbal and nonverbal humans, as well as nonverbal animals, must be unable to have beliefs, or only have beliefs in a metaphorical, “as if,” sense. (For a defense of this view, see a highly influential philosopher, Donald Davidson 1982 and 1984. See also Brandom 1994 and Wettstein 2004.) The idea that only verbal humans have beliefs has been difficult to defend, especially since nonverbal children and animals regularly display knowledge of such things as the whereabouts of hidden or distant objects, about what can be seen from another’s point of view and what others may know (Akhtar and Tomasello 1996; Call 2001; Tomasello and Bates 2001). Human language may be a vehicle for conveying signals to others about these states, but beliefs in general are probably not unavoidably linguaform (Tomasello 1995). From the broader perspective of animal behavior, linguistic structures such as subject-predicate propositions are not the most promising model for representations in general, and not even for beliefs in particular. Most probably, linguaform structures are one means – albeit one impressively flexible and rich means – whereby those representations can be cast into publicly accessible form (Lakoff 1987; Langacker 1990; Churchland and Sejnowski 1992; P.M. Churchland 2012). 
A related problem is that even in verbal humans, some deep or background beliefs may be expressible in language only roughly or approximately. For example, background beliefs about social conventions or complex skills, though routinely displayed in behavior, may well be difficult to articulate in language. Social conventions about how close to stand to a new acquaintance or how much to mimic his gestures can be well understood, but may be followed nonconsciously (Dijksterhuis, Chartrand, and Aarts 2007; Chartrand and Dalton 2008). A further difficulty for the “beliefs are linguaform” approach is that it embraces a basic discontinuity between propositional attitudes (such as beliefs) on the one hand, and other mental representations (feeling hungry, seeing a bobcat, wanting water) on the other. Embracing such a discontinuity requires postulating special mechanisms to account for such ordinary processes as how we acquire beliefs about the world from perceptual experience, and how feelings, motives, and emotions influence beliefs. A slightly different approach that avoids some of these problems, favored mainly by Dennett (1987), is called interpretationalism (see also Davidson 1982, 1984). The core of his idea is that if I can explain and predict the behavior of a system by attributing to it beliefs and other propositional attitudes, then, to all intents and purposes, it actually has those representational states. To adopt such an interpretation, according to Dennett, is to “take the intentional stance” towards the creature, and there is nothing more to intentionality than being the target of the intentional stance. Consistent with this view, Dennett opined that nothing in the device needs to correspond to the structural features of the proposition specifying the belief (that is, the subject, predicate, quantifiers, and so forth), since, after all, your belief is just my interpretation of your behavior that is predictively more successful than any other strategy I might have used.
Dennett emphasizes that there are differences in the degree of sophistication of beliefs, and hence a human’s beliefs can safely be assumed to be more sophisticated than those of a leech. As he sees it, exactly how a device must be structured and organized to have behavior that invites interpretation in terms of very fancy beliefs is a question for empirical research, but not one that will yield any new insights into the nature of intentionality. A major shortcoming of Dennett’s approach is that it does not address, except in the most general terms, how internal states come to represent the external world or the brain’s own body. It typically considers such details as irrelevant to the problem since whatever the brain’s (or computer’s) inner structure, if I can best predict its behavior by attributing beliefs, then beliefs it has. For Dennett, intentionality is a software issue, not a neurobiological issue; if I have a language and a modicum of rationality, then my language-using software handles the problem of belief-attribution. Another puzzle arises about the business of interpretation itself. Is that not itself an intentional function that needs explaining? What about my own beliefs and desires? Dennett’s answer here is that I adopt the intentional stance with respect to myself. If I can best predict and explain my own behavior by attributing beliefs to myself, then that is reason enough to say I have beliefs and to say what they are. Seeking the reality of my beliefs in terms of brain events would be, in his view, to misunderstand the sort of things beliefs are. Attributing beliefs is like using an instrument to solve a certain problem, and hence nothing is implied about the neural reality. For this reason, Dennett’s view is often called “instrumentalism,” and is in keeping with an older tradition of instrumentalism with respect to unobservable phenomena in physics and chemistry. 
Instrumentalists in physics, for example, consider descriptions of protons or electric forces not to be about some unobservable reality, but merely as verbal instruments for interpreting observable phenomena. Contrary to Dennett’s instrumentalism, it seems likely that brains do build models of the world they inhabit, and that these models depend on neuronal organization to reflect such things as similarity (the taste of ice cream is more similar to the taste of cheese than to the taste of bacon), class membership (carrots and onions are vegetables but ants are not), exclusion (choke cherries are not tasty), symmetric and asymmetric relations (has the same rank as, is lower ranked than), and so forth. The spatial relationships of the body, for example, can be characterized in terms of a somatosensory map, wherein neighborhood relationships among parts of the body are reflected in neighborhood relationships of neurons responding to stimuli on those parts of the body. A belief about the location of a pain thus depends on the way the body is modeled, represented, and re-represented in the brain; it is not just a hardware-irrelevant interpretation that happens to be largely successful. As we shall suggest below, understanding the details of how nervous systems construct and update their models of the world is probably crucial if we are to understand the nature of representing in nervous systems.

Is Information Theory the Right Model for Beliefs?

The powerful mathematical resources of information theory (IT) as used by communications engineers have often seemed the most promising tools for characterizing precisely the relationship between neuronal responses and effective stimuli (Rieke et al. 1997; Borst and Theunissen 1999; Dayan and Abbott 2001).
What some theorists hoped was that these resources could be extended to encompass representations in general, including beliefs (see, for example, Dretske 1981, 1988). In what follows, we shall outline the approach, and then briefly discuss its limitations.

Mutual information is a measure of the degree of statistical dependence between two random variables, such as between thunder and lightning, or between smoking and lung cancer. The concept is rooted in probability theory, and is useful because it will measure any deviation from independence, whether owed to linear correlation or to some non-linear dependency. If the probability of a response Rj occurring, given that stimulus condition Sk obtains (i.e., P(Rj | Sk)), equals the probability of Rj occurring all by itself, then the mutual information is 0; i.e., the occurrence of Rj does not carry any information about whether Sk occurred or not. When Rj’s occurrence is dependent on Sk’s occurrence, then we can talk about Rj as carrying information about Sk (as opposed to some other stimulus, Si). Thus, if R will not occur unless S occurs, then knowing that R did occur tells us something about whether S occurred. As applied to neurons and what they code, the idea is to determine the dependence of a neuron’s spiking responses on the presence of a specific stimulus by observing the neuron’s behavior in a range of stimulus conditions. The observed data allow you to calculate the probability of a particular response Rj given a particular stimulus Sk, and to compare it with the probability of Rj happening all by itself (the unconditional probability of Rj). When the two probabilities differ, we can talk about Rj coding for Sk. A slightly different but related approach suggests that we take the “brain’s eye view,” calculating the probability of the occurrence of Sk, given that the neuron is responding with Rj.
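These two quantities – the mutual information between stimulus and response, and the “brain’s eye view” posterior probability of the stimulus given the response – can be illustrated with a small numerical sketch. The joint probabilities below are invented purely for illustration; nothing here is drawn from real neuronal recordings.

```python
import math

# Invented joint distribution P(S, R) for a toy neuron:
# two stimulus conditions (S1, S2) and two responses (spike, silent).
joint = {
    ("S1", "spike"): 0.40, ("S1", "silent"): 0.10,
    ("S2", "spike"): 0.10, ("S2", "silent"): 0.40,
}

def marginals(joint):
    """Marginal distributions P(S) and P(R) from the joint distribution."""
    p_s, p_r = {}, {}
    for (s, r), p in joint.items():
        p_s[s] = p_s.get(s, 0.0) + p
        p_r[r] = p_r.get(r, 0.0) + p
    return p_s, p_r

def mutual_information(joint):
    """I(S;R) = sum over s,r of P(s,r) * log2[P(s,r) / (P(s)P(r))], in bits.
    This is 0 exactly when S and R are statistically independent."""
    p_s, p_r = marginals(joint)
    return sum(p * math.log2(p / (p_s[s] * p_r[r]))
               for (s, r), p in joint.items() if p > 0)

def posterior(joint, r):
    """The 'brain's eye view': P(S = s | R = r) by Bayes' rule."""
    p_s, p_r = marginals(joint)
    return {s: joint.get((s, r), 0.0) / p_r[r] for s in p_s}

print(round(mutual_information(joint), 3))  # ~0.278 bits of dependence
print(posterior(joint, "spike"))            # a spike points strongly to S1
```

Here the neuron’s spiking carries roughly 0.28 bits of information about the stimulus, and observing a spike raises the probability of S1 from its prior of 0.5 to a posterior of 0.8. If the joint probabilities were instead uniform (all 0.25), the mutual information would be exactly 0 and the posterior would equal the prior – the response would tell the “ideal observer” nothing about the stimulus.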
In other words, we can say a neuron’s response Rj carries information about stimulus Sk if and only if Sk is predictable from Rj (where predictable can be defined rigorously in mathematical terms). The reconstructive or “brain’s eye view” approach has many variations, and many names: “optimal estimation,” “Bayesian classification,” “Bayesian estimation,” and “ideal observer analysis” (IOA) (Thomson and Kristan 2005). For convenience, we shall focus on IOA. Though there are significant differences between IT and IOA, for the purposes of this discussion these differences do not matter. The background hope is that just as neuroscientists have probed further and further in from the sensory periphery of nervous systems, so IT and IOA may be applied at deeper and deeper levels, ultimately to yield a more comprehensive theory of representation and hence of beliefs. As we discuss below, the hope that this strategy is suitably extendable to representations in general may be problematic. In particular, we shall draw attention to the fact that some important representational functions are stimulus independent to varying degrees.

Limits to the Information Theory Approach

(a) Representing relevant things: Hopfield (2002) has made the point that we cannot understand biological computation unless we understand how the brain ignores information irrelevant to its needs, but invests in behaviorally relevant information. The IT/IOA conception of information so far lacks the resources to deal with relevance-to-need, and Turing-style computation lacks the means to accommodate needs and motivations. Brains, by contrast, are need-driven and relevance-sensitive.

(b) Representing absent things: psychological experiments with humans and other animals show that in fact brains represent absent objects – not just currently existing stimuli. Absent things represented include cached food, missing group members, and goals – one’s own as well as those of others.
They include distant spatial locations, past events, and future events. Goal representations and predictive representations are paradigmatic examples in which the standard causal relationships between external stimulus and neuronal representation characterized by the IT/IOA model break down. A host of related problems arises when we consider certain sensory illusions, particularly those where the nervous system constructs a feature, representing it as part of the perceived object in order to make sense of the sensory signals. Subjective contours – contours that do not exist in the stimulus but are perceived – have been well studied by psychologists (see Figure 1.1).

Figure 1.1 Kanizsa figure in which a white triangle appears to be imposed on black circles.

Offset pairs of purely subjective contours can even be used in the perceptual construction of a three-dimensional structure. Moreover, physiologists have discovered that neurons that normally respond to a real line (a line actually in the stimulus) respond comparably when conditions are right for perception of a subjective contour. Apparent motion is likewise common, and likewise problematic for the IT/IOA approach. When the experimental setup allows for bistable apparent motion, the motion seen is either vertical for both pairs, or horizontal for both pairs; there is never a mix of horizontal and vertical. This finding is important because it shows that the direction of apparent motion is coordinated across fairly large areas of the visual field. For videos of several different examples of apparent motion, see http://psy2.ucsd.edu/~sanstis/motion.html

(c) Representing categories: although it is easiest to see why absent objects are problematic for the classical approach to representing, general categories are also tricky.
Consider, first, common general categories, such as friend, kin, ripe, home, territory, lost, as well as categories for activities (eating, hiding, mating, threatening, calling), spatial relations (under, over, at home, inside, outside), and relative terms (bigger, smaller, easier). An instance of a general category can be a stimulus on a given occasion, but by definition, a category itself is not singular, but general – it is not the stimulus. A category can be true of many individuals, or apply to many individuals, and we may acquire the category via encounters with individuals. But an individual stimulus it is not. Relative terms are especially puzzling for the IT/IOA
