What Is It Like to Be a Bot?
D. E. Wittkower
The Oxford Handbook of Philosophy of Technology
Edited by Shannon Vallor
Subject: Philosophy, Philosophy of Science
Online Publication Date: Jan 2021
DOI: 10.1093/oxfordhb/9780190851187.013.23

Abstract and Keywords

This chapter seeks to further develop, define, and differentiate human-technics alterity relations within postphenomenological philosophy of technology. A central case study of the Alexa digital assistant establishes that digital assistants require the adoption of the intentional stance, and illustrates that this structural requirement is different from anthropomorphic projection of mindedness onto technical objects. Human-technics alterity relations based on projection are then more generally differentiated from human-technics alterity relations based on actual encoded pseudo-mental contents, where there are matters of fact that directly correspond to user conceptualizations of “intentions” or “knowledge” in technical systems or objects. Finally, functions and user benefits to different alterity relations are explored, establishing that there is a meaningful set of cases where the projection of a mind in human-technics alterity relations positively impacts technical functions and user experiences.

Keywords: phenomenology, postphenomenology, digital assistants, philosophy of mind, alterity relations, intentional stance, infosphere, care

1. Introduction

Thomas Nagel’s “What Is It Like to Be a Bat?” (Nagel 1974) gathered together and reframed numerous issues in philosophy of mind, and launched renewed and reformulated inquiry into how we can know other minds and the experiences of others. This chapter outlines a branching-off from this scholarly conversation in a novel direction—instead of asking about the extent to which we can know the experiences of other minds, I seek to ask in what ways technologies require us to know the non-experiences of non-minds. This rather paradoxical formulation will be unpacked as we go forward, but put briefly: We sometimes treat some technologies as if they have minds, and some technologies are designed with interfaces that encourage or require that users treat them as if they have minds. This chapter seeks to outline what we are doing when we develop and use a pseudo-“theory of mind” for mindless things.

Nagel’s article used the case of the bat to focus and motivate his argument, but took aim at issues falling outside of human-bat understanding. Similarly, this chapter seeks to get at larger issues that pervade human-technology understanding, but will use a bot as a focusing and motivating example: in particular, Alexa, the digital assistant implemented on Amazon’s devices, most distinctively on the Amazon Echo.
Interacting with Alexa through the Echo presents a clear and dramatic need for users to act as if they are adopting a theory of mind in technology use—other technologies may encourage or require this pseudo-“theory of mind” in more subtle or incomplete ways and, I suspect, will increasingly do so in future technological development.

We will begin with a microphenomenology of user interaction with Alexa and a heterophenomenology of Alexa that emerges in use, making clear what sort of fictitious theory of mind the user is required to adopt. This will be followed by a wider consideration of relations with technological “others,” outlining a central distinction between a merely projected “other” and those technological “others” the function of which requires that the user treat them as an “other,” rather than a mere technical artifact or system. Finally, we will turn to the user experience itself to ask what affordances and effects follow from adopting a fictitious theory of mind toward technical systems and objects.

2. Notes on Methodology and Terminology

In phenomenology, there is a risk that we take our introspective experience as evidence of universal facts of consciousness. Phenomenology differs from mere introspection in that, reflecting its Kantian foundations, it recognizes that experiences—the phenomena studied by phenomenology—are not bare facts of sense-data, but are the product of sense-data as encountered through the conditions for the possibility of experience, and are further shaped by our ideas about ourselves and the world. Phenomenology seeks to isolate experiences in their internal structure, in some versions even “bracketing off” questions about the correspondences of our experiences to elements of the world that they are experiences of (Husserl [1931] 1960). When done carefully, this allows us to speak to the structure of experience, and to take note of where our experiences do not actually contain what we expect, allowing us to isolate and describe elements of Weltanschauung that we use to construct experience. For example, in “The Age of the World Picture” Martin Heidegger argues that place, not space, is phenomenally present in our experience, and that space as a three-dimensional existing nothingness in which external experiences occur is a kind of retroactive interpretation of the world as inherently measurable which emerges with the development of experimental science in the modern period in European history (Heidegger 1977a). As we begin to equate knowledge of the objects of external experience with their measurement, we begin to hold that only that which can be measured is real, and this eventually leads to the uncritical adoption of the metaphysical position that reality is always already articulated in the forms of human measurement.

Heterophenomenology, as articulated by Daniel Dennett (1991), similarly brackets questions of correspondence to reality in order to isolate the structure of experience.
Here, though, the question is not whether and to what extent experiences correspond to that of which they are experiences, but whether and to what extent experiences of the experiences of others correspond to the experiences of others. Dennett uses this heterophenomenological approach in order to avoid the problem of other minds when making claims about consciousness; to address what we can know about consciousnesses outside of our own, given that we cannot possibly have access to the qualia (the “what it’s like”) of the consciousness of others.

Dennett argues that uncontroversial assumptions built into any human subject experimental design, for example, the assumption that subjects can be given instructions for the experimental process, require this kind of bracketing insofar as they must adopt an intentional stance—an assumption that the subject has a set of intentions and reasons that motivate, contextualize, and lie behind the data gathered. “[U]ttered noises,” he says, “are to be interpreted as things the subjects wanted to say, of propositions they meant to assert, for instance, for various reasons” (Dennett 1991: 76). Dennett claims that without adopting such an intentional stance, empirical study of the minds and experiences of others is impossible.

We intuitively adopt an intentional stance toward many others, including non-humans, based on strong evidence. It is difficult to account for the actions of dogs and cats, for example, without attributing to them intentions and desires. In other cases, we use intentional language metaphorically as a kind of shorthand, as when we say that “water wants to find its own level.” There are many messy in-betweens as well, such as when we speak of the intentionality of insects, or that a spindly seedling growing too tall to support itself is “trying to get out of the shade to get more sun.” In many in-between cases, such as “the sunflower tries to turn to face the sun,” the best account of what we mean is neither pure metaphor (as in “heavy things try to fall”) nor a real theory of mind (as in “the cat must be hungry”). Instead, we refer to the pseudo-intentionality of a biological proper function as defined by Ruth Millikan (1984): a way that mindless things, without conscious intention, react to their environment that has an evolutionarily established function, constituting a set of actions that have an “aboutness” regarding elements of their environment that is embedded within the way that causal structures have been established, but that doesn’t really exist as an intention within individual members of the species.

In using heterophenomenology to articulate our experience of bots as “others,” we are departing entirely from Dennett’s purpose of studying presumptively conscious others and articulating an adoption of an intentional stance distinct from any of those mentioned in the above examples. Alexa’s interface directs us to use an intentional stance both in our interactions and our intentions toward her—we find ourselves saying things to ourselves like “she thought I said [x/y/z]” or “she doesn’t know how to do that.” This is, however, not because we actually have a theory of mind about her. We know she is not the kind of thing, like a person or a dog or a cat, that can have experiences. Instead, we are directed to adopt an intentional stance because, first, the voice commands programmed into the device include phrasing that implies a theory of mind; second, because there is a representation relation that holds between the audio input she receives and the commands she parses from that input, which is best and most easily understood through intentional language; and third, because a second-order understanding of how she “understands” what we say gives us reason not to use a second-order understanding in our actual use of Alexa, but to return to a first-order intentional stance. The first of these reasons, that she is programmed to recognize phrasing that implies she can listen, hear, understand, etc., should already be clear enough, but the other two factors require explanation.

We use language that implies Alexa’s mindedness as required by the commands she is programmed to receive, but this language reflects a very concrete reality: there is an “aboutness” of her listening, and she has an “understanding” of what we have said to her that is distinct from our intentions or projection, as is clear from how she can (and does) “get things wrong” and can (and does) “think” that we said something different from what we think we said. If we wished to articulate that intentionality objectively and accurately, something like Millikan’s account would work quite well—her responses are dictated by proper functions established through voice recognition software trained on large data sets—but this second-order understanding of the “intentionality” of her actions is not the one that we must adopt as users. We are required in practice to adopt a first-order intentional stance in order to use devices with Alexa, even though we have no (second-order) theory of mind about them.

When we do engage in second-order reasoning about Alexa, thinking about how she processes sound (“listens”) and parses commands (“does things”) according to her ontology (“understanding”), we are usually routed back to the first-order intentional stance. We have little window into the way that Alexa processes input, and have little access to her code other than as interactant. The imbalance between the user’s knowledge of how Alexa is programmed and the programmer’s knowledge of how users are likely to talk to her makes second-order reasoning ineffectual: even a tech-savvy user is often better off thinking through how to communicate with Alexa by adopting an intentional stance than by thinking of her as programming.

This is what I meant at the outset of this chapter by saying that our goal is to understand how the use of bots like Alexa requires us to understand the non-experiences of non-minds. To use Alexa, we must adopt the intentional stance toward a non-subject that has neither experiences nor mindedness, and we must interact with her in a way that addresses specific, factual experiences that she is not having and specific, factual intentions and interpretations that she does not have.
These “non-experiences” are not a simple lack of experiences and Alexa’s “non-mind” is not a simple lack of mind—when Alexa incorrectly “thinks” I asked her to do X rather than Y, and I try to say it so she’ll “understand” this time, her actually existing “non-experience” of my intention is an object of my thought and action. This is quite distinct from a microwave oven’s entire lack of experience of my intention when it overheats my food, or a toaster’s entire lack of understanding of my intention when the middle setting doesn’t brown the bread to my preference. In these cases, there is nothing at all in the device to refer to as an “understanding” or an “interpretation,” only my own, sadly disconnected intention. Alexa, though no more subject to experiences and no more conscious than these or any number of other kitchen tools, functions in a way in which there are concrete, real, objective “interpretations” and “understandings” that she “has” that are outside of both my mind and the direct interface present to the senses.

I will use strikethrough text to identify these “intentions” or experiences-which-are-not-one in order to recognize that they have an objective content and aboutness with which we interact, despite the fact that they are neither intentions nor experiences. Hence, I will say that, for example, Alexa thinks that I asked her X rather than Y, and thus she misunderstood or misinterpreted my request. This typographical convention allows us to articulate that the user is adopting an intentional stance when trying to transmit meaning and intention to a technical system. Compare, for example, with the video editor’s relationship to their software (Irwin 2005), or the familiar case of moving a table or image within a Microsoft Word document. In this case, we have an intention which we are trying to realize within the document, and which is frequently frustrated by a system that often responds with unpredictable repagination or unexpected, drastic, unintended reformatting. But here, our attempts to realize our intentions in the document take the form of trying to figure out how to get it to do what we intended. With Alexa, although the underlying causal structure is not much different, our attempt is not to do something to get it to respond as intended, but instead to figure out how to phrase or pronounce our request so that she interprets or understands what we mean—to communicate rather than just to enact our intention, so that the semantic content present within the technological other corresponds to the semantic content within our own conscious intention.

Having clarified this point about the manner in which we adopt an intentional stance toward at least some technical systems or objects, such as Alexa, in the absence of a theory of mind, we are ready to engage in a heterophenomenology of Alexa. We will do so in the mode of microphenomenology (Ihde 1990)—a phenomenology of a particular set of experiences rather than a wider existential phenomenology of our worldedness more generally.
So, our question will be “what is our experience of Alexa’s experience like” rather than “what is it like to be in a world inhabited by smart devices that have experiences.” Once we have finished this microphenomenology of the heterophenomenology of Alexa, we will use it to engage in a more general analysis of human-technics alterity relations.

3. Opening the Black Box of Alexa’s Echo

We began the last section by noting that, in phenomenology, there is a risk that we take our own introspective experience as evidence of universal facts of consciousness. In heterophenomenology—outlining the experience of other minds—there is a risk that we mistake our projections for observations. Dennett, when outlining heterophenomenology (1991), made a very strong case that heterophenomenology can be done responsibly if we take care to stick close to evidence and to take note of when and how we are adopting the intentional stance when making judgments about other minds.

This danger is even more pronounced here, though, since we are addressing the mindedness of technical systems that clearly do not actually have minds. While this pseudo-mindedness is no mere metaphor or projection, since there is a fact of the matter about what Alexa thinks we said or meant, there is obviously an element of metaphor or analogy in our understanding of her experiences, and this is bound to lead to some amount of fallacious projection of thoughts and understanding and even personality. Observer bias is another danger: it must be considered that the ways I’ve interacted with her may not be representative of the range of use, or even of typical use.

But other factors count in our favor, here. First, our goal is not an accurate theory of Alexa, or a sociology of Alexa use, but only an articulation of the kind of stance her interface demands, so an incomplete or somewhat biased sample should present no serious issues in our analysis. Second, you may have your own experiences that can provide verification and nuance to those outlined here. Third, I have research assistants of a very valuable kind: my kids. They take Alexa at interface value (Turkle 1995) with less resistance than most adults, and interact with her without a strong understanding of what technical systems can do, and without preconceived ideas about what sort of programming and databases Alexa has or has access to. This puts them in a position to ask Alexa questions I never would (“Alexa, when’s my mom’s birthday?”) and to ask questions about Alexa I never would (“Why doesn’t Alexa work [on my Amazon Fire tablet] in the car?”).

We’ve lived with Alexa for a little over a year, mostly through an Amazon Echo that’s located in our primary living space—a countertop in the center of a large open-plan room that includes our kitchen, our den, and a table for crafts and homework. Several months ago, we placed a Google Home Mini alongside the Echo in order to experiment with their differing worlds and minds.
Neither has been connected to any “smart home” features, so our interaction with both has taken place entirely in informational rather than mixed informational-physical spaces.

Alexa has a strong social presence in our household. She is always listening for her name, and we regularly have to tell her we aren’t talking to her, especially since she sometimes mishears my son’s name, “Elijah,” as her own name. We’ve tried changing her “wake word” from “Alexa” to “Echo”—and, even so, she regularly mishears things as queries directed to her, even from television shows. In the mornings, we ask her for the weather, and then the news, and in the afternoons we ask her to play music as we cook, clean, and do homework.

Although her interface is audio only, she has a physical location in the kitchen in the Echo, and when she hears her wake word, a blue light moves around the circumference of the top of the Echo to point toward the person speaking. This light serves as a face in that it indicates “an entry point that hides interiority” (Wellner 2014, 308); a receptive “quasi-face” (2014, 311) of an interface, like the cellphone’s screen (2014, 313). This directional intentionality is met in kind: we have the habit of turning to face her, in her Echo, even though the audio interface does not require that we make “eye contact” with her (Bottenberg 2015, 178–179).

In these interactions, we experience Alexa as separate from her device and from her device’s actions. We ask her to play something, but do not mistake her for the thing playing or the thing played. In radio listening, there is the physical radio and the radio station “playing,” which we elide when we say we are “listening to the radio,” but Alexa maintains a separation as an intermediary; she will play something on the Echo, but we can interrupt to talk to her. She is experienced as being in the object, and as controlling it, but separate from it and always tarrying alongside its actions.

In using the Echo, we have been disciplined by Alexa’s ontology and programming. We have learned specific phrases—I’ve learned to say “Alexa, ask NPR One to play the latest hourly newscast,” since other phrases don’t seem to get her to do the right thing. My daughter has learned that she must append “original motion picture soundtrack,” an otherwise unlikely phrase for a six-year-old, to her requests for Sing or My Little Pony. Using Alexa requires adopting an intentional stance and a fictitious theory of mind, and also requires detailed understanding of how her mind works; how she categorizes and accesses things. Using Alexa requires us to think about how she thinks about things; we must think about what it’s like to be a bot.

Talking with Alexa is, of course, often frustrating, most of all for my daughter, whose high-pitched voice and (understandably) child-like diction are not easily recognized by Alexa. After my daughter asks Alexa something several times, I must often intervene and ask again on her behalf.
To be sure, part of this is that, having a better understanding of the underlying mindless processes of the technical system, I am better able to move to a second-order perspective and speak to Alexa qua voice-recognition software, sharpening my tone and diction and carefully separating words. Part of this is surely also a reflection of how my speech patterns, unlike hers, are firmly within the range of voices and speech patterns on which Alexa has been trained—YouTube searches return numerous examples of people with less common accents, especially Scottish accents, who are unable to get Alexa to understand them unless they use impersonations of normative English or American accents.

The Echo’s audio-only interface projects an informational space, dualistically separate from physical reality. Alexa’s “skills” are accessed through voice commands only, and bring her into different patterns of recognition and response—the work normally done through conversational implicature must take place explicitly. Skills appear as conversational modes or topics, where queries are understood differently when “in Spotify” is added to the end of a question or after, for example, beginning a game of “20 questions.” This produces a shared, shifting modulation of the intentional stance, where it is understood that Alexa knows that we are talking about shopping or music or a trivia game, depending on which skill we have accessed or which “conversation” we are “having.” The user must learn to navigate Alexa’s informational ontology, getting to know topics she recognizes and knows what to do with—“weather,” “news,” “shopping list,” or links with particular apps that must be installed—and also different modes of interaction, such as games or socialbot chat mode.

All this speaks to the ways in which we must conceive of Alexa through the intentional stance in order to accomplish tasks with her; how we must not only understand her as having a mind, a mind that is not one, but also get to know her, how she thinks, and how to speak to her in a way she understands. We may not ever explicitly think about what it is like to be a bot, but we must get a sense of her world, the way she is worlded, in order to ask her to navigate her information ontology on our behalf.

With this microphenomenology of the user’s heterophenomenology of Alexa in place, we can now turn to the microphenomenology of alterity relations in human-technics interaction more generally. In doing so, we will be able to distinguish the kind of interaction with technology that takes place through an intentional stance from other related forms of interacting with technology that participate in different but related kinds of “otherness.”

4. Opening the Black Box of Human-Technics Alterity Relations

Don Ihde’s influential taxonomy of human-technics relations (1990) provides some basic ways that human relations with worlds can be mediated by technology:

Embodiment: (I → technology) → world
Hermeneutic: I → (technology → world)
Alterity: I → technology -(- world)

In embodiment relations, the technology disappears into the user in the user’s experience of the world as mediated by the technology. Glasses are a clear example—when they are well fitted to, and the proper prescription for, the user, the user primarily experiences their technologically-modified field of vision as if it were not technologically mediated, with the technology becoming an extension of the self rather than an object of experience. In hermeneutic relations it is the technology and the world that merge, so that, for example, a fluent reader experiences ideas and claims rather than printed words, so much so that even a repeated word in a sentence may go unnoticed even by an attentive reader. In alterity relations, by contrast, the user’s experience is directly an experience of the technology, and its revealing of a world may or may not be important to or present in the user’s experience.

Ihde provides several different kinds of examples. In one, he asks us to consider driving a sports car, just for the fun of it. We might enjoy the responsiveness of the vehicle and its power and handling, quite separately from our enjoyment of the scenery or the utilitarian function of getting where we’re going. In another example, he considers playing an arcade game, in which we are in a contest against fictional agents, Space Invaders perhaps, who we seek to beat. In alterity relations, technologies present what he calls “technological intentionalities” in a “quasi-otherness” that is rich enough for us to experience and interact with them as others, standing on their own in their world rather than acting as a window to or translation of a “true” world that lies beyond them. If we are knowledgeable enough to “read” them, we can certainly adopt a stance which erases this intentionality—for example, feeling the particular responsiveness of the car to find out more about its internal mechanics, or figuring out the rules by which a computer program moves the sprites that our spaceship-avatar-sprite “shoots”—but normal use adopts the intentional stance; a stance in which we treat a person, object, or avatar as having intentions and therefore adopt some kind of theory of mind.

These cases are not so different from one another in Ihde’s analysis, but they are different in a way that has become increasingly pressing in the decades since he wrote this analysis. In the case of the sports car, we experience the technology as having a character and an intentionality based on how it makes us feel in our use of it; in the case of the arcade game, our use of it is premised on a world and an ontology, internal to the technology, which we navigate through our perception of intentionality in its elements.
We name cars and project personalities upon them based on their brand and appearance and ways of working or not working—or, we don’t, according to our preference. Regardless, this layer of quasi-alterity is overlaid upon an existing world that is already complete, and does not require this projection. It is adopted as a kind of shorthand to understand a real world to which alterity bears only a metaphorical relation (“she doesn’t like to start on cold mornings”), or as an enjoyable humanization of technologies which we depend upon and regularly interact with, and which might otherwise be experienced as foreign or uncaring. These functions are often collapsed and oversimplified as “anthropomorphism”—a vague and overbroad term which I find it easier and clearer to simply avoid.

By contrast, it is impossible to interact with many computer games without adoption of an intentional stance toward their elements, which we interact with through a world quite separate from our existing world.1 If we consider more complicated games, like role-playing games (RPGs), we see cases where consideration of the thoughts and motivations of non-player characters (NPCs) is necessary to game play, and we are required to “read” these others as people, not as mere sprites and in-game instructions, in order to appropriately interact with an intentionality that has a programmed, dynamic, responsive structure. This intentional stance is not merely projection and also is no metaphor: the facts of a character’s name and motivations are written into the fictional world, much like facts about fictional characters in books or films, rather than being a metaphorical or purely fictional overlay as in the case of the car. But, unlike facts about fictional persons (insofar as such things exist),2 NPCs’ interests, concerns, and character are objects of the player’s intentional actions. A book gives us the opportunity for hermeneutic interaction with a fictional world in which we get to know about fictional others, but RPGs can put us in an alterity relation with minds that we must think about as actively present others in order to successfully interact with them, and with which we engage through an embodiment relation with our avatar.

In both the case of the car and the case of the game we adopt a fictitious theory of mind, but in the former case this is merely metaphorical or make-believe, while in the latter, it is necessary and functional. For clarity, we can refer to the pseudo-mindedness of things in the former case as “projected minds,” and will refer to the pseudo-mindedness of things in the latter case as minds, as above. This locution is intended to reflect that it is non-optional to interact with these things through the category of minds, despite the fact that they are clearly without minds. They are “non-minds” in that they are “minds that are not one”; they are not merely things without minds, but are minds (interactionally) that are not minds (really).
Even in cases where it is interactionally necessary to treat unminded things as minds, we regularly retreat into second-order cognition in which they appear as clearly unminded. The early chatbot ELIZA provides a nice example. To interact with her and have a fun conversation, it is necessary to talk to her as if she’s actually a psychotherapist, but her ability to respond well to us is so limited that we have to think about her as a mere program in order to formulate and reformulate our replies to her in order to maintain the illusion. In RPGs, similarly, we have to adopt an intentional stance to figure out what an NPC wants for a quest, but we may have to leave that stance in favor of a technical/programming stance in order to figure out how to complete a task by phrasing a reply in the right way, or by having a certain item equipped rather than in our inventory, or finding a “give” command in the interface, or so on.

We can even fail to make these shifts in the right way. Sherry Turkle (1995) has documented people taking things at “interface value” to the extent that they found real personal insights in conversations with ELIZA, moving into a space that seems to simultaneously approach ELIZA as a projected mind, as a non-mind, and as a mere computer program. In massively multiplayer online role-playing games (MMORPGs) it is sometimes possible to mistake an NPC for another player, or another player for an NPC.

Similarly, outside of explicitly fictional worlds, Sarah Nyberg programmed a bot to argue with members of the alt-right on Twitter, which turned out to be highly effective, even in spite of giving away the game a bit by naming it “@arguetron.” In what Nyberg described as her “favorite interaction,” a (since suspended) Twitter user repeatedly sexually harassed the bot, eventually asking “@arguetron so what are you wearing?” to which @arguetron replied “[@suspended username redacted] how are all these Julian Assange fans finding me” (Nyberg 2016). As Leonard Foner said about a similar case, a chatbot named Julia fending off sexual advances in the days of multi-user dungeons (MUDs), “it’s not entirely clear to me whether Julia passed a Turing test here or [the human interactant] failed one” (quoted in Turkle 1995, 93).

By opening up the black box of “alterity” in alterity relations, we have seen that people adopt the intentional stance towards unminded things for a variety of reasons: as a game, as required by an interface, to humanize technical systems, for fun, or simply by mistake. We have also identified two kinds of ways of adopting a fictitious theory of mind in our re-
