Making AI Philosophical Again: On Philip E. Agre's Legacy

Jethro Masís

continent. (continent.cc) | ISSN: 2159-9920 | This work is licensed under a Creative Commons Attribution 3.0 License.
“So here I was in the middle of the AI world—not just hanging out there but totally dependent on the people if I expected to have a job once I graduated—and yet, day by day, AI started to seem insane. This is also what I do: I get myself trapped inside of things that seem insane.” — Philip E. Agre, RRE News and Recommendations (7/12/2000)

Introduction

Philip Agre is a former professor of information studies at the University of California, Los Angeles, who was reported missing in October 2009. Concerned notes about his disappearance appeared widely on the internet (Carvin 2009; Pescovitz 2009; Young 2009; Rothman 2009), for he was presumed to have had some kind of mental breakdown and to be homeless somewhere in Los Angeles. In January 2010 it was reported that he had been found alive and indeed “self-sufficient and in good health” (Carvin 2010). However, he never returned to academic life and explicitly asked his closest friends to be left alone. And this is also how the work of one of the most interesting theorists on the relationship between the computer and information sciences and the humanities faded away. He was an internet scholar and a sort of proto-inventor of web 2.0 (in the 1990s!) who grew increasingly worried about the consequences that communication technologies were having on people's privacy, a concern he theorized with his usual shrewdness and theoretical discernment (Agre 1994; Agre 1998). People who 'knew' him, or at least who worked close to him, describe Agre as a very reclusive person who never really spoke about his personal life. As reported by The Chronicle of Higher Education (Young 2009), Agre had not just gone missing but had abandoned his job and his apartment, and he suffered from manic depression.

When Agre was still missing, Michael Travers—a computer scientist who met him in graduate school at MIT—summarized in a blog post (Travers 2009) Agre's significance for computer studies and beyond. I think his words are worth quoting at length:

Phil was one of the smartest people I knew in graduate school. More than smart, he had the intellectual courage to defy the dominant MIT AI sensibilities and become not just an engineer but also a committed critic of the ideology under the surface of technology, especially as it was applied to artificial intelligence. He was a leader of the situated action insurgency in AI, a movement that questioned the foundations of the stale theories of mind that were implicit in the computational approach. Phil had the ability to take fields of learning that were largely foreign to the culture of MIT (continental philosophy, sociology, anthropology) and translate them into terms that made sense to a nerd like me. I feel I owe Phil a debt for expanding my intellectual horizons.
Phil was a seminal figure in the development of Internet culture. His Red Rock Eater email list[1] was an early predecessor to the many on-line pundits of today. Essentially he invented blogging, although his medium was a broadcast email list rather than the web, which didn't yet exist. He would regularly send out long newsletters containing a mix of essays, pointers to interesting things, and opinions on random things. He turned email into a broadcast medium, which struck me as weird and slightly undemocratic at the time, but he had the intellectual energy to fuel a one-man show, and in this and other matters Phil was just ahead of the times.

This paper discusses some ideas envisioned by Agre, particularly those concerning a critical technical practice and the possibilities of making this practice (AI, for instance) philosophical again. Although I hold Agre's work in high regard, I shall criticize his idea that finding the right technical implementation for everyday practice can be achieved under the rubric of programming Heidegger's Zuhandenheit. I am also appreciative of Agre's idea that there are certain metaphors pervading technical work which must be taken into account, but I will also argue that the Heideggerian so-called Sichöffnende and Offene, that is, the open character of human existence, is precisely not amenable to programming, and that Agre could not get rid of the modern “technical construction of the human being as machine” (Heidegger, Zoll., p. 178).
Making AI Philosophical Again

Philip Agre received his doctorate in Electrical Engineering and Computer Science at MIT, but he was always more interested (than many of his peers, that is) in exploring the philosophical assumptions pervading technological practices.[2] Thus, he deemed it 'mistaken' to consider the Cartesian lineage of AI ideas as merely incidental and as having no purchase on the technical ideas that descend from it (1997, p. 23). Quite on the contrary, argues Agre, “computer systems are thus, among other things, also philosophical systems—specifically, mathematized philosophical systems—and much can be learned by treating them in the same way” (1997, p. 41). Agre claims “AI is philosophy underneath” (2005, p. 155), an assertion he clarifies in five points:

1. AI ideas have their genealogical roots in philosophical ideas.
2. AI research programs attempt to work out and develop the philosophical systems they inherit.
3. AI research regularly encounters difficulties and impasses that derive from internal tensions in the underlying philosophical systems.
4. These difficulties and impasses should be embraced as particularly informative clues about the nature and consequences of the philosophical tensions that generate them.
5. Analysis of these clues must proceed outside the bounds of strictly technical research, but it can result both in new technical agendas and in revised understandings of technical research itself. (idem)

Influenced heavily by Dreyfus's pragmatization of Heidegger, Agre too understands Sein und Zeit as providing a phenomenology of ordinary routine activities, and believes Heidegger's Analytik des Daseins can provide useful guidance for the development of computational theories of interaction. Most importantly, it can also contribute to afford technical practice a historical conscience it overtly lacks, since “research like Heidegger's can have its most productive influence upon AI when AI itself recovers a sense of its own historical development” (Agre 1996, p. 25).

This last critical and historical trait permits Agre, as a philosopher of computing, to denounce that modern computational practices can be viewed as the resolute incarnation of a disembodied conception of philosophy having Augustine, Descartes, and Turing as pivotal figures, with the opposition of body and soul at the core of their thinking:

Each man's cultural milieu provided fresh meaning for this opposition: Augustine struggled to maintain his ideals of Christian asceticism, Descartes described the soldier's soul overcoming his body's fear as the Thirty Years' War raged, and Turing idealized disembodied thought as he suffered homophobic oppression in modern England (1997, p. 103).
This means, for Agre, that there is a historical tradition and discourse sustaining the practices of contemporary computational approaches, so by no means can they be said to sustain themselves exclusively on technical terms. The latter view is not only naïve but also dishonest. But unfortunately, Agre sees that computer science is utterly oblivious to “its intellectual contingency and recast itself as pure technique” (idem). This is the reason why Agre castigates this forgetfulness of the assumptions running deep in AI, which more often than not are compensated for, put aside, and substituted by the formalist attempt to cleanse computational programs of the 'inexactness' of natural language, and to strip AI altogether of its historical and cultural underpinnings. It is by virtue of not paying attention to how their scientific practices are constituted that formalists attempt to liberate computational work precisely from the unruliness and imprecision of vernacular language, which appears foreign and annoying to their technical field. Moreover, “they believed that, by defining their vocabulary in rigorous mathematical terms, they could leave behind the network of assumptions and associations that might have attached to their words through the sedimentation of intellectual history” (Agre 2002, p. 131). This is why Agre believes such an attempt should not be countenanced any longer but should rather be confronted by means of a 'critical technical practice': the kind of critical stance that would guide itself by a continually unfolding awareness of its own workings as a historically specific practice (1997, p. 22). As such, “it would accept that this reflexive inquiry places all of its concepts and methods at risk. And it would regard this risk positively, not as a threat to rationality but as a promise of a better way of doing things” (Agre 1997, p. 23).

This critical technical practice proposed by Agre has clear overtones oscillating amid an immanent critique, on the one hand, and an 'epistemological electroshock therapy' toward situating scientific knowledge (Haraway 1988), on the other. Indeed, Agre's view might appear problematic if it is ill-construed as confusing or ambiguous, since it can be seen both as a critique from within—accepting the basic methodology and truth-claims of computer science, peppered with internal disputes against the more obviously invalid and politically loaded claims—and as a critique from without—recognizing the ultimate cultural contingency of all claims to scientific truth (Sengers 1995, p. 151). Nevertheless, it is crucial for Agre to present his work as neither an internalist account of AI, nor as a philosophical study about AI, but as “actually a work of AI: an intervention within the field that contests many of its basic ideas while remaining fundamentally sympathetic to computational modeling as a way of knowing” (1997, p. xiv). Agre finds it more daring to intervene in the field and show practically how critical technical views might help develop better artificial systems. Both the critical intervention in the field and the fundamentally sympathetic posture deserve, furthermore, separate explanation.

With regard to the critical intervention, Agre notes that there is a certain mindset when it comes to what 'computer people' believe—and this is, of course, Agre's niche—regarding the aims and scope of their own work. This belief is not at all arbitrary or merely capricious; rather, it must be viewed in conjunction with the very nature of computation and computational research in general. According to Agre, computational research can be defined as an inquiry into physical realization as such. Moreover, “what truly founds computational work is the practitioner's evolving sense of what can be built and what cannot” (1997, p. 11). The motto of computational practitioners is simple: if you cannot build it, you do not understand it. It must be built, and we must accordingly understand the constituting mechanisms underlying its workings. This is why, on Agre's account, computer scientists “mistrust anything unless they can nail down all four corners of it; they would, by and large, rather get it precise and wrong than vague and right” (1997, p. 13). There is also a 'work ethic' attached to this computationalist mindset: it has to work. However, Agre deems it too narrow to entertain just this sense of 'work.' Such a conception of what counts as success is also ahistorical, in that a program can simply be defined as working because it conforms to a pregiven formal-mathematical specification. But an AI system can also be said to work in a wholly different sense: when its operational workings can be narrated in intentional terms by means of words whose meaning goes beyond the mathematical structures (which is, of course, a pervasive practice in cognitive scientific explanations of mechanism)—for example, when a robot is said to 'understand' a series of tasks, or when it is proclaimed that AI systems will give us deeper insights about human thinking processes. This is indeed a much broader sense of 'work,' one that is not just mathematical in nature but rather a clearly discursive construction. And it certainly bears reminding that such discursive construction is part of the most basic explanatory desires of cognitive science.
So in the true sense of the words 'build' and 'work,' AI is not only there to build things that merely work. Let us quote at length:

The point, in any case, is that the practical reality with which AI people struggle in their work is not just 'the world,' considered as something objective and external to the research. It is much more complicated than this, a hybrid of physical reality and discursive construction. The trajectory of AI research can be shaped by the limitations of the physical world—the speed of light, the three dimensions of space, cosmic rays that disrupt memory chips—and it can also be shaped by the limitations of the discursive world—the available stock of vocabulary, metaphors, and narrative conventions. (Agre 1997, p. 15)

This also gives hints as to how exogenous discourses, like philosophy, are supposed to be incorporated into technological practices. Agre is of the opinion that the point is not to invoke Heideggerian philosophy, for example, as an exogenous authority thus supplanting technical methods: “the point, instead, is to expand technical practice in such a way that the relevance of philosophical critique becomes evident as a technical matter. The technical and critical modes of research should come together in this newly expanded form of critical technical consciousness” (1997, p. xiii). The critical technical practice Agre envisions is one “within which such reflection on language and history, ideas and institutions, is part and parcel of technical work itself” (2002, p. 131). More exactly, Agre confesses that his intention is “to do science, or at least something about human nature, and not to solve industrial problems” (1997, p. 17).
And he adds: “but I would also like to benefit from the powerful modes of reasoning that go into an engineering design rationale” (idem). In such a way, Agre aims to salvage the most encompassing claims of AI research—that it can teach us something about the world and about ourselves—by means of incorporating a self-correcting, history-laden approach combining both technical precision and philosophical rigor. By expanding the comprehension of the ways in which a system can work, “AI can perhaps become a means of listening to reality and learning from it” (Agre 2002, p. 141). But it is because of its not listening to reality that, for instance, Dreyfus (1992) launched his attacks against AI as an intellectual enterprise.

On this account, Agre contends that merely “lashing a bit of metaphor to a bit of mathematics and embodying them both in computational machinery” (1997, p. 30)—which is usually what computer scientists come up with—will not do the job of contributing to the understanding of humans and their world. So framed, the approach appears to Agre as too narrow, naïve, and a clear way of not listening to reality. So he has a more ambitious project: the very metaphors being lashed to a bit of mathematics that end up implemented in machinery must themselves be investigated. Both physical reality and discursive construction must be taken into account. Although technical languages encode a cultural project of their own (the systematic redescription of human and natural phenomena within the limited repertoire of technical schemata that facilitate rational control)—a fact which, incidentally, tends to be elided—“it is precisely this phenomenon that makes it especially important to investigate the role of metaphors in technical practice” (Agre 1997, p. 34). At this juncture, Agre sounds strikingly similar to Blumenberg, whose metaphorological project “seeks to burrow down to the substructure of thought, the underground, the nutrient solution of systematic crystallizations; but it also aims to show with what 'courage' the mind preempts itself in its images, and how its history is projected in the courage of its conjectures” (2010, p. 5). For Agre too, metaphors play a role in organizing scientific inquiry or, to say it with Blumenbergian tones, metaphors are by no means 'leftover elements' (Restbestände) but indeed 'foundational elements' (Grundbestände) of scientific discourse.[3] Clinging to Kuhnian terminology (see Kuhn 1996), this can also be couched in terms of the tension between normal science—with its aseptic attitude toward reducing instability of meaning and inconsistency via a cleansing of elements of an inexact, ambiguous nature—and revolutionary science, which makes metaphoric leaps that create new meanings and applications that might constitute genuine theoretical progress (Arbib & Hesse 1987, p. 157). By showing how technical practice is the result not only of technical work but also of discursive construction and unexplained metaphors, Agre's critical technical practice might meet the criteria for being considered a truly revolutionary approach in Kuhnian terms. It remains to be seen, however, whether this is indeed the case.

The sympathetic attitude towards computational modeling that Agre espouses takes as its point of departure the analysis of agent/environment interactions, which accordingly should be extended to include the conventions and invariants maintained by agents throughout their activity. This notion of environment is referred to, with clear Husserlian overtones (see Husserl 1970), as lifeworld, and can be incorporated into computational modeling via “a set of formal tools for describing structures of lifeworlds and the ways in which they computationally simplify activity” (Agre & Horswill 1997, p. 111). From this it follows that Agre's theoretical emphasis lies on the concept of embedding. This means that agents are not only to be conceived of as embodied but, more crucially, as embedded in an environment. The distinction between embodiment and embedding can be explained as follows:
'Embodiment' pertains to an agent's life as a body: the finiteness of its resources, its limited perspective on the world, the indexicality of its perceptions, its physical locality, its motility, and so on. 'Embedding' pertains to the agent's structural relationship to its world: its habitual paths, its customary practices and how they fit in with the shapes and workings of things, its connections to other agents, its position in a set of roles or a hierarchy, and so forth. The concept of embedding, then, extends from more concrete kinds of locatedness in the world (places, things, actions) to more abstract kinds of location (within social systems, ecosystems, cultures, and so on). Embodiment and embedding are obviously interrelated, and they each have powerful consequences both for agents' direct dealings with other agents and for their solitary activities in the physical world. (Agre & Horswill 1997, pp. 111-112)

The importance for cognitive science of having a well-developed concept of the environment is not to be underestimated, since it seems that only on the basis of a prior understanding of an agent's environment can a given pattern of adaptive behavior be figured out. Taking a stride towards defining the environment with at least a modicum of rigor amounts to developing “a positive theory of the environment, that is, some kind of principled characterization of those structures or dynamics or other attributes of the environment in virtue of which adaptive behavior is adaptive” (Agre & Horswill 1997, p. 113). Accordingly, Agre and Horswill lament that AI has downplayed the distinction between agent and environment by fatally reducing the latter to a discrete series of choices in the course of solving a problem, even though “this is clearly a good way of modeling tasks such as logical theorem-proving and chess, in which the objects being manipulated are purely formal” (idem). AI can get on well without a well-developed concept of the environment, but only at the price of focusing on mere toy problems, microworlds, and toy tasks within such artificial environments. It should then not come as a surprise that the situation changes dramatically for tasks involving physical activities, where “the world shows up, so to speak, phenomenologically: in terms of the differences that make a difference for this agent, given its particular representations, actions, and goals” (idem). The environmental indexicality that is brought forward here is often objected to by cognitivists, as though agents performed tasks without any computation whatsoever, or as though agents inhabiting a lifeworld lived in an adamant reactive mode. But the point is rather that “the nontrivial cognition that people do perform takes place against a very considerable background of familiar and generally reliable dynamic structure” (Agre & Horswill 1997, p. 118). Now, indexicality precisely has been difficult to accommodate within AI research. With this in view, Agre has criticized the usual assumptions of the received view in technical practice as follows:

1. That perception is a kind of reverse optics, building a mental model of the world by working backward from sense-impressions, inferring what in the world might have produced them.
2. That action is conducted through the execution of mental constructs called plans, understood as computer programs.
3. And finally, that knowledge consists in a model of the world, formalized in terms of the Platonic theory of meaning in the tradition of Frege and Tarski. (2002, p. 132)
The dissociation of mind and body (the founding metaphor of cognitive science and modern philosophy) is here at work, precisely when traditional AI thinks of the mind roughly as a plan generator and the body as the executor of the plan. Moreover, AI is thus framed in terms of a series of dissociations: mind versus world, mental activity versus perception, plans versus behavior, the mind versus the body, and abstract ideas versus concrete things (Agre 2002, p. 132). According to Agre, these dissociations are contingent and can be considered 'inscription errors' (Smith 1996): “inscribing one's discourse into an artifact and then turning around and 'discovering' it there” (Agre 2002, p. 130). And this is not to be admired. As Nietzsche contended in Über Wahrheit und Lüge im außermoralischen Sinne (1873), when someone hides something behind a bush and looks for it again in the same place and finds it there as well, there is not much to praise in such seeking and finding.

That AI research has been framed along these contingent oppositions makes it clear that it is part of the history of Western thought. As such,

it has inherited certain discourses from that history about matters such as mind and world, and it has inscribed those discourses in computing machinery. The whole point of this kind of technical model-building is conceptual clarification and empirical evaluation, and yet AI has failed either to clarify or to evaluate the concepts it has inherited. Quite the contrary, by attempting to transcend the historicity of its inherited language, it has blunted its own awareness of the internal tensions that this language contains. The tensions have gone underground, emerging through substantive assumptions, linguistic ambiguities, theoretical equivocations, technical impasses, and ontological confusions. (Agre 2002, p. 141)

Nevertheless, it is interesting to note that—for all his philosophical acumen—Agre himself has not been able to liberate himself from the persistence of a representational theory of cognition, even when his is certainly more concrete, more historically conscious, and more enactive than the one customarily held in the traditional view. The critical concepts grouped together above conform with the motivation for developing a concept of indexical-functional or deictic representation (Agre & Chapman 1987; Agre 1997), the main idea being that agents represent objects in generic ways, through relationships to them (Agre & Horswill 1997, p. 118). On Agre's view, what must be done is to refine the concept of representation (and not just cast it aside) and show what kind of representational activity is at work in interaction. Thus, the point is to criticize the underlying view of knowledge presupposed by the traditional theory of representation (that knowledge is picture, copy, reflection, linguistic translation, or physical simulacrum of the world), while suggesting that “the primordial forms of representation are best understood as facets of particular time-extended patterns of interaction with the physical and social world” (Agre 1997, p. 222). Therefore, “the notion of representation must undergo painful surgery to be of continued use” (Agre 1997, p. 250). Given that this redefinition of representation by Agre has its own quirks, it must now be carefully explained.

The traditional theory of representation, which has been put to work and is thoroughly presupposed in traditional AI research, is based on the notion of a world model. Such a notion refers to some structure within the mind or machine that is thought to represent the outside world by standing in a systematic correspondence with it (Agre 1997, p. 223). As such, the assumption that there is a world model being represented by the mind is the epitome of mentalism (Agre 1997, p. 225). Mentalism was previously defined by Agre as the generative metaphor pervasive in cognitive science according to which every human being has an abstract inner space called a 'mind,' which clusters around a dichotomy between outside and inside organizing a special understanding of human existence (Agre 1997, p. 49). Marres (1989), a defender of mentalism, defines it as the view that the mind directs the body. Thus, on Agre's terms, giving preeminence to indexicality amounts to inverting this picture, since conceding that human beings are not minds that control bodies implies that interaction cannot be defined “in terms of the relationships among a mind, a body, and an outside world” (1997, p. 234), as is unfortunately so typical in cognitive scientific explanations. And here the key term is indeed interaction, understood not as the relation between the subjective and the objective, but rather as emerging from the actual practices people employ to achieve reference in situ.
al developing a concept of indexical-functional or c Indexicality “begins to emerge not merely as a hi deictic representation (Agre & Chapman 1987; p passive phenomenon of context dependence but o Agre 1997), the main idea being that agents os as an active phenomenon of context constitution” hil represent objects in generic ways through sP (Agre 1997, p. 233). relationships to them (Agre & Horswill 1997, p. asíAI Mg 118). On Agre’s view, what must be done is refine o n David Chalmers could only table the question the concept of representation (and not just cast it ethrMaki what is it like to be a thermostat? (1996, p. J continentcontinent.cc/index.php/continent/article/view/177 293)—thus blurring in one fell swoop the which is most incumbent on them. difference between living organisms and mechanism—by means of importing some heavy For Agre, the latter requires a proper theory of philosophical baggage, namely the assumption intentionality couched within the Heideggerian that the thermostat controls the temperature of distinction between Zuhandenheit and systems in general (not of this specific system, say Vorhandenheit (SZ § 15). Traditional AI research the internal combustion engine of a specific car), can be accused of having only paid attention to or that a thermometer measures the temperature present-at-hand phenomena, thus attempting to “in room 11” (instead of here), or that one eats model computationally what precisely appears with “fork number 847280380” in some cosmic salient objectively in perception. In contrast, Agre registry of forks (instead of precisely this fork I am finds that this phenomenological distinction is 4 holding with my left hand). Quite on the contrary, 6 neither psychological nor mechanistic but a 4: when indexicality is introduced as a constituting 1 description of the structure of everyday 0 2 factor of interaction, it turns out that “human 1 / experience that can be suitable for a new way of activities must be described in intentional terms, e 4. computational modeling of that experience. u as being about things and toward things, and not s Preston (1993) had already explored this s I as meaningless displacements of matter. Physical Heideggerian distinction in relation to another and intentional description are not incomparable, one: that of nonrepresentational and but they are incommensurable” (Agre 1997, p. representational intentionality. One could, à 245). From this follows that the typical ascription la Dreyfus (2002a; 2002b), identify respectively of intentional states to nonembedded systems is Vorhandenheit with representational intentionality absurd, precisely because embedding, and the and Zuhandenheit with a sort of interaction deriving thereof, is the condition of nonrepresentational intentionality and so proclaim possibility of intentional comportment. The beforehand the failure of artificial systems ubiquitous character of experience suggested in propounding the accomplishment of high-level Chalmers’s classic book on the philosophy of intelligence. For Agre, however, this is too radical consciousness is also an inscription error, for it and, above all, too pessimistic. What is needed is arises from obviating the need for a proper theory a clarification of what kinds of representation exist of intentionality or, to be more exact, such view and the role they play in real activities (Agre 1997, derives from the naturalization of intentionality. p. 237). 
Herein resides the importance of delving into experience and providing AI with a set of tools to enrich its vocabulary and metaphors. This is needed because “the philosophy that informs AI research has a distinctly impoverished phenomenological vocabulary, going no further than to distinguish between conscious and unconscious mental states” (Agre 1997, p. 239). Agre is onto something more important here, which is nothing less than making AI philosophical again: “technology at present is covert philosophy; the point is to make it openly philosophical” (1997, p. 240).

The traditional idea of representation construes it as a model in an agent's mind that corresponds to the outside world through a systematic mapping. Agre opines that AI research has been concerned only with a partly articulated view of representation. No wonder, then, that the meaning of representations for an agent can be determined almost as en-soi—to use Sartre's terminology in L'être et le néant (see Sartre 1984)—without any reference being provided as to the agent's location, attitudes, interests, and idiosyncratic perspective (as être-pour-soi). This is also the reason why “indexicality has been almost entirely absent from AI research” (Agre 1997, p. 241). Moreover, “the model-theoretic understanding of representational semantics has made it unclear how we might understand the concrete relationships between a representation-owning agent and the environment in which it conducts its activities” (idem). On Agre's view, the reason why AI research has lagged behind a clear-cut understanding of representation and indexicality has not been its nondistinctiveness between mechanism and human phenomena.
Notwithstanding Agre's crucial imports from the alien province of phenomenology, he would nevertheless defer to Chalmers's highly controversial idea that experience is ubiquitous, albeit with a caveat: the problem is not to ask whether there is something it is like for a thermostat to be what it is, for Agre has it that any device that engages in any sort of interaction with its environment can be said to exhibit some kind of indexicality (1997, p. 241). Chalmers's problem is simply not to have considered exactly which kind of intentionality might be ascribed to artifacts like thermostats. Artifacts do have some sort of ambience embedding. For example, “a thermometer's reading does not indicate abstractly 'the temperature,' since it is the temperature somewhere, nor does it indicate concretely 'the temperature in room 11,' since if we moved it to room 23 it would soon indicate the temperature in room 23 instead. Instead, we need to understand the thermometer as indicating 'the temperature here'—regardless of whether the thermometer's designers thought in those terms” (idem). As Agre's contention goes, the point is to ascribe indexicality to artifacts. In fact, “AI research needs an account of intentionality that affords clear thinking about the ways in which artifacts can be involved in concrete activities in the world” (1997, p. 242).

Such an account of intentionality was coined by Agre under the rubric of deictic representation, as opposed to objective representation. First, two sorts of ontology are to be distinguished. According to an objective ontology, individuals can be defined without reference to activity or intentional states.
A deictic ontology, by contrast, can be defined only in indexical and functional terms and in relation to an agent's location, social position, current goals and interests, and autochthonous perspective (Agre 1997, p. 243). Entities entering the space of whatever interaction with the agent can only be understood correctly in terms of the roles they play in the agent's activities. In accordance with the deictic notation introduced by Agre, “some examples of deictic entities are the-door-I-am-opening, the-stop-light-I-am-approaching, the-envelope-I-am-opening, and the-page-I-am-turning. Each of these entities is indexical because it plays a specific role in some activity I am engaged in; they are not objective, because they refer to different doors, stop lights, envelopes, and pages on different occasions” (idem). Their nonobjective character, however, does not imply that, by contrast, indexical entities are to be considered as subjective and, for that matter, as phantasms or internal and intimate qualia. The idea behind this is precisely that a deictic ontology should not be confused with the subjective, arbitrary musings of an encapsulated subject. In the first place, this is the ontology that can be most properly ascribed to routine activities; therefore, it would be preposterous to suggest that they are private or ineffable. Routines and activities are realized 'out there' in the world and, for that very reason, do not pertain to an internal mental game: they are, indeed, public. Accordingly, in routine activities the objective character of the entities with which one copes is not salient or important. Neither is their 'subjective feel,' nor the way they appear to me as an individual. That their character is deictic means that what is most important is the role they play in the whole of the activity. Therefore, hyphenated noun phrases like the-car-I-am-passing or the-coffee-mug-I-am-drinking-with are not mental symbols in the cognitivist sense. They designate “not a particular object in the world, but rather a role that an object might play in a certain time-extended pattern of interaction between an agent and its environment” (Agre 1997, p. 251).
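To make the contrast concrete, here is a minimal illustrative sketch (mine, not Agre's; every class, field, and role name in it is hypothetical) of the difference between an objective representation, which names a unique individual in a world model, and a deictic representation, which names only a role in the agent's current activity:

```python
# Illustrative sketch only, not Agre's code: an "objective" entry names a
# unique individual independently of any activity, while a "deictic" entry
# names a role in the agent's ongoing activity, rebound from occasion to
# occasion. All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ObjectiveEntity:
    unique_id: str    # e.g. "fork-847280380" in a cosmic registry of forks
    kind: str
    position: tuple   # coordinates in a global, agent-independent frame

@dataclass
class DeicticAgent:
    # role name -> whatever percept currently fills that role
    bindings: dict = field(default_factory=dict)

    def bind(self, role: str, percept: object) -> None:
        # Rebinding is the point: "the-door-I-am-opening" picks out
        # different doors on different occasions, yet plays the same
        # role in the activity of door-opening.
        self.bindings[role] = percept

    def referent(self, role: str) -> object:
        return self.bindings.get(role)

fork = ObjectiveEntity("fork-847280380", "fork", (12.0, 3.4, 0.0))

agent = DeicticAgent()
agent.bind("the-door-I-am-opening", {"appearance": "office door, ajar"})
agent.bind("the-door-I-am-opening", {"appearance": "car door"})  # a later occasion
print(agent.referent("the-door-I-am-opening"))  # only the current role-filler
```

The deictic agent keeps no unique identifiers and no global map; whatever fills a role is simply whatever the ongoing activity is currently dealing with, which is one way of reading Agre's claim that such representations designate roles rather than particular objects.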
Agre's alternative way of conceiving of activity, and the express purpose of modeling it computationally, is very attractive. As a matter of engineering, the leading principle is that of machinery parsimony: “choosing the simplest machinery that is consistent with known dynamics” (Agre 1997, p. 246). This view explicitly contrasts with the emphasis on expressive and explicit representation typical of traditional AI, with all the inherent difficulties of programming beforehand, as scripts, all the situations an artificial agent might encounter when coping with the world. In clear contrast with traditional AI, “the principle of machinery parsimony suggests endowing agents with the minimum of knowledge required to account for the dynamics of its activity” (Agre 1997, p. 249). In such a way, Agre's approach also resonates with Brooksian tones (see Brooks 1999) of removing 'intelligence' and even 'reason' from the picture in order to render an account of interactive representation. Moreover, Agre sees deictic representation as changing the traditional view altogether, since it presents us with the possibility not of expressing explicitly and in every detail objective states of affairs, but of participating in them: “conventional AI ideas about representation presuppose that the purpose of representation is to express something, but this is not what a deictic representation does. Instead, a deictic representation underwrites a mode of relationship with things and only makes sense in connection with activity involving those things” (1997, p. 253). However, the objection may be raised that such a deictic approach violates the grand spirit of AI, which seeks greater explicitness of representation and broader generality, since Agre's formula for design might simply contribute to modeling only special-purpose—and thus limited—devices. But Agre responds that “the conventional conception of general-purpose functionality is misguided: the kind of generality projected by current AI practice (representation as expression, thought as search, planning as simulation) simply cannot be realized” (1997, pp. 249-250).

This is, of course, not just a series of theoretical postulates urged by Agre, since he distinguishes among levels of analysis (1997, pp. 27-28). The reflexive level, which has already been exhibited in the previous pages of this exposition, provides ways of analyzing the discourses and practices of technical work. Given that technical language is unavoidably metaphorical, the reflexive level permits one to let those metaphors come to the surface so that they can be taken into account when technical work encounters trouble in implementation. On the substantive level, the analysis is carried out with reference to a particular technical discipline, in this case AI. But Agre is primarily interested in proceeding, on top of the reflexive and substantive levels, on a technical level, in order to explore “particular technical models employing a reflexive awareness of one's substantive commitments to attend to particular reality as it becomes manifest in the evolving technical work” (1997, p. 28). On Agre's view, traditional AI practitioners have not conscientiously attended to this partitioning of levels of analysis. In particular, the reflexive level, which prescribes an awareness of the role of metaphors in technical work, has been disdained, as though AI researchers could simply bootstrap their way to technical success without even being aware of the underlying metaphors pervading their work. For Agre, this is particularly problematic because “as long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem” (1997, p. 260).

As an exemplary case of technical work based on the aforementioned levels of analysis, Agre presents Pengi, a program designed by Chapman and Agre (1987) in the late 1980s under the rubric of being an implementation of a theory of activity. Pengi is a penguin portrayed in the commercial computer game Pengo, who finds itself in a maze made up of ice blocks and surrounded by an electric fence. The maze is also inhabited by deadly bees that are to be avoided at all costs by Pengi, and the task of the player is to keep Pengi alive and defend it from the perils coming along the way. As defense, the bees can be killed by crushing them with a moving ice block or by kicking the fence while they are touching it; this momentarily stuns the bees, and they can then be crushed by simply walking over them. Agre argues that Pengo is an improvement on the blocks world, although it obviously fails to capture numerous elements of human activity. What is important is the combination of goal-directedness and improvisation involved in the game, from which Agre hopes to learn some computational lessons. First of all, Agre and Chapman did not attempt to implement in advance everything they knew about the game, thus contradicting the mapping-out beforehand that is typical of traditional AI systems. The point is to see Pengi as relating to the objects that appear in its world not in terms of their resemblance to mental models programmed beforehand, but solely in terms of the roles they play in the ongoing activity. As such, what Agre and Chapman attempted to program were actually deictic representations: the-ice-cube-which-the-penguin-I-am-controlling-is-kicking, the-bee-I-am-attacking, the-bee-on-the-other-side-of-this-ice-cube-next-to-me, and so on.
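Dreyfus's gloss, quoted in the conclusion below ("when a virtual ice cube defined by its function is close to the virtual player, a rule dictates the response (e.g., kick it)"), captures the control style at issue. The following is a toy paraphrase of that style, not the original program (Pengi itself was built from combinational logic and visual routines): condition-action rules fire on deictic aspects re-registered from the current frame, with no stored world model and no plan. All aspect names, thresholds, and rule orderings here are invented.

```python
# Toy paraphrase of Pengi-style reactive control (not the original program).
# Each tick, deictic aspects are recomputed from the current scene only, and
# ordered condition-action rules dictate the response. The aspects name roles
# in the ongoing activity, never particular bees or cubes.

def register_aspects(scene: dict) -> dict:
    """Recompute deictic aspects afresh from the current frame."""
    return {
        "the-bee-chasing-me-is-stunned": scene.get("bee_stunned", False),
        "the-ice-cube-next-to-me-is-aligned-with-the-bee": scene.get("cube_aligned", False),
        "the-bee-chasing-me-is-close": scene.get("bee_distance", 99) < 3,
    }

# Ordered rules over deictic aspects; the first matching condition wins.
RULES = [
    ("the-bee-chasing-me-is-stunned", "walk-over-the-bee"),
    ("the-ice-cube-next-to-me-is-aligned-with-the-bee", "kick-the-ice-cube"),
    ("the-bee-chasing-me-is-close", "run-away"),
]

def act(scene: dict) -> str:
    aspects = register_aspects(scene)
    for condition, action in RULES:
        if aspects[condition]:
            return action
    return "wander"

print(act({"bee_distance": 8, "cube_aligned": True}))  # -> kick-the-ice-cube
print(act({"bee_distance": 1}))                        # -> run-away
```

Note that the agent never asks which bee or which cube it is dealing with: the role an object plays in the current activity is the whole of the representation, which is exactly what invites both the machinery-parsimony reading above and Dreyfus's objection below that no skill or learning is involved.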
This shows Agre’s understanding of interpretation of Zeug and ready-to-hand entities as no entities at all, but as Zeugzusammenhang—but rather that they open possibilities for action and subsequent responses up possibilities for action, solicitations to act, and to the demands of the situation at hand. Given motivations for coping; an idea that Dreyfus takes 7 that these possibilities for action are not objects 6 admittedly from Agre’s endeavors towards 4: at all and that usually this sort of open stance for 1 modeling Zuhandenheit on the basis of deictic 0 2 responding skillfully to environmental challenges 1 / intentionality. Nevertheless, Dreyfus is of the does not appear in propositional referring, it is e 4. opinion that in attempting to program ready-to- u understandable that they have been rather elusive s hand, Agre succumbs to an abstract s I for programmers. After all, how can one program objectification of human practice, because possibilities for action, since the focus is not on affordances—inasmuch as they are not objects this particular object or the other but rather on but the in-between interaction in which no subject the movement constituting the towards-which for- nor object is involved—are not amenable to the-sake-of-which? The wellspring of this programming. That they are not is not something movement is all the more elided because, as Agre seems to fully understand, and this is why he Heidegger has it, precisely what is closest to us thinks that somehow deictic representations must ontically is ontologically (and for that very reason) be involved in human understanding. According that which is farthest (SZ § 5, p. 15). This has been to Dreyfus, “Agre’s Heideggerian AI did not try to Agre’s task, namely: to attempt to reveal the program this experiential aspect of being drawn ontological dimension by means of technological in by an affordance. Rather, with his deictic implementation that does not obfuscate it but representations, Agre objectified both the that rather embraces it. By programming deictic functions and their situational relevance for the representations instead of just objective ones, agent. In Pengi, when a virtual ice cube defined Agre argues, computational programs can learn by its function is close to the virtual player, a rule this fundamental lesson: what was lacking in dictates the response (e.g., kick it). No skill is traditional AI systems was precisely a model to involved and no learning takes place” (2007, p. envision a specific relationship between machinery 253). It must be admitted that a virtual world is and dynamics based on the concept of not even slightly comparable with the complex interaction. This lesson, so the argument goes, dynamics of the real world. In a virtual world, the can gradually dispel the need for mentalist dynamics of relevance are determined approaches. beforehand, so a program like Pengi simply cannot account for the way human beings cope y c a with new relevancies. Dreyfus concludes that Agre g Le “finesses rather than solves the frame problem. s Conclusion e’ Thus, sadly, his Heideggerian AI turned out to be r g A a dead end. Happily, however, Agre never It should be noted that Agre was clearly E. claimed he was making progress towards building p influenced by Dreyfus’s early critique of artificial hili a human being” (2007, p. 253). 
Agre's contribution consists in his attempt to program Zuhandenheit instead of Vorhandenheit. That this can be done is, however, highly controversial. Certainly, what is deeply contentious is not that phenomenological insights can be brought to bear on cognitive science for a critical technical practice like the one Agre requires, but rather the assumption that the experiential dimension which phenomenology has revealed can be programmable. According to …
