UC Berkeley Previously Published Works

Title: Building the Second Mind, 1961-1980: From the Ascendancy of ARPA to the Advent of Commercial Expert Systems
Author: Skinner, Rebecca Elizabeth
Publication Date: 2012-09-03
Permalink: https://escholarship.org/uc/item/82h464gg
eScholarship.org, powered by the California Digital Library, University of California

Building the Second Mind, 1961-1980: From the Ascendancy of ARPA to the Advent of Commercial Expert Systems

Rebecca E. Skinner
Copyright 2013 by Rebecca E. Skinner
ISBN 978-0-9894543-2-2

Contents

Foreword

Part I. Introduction
Preface
Chapter 1. Introduction: The Status Quo of AI in 1961

Part II. Twin Bolts of Lightning Alter the Landscape
Chapter 2. The Revolutionary Impact of the Integrated Circuit
Chapter 3. The Revitalization of the Advanced Research Projects Agency and the Foundation of the IPTO
Chapter 4. Hardware, Systems and Applications in the 1960s

Part III. The Belle Epoque of the 1960s
Chapter 5. MIT: Work in AI in the Early and Mid-1960s
Chapter 6. CMU: From the General Problem Solver to the Physical Symbol System and Production Systems
Chapter 7. Stanford University and SRI

Part IV. The Challenges of 1970
Chapter 8. The Mansfield Amendment, "The Heilmeier Era", and the Crisis in Research Funding
Chapter 9. The AI Culture Wars: the War Inside AI and Academia
Chapter 10. The AI Culture Wars: Popular Culture

Part V. Big Ideas and Hardware Improvements in the 1970s
Chapter 11. AI at MIT in the 1970s: The Semantic Fallout of NLR and Vision
Chapter 12. Hardware, Software, and Applications in the 1970s
Chapter 13. Big Ideas in the 1970s
Chapter 14. Conclusion: the Status Quo in 1980
Chapter 15. Acknowledgements
Chapter 16. List of Abbreviations
Chapter 17. Bibliography
Chapter 18. Endnotes

Foreword to the Beta Edition

This book continues the story initiated in Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing. Building the Second Mind, 1961-1980: From the Establishment of ARPA-IPTO to the Advent of Commercial Expert Systems carries the history through the fortunate phase of the second decade of AI computing. How quickly we forget the time during which the concept of AI itself was considered a strange and impossible effort! The successes and rapid progress of AI software programs, with related work in adjacent fields such as robotics and complementary fields such as hardware, are thus our theme and the meat of our narrative.

The Building the Second Mind venture traces back to the author's Ph.D. dissertation, which studied the commercialization of AI during the 1980s. Curiously, portions of that dissertation, concerning the abortive business enterprises, working in fits and starts, devoted to the early commercialization of expert systems, now make up several chapters of the third volume.

Most of AI is found in ongoing laboratory research, published in dense academic journal articles and in conference proceedings. Synoptic and simplifying works, such as this book, are apt to appear superficial to the researchers who write such papers, yet overly complex to the man on the street. The author tries to avoid both of these ills. Popularizers are not well-liked in academia or the world of research. However, with the exception of those who write the journal articles, everyone is someone's popularizer. Popularizers are a necessary part of the intellectual ecosystem, since sci-fi novels and movies alone cannot do the work of representing research.
This work is neither highly technical nor comprehensive: my interests are skewed in favor of knowledge representation. Building the Second Mind draws heavily upon both academic and journalistic nonfiction sources regarding AI. Much work in AI is necessarily specialist, but the value of synoptic works should not be underestimated.

New technological advances allow alterations of the traditional publishing conventions, leaving social and professional convention to adjust. Paper publishing schedules and the pace of review have historically slowed publication and precluded revisions. This author instead asks for editorial input from the Cloud for this Beta edition. This will result in far more reviews from a larger pool of knowledgeable people, and will make it far easier for this text to include hyperlinks, references to oral histories, and links to museum and scholarly websites, as well as corrections and editorial improvements. I will take comments for several months and then release a revised ePub edition later in 2012.

Preface

Building the Second Mind, 1961-1980: From the Establishment of ARPA-IPTO to the Advent of Commercial Expert Systems tells the story of the development of AI, the field of research that sought to get computers to do things that would be considered intelligent if a person did them, during the 1960s and 1970s. (The definition is commonly attributed to Marvin Minsky.) The book takes its leave at the conclusion of the '70s, shortly before both expert systems and personal computing were commercially introduced in their awkward early forms.

In the late 1950s, the field had been founded and had begun to undertake extremely rudimentary logic and problem-solving programs. In the 1960s, the immense growth in funding given to the field of computing, the development of integrated circuits for mainframe computing, and the increasing numbers of people in AI and in computer science in general all tremendously advanced the field. This is evidenced in more complex problem-solving; in the development of an understanding of the field of knowledge representation; and in the appearance of fields such as graphic representation and problem-solving for more knowledge-intensive domains. Finally, the early integrated circuits of the 1960s gave way to microprocessors and microcomputers in the early 1970s. This heralded a near future in which computers would be increasingly fast and ubiquitous. In a clear virtuous cycle, ever-cheaper processing power and storage would encourage the proliferation of applications.

This work takes up where Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing ended. That book studied the field from the distant prehistory of abstract thinking, through its early formation as a field of study in the mid-1950s, and the several years following that turning point. Watching the advances of the 1960s and 1970s offers the satisfying vindication of the early efforts of AI's founders: John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon.

Enthralled with Technology

"O Brave New World that has such people in it!" Aldous Huxley, Brave New World, 1932.

The early 1960s was an expansive, ebullient era. The NASA mission to land a man on the Moon, initiated in 1961 and achieved in 1969, was one of the most memorable events of both the Cold War and scientific and technical history.
During the Cuban Missile Crisis of 1962, the United States confronted Cuba, and by proxy the USSR, demanding that the Soviets refrain from stationing missiles in the Caribbean. The peaceful resolution of this showdown proved that the USA at least held its own, and could keep the Soviets from maintaining a military presence in the Western Hemisphere. The conquest of diseases through vaccines continued. The cultural fascination with the products of technological modernity continued as well: novel fabrics and building materials, and jarring, atonal music and art, ensured that cultural enlightenment included an element of discomfort and shock. High Modernist architecture provided exposed metal and glass, with enough sharp edges and right angles to fill a geometry textbook. Freeways plowed through cities: an elevated highway wrecked the San Francisco waterfront until an earthquake took it down, and a highway was nearly run through the dense streets of Greenwich Village in New York City. The sleek and metallic and manufactured and aerodynamic were lauded; the quaint and candlelit and rustic had no place.

Modernism in the form of the wholesale glorification of the new, including faith in AI and the progress offered by computation, would see profound challenges. The central cultural shocks which began to break Modernism apart had not yet taken place. But they were beginning to appear: Jane Jacobs' influential masterwork The Death and Life of Great American Cities appeared in 1961 (1), and Silent Spring by Rachel Carson was published in 1962 (2). Jacobs helped to bring about the new urban movement of densely populated, village-like urban centers, and Silent Spring decried environmental pollution. Both helped to bring caveats to technological growth, but the early 1960s saw only the very beginning of doubts as to its munificence. Likewise, the ardent participation of both parties in the Cold War had not yet begun to show fatigue.

Building the Second Mind will be an advocate of computer technology as an augmentation technology for human abilities and as a research technology for cognitive science. General belief in, if not blind adherence to, scientific progress has been an occupational requirement for writing this work. However, the caveats to technological progress issued by such challenges to Modernism are significant, and will be considered later in this book.

The Context of AI's Progress

Artificial Intelligence was one of the great triumphs of postwar technological optimism, and this time period was one of its sweet spots. During the 1960s and 1970s, the field developed tremendously. It started to devise ways to represent visual data by computation, as in the 1966 MIT Summer Vision Project; began to examine the mysteries of creativity (in the work of Herbert Simon and his students); tore apart sentences for their meaning (Schank's conceptual primitives, Bobrow's calculus problem parser; a schematic sketch of such primitives appears at the end of this passage); and developed means to represent high-level scientific problem-solving and classification knowledge (Buchanan and Feigenbaum's Dendral program), among other things.

AI's earlier history, in the years of the postwar research and development frenzy, had been more precarious. After the Second World War, the idea of making machines think had seemed strange and disturbing. Beyond this, it had often been perceived as morally bad, and the whole concept as sinister. The appearance of arguments in its favor, or at least those urging measured equivocation, was a welcome salve.
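As flagged above, here is a schematic sketch of conceptual primitives, offered as a minimal illustration of the idea rather than a rendering of Schank's actual programs. The primitive names ATRANS and MTRANS are Schank's; the verb table, the sentence pattern, and the parse() helper are invented for this example.

```python
# An illustrative rendering of Schank-style conceptual primitives as a data
# structure. Only the primitive names are Schank's; everything else here is
# a made-up stand-in for a real sentence parser.
from dataclasses import dataclass

@dataclass
class Conceptualization:
    primitive: str   # e.g. ATRANS = transfer of possession, MTRANS = transfer of information
    actor: str
    obj: str
    source: str
    recipient: str

# Hand-written verb-to-primitive mappings standing in for real parsing.
VERB_PRIMITIVES = {"gave": "ATRANS", "sold": "ATRANS", "told": "MTRANS"}

def parse(subject: str, verb: str, obj: str, recipient: str) -> Conceptualization:
    """Map a schematic '<subject> <verb> <object> to <recipient>' sentence
    onto the conceptual primitive underlying its verb."""
    return Conceptualization(VERB_PRIMITIVES[verb], subject, obj, subject, recipient)

# "John gave Mary a book" and "John sold Mary a book" share one ATRANS
# skeleton: the meaning lives in the primitive, not in the surface verb.
print(parse("John", "gave", "book", "Mary"))
print(parse("John", "sold", "book", "Mary"))
```

The design point is that distinct surface verbs collapse onto a small inventory of primitives, which is what allowed programs of the era to tear apart sentences for their meaning rather than their wording.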
But by the late 1960s, the situation was immensely different. The shoe was on the other foot: the field had achieved success with chess-playing programs, with the Logic Theorist, with calculus-solving programs, with early robotic arms, and most recently with the Dendral program, which inferred chemical structures from mass spectrometry data. With the appearance of the Advanced Research Projects Agency (ARPA, known as DARPA starting in 1972), AI had attained research centers, tenure-track professors, and influence in large-scale, capital-intensive timesharing projects. Throughout the 1970s the tasks which the field was capable of doing became far more erudite, as better forms of abstraction and embodiment of knowledge were created. As Edward Feigenbaum, a Stanford researcher who will be important to this book, said, the field had a great deal of low-hanging fruit, easily attained, to pick. This book tells this story.

Themes

The thematic concerns of this work should not overshadow its historical nature. We recognize that there is a voluminous and enthusiastic body of knowledge regarding the history and sociology of technology and science. Our concern with contributing to this body of knowledge is first and foremost expository rather than theoretical. We will revisit the themes of the last book; in some cases they are revised because the lessons of history have changed.

First, the successive successes of AI appear in their most unalloyed form in this time period. During the 1950s all advances were painstaking, and even the simplest thing took astonishing effort. During the 1960s, and into the 1970s, AI successively knocked items off the list of things it had seemed it would never be able to do. Herbert Simon's 1963 scholarship began to analyze creativity in a systematic fashion. Programs involving schematized worlds, or "toy" domains, were devised. More difficult domains were taken on, such as the systematic but complex field of mass spectrometry, where AI programs analyzed spectrometric data. This was accomplished by a program of the kind known technically as a production system, which would later be developed into the "expert systems" of the 1980s (a minimal sketch of a production system appears below). Vision, robotics, computer chess, and knowledge representation all underwent new and fundamental changes.

Second, the theme of the successful implementation of technological innovation by governmental programs, in wars both hot and cold, is repeated (3). The author said it at the beginning of Building the Second Mind: 1956 and the Origins of AI Computing, and shall say it again here. The role of the military in the initial birth and later development of the computer and its ancillary technologies cannot be erased, eroded, or diminished. As we saw in the first book, this was true of the earliest development of the digital computer during the Second World War, and of its progress along a broad variety of fronts during the postwar decades. The advances which may be directly attributed to the war effort (the Cold War effort, that is) include relay memory followed by magnetic core memory; storage in the form of punched cards and magnetic tape; and input-output in the form of teletype operations, followed finally by the CRT screen with a keyboard. The crucial role of government policy in expressly and successfully encouraging computer development is a well-articulated scholarly theme (4). Our study is simply a further corroboration.
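As promised above, here is a minimal forward-chaining production system. This is an illustrative sketch only: the rule names and spectrometry-flavored facts are invented for this example and are not drawn from Dendral or any program described in the book. What it shows is the match-and-fire cycle that defines the technique: any rule whose conditions are all present in working memory fires and adds its conclusion as a new fact, until nothing new can be concluded.

```python
# A toy forward-chaining production system. The rules and facts below are
# invented for illustration; they do not reproduce Dendral's actual rules.

RULES = [
    # (name, set of conditions, conclusion to add when all conditions hold)
    ("r1", {"has-molecular-ion", "peak-at-m/z-15"}, "contains-methyl-group"),
    ("r2", {"contains-methyl-group", "peak-at-m/z-29"}, "candidate-ethyl-fragment"),
]

def run(working_memory):
    """Match-and-fire cycle: keep firing rules until no rule adds a new fact."""
    fired = True
    while fired:
        fired = False
        for name, conditions, conclusion in RULES:
            if conditions <= working_memory and conclusion not in working_memory:
                working_memory.add(conclusion)  # the "act" step of match-resolve-act
                print(f"{name} fired, concluding: {conclusion}")
                fired = True
    return working_memory

# Two rounds of firing: r1 concludes a methyl group, which then enables r2.
facts = run({"has-molecular-ion", "peak-at-m/z-15", "peak-at-m/z-29"})
```

The expert systems of the 1980s scaled this same cycle up from a handful of toy rules to hundreds or thousands of rules elicited from domain experts.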
Third, a well-tuned organizational culture was also essential to every aspect of the computer's development, along the course of the entire chronology. ARPA allowed and encouraged creative people to work on their own terms, at a multiplicity of institutions: the Rand Corporation, SRI, and the laboratories at each of the universities, among others. The same results would not have taken place without the ARPA bureaucracy and the institutional freedom which it encouraged.

A fourth theme is that of the paradigm transition in scientific and popular conception of computers and machinery entailed by AI. The project of AI itself envisions an active intelligence apart from humans. Getting this new gestalt in place required incipient acceptance of the computer, and a rethinking of the capabilities of human tools. Belief in the computational metaphor for the human mind, and belief in the possibility of computing as qualitative, and not only numerical and quantitative, information processing, were established among a very small cohort by the time discussed in the beginning of this book. The corner had already been turned in these erudite circles, but the decades of the 1960s and 1970s saw the vast widening of the software which proved that programs could emulate cogitation, and of the hardware which could carry this out. The proof was in the software.

Fifth and finally, our study is concerned with the profound contributions of AI beyond the area of its designated efforts to build tools that do 'intelligent' things and to emulate human intelligence. Much of the achievement of the field during this time consisted of AI researchers' development of computer languages and operating systems more generic to research and consumer computing (the Multics program, which helped lead to C and Unix); more interactive input-output, with input through a keyboard and output on a monitor one looks at; and useful computing tools such as newswire programs, editing conventions, and the concept of software agents for a wide range of tasks. Put in even simpler terms: AI and its wider circles were absolutely central to the development of the personal computer and the popular Internet.

Outline of the Chapters

The history of AI through 1956, and in the several years afterwards, naturally structures itself around the culmination of the field's foundation. The history of AI during the 1960s and 1970s naturally structures itself as well. The onslaught of monies given to AI and other computing disciplines, following the 1961 selection of ARPA to be the pre-eminent research agency for funding computer science, was a game-changing windfall. It could be compared to a television or movie sight gag in which the door is opened and a flood ensues. Thus the narrative begins with a bang, imposed by history, as the increased effort in the United States' space program meant increased funding for AI as well. We commence by presenting the framing facts of the integrated circuit (IC), which made possible computers offering bigger, better, faster, and more computing, and ARPA, which footed the tab for its development. These were invented and founded in 1959 and 1958 respectively, but did not become prominent until their country needed them, so to speak, because of the NASA Apollo mission. The 1962 foundation of ARPA's Information Processing Techniques Office, and its immeasurable contributions in funding much AI-related research during these two decades, are discussed as they specifically relate to AI's agenda (Chapters Two and Three).
The wide dispersal of improvements to input-output, memory, and storage is discussed in Chapter Four.

Part III. The Belle Epoque of the 1960s

The sine qua non of ARPA's contribution to AI was its free rein to researchers. AI flourished in response to unfettered opportunity. We refer to this decade as the Belle Epoque, in reference to the salubrious climate for art and literature prior to the First World War. In Chapter Five we discuss the more general task of institution-building that nurtured AI. At the Carnegie Institute of Technology (now Carnegie Mellon University), Newell and Simon expanded upon the schematic and simple problems they had initially taken on with the General Problem Solver program, and began to work on the production system protocol for solving toy problems. These programs, which were seminal for cognitive science, were generally not concerned with domain knowledge, but with the nature of the accretion of knowledge in small, closely defined 'toy' domains such as games and SAT-type problems (Chapter Six). Today computer programs can solve hard problems; at that time solving tiny problems was an enormous achievement. Closely influenced by Newell and Simon (and thus included in the CMU chapter), Bruce Buchanan and Simon's protégé Edward Feigenbaum at Stanford initiated the Dendral project, AI's first significant success in solving a problem in a difficult scientific domain. Chapters Five and Seven address AI at MIT and Stanford respectively. Embodiment of intelligence through vision and robotics proceeded with work at the MIT AI Lab, and at the Stanford AI Lab ('SAIL'), established by John McCarthy in the early 1960s. Concepts such as the representation of graphic imagery by pixels were first
