UC Berkeley Previously Published Works

Title: Building the Second Mind, 1961-1980: From the Ascendancy of ARPA-IPTO to the Advent of Commercial Expert Systems
Permalink: https://escholarship.org/uc/item/7ck3q4f0
ISBN: 978-0-9894543-4-6
Author: Skinner, Rebecca Elizabeth
Publication Date: 2013-12-31
eScholarship.org, Powered by the California Digital Library, University of California

Building the Second Mind, 1961-1980: From the Ascendancy of ARPA to the Advent of Commercial Expert Systems
Copyright 2013 Rebecca E. Skinner
ISBN 978-0-9894543-4-6

Contents

Foreword

Part I. Introduction
Preface
Chapter 1. Introduction: The Status Quo of AI in 1961

Part II. Twin Bolts of Lightning
Chapter 2. The Integrated Circuit
Chapter 3. The Advanced Research Projects Agency and the Foundation of the IPTO
Chapter 4. Hardware, Systems and Applications in the 1960s

Part III. The Belle Epoque of the 1960s
Chapter 5. MIT: Work in AI in the Early and Mid-1960s
Chapter 6. CMU: From the General Problem Solver to the Physical Symbol System and Production Systems
Chapter 7. Stanford University and SRI

Part IV. The Challenges of 1970
Chapter 8. The Mansfield Amendment, "The Heilmeier Era", and the Crisis in Research Funding
Chapter 9. The AI Culture Wars: the War Inside AI and Academia
Chapter 10. The AI Culture Wars: Popular Culture

Part V. Big Ideas and Hardware Improvements in the 1970s
Chapter 11. Hardware, Software, and Applications in the 1970s
Chapter 12. AI at MIT in the 1970s: The Semantic Fallout of NLR and Vision
Chapter 13. Big Ideas in the 1970s
Chapter 14. Conclusion: the Status Quo in 1980
Chapter 15. Acknowledgements

Bibliography
Endnotes

Foreword to the Beta Edition

This book continues the story initiated in Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing.
Building the Second Mind, 1961-1980: From the Establishment of ARPA to the Advent of Commercial Expert Systems continues this story through the fortunate second decade of AI computing. How quickly we forget the time during which the concept of AI itself was considered a strange and impossible effort! The successes and rapid progress of AI software programs, with related work in adjacent fields such as robotics and complementary fields such as hardware, are thus our theme and the meat of our narrative.

In provenance, the entire Building the Second Mind effort traces back to the author's Ph.D. dissertation, which studied the commercialization of AI during the 1980s. Curiously, portions of that original dissertation now make up several chapters of the third volume. This work is neither highly technical nor entirely comprehensive: my interests are skewed in favor of knowledge representation. It draws heavily upon both academic and journalistic nonfiction sources regarding AI. Much work in AI is necessarily specialist, but the value of synoptic works should not be underestimated. Everyone is someone's popularizer.

New technological advances allow alterations of the traditional publishing conventions, leaving social and professional convention to adjust. Paper publishing schedules and the pace of review have historically slowed publication and precluded revisions. This author instead asks for editorial input from the Cloud for this Beta edition, which will result in far more reviews from a larger pool of knowledgeable people. It will also make it far easier for this text to include hyperlinks, references to oral histories and to museum and scholarly websites, as well as corrections and editorial improvements. I will take comments for three months and then release a revised ePub edition later in 2012.
Preface

Building the Second Mind, 1961-1980: From the Establishment of ARPA to the Advent of Commercial Expert Systems tells the story of the development, during the 1960s and 1970s, of AI, the field that sought to get computers to do things that would be considered intelligent if a person did them. In the late 1950s the field was founded and began to undertake extremely rudimentary logic and problem-solving programs. In the 1960s, the immense growth in funding given to the field of computing, the development of integrated circuits for mainframe computing, and the increasing numbers of people in AI and in computer science in general tremendously advanced the field. This is evidenced in more complex problem-solving, the development of an understanding of knowledge representation, and the appearance of fields such as graphic representation and problem-solving for more knowledge-intensive domains. Finally, the early integrated circuits of the 1960s gave way to microprocessors and microcomputers in the early 1970s. This heralded a near future in which computers would be increasingly fast and ubiquitous. In a clear virtuous cycle, the growing cheapness of processing power and storage would encourage the proliferation of applications.

This work is the sequel to Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing, which studied this field from the distant prehistory of abstract thinking through its early formation as a field of study in the mid-1950s. Watching the advances of the 1960s and 1970s offers a satisfying vindication of the early efforts of AI's founders: John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon.

Enthralled with Technology

"O brave new world, that has such people in it!" Aldous Huxley, Brave New World.

The early 1960s was an expansive, ebullient era.
The confrontation of the United States with Cuba, and by proxy with the USSR, during the Cuban Missile Crisis of 1962 proved that the USA could at least hold its own and keep the Soviets from maintaining a military presence in the Western Hemisphere. The NASA mission to land a man on the Moon, initiated in 1961 and achieved in 1969, was one of the most memorable events of both the Cold War and scientific and technical history. The conquest of diseases through vaccines continued. So did the cultural fascination with the products of technological modernity: new materials in the form of novel fabrics and building materials, and jarring and atonal music and art, ensured that cultural enlightenment included an element of discomfort and shock. High Modernist architecture provided exposed metal and glass, with enough sharp edges and right angles to fill a geometry textbook. Freeways plowed through cities: an elevated highway wrecked the aesthetics of the San Francisco waterfront until an earthquake took it down, and a highway was nearly run through the dense streets of Greenwich Village in New York City. The sleek, metallic, manufactured, and aerodynamic were lauded; the quaint, candlelit, and rustic had no place.

Modernism, including faith in AI and the progress offered by computation, would see profound challenges. Some of the central cultural shocks which began to break Modernism apart had not yet taken place. Jane Jacobs' influential masterwork The Death and Life of Great American Cities appeared in 1961 (1), and Silent Spring by Rachel Carson was published in 1962 (2). Jacobs helped to bring about the new urbanist movement of densely populated, village-like urban centers, and Silent Spring decried environmental pollution. Both helped to attach caveats to technological growth, but the early 1960s saw only the beginning of questions about its munificence.
Building the Second Mind will be an advocate of computer technology as an augmentation of human abilities and as a research technology for cognitive science. A general belief in, if not blind adhesion to, scientific progress is an occupational requirement for writing this work. However, the caveats to technological progress issued by such challenges to Modernism are significant, and will be considered later in this book.

The Context of AI's Progress

Artificial Intelligence was without a doubt one of the great triumphs of postwar technological optimism. During the 1960s and 1970s the field of AI developed tremendously. It started to devise ways to represent visual data computationally, as in the 1966 MIT Summer Vision Project; began to examine the mysteries of creativity (Simon article); tore apart sentences for their meaning (Schank's conceptual primitives, Bobrow's algebra word-problem parser); and developed means to represent high-level scientific problem-solving and classification knowledge (Buchanan and Feigenbaum's Dendral program), among other things.

AI's earlier history, in the years of the postwar research and development frenzy, had been more precarious. After the Second World War, the idea of making machines think had seemed not only strange but disturbing; it had often been perceived as morally bad, and the whole concept as sinister. The appearance of arguments in its favor, or at least those urging measured equivocation, was a welcome salve. But by the mid- and late 1960s the situation was immensely different. The field had achieved success with chess-playing programs, with the Logic Theorist, with calculus-solving programs, with early robotic arms, and most recently with the Dendral program (which inferred chemical structures from mass spectrometry data). With the appearance of the Advanced Research Projects Agency (ARPA, known as DARPA starting in 1972), AI had attained research centers, tenure-track professors, and influence in money-intensive timesharing projects.
Throughout the 1970s the tasks which the field was capable of performing became far more erudite, as better forms of abstraction and embodiment of knowledge were created. As Edward Feigenbaum, a Stanford researcher who will be important to this book, said, the field had a great deal of low-hanging fruit, easily attained, to pick. This book tells this story.

2. Themes

The thematic concerns of this work should not overshadow its historical nature. We recognize that there is a voluminous and enthusiastic body of knowledge regarding the history and sociology of technology and science. Our contribution to this body of knowledge is first and foremost expository rather than theoretical. We will revisit, and in several cases repeat, the themes of the last book. In other cases they are revised because of the change in the lessons of history.

First, the successive successes of AI appear in their most unalloyed form in this phase of the work. During the 1950s all advances were painstaking, and even the simplest thing took astonishing effort. During the 1960s, and into the 1970s, AI successively knocked down the list of things it had seemed it would not be able to do. Herbert Simon's 1963 scholarship began to analyze creativity in a systematic fashion. Programs involving schematized worlds, or "toy" domains, were devised. More difficult domains, such as the systematic but complex field of mass spectrometry, were taken on, and AI programs analyzed spectrometric data. The technical means through which this was accomplished was production systems, which would later be developed into the "expert systems" of the 1980s. Vision, robotics, computer chess, and knowledge representation all underwent new and fundamental changes. The low-fruit phase, as referred to earlier, was intrinsically attractive and, it's too good to resist, fruitful.
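The production-system mechanism mentioned above can be sketched as a toy forward-chaining rule interpreter: rules of the form "IF conditions THEN conclusion" fire against a working memory of facts until nothing new can be derived. This is a minimal illustrative sketch only; the rule and fact names below are hypothetical examples, not drawn from Dendral or any historical program.

```python
# Toy forward-chaining production system (illustrative sketch only).
# A rule is a pair (conditions, conclusion); facts are plain strings.

def run_production_system(initial_facts, rules):
    """Repeatedly fire any rule whose conditions are all present in
    working memory, adding its conclusion, until a fixed point."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules, loosely evoking spectrometric classification.
RULES = [
    (["has_peak_43", "has_peak_58"], "contains_ketone_group"),
    (["contains_ketone_group", "molecular_weight_58"], "candidate_acetone"),
]

result = run_production_system(
    ["has_peak_43", "has_peak_58", "molecular_weight_58"], RULES
)
```

Note how the second rule fires only after the first has added its conclusion to working memory; this chaining of modular IF-THEN rules, rather than a fixed algorithm, is what made the approach attractive for knowledge-intensive domains.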
Second, the theme of the successful implementation of technological innovation by governmental programs, in war both hot and cold, is repeated (3). The author said it at the beginning of Building the Second Mind: 1956 and the Origins of AI Computing, and shall say it again here: the role of the military in the initial birth and later development of the computer and its ancillary technologies cannot be erased, eroded, or diminished. As we saw in the first book, this was true of the earliest development of the digital computer during the Second World War, and of its progress along a broad variety of fronts during the postwar decades. The advances which can be directly attributed to the war effort, the Cold War effort, that is, include relay memory followed by magnetic core memory; storage in the form of punched cards and magnetic tape; and input-output in the form of teletype operations, followed finally by the CRT screen with a keyboard. The crucial role of government policy in expressly and successfully encouraging computer development is a well-articulated scholarly theme (4). Our study is simply a further corroboration.

Third, a well-tuned organizational culture was also essential to the development of the computer in every aspect, along the course of the entire chronology. ARPA allowed and encouraged creative people to work on their own terms at a multiplicity of institutions: the Rand Corporation, SRI, and the laboratories at each of the universities, among others. The same results would not have taken place without the ARPA bureaucracy and the institutional freedom which it encouraged.

A fourth theme is that of the paradigm transition entailed by AI. AI envisions an active intelligence apart from humans: getting this new gestalt in place required a kind of thinking and rethinking that was difficult in more than an algorithmic and logistical fashion.
Belief in the computational metaphor for the human mind, and belief in the possibility of computing qualitative as well as merely numerical and quantitative information, were established among a very small cohort. The corner had already been turned in those circles, but the two decades we study in this book saw a vast widening of the software that proved programs could emulate cogitation, and of the hardware which could carry this out. The proof was in the software.

Outline of the Chapters

The history of AI through 1956, and in the several years afterwards, naturally structured itself around the culmination of the field's foundation. The history of AI during the 1960s and 1970s naturally structures itself as well. The narrative begins with a bang, imposed by history, as the increased effort in the United States' space program meant increased funding for AI as well. Inside the field of AI itself, there was a will but no easily visible way to move from very simple problems to bigger domains.

An immense and consequential event took place early in this history. The onslaught of monies shoveled at AI (and other computing disciplines) following the 1961 selection of ARPA as the pre-eminent research agency for funding computer science was a great windfall. It could be compared to a television or movie sight gag in which a door is opened and a flood ensues, or to Willy Wonka's Golden Ticket.

We commence by presenting the framing facts of the integrated circuit (IC), which made possible computers offering bigger, better, faster, and more computing, and of ARPA, which footed the tab for its development. The 1958 foundation of ARPA, and its immeasurable contributions in funding much AI-related research during these two decades, are discussed as they specifically relate to AI's agenda (Chapters Two and Three). The wide dispersal of improvements to input-output, memory, and storage is discussed in Chapter Four.

Part III.
The Belle Epoque

The sine qua non of ARPA's contribution to AI was the free rein it gave researchers: in response to unfettered opportunity, AI flourished. We refer to this decade as the Belle Epoque, in reference to the salubrious climate for art and literature prior to the First World War. In Chapter Five we discuss the more general task of institution-building that nurtured AI. At the Carnegie Institute of Technology (now Carnegie Mellon University), Newell and Simon expanded upon the schematic and simple problems they had initially taken on with the General Problem Solver, and began to work on the
