Uncertainty in Artificial Intelligence: Proceedings of the Eighth Conference (1992)
July 17-19, 1992, Stanford University


Uncertainty in Artificial Intelligence
Proceedings of the Eighth Conference (1992)
July 17-19, 1992
Eighth Conference on Uncertainty in Artificial Intelligence, Stanford University

Edited by
Didier Dubois, Université Paul Sabatier, Toulouse, France
Michael P. Wellman, USAF Wright Laboratory, Dayton, Ohio
Bruce D'Ambrosio, Oregon State University, Corvallis, Oregon
Philippe Smets, Université Libre de Bruxelles, Brussels, Belgium

Morgan Kaufmann Publishers, San Mateo, California

Project Management and Composition by Professional Book Center, Box 102650, Denver, CO 80210
Morgan Kaufmann Publishers, Inc., Editorial Office: 2929 Campus Drive, Suite 260, San Mateo, California 94403

© 1992 by Morgan Kaufmann Publishers, Inc. All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means—electronic, mechanical, photocopying, recording, or otherwise—without the prior written permission of the publisher.

96 95 94 92  5 4 3 2 1

Library of Congress Cataloging-in-Publication Data is available for this book.
ISBN 1-55860-258-5

Preface

This volume collects the papers presented at the Eighth Conference on Uncertainty in Artificial Intelligence, held at Stanford University on 17-19 July 1992. The research contributions herein cover many specific topics, but all represent advances in artificial intelligence (AI) methods expressly accounting for uncertainty in beliefs. Whereas the field of AI has always been concerned with performance under conditions of uncertainty, the ideas of explicit representation of degrees of belief and special mechanisms for management of uncertainty have often been controversial. Indeed, it was primarily a perceived neglect of established mathematical models of uncertainty by mainstream AI that prompted Peter Cheeseman, John Lemmer, Laveen Kanal, and others in 1985 to organize the first Workshop on Uncertainty in AI.
Since that original meeting, the Uncertainty-in-AI community has progressed a long way. Thanks in no small part to new methods advanced and developed in this conference series, research in probabilistic and other forms of uncertain reasoning is increasing in importance and acceptance within the broader AI community. Moreover, techniques deriving directly from this work are starting to find substantial application, and we are even beginning to see start-up companies form to exploit this technology.

Of course, the technical challenges facing us will require sustained, and even accelerated, progress. Furthermore, many issues in uncertain reasoning continue to be controversial, as evidenced by the range of techniques and arguments appearing in these pages. Part of the diversity is explained by the international constituency of this community—fully half of the papers in this volume originate from outside the United States. Although the disparity of points of view sometimes inhibits communication, we believe that a continued effort to bring together these perspectives and explore their root differences will ultimately improve our fundamental understanding of uncertainty and its role in artificial intelligence.

The ultimate success of our enterprise will depend on our ability to continue to generate new ideas and attract talented new researchers to work on our problems. We take it as a sign of the community's vitality that at each conference some of the most innovative work is presented by students. This year, we are happy to recognize Thomas Verma of the University of California at Los Angeles as the author (with Judea Pearl) of an outstanding student paper, "An Algorithm for Deciding if a Set of Observed Independencies Has a Causal Explanation."

The 50 papers appearing in this proceedings were selected from nearly 100 submitted to the program chairs. Each paper was reviewed by at least two referees with substantial expertise in the topic of the submitted paper.
Accepted papers were selected for plenary or poster sessions based on perceived suitability for oral presentation and fit within thematic session clusters. The proceedings do not distinguish between the presentation modes. We have tried, subject to the basic quality criteria, to maintain a balance between the various schools of uncertainty modeling (e.g., numeric vs. symbolic representations, probabilistic vs. non-additive measures), so as to promote discussions on fundamental issues. We have also striven to focus on important AI topics such as temporal reasoning, model construction, logics of uncertainty, graphical knowledge representation structures, and planning and decision making. Lastly, we attempted to make room for convincing examples of applications. We did our best to achieve these goals (despite the inevitable vagaries of a distributed reviewing process), and hope that this year's technical program will continue the tradition of quality and innovation characteristic of the Conference on Uncertainty in AI.

Michael P. Wellman and Didier Dubois, Program Co-Chairs
Bruce D'Ambrosio and Philippe Smets, Conference Co-Chairs

Acknowledgments

The Eighth Conference on Uncertainty in Artificial Intelligence, as well as this Proceedings, is the product of the efforts of numerous individuals. For assisting us in the difficult task of selecting quality papers for the conference, we are grateful for the expertise and dedicated labor of the following reviewers:

Bruce Abramson, N. Ayache, Fahiem Bacchus, Salem Benferhat, Piero Bonissone, Jack Breese, Wray Buntine, C. Cayrol, Michael Clarke, Greg Cooper, M. Cordier, Bruce D'Ambrosio, M. DeMolombe, Jon Doyle, Dimiter Driankov, S. Dutta, John Fox, Robert Fung, Alex Gammerman, Peter Gardenfors, Hector Geffner, Dan Geiger, M. Gil, Angelo Gilio, Lluis Godo, Robert Goldman, Jean-Louis Golmard, Benjamin Grosof, Peter Haddawy, Joseph Halpern, Steve Hanks, David Heckerman, Max Henrion, A. Herzig, U. Höhle, Eric Horvitz, Yen-Teh Hsia, Jean-Yves Jaffray, Keiji Kanazawa, George Klir, J. Kohlas, Paul Krause, Vladik Kreinovich, Rudolf Kruse, Henry Kyburg, J. Lang, Kathryn Laskey, J. Laumond, Steffen Lauritzen, Tod Levitt, Xiaohui Liu, Ronald Loui, T. Martin, Andrew Mayer, Serafin Moral, Eric Neufeld, Hung-Trung Nguyen, Daniel O'Leary, Gerhard Paaß, Judea Pearl, David Poole, Greg Provan, Enrique Ruspini, Alessandro Saffiotti, Kerstin Schill, D. Schmeidler, Ross Shachter, Prakash Shenoy, Philippe Smets, David Spiegelhalter, Marcus Spies, Jonathan Stillman, V. S. Subrahmanian, S. Termini, Pietro Torasso, Robert Valette, S. K. M. Wong, Ronald Yager.

We would like to extend special thanks to Ross Shachter for his work on the logistics of local arrangements for holding the meeting at Stanford University. For proposing and arranging panel discussions at the conference, we appreciate the efforts of Piero Bonissone, Jack Breese, and Eric Horvitz. Thanks also are due to Heuristicrats Research, Hugin, Information Extraction and Transport, Knowledge Industries, and Noetic Systems for financial support enabling broader announcement of our tutorial and conference program. Finally, we are grateful to our home institutions—Oregon State University, Université Libre de Bruxelles, Institut de Recherche en Informatique de Toulouse, and USAF Wright Laboratory—for their cooperation in our work for this conference.

RES—A Relative Method for Evidential Reasoning

Zhi An, David A. Bell, John G. Hughes
Department of Information Systems
University of Ulster at Jordanstown
Newtownabbey, Co. Antrim BT37 0QB, N. Ireland, UK
E-mail: CBFC23@UJVAX.ULSTER.AC.UK

Abstract

In this paper we describe a novel method for evidential reasoning [1]. It involves modelling the process of evidential reasoning in three steps, namely, evidence structure construction, evidence accumulation, and decision making.

The proposed method, called RES, is novel in that evidence strength is associated with an evidential support relationship (an argument) between a pair of statements, and such strength is carried by comparison between arguments. This is in contrast to the conventional approaches, where evidence strength is represented numerically and is associated with a statement.
1 Introduction

"...talking about 'good' measurements and 'bad' ones... does not any analysis of measurement require concepts more fundamental than measurement? And should not the fundamental theory be about these more fundamental concepts?"
—J. S. Bell, "Quantum Mechanics for Cosmologists"

It is sometimes difficult enough to make a judgement even if we can observe directly that which we want to judge. But there are many uncertain situations in which we cannot make observations directly. Instead, in such situations, we have to resort to arguments. Arguments, while not directly available from observations, link observable facts to unobservable features. These arguments are then taken as our justifications for judgements. Obviously, in our reasoning process, good arguments give excellent justification for our statements, second only to observable facts. Thus, arguments should be a focus for studies of uncertain reasoning.

But in uncertain situations, it is also commonly found that many arguments are presented and that these arguments might be inconsistent. They might be based on different observable facts, or derived from different parts of our knowledge, or oriented towards different points of view. In such situations, we have to address the strengths of these arguments for competitive statements to resolve any inconsistency. A common activity in such situations is to compare the arguments supporting and refuting competitive statements respectively. Usually, arguments for competitive statements are weighed with respect to each other, i.e. the arguments are compared with each other with regard to how trustworthy they are. Then, we make our judgements according to the results of these comparisons.

It is very desirable to have some standard for measurement, so that we can measure the strengths of all these arguments and find out which statement we should choose as our decision, based on many arguments. But because there are many factors affecting the strengths of arguments, such measures are not easily available and are sometimes not suitable. Moreover, such measures may not be necessary, in that sometimes only comparisons among arguments are sufficient.

In this paper we describe a method for evidential reasoning based on reflecting the above ideas. The method is called RES, for Relative Evidential Support. In RES, evidential support relationships between statements, called arguments, and their relative strengths, represented by comparisons between arguments, are represented.

In the following section, we present our viewpoint that evidential reasoning is a process composed of three steps, namely, evidence structure construction, evidence accumulation, and decision making. In section 2.1, the steps are modelled in the method RES. A simple example is presented along with the exposition of the method. In section 3, a well known example from the literature is analyzed using RES, which provides a basis for us to compare RES against probabilistic methods and the endorsement model. In the last section, a brief summary is given. Some other results of our studies of RES are also listed without going into details.

2 Our Point of View
For pragmatic purposes, human reasoners must reject the assertion that nothing is true but The Whole (though the philosophy of this assertion has its justification; see Russell [7] for his description of Hegel's dialectic), where The Whole is the complete truth of the world. We understand something and we have some knowledge. But at the same time, there is much about the world and about The Whole that we do not know, or do not know exactly. We have to make judgements based on that which we do know, and act on these judgements. The situation is like that in which we are observing a distant object and want to figure out its features. Because it is distant, some of its features cannot be observed directly. At the outset, we must know and understand some of the relationships between our observations and the features of the object; otherwise our observations are useless. Sometimes, we might not really know a relationship, but we believe it. At the same time, we know that some of the relationships are very accurate (or convincing) and some of them are less accurate. Generally speaking, these relationships have strengths of some kind.

With knowledge as described above, we can make our observations. We are guided at this time by our knowledge: we are looking for observations which we know or believe can be related to the features of the object. Other observations should not be made, because they are simply useless for the task in hand.

As the observations are made, some of our knowledge may be triggered to give us justifications for predicating the features of the object. During this process, some predications might conflict with others. In such cases, we will have to compare the justifications supporting these conflicting predications. As a result, some predications with weak justifications might be overwhelmed by others with stronger justifications.

We view evidential reasoning as a process as described above. Concretely, we view evidential reasoning to be a process composed of three steps, namely, evidence structure construction, evidence accumulation, and decision making. In the first step, we collect what we know about the relationships between that which is observable and that which we want to figure out. Judging the strengths of these relationships is a very important part of this step. In the second step, we make our observations and fit them into the structure. In the last step, we make our decisions based on the result of the first two steps.

In the following section, we propose a model of evidential reasoning which reflects these steps directly.

2.1 RES—A Model for Relative Evidential Support

The semantic model for the method is based on two abstract spaces for making judgements (statements). One space is for sensing evidence and the other is for making decisions. (This division is unnecessary; we retain it because it helps clarify the process of evidential reasoning.) Here we formalize the two spaces as two first order logics. We denote the space for evidence sensing as L_e and the space for decision making as L_p. (We suggest that L_e and L_p be delineated in such a manner that the truth of sentences in L_e is readily available, while the truth of sentences in L_p is what we are seeking.)

Evidence (consisting of sentences in the first space) is related to choices (sentences in the second space) only via arguments such as "this evidence sentence supports (or refutes) that conclusion sentence". We denote an argument that "evidence e supports choice p" as (e, p), and will call such pairs "arguments from L_e to L_p". (We omit arguments such as "evidence e refutes choice p" because they can be represented using (e, p) in a natural way, as we will show later.) In (e, p), we will call e the presumption and p the conclusion. Thus, the word "argument" is taken to have the intuitive meaning "a step in reasoning" rather than a special kind of statement.

It should be noticed that all arguments are conditioned, in that an argument will not be enforced unless its presumption is satisfied. Thus, an argument (e, p) should be read as "if the presumption e is satisfied, then there is an argument supporting the conclusion p."

2.2 The Structure for Evidence

Definition: Let A be a set of arguments from L_e to L_p and R be a relation on A, denoted by "≼". Then the 4-tuple ES = (L_e, L_p, A, R) is called an evidence structure if and only if

1. for all (e, p) in A, (e, p) ≼ (e, p);

2. for all (e1, p1), (e2, p2), (e3, p3) in A, if (e1, p1) ≼ (e2, p2) and (e2, p2) ≼ (e3, p3), then (e1, p1) ≼ (e3, p3);

3. for all (e, p1), (e, p2) in A (notice that the two arguments have a common presumption), if p1 → p2 (p1 implies p2 in L_p), then (e, p1) ≼ (e, p2);

4. for all (e1, p1), (e2, p2) in A, if (e1 → e2) ∧ ¬(e2 → e1) (e1 implies e2 and e1 is not equivalent to e2 in L_e), then (e2, p2) ≼ (e1, p1);

5. for all (e1, p1), (e2, p2) in A, (e1 ∨ e2, p1 ∨ p2) is in A.

It should be noticed that the constraints on evidence structure are very simple. These constraints guarantee that our structure of evidence is consistent. We believe this requirement is necessary so that, even if our observations and their implications should be inconsistent in themselves or inconsistent with our knowledge, our knowledge itself should be consistent.

These constraints have intuitive appeal. The first two constraints say that "≼" is a partial order relation on A.
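To make the definition concrete, the following sketch (our own illustration, with invented names, not code from the paper) stores an evidence structure as a set of (presumption, conclusion) pairs together with a relation R, and closes R so that constraints 1 and 2 hold, i.e. so that "≼" is reflexive and transitive:

```python
from itertools import product

class EvidenceStructure:
    """A minimal sketch of <L_e, L_p, A, R> (illustrative only).
    Arguments are hashable (presumption, conclusion) pairs; R holds
    ordered pairs (a, b) meaning a ≼ b."""

    def __init__(self, arguments, asserted=()):
        self.A = set(arguments)       # the argument set A
        self.R = set(asserted)        # explicitly asserted a ≼ b facts
        self._close()

    def _close(self):
        # Constraint 1: reflexivity.
        for a in self.A:
            self.R.add((a, a))
        # Constraint 2: transitivity, by naive fixpoint iteration.
        changed = True
        while changed:
            changed = False
            for (a, b), (c, d) in product(list(self.R), repeat=2):
                if b == c and (a, d) not in self.R:
                    self.R.add((a, d))
                    changed = True

    def weaker_or_equal(self, a, b):
        """a ≼ b: argument a is no stronger than argument b."""
        return (a, b) in self.R

# Usage: two asserted comparisons yield a third by transitivity.
a1, a2, a3 = ("e1", "p1"), ("e2", "p2"), ("e3", "p3")
es = EvidenceStructure({a1, a2, a3}, {(a1, a2), (a2, a3)})
print(es.weaker_or_equal(a1, a3))  # True
```

Constraints 3-5 are not enforced here: they would additionally require interpreting the sentences of L_e and L_p (for instance as sets of possible values, with the subset relation playing the role of implication).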
The third constraint requires that under the same presumption, a statement always commands no more support than any of its logical implications. If statements are viewed as subsets of the set of possible values for a variable, this constraint simply states that, by the same evidence, no subset can be supported better than its supersets.

The fourth constraint specifies that any logical implication of a statement cannot enforce stronger support than the statement itself. This constraint is included because, when (e1 → e2) ∧ ¬(e2 → e1), we can say that e1 is more specific than e2 and contains more information. In such cases, the information conveyed in e2 is contained in that conveyed by e1. Thus, all arguments based on e1 can be viewed as based on something which has already taken into account everything e2 has to say. In this sense, the evidence space can be viewed as being layered: anything said at a higher level contains or covers all things said at a lower level, and (e1 → e2) ∧ ¬(e2 → e1) means that e1 is at a higher level than e2. Thus, what is said by e2 should be weaker than what is said by e1, in the sense that e1 might have nothing more to say than e2, but if it does, these things are more defensible than those based on e2. This includes two kinds of cases. In cases where the additional things based on e1 are consistent with those based on e2, knowing e1 reinforces what has been said. In cases where the additional things are inconsistent with those based on e2, the ones based on e1 will overrule those based on e2, because more considerations are commanded based on e1.

For example, penguins do not fly, though birds do. When we are talking about penguins, their being birds has already been considered, so any conclusions reached from penguin considerations have priority over those from only bird considerations. In this case, inconsistency arises, but the desirable result should be that penguins don't fly, based on the more specific consideration of penguins.

For another example, penguins have legs, and so do birds. This is a consistent situation, where penguins' having legs is supported both by their being penguins and by their being birds. But obviously, no changes concerning only the legs of birds in general can affect this assertion. In this sense, because more specific considerations are available, less specific considerations are overshadowed and become irrelevant.

Notice the symmetry in the last two conditions: the stronger the evidence but the weaker the conclusion, the stronger the argument.

The last constraint concerns only the presence of arguments, i.e. the existence of evidential support relationships. The constraint says that for two arguments, the disjunction of their presumptions supports the disjunction of their conclusions. In other words, if one of the two presumptions is granted, then one of the two conclusions is supported. Notice that, by constraint 4, the argument with the disjunctions is less trustworthy than the original two arguments.

In the relation R of an evidence structure ES, two arguments might not be related. But if they are, (e1, p1) ≼ (e2, p2) should be read as "argument (e1, p1) is no more believable, or no stronger, than argument (e2, p2)".

Some conventions can be used to denote other relationships between arguments. For example, in the following we will use (e1, p1) ~ (e2, p2) for ((e1, p1) ≼ (e2, p2)) ∧ ((e2, p2) ≼ (e1, p1)), and (e1, p1) < (e2, p2) for ((e1, p1) ≼ (e2, p2)) ∧ ¬((e2, p2) ≼ (e1, p1)).

Example 1: Let {Al1, Al2, Al3} be all the alternative states of a system. Suppose it cannot be directly detected what state the system is in, but we can conduct some tests on the system:

Test 1: e1, if positive, supports choosing alternative Al1 as well as Al2.
Test 2: e2, if positive, refutes Al1; if negative, it supports Al1.

The knowledge in the example can be represented as an evidence structure as follows:

L_e = 2^{e1, e2};
L_p = 2^{Al1, Al2, Al3};
A = {(e1, {Al1}), (e1, {Al2}), (¬e2, {Al1}), (e2, {Al2}), (e2, {Al3})};
R = ∅ ∪ R_A, where R_A is the smallest relation on A satisfying the constraints for evidence structure. This means that we have no information about the strengths of the arguments (other than that derivable from the subset relations of L_e and L_p).

Notice that in the representation above, the evidential refutation "e2, if positive, refutes {Al1}" is represented as the evidential supports "e2, if positive, supports {Al2} as well as {Al3}". The justification for doing so is that only relative evidential support strengths are important in RES.
In this sense, because more specific considerations are avail- As noted before, all arguments are conditioned in the able, less specific considerations are overshadowed and sense that argument (e,p) can enforce its support for 4 An, Bell, and Hughes p only if e is satisfied. This provides us with a natural way to accumulate evidence on an evidence structure. ^ = {<-e ,{A/ }),(-e ,{A/ })}; a a 2 3 Suppose our information allows us to grant a sentence e in £e, then we can identify a set of arguments com- ei negative and e2 positive posed of all arguments whose presumptions are logical implications of e, i.e. the set of all arguments whose A = {(e ,{Ah})}; presumptions are granted if e is granted and are thus 2 applicable under e. This set and the relationships be- e\ positive and β2 negative tween arguments in this set, compose a sub-structure SS oî S S for any sentence e of C. This sub-structure, e e defined as follows, is viewed as the result of evidence Λ= {(e {Ah}), {e {Ah}), u u accumulation. <-*2,Μ/ }),<-.β ,{Α/ })}; 2 2 3 Definition: Let SS = (£,C ,A,1l) be an evidence e\ positive and e*i positive e p structure and e be a sentence of C . Then the sub- e structure SS = (£ ,£ ,A,H), where e ee Pe e e A = {( , {Ah}), (ex, {Ah}), (e , {Ah})}. ei 2 1 C — Γ · 2.4 Decision Making 3. A = {(e',p)\(e' )£A,e-+e'}; Based on an evidence structure SS and evidence e, a e lP partial relation on C can be defined with respect to 4. 7£ = Tln(A x *4 ), *.e. the sub-relation of 71 on p e e e how well a sentence is supported in SS. e is the conditioned evidence structure of SS conditioned Definition: Let SS — (£e? £p,A, 1Z) be an evidence on e. structure and e an item of evidence, i.e. a sentence in C. The comparison relation C on C determined by e p The conditioning on an evidence structure has the fol- SS, denoted by "pi < p^', where p\ and pi are two e lowing property. sentences in £ , is defined as: p Corollary 1 Let SS = (£e,£p,.4, ΊΖ) be an evidence 1. 
in the case where there are arguments in Ae sup- structure and ei,e2 G £e such that e\ —► e2. Then porting some sentences p\ such that p[ —* p\, then pi < P2 d=f v(ci,pi) e Α,((ΡΊ -* Pi) -+ (3(e ,p' ) G A,(P -*P2A(ei,pi) ^ (e ,p' )))); 2 2 2 2 2 Proof 2. in the case where there are no arguments support- The proposition is obvious because any argument trig- ing any sentence p[ such that p[ —► p\, then gered by t2 will also be triggered by t\. I Pi < P2 d=f 3(e2,p2) G Λ, (P2 -► P2-) This proposition says that the set of triggered argu- Informally, the first formula states that if there are ar- ments will not contract with more evidence, i.e. the guments supporting p\ or its subsets, i.e. there are justifications for conclusion pi, then we can say p\ is triggered argument set will expand monotonically with more evidence. But notice that SSei^e2 = (SSei)e2 is no more believable than p2 if and only if for every justi- fication for pi, there exists at least one justification for not generally true, which means that evidence accu- p and this justification (argument) is no worse than mulation cannot be done cumulatively. Also notice 2 that justification for p\. The second formula states that, as can be seen later, TZSS is non-monotonic of that if there is no justification for conclusion pi, then the conclusions it reached. we can say p\ is no more believable than p if and only 2 For our example presented in the last section, the evi- if p has some justifications. 2 dence accumulation process is to conduct the tests and Using this relation, conclusions with respect to which condition the evidence structure on the results of these conclusion is better supported than another is clarified. tests. Different results will then issue different struc- Intuitive terms can be defined. tures. The resulting conditioned evidence structures with different results of the tests are shown as follows. 
The £e,£P and ΊΙ parts are omitted because the for- Definition: Let C be a comparison relation on Cp and mer two parts are the same as in the original evidence let pi,p2 be two sentences in Cp, then we say structure, while the last part is empty except for these relationships derivable from subset relations. • pi is less believable than p2 iff t\ negative and e2 negative Pi <P2 andp2 £pi; %TS—A Relative Method for Evidential Reasoning 5 • pi is as believable as p iff 2 Pi <P2 andp <ΡΓ> 2 • pi is not comparable to p iff 2 px £p andp £ρι· 2 2 From the relation so defined, decisions can be made. For example, we can have the following definition. Definition: A conclusion p is call a plausible con- clusion supported by an evidence structure under evi- ci negative and e2 positive In this case, Al\ is re- dence e if and only if -»p is less believable than p. futed by e , which makes {Ah} less believable 2 Many other ways of decision making are also possible than either {Al2} or {A/3}. So we have the fol- and sensible. For example, "choosing the best" from lowing diagram (in which the subset relationships competitors (as we used in the examples in this pa- are omitted). per), and/or providing some standard arguments as thresholds. But more than that, why and how the decision is reached can be easily explicated in 1ZES because of its symbolic representation. We can trace all support commanded by the competing choices to provide the ei positive and e negative In this case, Ah is reason why some of these supports are overwhelmed 2 supported by both t\ and ~^t \ Al is supported while others are taken as the basis for reaching the 2 2 only by βχ; and AI3 commands no support at all. conclusion. This will provide justification for the re- This gives us the following diagram. sults in cases where the results alone are not convincing enough. 
Ah Returning to our example above, for which we have built the evidence structure (section 2.2) and shown the results of evidence accumulation, i.e. conditioned evidence structures, with different results of the tests (section 2.3). From these conditioned evidence struc- \Al 2 tures, relationships among conclusions with respect to evidential support under different forms of evidence (different results of the tests) are shown in the dia- grams below (for clarity, the subset relation is shown only in the first case). In the diagrams, nodes are statements in the conclusion space and two nodes, one Al3 above another, have a path linking them iff the higher Both e\ and e are positive In this case, A/2 is node is no less believable than the lower node. 2 supported by e\\ although Ah is also supported by ei, it is also refuted by e . This makes it less 2 believable than Al but not comparable with A/3, 2 ei negative and e2 negative The diagram for this which has no support. case is as follows, which shows that {Ali} is more believable than {A/2, A/3} while the {A/2} and {A/3} are not comparable. This is so because the only argument applicable (with its presumption satisfied by the results of the tests) is (->e ,Ah). 2 This argument gives some support but only to Ah. Other relationships like {Ah} is more believable It can be seen that if we also know that e\ is more than {AI2} and {A/3} are also represented. trustworthy than e , (or vice versa), then in the last 2 6 An, Bell, and Hughes case we will able to say that Ali is more believable than 5. The three forms represent three distinct species. Als (or vice versa). It is something of a surprise that even this information of relative strength is not always Here are the items of evidence, or arguments, that Walker needed, e.g. in the example above, this information and Leakey use in their qualitative assessment of the prob- makes no difference in the first three cases. abilities of these five hypotheses: 1. 
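Under one reading of the definitions above, the comparison relation can be computed mechanically for Example 1. The sketch below is our own reconstruction with invented names: conclusions are frozensets, so "p' implies p" is the subset test; the smallest relation R uses only reflexivity and constraint 3 (constraint 4 is vacuous among the unrelated literals here, and the disjunctive arguments required by constraint 5 are omitted for simplicity).

```python
# Example 1 arguments, repeated so the block is self-contained.
ARGS = {
    ("e1",  frozenset({"Al1"})), ("e1", frozenset({"Al2"})),
    ("~e2", frozenset({"Al1"})),
    ("e2",  frozenset({"Al2"})), ("e2", frozenset({"Al3"})),
}

def condition(args, granted):
    """A_e: the arguments applicable under the granted literals."""
    return {(e, p) for (e, p) in args if e in granted}

def smallest_R(args):
    """Reflexivity plus 'same presumption, weaker conclusion' pairs
    (constraints 1 and 3); this set is already transitively closed."""
    return {(a, b) for a in args for b in args
            if a[0] == b[0] and a[1] <= b[1]}

def leq(A_e, R, p1, p2):
    """p1 ≤ p2: the two defining clauses of section 2.4."""
    j1 = [a for a in A_e if a[1] <= p1]   # justifications for p1
    j2 = [a for a in A_e if a[1] <= p2]   # justifications for p2
    if not j1:
        return bool(j2)                                        # clause 2
    return all(any((a, b) in R for b in j2) for a in j1)       # clause 1

def less_believable(A_e, R, p1, p2):
    return leq(A_e, R, p1, p2) and not leq(A_e, R, p2, p1)

# Case "e1 negative, e2 positive": Al1 is refuted; Al2, Al3 incomparable.
A_e = condition(ARGS, {"~e1", "e2"})
R = smallest_R(A_e)
print(less_believable(A_e, R, frozenset({"Al1"}), frozenset({"Al2"})))  # True
print(leq(A_e, R, frozenset({"Al2"}), frozenset({"Al3"})))              # False
```

The printed results match the second diagram described above: {Al1} comes out less believable than {Al2}, while {Al2} and {Al3} remain not comparable.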
3 An Example

In this section, using RES, we re-work the example of "The Hominids of East Turkana", which is the key example in both Shafer [10] and Cohen [4]. The exposition of the example from Shafer's paper is copied here.

3.1 The Example

Example 2: In the August 1978 issue of Scientific American, Alan Walker and Richard E. F. Leakey [11] discuss the hominids that have recently been discovered in the region east of Lake Turkana in Kenya. These fossils, between one and two million years of age, show considerable variety, and Walker and Leakey are interested in deciding how many distinct species they represent.

In Walker and Leakey's judgment, the relatively complete cranium specimens discovered in the upper member of the Koobi Fora Formation in East Turkana are of three forms: (I) A "robust" form with large cheek teeth and massive jaws. These fossils show wide-fanning cheekbones, very large molar and premolar teeth, and smaller incisors and canines. The brain cases have an average capacity of about 500 cubic centimetres, and there is often a bony crest running fore and aft across the top, which presumably provided greater area for the attachment of the cheek muscles. Fossils of this form have also been found in South Africa and East Asia, and it is generally agreed that they should all be classified as members of the species Australopithecus robustus. (II) A smaller and slenderer (more "fragile") form that lacks the wide-flaring cheekbones of I, but has similar cranial capacity and only slightly less massive molar and premolar teeth. (III) A large-brained (c. 850 cubic cm) and small-jawed form that can be confidently identified with the Homo erectus specimens found in Java and northern China.

The placement of the three forms in the geological strata in East Turkana shows that they were contemporaneous with each other. How many distinct species do they represent? Walker and Leakey admit five hypotheses:

1. I, II, and III are all forms of a single, extremely variable species.
2. There are two distinct species: one, Australopithecus robustus, has I as its male form and II as its female form; the other, Homo erectus, is represented by III.
3. There are two distinct species: one, Australopithecus robustus, is represented by I; the other has III, the so-called Homo erectus form, as its male form, and II as its female form.
4. There are two distinct species: one is represented by the fragile form II; the other, which is highly variable, consists of I and III.
5. The three forms represent three distinct species.

Here are the items of evidence, or arguments, that Walker and Leakey use in their qualitative assessment of the probabilities of these five hypotheses:

1. Hypothesis 1 is supported by general theoretical arguments to the effect that distinct hominid species cannot co-exist after one of them has acquired culture.
2. Hypotheses 1 and 4 are doubtful because they postulate extreme adaptations within the same species: the brain seems to overwhelm the chewing apparatus in III, while the opposite is true in I.
3. There are difficulties in accepting the degree of sexual dimorphism postulated by hypotheses 2 and 3. Sexual dimorphism exists among living anthropoids, and there is evidence from elsewhere that hints that dental dimorphism of the magnitude postulated by hypothesis 2 might have existed in extinct hominids. The dimorphism postulated by hypothesis 3, which involves females having roughly half the cranial capacity of males, is less plausible.
4. Hypotheses 1 and 4 are also impugned by the fact that specimens of type I have not been found in Java and China, where specimens of type III are abundant.
5. Hypotheses 2 and 3 are similarly impugned by the absence of specimens of type II in Java and China.

Before specimens of type III were found in the Koobi Fora Formation, Walker and Leakey thought it likely that the I and II specimens constituted a single species. Now, on the basis of the total evidence, they consider hypothesis 5 the most probable.

3.2 RES Design

Following the terminology of Shafer and Tversky, we will call our representation of the example using RES a "design of evidence". The design is as follows.

• L_e = 2^{e1, e2, e12, e23, e13}, where
  e1: "the theory";
  e2: "absence of types I and II among III in the Far East";
  eij: "difference between type i and type j".

• L_p = {B1, B2, B3, B4, B5}, where
  B1 = one species;
  B2 = two species: III, and one composed of I (male) and II (female);
  B3 = two species: I, and one composed of III (male) and II (female);
  B4 = two species: II, and one composed of I and III;
  B5 = three species.
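The design's two spaces, together with Walker and Leakey's five qualitative evidence items, can be tabulated as data. The mapping from items to affected hypotheses below is our own reading of the prose of section 3.1, not the paper's argument set:

```python
# The evidence and hypothesis spaces of the RES design (section 3.2).
EVIDENCE = {
    "e1":  "the theory (distinct hominid species cannot co-exist "
           "after one of them has acquired culture)",
    "e2":  "absence of types I and II among III in the Far East",
    "e12": "difference between type I and type II",
    "e23": "difference between type II and type III",
    "e13": "difference between type I and type III",
}

HYPOTHESES = {
    "B1": "one species",
    "B2": "two species: III, and I (male) + II (female)",
    "B3": "two species: I, and III (male) + II (female)",
    "B4": "two species: II, and I + III",
    "B5": "three species",
}

# Walker and Leakey's items 1-5, read off from section 3.1:
# item number -> (effect, affected hypotheses).
ITEMS = {
    1: ("supports", ["B1"]),
    2: ("casts doubt on", ["B1", "B4"]),
    3: ("casts doubt on", ["B2", "B3"]),
    4: ("impugns", ["B1", "B4"]),
    5: ("impugns", ["B2", "B3"]),
}

for n, (effect, hs) in ITEMS.items():
    print(f"item {n} {effect} {', '.join(hs)}")
```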
