Multi-Agent Negotiation using Trust and Persuasion PDF

276 Pages·2012·3.08 MB·English
University of Southampton Research Repository ePrints Soton

Copyright © and Moral Rights for this thesis are retained by the author and/or other copyright owners. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the copyright holder/s. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the copyright holders. When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given, e.g. AUTHOR (year of submission) "Full thesis title", University of Southampton, name of the University School or Department, PhD Thesis, pagination. http://eprints.soton.ac.uk

UNIVERSITY OF SOUTHAMPTON

Multi-Agent Negotiation using Trust and Persuasion

by Sarvapali Dyanand Ramchurn

A thesis submitted in partial fulfillment for the degree of Doctor of Philosophy in the Faculty of Engineering and Applied Science, School of Electronics and Computer Science, December 2004

ABSTRACT

In this thesis, we propose a panoply of tools and techniques to manage inter-agent dependencies in open, distributed multi-agent systems that have significant degrees of uncertainty. In particular, we focus on situations in which agents are involved in repeated interactions where they need to negotiate to resolve conflicts that may arise between them. To this end, we endow agents with decision-making models that exploit the notion of trust and use persuasive techniques during the negotiation process to reduce the level of uncertainty and achieve better deals in the long run.
Firstly, we develop and evaluate a new trust model (called CREDIT) that allows agents to measure the degree of trust they should place in their opponents. This model reduces the uncertainty that agents have about their opponents' reliability. Thus, over repeated interactions, CREDIT enables agents to model their opponents' reliability using probabilistic techniques and a fuzzy reasoning mechanism that allows the combination of measures based on reputation (indirect interactions) and confidence (direct interactions). In so doing, CREDIT takes a wider range of behaviour-influencing factors into account than existing models, including the norms of the agents and the institution within which transactions occur. We then explore a novel application of trust models by showing how the measures developed in CREDIT can be applied to negotiations in multiple encounters. Specifically, we show that agents that use CREDIT are able to avoid unreliable agents, both during the selection of interaction partners and during the negotiation process itself, by using trust to adjust their negotiation stance. Also, we empirically show that agents are able to reach good deals with agents that are unreliable to some degree (rather than completely unreliable) and with those that try to strategically exploit their opponent.

Secondly, having applied CREDIT to negotiations, we further extend the application of trust to reduce uncertainty about the reliability of agents in mechanism design (where the honesty of agents is elicited by the protocol). Thus, we develop Trust-Based Mechanism Design (TBMD), which allows agents using a trust model (such as CREDIT) to reach efficient agreements that choose the most reliable agents in the long run. In particular, we show that our mechanism enforces truth-telling from the agents (i.e. it is incentive compatible), both about their perceived reliability of their opponent and their valuations for the goods to be traded.
In proving the latter properties, our trust-based mechanism is shown to be the first reputation mechanism that implements individual rationality, incentive compatibility, and efficiency. Our trust-based mechanism is also empirically evaluated and shown to be better than other comparable models at reaching the outcome that maximises all the negotiating agents' utilities and at choosing the most reliable agents in the long run.

Thirdly, having explored ways to reduce uncertainties about reliability and honesty, we use persuasive negotiation techniques to tackle issues associated with uncertainties that agents have about the preferences and the space of possible agreements. To this end, we propose a novel protocol and reasoning mechanism that agents can use to generate and evaluate persuasive elements, such as promises of future rewards, to support the offers they make during negotiation. These persuasive elements aim to make offers more attractive over multiple encounters given the absence of information about an opponent's discount factors or exact payoffs. Specifically, we empirically demonstrate that agents are able to achieve a larger number of agreements and a higher expected utility over repeated encounters when they are given the capability to give or ask for rewards. Moreover, we develop a novel strategy using this protocol and show that it outperforms existing state-of-the-art heuristic negotiation models.

Finally, the applicability of persuasive negotiation and CREDIT is exemplified through a practical implementation in a pervasive computing environment. In this context, the negotiation mechanism is implemented in an instant messaging platform (JABBER) and used to resolve conflicts between group and individual preferences that arise in a meeting room scenario.
In particular, we show how persuasive negotiation and trust permit a flexible management of interruptions by allowing intrusions to happen at appropriate times during the meeting while still managing to satisfy the preferences of all parties present.

Contents

Acknowledgements
1 Introduction
1.1 Motivation for Research
1.2 Automated Negotiation Mechanisms
1.3 Trust in Multi-Agent Systems
1.4 Argumentation-Based Negotiation
1.5 Research Contributions
1.6 Thesis Structure
2 Argumentation-Based Approaches to Negotiation
2.1 External Elements of ABN Frameworks
2.1.1 The Language for Bargaining
2.1.1.1 State of the Art
2.1.1.2 Challenges
2.1.2 Participation Rules
2.1.2.1 State of the Art
2.1.2.2 Challenges
2.1.3 Information Stores
2.1.3.1 State of the Art
2.1.3.2 Challenges
2.2 Elements of ABN Agents
2.2.1 Argument and Proposal Evaluation
2.2.1.1 State of the Art
2.2.1.2 Challenges
2.2.2 Argument and Proposal Generation
2.2.2.1 State of the Art
2.2.2.2 Challenges
2.2.3 Argument Selection
2.2.3.1 State of the Art
2.2.3.2 Challenges
2.3 Summary
3 Trust in Multi-Agent Systems
3.1 Individual-Level Trust
3.1.1 Learning and Evolving Trust
3.1.1.1 Evolving and Learning Strategies
3.1.1.2 Trust Metrics
3.1.2 Reputation Models
3.1.2.1 Retrieving Ratings from the Social Network
3.1.2.2 Aggregating Ratings
3.1.3 Socio-Cognitive Models of Trust
3.2 System-Level Trust
3.2.1 Truth Eliciting Interaction Protocols
3.2.2 Reputation Mechanisms
3.2.3 Security Mechanisms
3.3 Summary
4 Formal Definitions
4.1 Basic Notions
4.1.1 Contracts
4.1.2 Utility Functions
4.2 The Multi-Move Prisoner's Dilemma
4.2.1 The Action Set
4.2.2 The Game
4.2.3 Using Persuasive Negotiation in the MMPD
4.2.4 Using Trust in the MMPD
4.3 Summary
5 CREDIT: A Trust Model based on Confidence and Reputation
5.1 Introduction
5.2 The CREDIT Model
5.2.1 Rules Dictating Expected Issue-Value Assignments
5.2.2 Interaction History and Context
5.2.3 Confidence
5.2.3.1 Confidence Levels
5.2.3.2 Evaluating Confidence
5.2.4 Reputation
5.2.5 Combined Confidence and Reputation Measures
5.2.6 Trust
5.2.7 Algorithmic Description and Computational Complexity
5.3 CREDIT in Practice
5.3.1 Influencing an Agent's Choice of Interaction Partners
5.3.2 Influencing an Agent's Negotiation Stance
5.3.2.1 Redefining Negotiation Intervals
5.3.2.2 Extending the Set of Negotiable Issues
5.4 Evaluating the CREDIT Model
5.4.1 Bandwidth Trading Scenario
5.4.1.1 Specified and Unspecified Issues
5.4.1.2 Defections and Cooperation
5.4.2 Experimental Setup
5.4.3 Experimental Set 1: Facing Extreme Strategies
5.4.3.1 Using Norms and Trust in Negotiation
5.4.3.2 Trust and Negotiation Intervals
5.4.4 Experimental Set 2: Facing Degree Defectors
5.5 Benchmarking CREDIT
5.5.1 Experimental Set 1: Facing Extreme Strategies
5.5.2 Experimental Set 2: Facing Degree Defectors
5.6 Summary
6 Trust-Based Mechanism Design
6.1 Introduction
6.2 Related Work
6.3 A Standard VCG Task Allocation Scheme
6.4 Trust-Based Mechanism Design
6.4.1 Properties of the Trust Model
6.4.2 Augmenting the Task Allocation Scenario
6.4.3 Failure of the VCG Solution
6.5 The Trust-Based Mechanism
6.5.1 Properties of our Trust-Based Mechanism
6.5.2 Instances of TBM
6.5.3 Self-POS Reports Only
6.5.4 Single-Task Scenario
6.6 Experimental Evaluation
6.7 Summary
7 Persuasive Negotiation for Autonomous Agents
7.1 Introduction
7.2 The Negotiation Protocol
7.2.1 Background
7.2.2 The Syntax
7.2.2.1 Negotiation Illocutions
7.2.2.2 Persuasive Illocutions
7.2.3 Semantics of Illocutions
7.2.3.1 Basic Axioms
7.2.3.2 Dynamics of Commitments
7.3 The Persuasive Negotiation Strategy
7.3.1 Properties of the Negotiation Games
7.3.2 Applying Persuasive Negotiation
7.3.3 Asking for or Giving a Reward
7.3.4 Determining the Value of Rewards
7.3.4.1 Sending a Reward
7.3.4.2 Asking for a Reward
7.3.5 The Reward Generation Algorithm
7.3.6 Evaluating Offers and Rewards
7.4 Experimental Evaluation
7.4.1 Experimental Settings
7.4.2 Negotiation Tactics
7.4.2.1 The Standard Negotiation Tactics
7.4.2.2 The Reward-Based Tactic
7.4.2.3 The Algorithm for the Reward-Based Tactic
7.4.3 Efficiency Metrics
7.4.4 Comparing PN Strategies against Non-PN Strategies
7.4.5 Evaluating the Reward-Based Tactic
7.5 Summary
8 Persuasive Negotiation in a Pervasive Computing Environment
8.1 Introduction
8.2 Intrusiveness and Interruptions
8.2.1 Receiving and Managing Interruptions
8.2.2 Typology of Interruptions
8.2.3 Intrusiveness in the Meeting Room
8.3 The Multi-Agent Solution
8.3.1 Formal Definitions
8.3.2 Persuasive Negotiation
8.3.3 The Negotiation Algorithm
8.4 Implementation
8.4.1 System Operation
8.5 Summary
9 Conclusions
9.1 Summary of Results
9.2 Theoretical Implications
9.3 Practical Implications
9.4 Open Challenges
A Trust in Practice
B Using CREDIT in a Bandwidth Trading Scenario

List of Figures

1.1 Approaches to negotiation in multi-agent systems and the cloud of uncertainty covering various aspects of the interaction.
1.2 A classification of approaches to trust in multi-agent systems.
1.3 Applying CREDIT, TBMD, and PN to reduce the uncertainty underlying negotiation.
2.1 Conceptual Elements of a Classical Bargaining Agent.
2.2 Conceptual Elements of an Argumentation-Based Negotiation (ABN) Agent (the dashed-lined boxes represent the additional components necessary for ABN agents).
4.1 Transforming the normal Prisoner's Dilemma to the Multi-Move Prisoner's Dilemma. The defection degree increases from 0 to 1 along the direction of the arrows for each agent and the payoffs to each agent are shown in each slot of the game matrix. The shaded region in the MMPD consists of the payoffs of the agents for each degree of defection, which we aim to define in terms of the relationship between the utility functions of the agents. Thus, we aim to make the transition from one end of the MMPD to the other a continuous one rather than a discrete one.
4.2 The social utility (i.e. sum of both agents' utilities) for different negotiation outcomes in the Multi-Move Prisoner's Dilemma (MMPD). Cα means that the agent α cooperates while Dα means that α defects. A higher level of cooperation equates to a higher level of concession in negotiation and a defection equates to demanding more (exploiting the opponent).
4.3 Choosing the combination of outcomes that maximises the overall utility while ensuring agents have non-zero utilities.
4.4 Agents can retaliate using their trust model to capture defections by constraining future agreements.
5.1 Shapes of membership functions in different labels and ranges supporting confidence levels in 'Good' (0.6), 'Average' (0.25), and 'Bad' (0) as well as the intersection of the supports of these sets.
5.2 The processes involved in calculating and using trust with an opponent β. Function g generates the interval given the distribution of utility losses over multiple interactions. Function f evaluates confidence levels as in section 5.2.3.2. Reputation information is assumed to be available.
5.3 The different types of behaviours considered in evaluating CREDIT.
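The trust-combination idea summarised in the abstract — blending confidence from direct interactions with reputation from indirect ones, then using the resulting trust to adjust a negotiation stance (cf. sections 5.2.5 and 5.3.2.1) — can be illustrated with a minimal sketch. The weighting scheme, function names, and constants below are hypothetical choices for illustration only; they are not the actual CREDIT formulation.

```python
def trust_score(confidence: float, reputation: float, n_interactions: int) -> float:
    """Blend direct confidence and indirect reputation into one trust value.

    As direct experience accumulates, confidence dominates reputation.
    All inputs are assumed to lie in [0, 1]; the '+ 10' smoothing constant
    is an arbitrary illustrative choice.
    """
    w = n_interactions / (n_interactions + 10.0)  # weight shifts toward direct evidence
    return w * confidence + (1.0 - w) * reputation


def adjusted_interval(reserve: float, target: float, trust: float) -> tuple[float, float]:
    """Shrink a buyer's acceptable-price interval against a low-trust opponent.

    With trust 1.0 the full [target, reserve] interval is kept; as trust
    falls, the reserve (maximum acceptable price) is pulled toward the target,
    i.e. the agent concedes less to an unreliable partner.
    """
    return (target, target + trust * (reserve - target))


# With no direct history, trust reduces to the reputation value.
assert trust_score(0.9, 0.3, 0) == 0.3
# After many interactions, direct confidence dominates.
assert trust_score(0.9, 0.3, 90) > 0.8
# A fully trusted seller gets the whole interval; distrust narrows it.
assert adjusted_interval(100.0, 60.0, 1.0) == (60.0, 100.0)
assert adjusted_interval(100.0, 60.0, 0.5) == (60.0, 80.0)
```

The design point the sketch captures is that reputation acts as a prior that direct evidence gradually overrides, and that trust feeds back into the negotiation stance rather than only into partner selection.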
