Universally Composable Security With Local Adversaries

Ran Canetti and Margarita Vald⋆

Boston University and Tel Aviv University

Abstract. The traditional approach to formalizing ideal-model based definitions of security for multi-party protocols models adversaries (both real and ideal) as centralized entities that control all parties that deviate from the protocol. While this centralized-adversary modeling suffices for capturing basic security properties such as secrecy of local inputs and correctness of outputs against coordinated attacks, it turns out to be inadequate for capturing security properties that involve restricting the sharing of information between separate adversarial entities. Indeed, to capture collusion-freeness and game-theoretic solution concepts, Alwen et al. [Crypto, 2012] proposed a new ideal-model based definitional framework that involves a de-centralized adversary.
We propose an alternative framework to that of Alwen et al. We then observe that our framework allows capturing not only collusion-freeness and game-theoretic solution concepts, but also several other properties that involve the restriction of information flow among adversarial entities. These include some natural flavors of anonymity, deniability, timing separation, and information confinement. We also demonstrate the inability of existing formalisms to capture these properties.
We then prove strong composition properties for the proposed framework, and use these properties to demonstrate the security, within the new framework, of two very different protocols for securely evaluating any function of the parties' inputs.

⋆ Both authors are supported by the Check Point Institute for Information Security, Marie Curie grant PIRG03-GA-2008-230640, and ISF grant 0603805843.

Table of Contents

1 Introduction
1.1 Our contributions in more detail
2 Collusion-preserving Computation
3 Defining security of protocols
3.1 Basic UC security
3.2 The Local UC model of protocol execution
3.3 Protocol emulation
3.4 Realizing ideal functionalities; hybrid protocols
3.5 Emulation with respect to the dummy adversary
4 Local universal composition
4.1 The universal composition operation and theorem
4.2 A proof
5 Relation to UC framework
5.1 LUC security implies UC security
5.2 Equivalence and merger functionalities
5.3 Relation among notions in presence of global setup
6 Applicability of LUC security
6.1 Anonymity
6.2 Bi-deniability
6.3 Confinement
6.4 Incentive-structure preservation
7 A solution via physical computation
7.1 Building blocks
7.2 The Physical-GMW protocol
8 A solution via semi-trusted mediator
8.1 Functionalities
8.2 Tools and assumptions
8.3 Oblivious computation of π
8.4 An ideal functionality for secure function evaluation
8.5 A compiler for secure multi-party computation
8.6 Proof of security
8.7 Putting it all together
8.8 A solution for adaptive adversaries

1 Introduction

Rigorously capturing the security properties of cryptographic protocols has proven to be a tricky endeavor. Over the years, the trusted-party (or, simulation) paradigm has emerged as a useful and general definitional methodology. The basic idea, first formulated in [GM84,GMR85,GMW87], is to say that a protocol "securely realizes" a given computational task if participating in the protocol "emulates" the process of interacting with an imaginary "trusted party" that securely receives the parties' inputs and locally computes their outputs. Intuitively, this paradigm allows expressing and capturing many security properties. Moreover, it has an attractive potential "composability" property: any system using a trusted party F should behave the same way when we replace the trusted party with the realizing protocol.

Over the years, many security definitions were based on this intuitive idea, e.g., [GL90,MR91,Can00,DM00,PW00,Can01,PS04,CDPW07]. First, these definitions formulate an execution model; then they formalize the notion of emulating an ideal task with an "ideal world" attacker, called a simulator. The security requirement is based on the inability of an external observer to distinguish an "ideal world" execution from a real one.
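In symbols, using the notation EXEC_{π,A,Z} (defined in Section 3.1) for the output of the environment, the common core of these definitions can be sketched as follows; the exact order of quantifiers and the class of machines vary between frameworks:

```latex
% Protocol \pi emulates the ideal protocol \phi for a trusted party F if:
\forall \mathcal{A}\ \exists \mathcal{S}\ \forall \mathcal{Z}:\qquad
  \mathrm{EXEC}_{\pi,\mathcal{A},\mathcal{Z}}
  \;\approx\;
  \mathrm{EXEC}_{\phi,\mathcal{S},\mathcal{Z}}
```

Here A is the real-world adversary, S is the simulator, Z is the environment playing the role of the external observer, and ≈ denotes indistinguishability of the two execution ensembles.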
These formalisms differ in many ways; however, they have one major thing in common: they all model the attacker as a centralized entity, who can corrupt parties, coordinate their behavior and, intuitively, constitute an "evil coalition" against the protocol being executed. This seems to be an over-simplification of real-life situations. Indeed, in real life, parties are often individuals who are not necessarily controlled by the same entity, nor need they have anything in common. It would seem that letting the malicious parties coordinate their attacks should be a strengthening of the model; however, when this power is also given to the adversary in the ideal model (aka the simulator), the security guarantee can potentially be weakened. Therefore, a natural question to ask is whether it is justified to model the attacker as a centralized entity, or whether this modeling unduly limits the expressiveness of the framework.

Indeed, the existing formalisms do capture basic properties such as privacy of inputs and correctness of outputs against coordinated attacks. However, as has been observed in the past, there exist security concerns that are not naturally captured using the centralized-adversary approach. Consider for instance the collusion-freeness concern: a protocol is collusion-free if even misbehaving protocol participants cannot use the protocol to exchange "disallowed" information without being detected. As pointed out by [ILM05], "centralized simulator" formalisms do not capture the inability of parties to collude. That is, with a centralized adversary, a protocol might allow collusions between corrupted parties even when it realizes an ideal task that is collusion-free.

An additional known limitation of standard security notions concerns cryptographic implementations of game-theoretic mechanisms. In contrast to cryptography, game theory considers rational players that behave according to their individual goals. In many realistic settings, the incentive structure depends on whom players can collaborate with and the cost of this collaboration.
Security with a centralized adversary does not guarantee that the incentive structure with respect to collaboration is preserved when moving from the ideal protocol to the one that realizes it. Consequently, it does not correctly capture the incentive structure and does not suffice for preserving game-theoretic solution concepts that restrict the formation of coalitions.

A natural way to handle these concerns would be to strengthen the model by requiring that the simulation be "local" in some sense; that is, shattering the centralized simulator into many simulators, where each simulator has only some "local" information and is responsible for simulating adversarial behavior in only a "local" sense. However, requiring local simulators while allowing the adversary to be centralized results in an unrealistically strong security requirement that fails to admit many useful schemes that have practical security guarantees. Therefore, the next promising idea would be to restrict also the adversary to be local. This approach indeed appears in the works of [LMS05,ASV08,AKL+09,ILM11,MR11,AKMZ12]. In particular, [AKMZ12] gives a general model with a composition theorem and applications to game theory. These works give different and incomparable definitions of collusion-freeness; a common aspect is that they all postulate an adversary/simulator for each participant, where a participant represents an entity that is identified via its party identifier and treated as a "single domain" (i.e., it is corrupted as a unit, either wholly or not at all). However, as we demonstrate below, there are a number of security concerns that cannot be naturally captured even by the above formalizations of local simulation.

Our contributions. We provide an alternative formalization of the local-simulators approach in a way that preserves its intuitive appeal and captures reality more tightly. In particular, we establish a general security notion that allows capturing the requirements of arbitrary tasks while preserving the local view of each individual component and each communication link between components in the system. This notion enables expressing a variety of partitions of the system.
Specifically, our first contribution is to refine the UC framework to deal with the locality of information available to clusters of components. The new formalism, called local UC (LUC), assigns a different adversary/simulator to each ordered pair of participants. Intuitively, the adversaries/simulators assigned to a pair of parties handle all the communication between the two parties. Informally:

If π is an LUC-secure protocol that implements a trusted party F, then each individual component participating in π affects each other entity in the system no more than it does in the ideal execution with F.

Note that this is conceptually different from the guarantees provided by the UC framework of [Can01] and the CP framework of [AKMZ12]. In the UC framework, protocols that implement a trusted party are guaranteed to have a similar effect on the external environment as in the ideal execution with F. In the CP framework, the protocol is guaranteed to have the same effect as the trusted party individually on each entity. In the LUC framework, it is guaranteed that each entity affects each other entity in the same way as in the ideal execution.

This refined granularity allows LUC to capture various concerns that cannot be captured by previous frameworks. We show how LUC captures some actual security concerns and demonstrate the inability of existing notions to capture these concerns. Specifically, we address some flavors of anonymity, deniability, collusion-freeness, information confinement, and preservation of incentive structure.

We also extend the UC composition theorem and the dummy adversary theorem to the new framework. We obtain a strong composition result that enables "game-theoretic composition", i.e., composition that preserves the power of coalitions (whatever they may be). Moreover, our strong composition also preserves deniability and confinement.

Next we present two protocols for secure function evaluation with LUC security. The protocols, called the Physical GMW and the Mediated SFE protocols, satisfy the new security definition.
The protocols are very different from each other: The Physical GMW protocol, which is strongly inspired by [ILM05], models players sitting in a room equipped with machines and jointly computing a function. The Mediated SFE protocol is a simplified version of [AKL+09]. As there, we use a semi-trusted mediator. That is, if the mediator is honest then the protocol is LUC secure; it is also UC secure in the standard sense even if the mediator is corrupted. It is interesting to note that although these two protocols are of significantly different natures, they are both analyzable within our framework.

1.1 Our contributions in more detail

The new formalism. In a nutshell, the new modeling proceeds as follows. Recall that in the UC framework, the adversary is a centralized entity that not only controls the communication in the network, but also coordinates the corrupted parties' behavior. This centralization is also inherited by the simulator. As mentioned above, while this modeling captures privacy and correctness, which are "global" properties of the execution, it certainly does not capture rationality or locality of information. This implicitly means that only situations where corrupted parties enjoy a global view of the system are fully captured.

A first attempt to bridge this gap would be to follow the formalisms of [ASV08,AKMZ12,MR11] and consider one adversary per party. However, this modeling does not completely capture reality either. Consider for instance the following scenario: one of two parties A and B is having a conversation with a third party C. Later, C is instructed to transfer this information to some potentially misbehaving fourth party D without revealing whether the source was A or B. Clearly, for the protocol to make intuitive sense, A and B need to assume that C is trusted, or "incorruptible".
However, going back to the suggested model, we notice that the adversary associated with C participates in both conversations and can thus correlate the communication without corrupting C at all, thereby harming the anonymity in a way that is not intended by the protocol. In other words, the suggested modeling does not distinguish between the "obviously insecure" protocol that allows C to be corrupted, and the "obviously secure" protocol that uses an incorruptible C. We conclude that having a single adversary per party does not faithfully model honest parties. This motivates us to look for a more refined model.

To adequately capture locality, we extend the UC model as follows: For each party identity (denoted PID) we consider an adversary for any PID it might communicate with. In other words, each pair of PIDs has a pair of adversaries, where each adversary sits on a different side of the "potential communication line". Each local adversary is in charge of a specific communication line and is aware only of the communication via this line.

We also let the environment directly control the communication, by letting the local adversaries communicate with each other only through the environment. This is an important definitional choice that differs from [AKMZ12]. In particular, this means that the centralized simulator no longer exists, and each local adversary is replaced with a local simulator in the ideal process, where the protocol is replaced by the trusted party. The trusted party may allow different subsets of simulators to communicate by forwarding messages between them. Therefore, the communication interface provided by the trusted party to the simulators represents a partition of the system into clusters. The effect of this modeling is that the simulator for an entity can no longer rely on other parties' internal information or on communication in which it was not present. This way, a proof of security relies only on each entity's local information and, potentially, represents the independence of the clusters defined by the trusted party.
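The bookkeeping behind this pairing can be illustrated with a short Python sketch; the function name and the `A[i->j]` labels are ours, purely for illustration, and are not part of the formal model:

```python
from itertools import permutations

def local_adversaries(pids):
    """Assign one local adversary per ordered pair of party identities.

    The adversary labeled A[i->j] sits at party i's end of the potential
    communication line between i and j, and sees only the communication
    that passes over that line.  A pair {i, j} thus gets two adversaries,
    one on each side of the line, so n parties yield n*(n-1) adversaries.
    """
    return {(i, j): f"A[{i}->{j}]" for i, j in permutations(pids, 2)}

# Three parties give 3*2 = 6 local adversaries.
advs = local_adversaries(["P1", "P2", "P3"])
assert len(advs) == 6
assert advs[("P1", "P2")] == "A[P1->P2]"
```

In the ideal process each of these local adversaries is replaced, one for one, by a local simulator indexed by the same ordered pair.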
To preserve meaningfulness, we allow the local adversaries to communicate across party identities only via the environment or with ideal functionalities. Aside from these modifications in the adversarial interface, the model is identical to the UC model.

Capturing concerns. We discuss a variety of security concerns that are captured by LUC security but not by other security notions:

Collusion-freeness. To provide initial evidence for the expressiveness of LUC, we consider any UC-secure protocol for multi-party secure computation (e.g., the [CLOS02] protocol). While this protocol UC-realizes any ideal functionality (even ones that guarantee collusion-freeness) in the presence of malicious adversaries, it allows individually corrupted parties to collude quite freely, even when the environment does not pass any information among parties. Indeed, this protocol does not LUC-realize any ideal functionality that guarantees collusion-freeness. This is so even in the presence of only semi-honest adversaries. The reason for the failure in the LUC model is the inability of the separate simulators to produce consistent views of the adversaries' shared information (i.e., scheduling, committed values, etc.). We note that this concern is also captured by the definitions of [LMS05,ASV08,AKL+09,AKMZ12].

Anonymity. We consider several flavors of anonymity, such as existence-anonymity, timing-anonymity, and sender-anonymity. Specifically, we show UC and Collusion-Preserving (CP) realizations of ideal functionalities that have these anonymity requirements by a protocol that does not have these properties. We then show that this realization is not LUC secure. Let us informally present the above flavors of anonymity. The first anonymity concern we present is existence-anonymity. Intuitively, we would like to have a "dropbox" that does not let the recipient know whether a new message was received, and thus hides information regarding the existence of the sender.
Consider the following one-time-dropbox functionality. The dropbox is a virtual box initialized with some random file. Anyone can put files into one's dropbox. In addition, the owner can query the dropbox a single time as to whether any new file has been received; if there are any incoming files in the box, they are delivered to the owner; otherwise, the default file is delivered. Indeed, when receiving a file from this dropbox, there is no certainty regarding the existence of a sender. Correspondingly, any protocol that LUC-realizes the dropbox functionality is guaranteed to provide anonymity regarding the existence of a sender. This is not so for standard UC security.

An additional anonymity concern that we consider is timing-anonymity. Timing-anonymity means hiding the time at which an action took place. For example, consider the following email feature: whenever sending an email, the sender can delay the sending of the email by some amount of time (say, randomly chosen from some domain). Indeed, upon receiving an email, the receiver does not know when this email was sent. This property can be captured via an ideal functionality in a straightforward way. Again, any protocol that LUC-realizes this functionality will provide anonymity regarding the time of sending. This is not so for standard UC security.

An additional anonymity property, already mentioned above, is sender-anonymity. The common way to achieve this property in practice is onion routing. The work of [CL05] defines the onion-routing problem in the UC framework; however, it only addresses a potential solution to the sender-anonymity concern rather than the concern itself. In contrast, we formalize the sender-anonymity property. We also show how UC security (and even CP security) fails to capture this property. Specifically, we define an ideal functionality and show a protocol that is clearly non-anonymous but still CP-realizes the functionality according to the definition of [AKMZ12]. This protocol is not LUC secure.

Deniability. It was pointed out in [CDPW07,DKSW09] that UC security does not guarantee deniability, due to issues with the modeling of the PKI.
While these issues were resolved in the context of global setup and deniable authentication in the generalized UC framework, it turns out that the UC formalism does not capture another deniability flavor, called Bi-deniability (the name is taken from [OPW11]): A protocol is Bi-deniable if the protocol participants can "deny" before a judge having participated in the protocol, by arguing that any "evidence" of their participation in the protocol could have been fabricated without their involvement, even if there exists an external entity that has access to the parties' communication log files. In the context of authentication, the judge is provided with "evidence" of the sender's participation not only by the receiver but also by this external entity. Specifically, the sender can argue that any "evidence" of participation was fabricated by this external entity, even though this external entity cannot communicate with the receiver and only has access to the communication log files of the sender. This notion is stronger than standard deniability, in which the sender's log files are ideally hidden from the judge. To motivate Bi-deniability, consider a corporation that is obligated to store its communication log files. The log files are collected by an external law-enforcement agency. Clearly, it would be desirable to ensure that even if these files are disclosed, the corporation can always deny their authenticity.

In this work, we give a simulation-based definition of Bi-deniable authentication and prove its equivalence to LUC-secure authentication. Moreover, due to the strong connection between Bi-deniability and LUC security, we obtain that Bi-deniability is preserved under composition. In addition, we show that the UC framework fails to capture this flavor of deniability.

Confinement. Another important concern that seems hard to capture by the standard notions is the information-confinement property, defined by [Lam73]. A protocol is said to enforce confinement if even misbehaving participants cannot leak secret information that they possess across predefined boundaries.
[HKN05] presents a game-based definition of confinement. Their definition introduces changes in the basic UC model, but still considers a centralized adversary. We show that the definition of [HKN05] is excessively strong, and protocols that clearly enforce confinement fail to satisfy it. The root of the problem is the centralized adversary, which enables information flow to unauthorized entities.

Intuitively, separate adversaries controlling different parts of the network or different groups of parties would indeed capture this requirement more tightly. We present a formal definition of confinement and show that LUC security implies it. Similarly to Bi-deniability, we obtain composability with respect to confinement. In addition, we show the inability of UC to capture confinement. More specifically, we show that any UC functionality that enforces confinement is super-ideal in some well-defined sense. As before, this is not so for LUC functionalities.

Game-theoretic implications. As pointed out in [ILM05], standard security does not suffice for the implementation of general equilibria, due to collusion. In order to overcome this problem, new notions were defined in [ILM05,ILM08,ILM11,AKMZ12]. Specifically, [ILM05,ILM08,ILM11] define notions of mechanism implementation; however, these notions are specific to the problem at hand and are not suitable for reasoning about standard security. [AKMZ12] translates its security notion to the game-theoretic setting and defines a corresponding model of mediated games. In addition, they show that their compiler achieves preservation of incentive structure (more details appear in Section 2). In this work, we show how protocols modeled in the LUC framework can be viewed as games.
Moreover, we show that any protocol that LUC-securely realizes some ideal functionality preserves the incentive structure of the realized functionality. More concretely, for any LUC-secure protocol π there exists an efficient mapping between real-world strategies and ideal-world strategies that can be computed by each player in a local manner and achieves indistinguishable payoffs. This implies that any Nash equilibrium (NE) in the ideal-world game is mapped to a computational NE in the real-world game, and no new equilibria are introduced.

Composition and dummy adversary. We demonstrate that LUC security is preserved under composition. Due to the local nature of the model, this preservation applies not only to basic security concerns under composition, but to much more general concerns such as deniability, confinement, and game-theoretic solution concepts. The obtained game-theoretic composition implies that Nash equilibrium is preserved under non-concurrent composition.

We also extend the dummy-adversary notion to the local UC framework, and show its equivalence to the general LUC-security notion.

An interesting line for future research is to try to cast the LUC framework within the Abstract Cryptography framework [MR11]. In particular, such work might provide a unified basis for the LUC, CP, and UC frameworks.

1.1.1 LUC-secure protocols

We sketch the two general multiparty computation protocols that we analyze in this work.

The physical GMW protocol. The GMW version we use is the protocol from [Gol04]. Still, our construction is strongly inspired by [ILM05]. We cast the protocol in the physical world by considering a set of players sitting in a room and jointly computing a function by evaluating the gates of its circuit. In order to properly compute the function, the players use the following physical machinery: boxes with serial numbers, and machines for addition, multiplication, duplication, and shuffling of boxed values. In more detail, let P_1,...,P_n be the set of parties in the room and let f be the function of interest. Next:
1. Sharing the inputs: Each player partitions its input into random shares, one share for each player, and then publicly sends those shares, in opaque boxes, to the players.
2. Circuit emulation: Proceeding by the order of the wires, all players jointly and publicly evaluate the circuit gates.
3. Output reconstruction: Each player publicly hands the boxes of the output shares to the appropriate parties. Lastly, each party privately opens the boxes and computes its output.

Theorem 1 (Informal statement). Let f be a PPT function. Then there exists a protocol that information-theoretically LUC-realizes the SFE functionality with respect to adaptive adversaries.

Throughout the process, everyone sees which operations are performed by each player. Still, the actual values inside the boxes remain secret. In contrast with the classic GMW protocol, here the Byzantine case is not handled by introducing ZK proofs; rather, the primitives themselves are robust.

We achieve LUC security for any number of corrupted parties. While the work of [ILM05] requires at least one honest party for collusion-freeness to hold, we achieve LUC security (which implies collusion-freeness) even when all the parties are corrupted. In addition, this protocol meets the strong notion of perfect implementation defined by [ILM11], and therefore achieves privacy, strategy, and complexity equivalence.

The Mediated-SFE protocol. We present here a high-level description of the mediated protocol, following [AKL+09]. Let P_1,...,P_n and a mediator M be a set of parties, and let π be a k-round protocol that UC-securely computes a function F. (Inspired by [AKMZ12], we think of the protocol as running directly over unauthenticated communication channels.) The protocol π is compiled into a new LUC-secure protocol for computing F with a semi-trusted mediator, where all the communication is done through the mediator. Specifically, for each round of π the protocol does the following:

1. Each party and M run a two-party secure computation, which outputs to M the next-round messages of this party in π.
2. M sends a commitment to the relevant messages to each party P_i.
3. In the last round of π, the mediator M and each party run a secure two-party computation, where each party obtains its output.

Theorem 2 (Informal statement). Let f = (f_1,...,f_n) be a (poly-time) function and let π be a protocol that UC-realizes the SFE functionality. Then there exists a protocol Π that LUC-realizes the SFE functionality with respect to adaptive adversaries.

When M is honest, it separates the parties of π and makes them independent of each other. When M is corrupted, this independence disappears; still, we obtain standard UC security. We strengthen the protocol to be immune to powerful adversaries that control the scheduling, gain information via leakage in the protocol, and are able to adaptively corrupt players. In contrast to [AKMZ12], we do not assume ideally secure channels between the parties and the mediator.

Organization. Section 2 presents the main differences between the CP results and this paper. Section 3 presents the LUC-security framework in detail. Section 4 proves the composition theorem. Section 5 shows how LUC relates to previous security notions. Section 6 presents the game-theoretic implications and the insufficiency of standard notions to capture interesting flavors of anonymity, deniability, and confinement. Sections 7 and 8 present our protocols for secure function evaluation.

2 Collusion-preserving Computation

For clarity, we briefly review the work of [AKMZ12], since it is the work most closely related to ours. This work defines a security model which we denote the CP model (for Collusion-Preserving computation).

The CP model is a special case of LUC and is similar to the GUC model of [CDPW07]. The differences between CP and GUC are the following: Instead of a centralized adversary, the CP model considers an adversary for each party, and each adversary can corrupt only the party associated with its identity. In addition, the adversaries communicate neither with the honest parties nor with each other. In the CP model, all communication is done via functionalities called resources. The CP security definition is an extension of the GUC definition to the multiple-adversary setting. Let R and F be n-party resources.
Let π and φ be n-party protocols using the resources R and F, respectively. Then we say that a protocol π CP-realizes φ if for any set of n adversaries {A_i}_{i∈[n]} there exists a set of n simulators {S_i}_{i∈[n]} such that no environment can distinguish between an execution of π running with {A_i}_{i∈[n]} and an execution of φ running with {S_i}_{i∈[n]}.

The main results presented in [AKMZ12] are the following:

– A composition theorem with respect to the same resource within the CP model. That is, protocols using different resources cannot be composed.
– A compiler for secure function evaluation with a semi-trusted mediator as a resource. [AKMZ12] models the communication links between the parties and the mediator as ideal channels, where absolutely no information regarding the communication is seen by the environment (and the adversaries). In addition to the ideally secure channels, the analysis of the CP compiler relies heavily on the fact that the mediator is modeled as a resource, as opposed to a party with its own adversary. This does not allow the environment to witness any communication, and therefore enables simulation in the CP model.
– A proof that the above compiler preserves the incentive structure. That is, they translate their security notion to the game-theoretic setting and define a corresponding model of mediated games. They then show that their security notion implies that any strategy in the ideal-world game (with an ideal SFE resource) can be mapped to a strategy in the game defined by the CP compiler, and vice versa.

A summary of the main differences between our modeling and results and those of [AKMZ12]:

– The LUC execution model requires all the communication to go via the environment. This gives a less idealized and more realistic modeling of communication. In our compiler, the mediator is modeled as a party, and all the communication in the protocol is done via the environment. We remark that this more realistic modeling would not allow the CP compiler to achieve CP security.
– The LUC composition theorem holds with respect to any protocol. That is, LUC security holds even if protocols use different functionalities.
– The LUC model can capture additional concerns (e.g., flavors of anonymity, deniability, confinement).
– The LUC model provides a more general implication and composition for game-theoretic modeling (i.e., any LUC-secure realization preserves the incentive structure, as opposed to the specific construction of CP).

3 Defining security of protocols

In this section we introduce our new Local UC (LUC) framework and discuss its relationship to basic UC security. Many of the low-level technical details, especially those that are essentially identical to those of the basic UC framework, are omitted. A full treatment of these details can be found in [Can01]. We begin with a brief review of the basic UC framework of [Can01].

3.1 Basic UC security

We first briefly review the basic computational model, using interactive Turing machines (ITMs). Although we do not modify the underlying computational model, familiarity with it is important for understanding our model.

Systems of ITMs. To capture the mechanics of computation and communication in computer networks, the UC framework employs an extension of the Interactive Turing Machine (ITM) model. A computer program (such as for a protocol, or perhaps the program of the adversary) is modeled as an ITM. An execution experiment consists of a system of ITMs which are instantiated and executed, with multiple instances possibly sharing the same ITM code. (More formally, a system of ITMs is governed by a control function which enforces the rules of interaction among ITMs as required by the protocol execution experiment. Here we omit the full formalism of the control function, which can be found in [Can01].) A particular executing ITM instance running in the network is referred to as an ITI, and we must have a means to distinguish individual ITIs from one another even if they happen to be running identical ITM code.
Therefore, in addition to the program code of the ITM they instantiate, individual ITIs are parametrized by a party ID (pid) and a session ID (sid). We require that each ITI can be uniquely identified by the identity pair id = (pid, sid), irrespective of the code it may be running. All ITIs running with the same code and session ID are said to be part of the same protocol session, and the party IDs are used to distinguish among the various ITIs participating in a particular protocol session. The sids are unique. We also refer to a protocol session running with a given ITM code as an instance of that protocol. ITMs are allowed to communicate with each other via the use of three kinds of I/O tapes: local input tapes, local subroutine-output tapes, and communication tapes. Writes to the local input tape of a particular ITI must include both the identity and the code of the intended target ITI, and the ITI running with the specified identity must also be running the specified code, or else an error condition occurs. If a target ITI with the specified identity does not exist, it is created ("invoked") and given the specified code. We also require that when an ITI writes to the local subroutine-output tape of another ITI, it must provide its own code. Finally, all "external" communications are passed via the communication tapes, which disclose to the recipient neither the code of the intended recipient ITI, nor the code of the sending ITI (but merely their identities). The control function determines which external-write instructions are "allowed"; "un-allowed" instructions are ignored.

The UC Protocol Execution Experiment. The UC protocol execution experiment is defined as a system of ITMs that is parametrized by three ITMs. An ITM π specifies the code of the challenge protocol for the experiment, an ITM A specifies the code of the adversary, and an ITM Z provides the code of the environment. The protocol execution experiment defines precise restrictions on the order in which ITIs are activated, and on which ITIs are allowed to invoke or communicate with each other.
Formally, the restrictions are enforced via the control function. The formal details of the control function can be found in [Can01], but we informally describe some of the relevant details. The experiment initially launches only an ITI running Z. In turn, Z is permitted to invoke only a single ITI running A, followed by (multiple) ITIs running the "challenge protocol" π, provided that these ITIs running π all share the same sid. This sid, along with the PIDs of all the ITIs running, may be chosen arbitrarily by Z. In summary, the environment can communicate only with the ITI running the code of the adversary A, and with ITIs participating in a single session of protocol π (in a limited way). The output of the environment Z in this basic UC protocol execution experiment is denoted by EXEC_{π,A,Z}.

Ideal Functionalities. An ideal functionality F is an ITM whose code represents a desired (interactive) function to be computed by parties or other protocols which may invoke it as a subroutine (and thus, in a perfectly secure way). The pid of any ITI running F is set to the special value ⊥, indicating that the ITI is an ideal functionality. F accepts input from other ITIs that have the same sid as F, and may write outputs to multiple ITIs as well. Every ideal functionality F also induces an ideal protocol IDEAL_F. Parties running IDEAL_F with session ID sid act as dummy parties, simply forwarding their inputs to the input tape of an ITI running F with the same sid, and copying any subroutine out-
Description: