Very Static Enforcement of Dynamic Policies

Bart van Delft¹, Sebastian Hunt², and David Sands¹

¹ Chalmers University of Technology, Sweden
² City University London

arXiv:1501.02633v1 [cs.CR] 12 Jan 2015

Abstract. Security policies are naturally dynamic. Reflecting this, there has been a growing interest in studying information-flow properties which change during program execution, including concepts such as declassification, revocation, and role-change.

A static verification of a dynamic information flow policy, from a semantic perspective, should only need to concern itself with two things: 1) the dependencies between data in a program, and 2) whether those dependencies are consistent with the intended flow policies as they change over time. In this paper we provide a formal ground for this intuition. We present a straightforward extension to the principal flow-sensitive type system introduced by Hunt and Sands (POPL'06, ESOP'11) to infer both end-to-end dependencies and dependencies at intermediate points in a program. This allows typings to be applied to verification of both static and dynamic policies. Our extension preserves the principal type system's distinguishing feature, that type inference is independent of the policy to be enforced: a single, generic dependency analysis (typing) can be used to verify many different dynamic policies of a given program, thus achieving a clean separation between (1) and (2).

We also make contributions to the foundations of dynamic information flow. Arguably, the most compelling semantic definitions for dynamic security conditions in the literature are phrased in the so-called knowledge-based style. We contribute a new definition of knowledge-based termination-insensitive security for dynamic policies. We show that the new definition avoids anomalies of previous definitions and enjoys a simple and useful characterisation as a two-run style property.
1 Introduction

Information flow policies are security policies which aim to provide end-to-end security guarantees of the form "information must not flow from this source to this destination". Early work on information flow concentrated on static, multi-level policies, organising the various data sources and sinks of a system into a fixed hierarchy. The policy determined by such a hierarchy (a partial order) is simply that information must not flow from a to b unless a ⊑ b.

1.1 Dynamic policies

Since the pioneering work of Denning and Denning [DD77], a wide variety of information-flow policies and corresponding enforcement mechanisms have been proposed. (This is the Technical Report of [DHS15].) Much recent work on information-flow properties goes beyond the static, multi-level security policies of earlier work, considering instead more sophisticated, dynamic forms of policy which permit different flows at different points during the execution of a program. Indeed, this shift of focus better reflects real-world requirements for security policies, which are naturally dynamic.

For example, consider a request for sensitive employee information made to an employer by a regulatory authority. In order to satisfy this request it may be necessary to temporarily allow the sensitive information to flow to a specific user in the Human Resources department. In simplified form, the essence of this example is captured in Figure 1:

    // x → a;
    out x on a;
    // x ↛ a;
    out 2 on a;

    Fig. 1

Here x contains the sensitive information, channel a represents the HR user, and the policy is expressed by the annotations x → a (x may flow to a) and x ↛ a (x must not flow to a). It is intuitively clear that this program complies with the policy.

Consider two slightly more subtle examples, in each of which revocation of a permitted flow depends on run-time data. In program A, the revocation of x → a is controlled by the value of y, whereas in program B it is controlled by the value of x itself:

    1  /* Program A */      /* Program B */
    2  // x,y → a;          // x → a;
    3  out x on a;          out x on a;
    4  if (y > 0) {         if (x > 0) {
    5    out 1 on a;          out 1 on a;
    6    // x ↛ a;            // x ↛ a;
    7  }                    }
    8  out 2 on a;          out 2 on a;
    9  out 3 on a;          out 3 on a;

Note that the policy for A explicitly allows y → a, so the conditional output (which reveals information about y) appears to be permissible. In program B the conditional output reveals information about x itself, but this happens before the revocation. So should program B be regarded as compliant? We argue that it should not, as follows. Consider "the third output" of program B as observed on channel a. Depending on the initial value of x, the observed value may be either 2 (line 8) or 3 (line 9). Thus this observation reveals information about x and, in the cases where revocation occurs, the observation happens after the revocation.

Unsurprisingly, increasing the sophistication of policies also increases the challenge of formulating good semantic definitions, which is to say, definitions which both match our intuitions about what the policies mean and can form the basis of formal reasoning about correctness.

At first sight it might seem that increasing semantic sophistication should also require increasingly intricate enforcement mechanisms. However, all such mechanisms must somehow solve the same two distinct problems:

1. Determine what data dependencies exist between the various data sources and sinks manipulated by the program.
2. Determine whether those dependencies are consistent with the flows permitted by the policy.

Ideally, the first of these problems would be solved independently of the second, since dependencies are a property of the code, not the policy. This would allow reuse at two levels: a) reuse of the same dependency analysis mechanisms and proof techniques for different types of policy; b) reuse of the dependency properties for a given program across verification of multiple alternative policies (whether of the same type or not).

In practice, enforcement mechanisms are typically not presented in a way which cleanly separates the two concerns. Not only does this hamper the reuse of analysis mechanisms and proof techniques, it also makes it harder to identify the essential differences between different approaches.
Central Contribution. We take a well-understood dependency type system for a simple while-language, originally designed to support enforcement of static policies, and extend it in a straightforward way to a language with output channels (§5). We demonstrate the advantages of a clean separation between dependency analysis and policy enforcement, by establishing a generic soundness result (§6) for the type system which characterises the meaning of types as dependency properties. We then show how the dependency information derived by the type system can be used to verify compliance with dynamic policies. Note that this means that the core analysis for enforcement can be done even before the policy is known: we dub this very static enforcement. More significantly, it opens the way to reuse of dependency analyses across verification of multiple types of information flow policy (for example, it might be possible to use the dependency analyses performed by advanced slicing tools such as Joana and Indus).

Foundations of Dynamic Flow Policies. Although it was not our original aim and focus, we also make some contributions of a more foundational nature, and our paper opens with these (§2–§4). The semantic definition of security which we use is based on work of Askarov and Chong [AC12], and we begin with their abstract formulation of dynamic policies (§2). In defining security for dynamic policies, they made a convincing case for using a family of attackers of various strengths, following an observation that the intuitively strongest attacker (who never forgets anything that has been observed) actually places weaker security demands on the system than we would want. On the other hand, they observe that the family of all attackers contains pathological attacker behaviours which one certainly does not wish to consider. Due to this they do not give a characterisation of the set of all reasonable attackers against which one should protect.
We make the following two foundational contributions:

Foundational Contribution 1. We focus (§3.3) on the pragmatic case of progress-insensitive security (where slow information leakage is allowed through observation of computational progress [AHSS08]). We argue for a new definition of progress-insensitive security (Def. 11), which unconditionally grants all attackers knowledge of computational progress. With this modification to the definition from [AC12], the problematic examples of pathological attackers are eliminated, and we have a more complete definition of security. Consequently, we are able to prove security in the central contribution of the paper for all attackers.

Foundational Contribution 2. The definitions of security are based on characterising attacker knowledge and how it changes over time relative to the changing policy. As argued previously, e.g. [BS09], this style of definition forms a much more intuitive basis for a semantics of dynamic policies than using two-run characterisations. However, two-run formulations have the advantage of being easier to use in proofs. We show (§4) that our new knowledge-based progress-insensitive security definition enjoys a simple two-run characterisation. We make good use of this in our proof of correctness of our central contribution.

2 The Dynamic Policy Model

In this section we define an abstract model of computation and a model of dynamic policies which maps computation histories to equivalence relations on stores.

2.1 Computation and Observation Model

Computation Model. The computation model is given by a labelled transition system over configurations. We write ⟨c,σ⟩ --α--> ⟨c′,σ′⟩ to mean that configuration ⟨c,σ⟩ evaluates in one step to configuration ⟨c′,σ′⟩ with label α. Here c is a command and σ ∈ Σ is a store. In examples, and when we instantiate this model, the store will be a mapping from program variables to values.
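As a concrete (and purely illustrative) reading of this model, the following Python sketch represents a configuration as a (command, store) pair and a step as returning a successor configuration together with its label; the command encoding and all helper names are our own, not the paper's.

```python
# Illustrative small-step interpreter for a fragment of the while-language:
# commands are nested tuples; a label is a (channel, value) pair, or None
# for a silent step (the paper's ε).

def step(cmd, store):
    """One evaluation step: returns (cmd', store', label)."""
    kind = cmd[0]
    if kind == "out":                      # ("out", value_expr, channel)
        _, e, chan = cmd
        return ("skip",), store, (chan, e(store))
    if kind == "seq":                      # ("seq", c1, c2)
        _, c1, c2 = cmd
        if c1 == ("skip",):
            return c2, store, None         # silent sequencing step
        c1p, sp, lab = step(c1, store)
        return ("seq", c1p, c2), sp, lab
    raise ValueError("stuck configuration")

# The Figure 1 program: out x on a; out 2 on a.
prog = ("seq", ("out", lambda s: s["x"], "a"), ("out", lambda s: 2, "a"))
c, s = prog, {"x": 5}
trace = []
while c != ("skip",):
    c, s, lab = step(c, s)
    if lab is not None and lab[0] == "a":  # projection to channel a
        trace.append(lab[1])
print(trace)  # [5, 2]
```

The loop at the end plays the role of the trace projection to channel a defined below.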
The label α records any output that happens during that step, and we have a distinguished label value ε to denote a silent step which produces no output. Every non-silent label α has an associated channel channel(α) ∈ Chan and a value value(α). Channels are ranged over by a and values by v.

We abbreviate a sequence of evaluation steps

    ⟨c₀,σ₀⟩ --α₁--> ⟨c₁,σ₁⟩ --α₂--> ... --αₙ--> ⟨cₙ,σₙ⟩

as ⟨c₀,σ₀⟩ -->ⁿ ⟨cₙ,σₙ⟩. We write ⟨c₀,σ₀⟩ -->* ⟨c′,σ′⟩ if ⟨c₀,σ₀⟩ -->ⁿ ⟨c′,σ′⟩ for some n ≥ 0.

We write the projection of a single step ⟨c,σ⟩ --α--> ⟨c′,σ′⟩ to some channel a as ⟨c,σ⟩ --β-->_a ⟨c′,σ′⟩, where β = v if channel(α) = a and value(α) = v, and β = ε otherwise, that is, when α is silent or an output on a channel different from a.

We abbreviate a sequence of evaluation steps

    ⟨c₀,σ₀⟩ --β₁-->_a ⟨c₁,σ₁⟩ --β₂-->_a ... --βₙ-->_a ⟨cₙ,σₙ⟩

as ⟨c₀,σ₀⟩ --t-->_aⁿ ⟨cₙ,σₙ⟩, where t is the trace of values produced on channel a with every silent ε filtered out. We write ⟨c₀,σ₀⟩ --t-->_a* ⟨c′,σ′⟩ if ⟨c₀,σ₀⟩ --t-->_aⁿ ⟨c′,σ′⟩ for some n ≥ 0.

We use |t| to denote the length of trace t and t₁ ⪯ t₂ to denote that trace t₁ is a prefix of trace t₂.

Attacker's Observation Model. We follow the standard assumption that the command c is known to the attacker. We assume a passive attacker which aims to extract information about an input store σ by observing outputs. As in [AC12], the attacker is able only to observe a single channel. A generalisation to multi-channel attackers (which would also allow colluding attackers to be modelled) is left for future work.

2.2 Dynamic Policies

A flow policy specifies a limit on how much information an attacker may learn. A very general way to specify such a limit is as an equivalence relation on input stores.

Example 1. Consider a store with variables x and y. A simple policy might state that the attacker should only be able to learn the value of x. It follows that all stores which agree on the value of x should look the same to the attacker. This is expressed as the equivalence relation σ ≡ ρ iff σ(x) = ρ(x).

A more complicated policy might allow the attacker to learn the value of some arbitrary expression e on the initial store, e.g. x = y. This is expressed as the equivalence relation σ ≡ ρ iff σ(e) = ρ(e).
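To make the policy-as-equivalence-relation reading of Example 1 concrete, here is a minimal Python sketch; the encoding of stores as dicts and all function names are our own illustration, not the paper's.

```python
# Illustrative sketch: a store maps variable names to values; a policy
# equivalence relation is a predicate on pairs of stores.

def equiv_on_x(sigma, rho):
    """Attacker may learn only x: stores agreeing on x look the same."""
    return sigma["x"] == rho["x"]

def equiv_on_expr(e):
    """Attacker may learn only the value of expression e on the store."""
    return lambda sigma, rho: e(sigma) == e(rho)

s1 = {"x": 1, "y": 5}
s2 = {"x": 1, "y": 9}
s3 = {"x": 2, "y": 5}

print(equiv_on_x(s1, s2))   # True: s1 and s2 agree on x
print(equiv_on_x(s1, s3))   # False: x differs, so the stores are distinguishable

# The coarser policy that releases only the value of the expression x = y:
eq_xy = equiv_on_expr(lambda s: s["x"] == s["y"])
print(eq_xy(s1, s3))        # True: x = y is false in both stores
```

Larger equivalence classes mean a more restrictive policy: the attacker must be unable to tell the members of a class apart.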
Definition 1 (Policy). A policy P maps each channel to an equivalence relation ≡ on stores. We write Pₐ for the equivalence relation that P defines for channel a.

As defined, policies are static. A dynamic policy changes while the program is running and may dictate a different P for each point in the execution. Here we assume that the policy changes synchronously with the execution of the program. That is, the active policy can be deterministically derived from the execution history so far.

Definition 2 (Execution History). An execution history H of length n is a transition sequence ⟨c₀,σ₀⟩ --α₁--> ⟨c₁,σ₁⟩ --α₂--> ... --αₙ--> ⟨cₙ,σₙ⟩.

Definition 3 (Dynamic Policy). A dynamic policy D maps every execution history H to a policy D(H). We write Dₐ(H) for the equivalence relation that is defined by D(H) for channel a, that is to say, Dₐ(H) = Pₐ where P = D(H).

Most synchronous dynamic policy languages in the literature determine the current policy based solely on the store σₙ in the final configuration of the execution history [AC12,BvDS13]. Definition 3 allows in principle for more flexible notions of dynamic policies, as they can incorporate the full execution history to determine the policy at each stage of an execution (similar to the notion of conditional noninterference used by [GM84,Zha12]). However, our enforcement does assume that the dynamic policy can be statically approximated per program point, which arguably is only feasible for policies in the style of [AC12,BvDS13]. Such approximations can typically be improved by allowing the program to branch on policy-related queries.

Since programs are deterministic, an execution history of length n is uniquely determined by its initial configuration ⟨c₀,σ₀⟩. We use this fact to simplify our definitions and proofs:

Definition 4 (Execution Point). An execution point is a triple (c₀,σ₀,n) identifying the point in execution reached after n evaluation steps starting from configuration ⟨c₀,σ₀⟩. Such an execution point is considered well-defined iff there exists ⟨cₙ,σₙ⟩ such that ⟨c₀,σ₀⟩ -->ⁿ ⟨cₙ,σₙ⟩.

Lemma 1.
Each well-defined execution point (c₀,σ₀,n) uniquely determines an execution history H(c₀,σ₀,n) of length n starting in configuration ⟨c₀,σ₀⟩.

In the rest of the paper we rely on this fact to justify a convenient abuse of notation, writing D(c₀,σ₀,n) to mean D(H(c₀,σ₀,n)).

3 Knowledge-Based Security Conditions

Recent works on dynamic policies, including [AC12,BDLG11,BNR08,BS10], make use of so-called knowledge-based security definitions, building on the notion of gradual release introduced in [AS07]. This form of definition seems well-suited to provide intuitive semantics for dynamic policies. We focus in particular on the attacker-parametric model from Askarov and Chong in [AC12].

Suppose that the input state to a program is σ. In the knowledge-based approach, an attacker's knowledge of σ is modelled as a knowledge set K, which may be any set of states such that σ ∈ K. Note that the larger the knowledge set, the less certain is the attacker of the actual value of σ, so smaller K means more precise knowledge. (Sometimes, as we see below, it can be more intuitive to focus on the complement K̄, which is the set of a-priori possible states which the attacker is able to exclude, since this set, which we will call the exclusion knowledge, grows as the attacker learns more.)

Now suppose that the currently active policy is ≡. The essential idea in any knowledge-based semantics is to view the equivalence classes of ≡ as placing upper bounds on the attacker's knowledge. In the simplest setting, if the actual input state is σ and the attacker's knowledge set is K, we require:

    K ⊇ {σ′ | σ′ ≡ σ}

Or, in terms of what the attacker is able to exclude:

    K̄ ⊆ {ρ | ρ ≢ σ}    (1)

How then do we determine the attacker's knowledge? Suppose an attacker knows the program c and observes channel a. Ignoring covert channels (timing, power, etc.) an obvious approach is to say that the attacker's knowledge is simply a function of the trace t observed so far:

    k(t) = {ρ | ⟨c,ρ⟩ --t-->_a}    (2)

We define the exclusion knowledge as the complement of this: ek(t) = k̄(t).
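Definition (2) and the exclusion knowledge ek can be illustrated with a small Python sketch (our own encoding, not from the paper): we model program B from Section 1.1 as the full trace it emits on channel a, over a two-store universe.

```python
def program_b(store):
    """Program B from Section 1.1, rendered as its full trace on channel a."""
    t = [store["x"]]
    if store["x"] > 0:
        t.append(1)
    t += [2, 3]
    return tuple(t)

def can_produce(store, t):
    """<c,store> can emit t on channel a iff t is a prefix of the full trace."""
    return program_b(store)[:len(t)] == t

def k(t, stores):        # knowledge set, equation (2)
    return {name for name, st in stores.items() if can_produce(st, t)}

def ek(t, stores):       # exclusion knowledge: the complement of k
    return set(stores) - k(t, stores)

stores = {"x=0": {"x": 0}, "x=1": {"x": 1}}
print(k((0,), stores))                         # {'x=0'}: observing 0 excludes x=1
print(ek((0,), stores))                        # {'x=1'}
print(ek((0,), stores) <= ek((0, 2), stores))  # True: ek only grows
```

As the surrounding text observes, ek is monotone in the trace, which is exactly what makes retrospective revocation problematic for this simple model.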
Note that, as a program executes and more outputs are observed, the attacker's exclusion knowledge can only increase; if ⟨c,σ⟩ --t·v-->_a then ek(t) ⊆ ek(t·v), since, if ρ can already be excluded by observation of t (because c cannot produce t when started in ρ), then it will still be excluded when t·v is observed (if c cannot produce t it cannot produce any extension of t either).

But this simple model is problematic for dynamic policies. Suppose that the policies in effect when t and t·v are observed are, respectively, ≡₁ and ≡₂. Then it seems that we must require both ek(t) ⊆ {ρ | ρ ≢₁ σ} and ek(t·v) ⊆ {ρ | ρ ≢₂ σ}. As observed above, it will always be the case that ek(t) ⊆ ek(t·v), so we are forced to require ek(t) ⊆ {ρ | ρ ≢₂ σ}. In other words, the observations that we can permit at any given moment are constrained not only by the policy currently in effect but also by all policies which will be in effect in the future. This makes it impossible to have dynamic policies which revoke previously permitted flows (or, at least, pointless; since all revocations would apply retrospectively, the earlier "permissions" could never be exercised).

Askarov and Chong's solution has two key components, outlined in the following two sections.

3.1 Change in Knowledge

Firstly, recognising that policy changes should not apply retrospectively, we can relax (1) to constrain only how an attacker's knowledge should be allowed to increase, rather than its absolute value. The increase in attacker knowledge going from t to t·v is given by the set difference ek(t·v) − ek(t). So, instead of (1), we require:

    ek(t·v) − ek(t) ⊆ {ρ | ρ ≢ σ}    (3)

where ≡ is the policy in effect immediately before the output v. (Some minor set-theoretic rearrangement gives the equivalent

    k(t·v) ⊇ k(t) ∩ {σ′ | σ′ ≡ σ}

which is the form of the original presentation in [AC12].)

3.2 Forgetful Attackers

Focussing on change in knowledge addresses the problem of retrospective revocation but it creates a new issue. Consider the following example.

Example 2. The program in Figure 2 produces the same output many times, but only the first output is permitted by the policy. Assume that the value of x is 5.
Before the first output, the knowledge set of an observer on channel a contains every possible store. After the first output the observer's knowledge set is reduced to include only those stores σ where σ(x) = 5. This is allowed by the policy at that point.

By the time the second output occurs, the policy prohibits any further flow from x. However, since the attacker's knowledge set already includes complete knowledge of x, the second output does not actually change the attacker's knowledge at all, so (3) is satisfied (since k(t·v) = k(t)). Thus a policy semantics based on (3) would accept this program even though it continues to leak the value of x long after the flow has been revoked.

    // x → a;
    out x on a;
    // x ↛ a;
    while (true)
      out x on a;

    Fig. 2

Askarov and Chong address this by revisiting the assumption that an attacker's knowledge is necessarily determined by the simple function of traces (2) above. Consider an attacker which forgets the value of the first output in Example 2. For this attacker, the second output would come as a revelation, revealing the value of x all over again, in violation of the policy. Askarov and Chong thus arrive at the intriguing observation that security against a more powerful attacker, one who remembers everything that happens, does not imply security against a less resourceful attacker, who might forget parts of the observations made.

Forgetful attackers are modelled as deterministic automata.

Definition 5 (Forgetful Attacker ⊲ §III.A [AC12]). A forgetful attacker is a tuple A = (S_A, s₀, δ_A) where S_A is the set of attacker states; s₀ ∈ S_A is the initial state; and δ_A : S_A × Val → S_A is the (deterministic) transition function describing how the attacker's state changes due to the values that the attacker observes.

We write A(t) for the attacker's state after observing trace t:

    A(ε) = s₀
    A(t·v) = δ_A(A(t), v)

A forgetful attacker's knowledge after trace t is defined as the set of all initial stores that produce a trace which would result in the same attacker state A(t):

Definition 6 (Forgetful Attacker Knowledge ⊲ §III.A [AC12]).
    k(A,c,a,t) = {ρ | ⟨c,ρ⟩ --t′-->_a ∧ A(t′) = A(t)}

(Note that, in preparation for the formal definition of the security condition, program c and channel a now appear as explicit parameters.)

The proposed security condition is still essentially as given by (3), but now relative to a specific choice of attacker. Stated in the notation and style of the current paper, the formal definition is as follows.

Definition 7 (Knowledge-Based Security ⊲ Def. 1 [AC12]). Command c is secure for policy D against an attacker A on channel a for initial store σ if for all traces t and values v such that ⟨c,σ⟩ --t-->_aⁿ ⟨c′,σ′⟩ --v-->_a¹ we have

    ek(A,c,a,t·v) − ek(A,c,a,t) ⊆ {ρ | ρ ≢ σ}

where ≡ = Dₐ(c,σ,n).

Having relativized security to the power of an attacker's memory, it is natural to consider the strong notion of security that would be obtained by requiring Def. 7 to hold for all choices of A. However, as shown in [AC12], this exposes a problem with the model: there are attackers for which even well-behaved programs are insecure according to Def. 7.

Example 3. Consider again the first example from the Introduction (Section 1.1). Here, for simplicity, we assume that the variable x is boolean, taking value 0 or 1.

    // x → a;
    out x on a;
    // x ↛ a;
    out 2 on a;

    [Attacker automaton: initial state q0, with a self-loop on output 0, an edge on output 1 to state q1, and an edge on output 2 to state q2.]

It is intuitively clear that this program complies with the policy. However, as observed in [AC12], if we instantiate Def. 7 with the forgetful attacker displayed, the attacker's knowledge increases with the second output when x = 0.

After observing the value 0, the attacker's state is A(0) = q0. Since A(ε) = q0, the knowledge set still holds every store possible. After the second observation, only stores where x = 0 could have led to state q2, so the knowledge set shrinks (i.e., the attacker's knowledge increases) at a point where the policy does not allow it.

This example poses a question which (so far as we are aware) remains unanswered: if we base a dynamic policy semantics on Def. 7, for which set of attackers should we require it to hold?
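The knowledge increase in Example 3 can be replayed in a few lines of Python. This is an illustrative encoding of Definition 6, and we assume, as the example's analysis suggests, that states q1 and q2 absorb all further outputs.

```python
# Illustrative sketch of Definition 6 for Example 3's program and attacker.

def run(store):
    """Example 3's program: out x on a; out 2 on a."""
    return (store["x"], 2)

# Attacker transitions; undefined pairs fall through to a self-loop below.
DELTA = {("q0", 0): "q0", ("q0", 1): "q1", ("q0", 2): "q2"}

def attacker_state(trace):
    s = "q0"
    for v in trace:
        s = DELTA.get((s, v), s)   # assumption: q1 and q2 are absorbing
    return s

def knowledge(trace, stores):
    """k(A,c,a,t): stores that can produce some t' with A(t') = A(t)."""
    target = attacker_state(trace)
    return {name for name, st in stores.items()
            if any(attacker_state(run(st)[:i]) == target
                   for i in range(len(run(st)) + 1))}

stores = {"x=0": {"x": 0}, "x=1": {"x": 1}}
print(knowledge((0,), stores))    # both stores: A(0) = A(ε) = q0
print(knowledge((0, 2), stores))  # only x=0: the forbidden knowledge increase
```

The shrink from the first set to the second is exactly the increase that Def. 7 flags, and it happens under the revoked policy x ↛ a.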
In the next section we define a progress-insensitive variant of Def. 7. For this variant it seems that security against all attackers is a reasonable requirement, and in Section 6 we show that progress-insensitive security against all attackers is indeed enforced by our type system.

3.3 Progress-Insensitive Security

Since [VSI96], work on the formalisation and enforcement of information-flow policies has generally distinguished between two flavours of security: termination-sensitive and termination-insensitive. Termination-sensitive properties guarantee that protected information is neither revealed by its influence on input-output behaviour nor by its influence on termination behaviour. Termination-insensitive properties allow the latter flows and thus provide weaker guarantees. For systems with incremental output (as opposed to batch-processing systems) it is more appropriate to distinguish between progress-sensitive and progress-insensitive security. Progress-insensitive security ignores progress-flows, where a flow is regarded as a progress-flow if the information that it reveals can be inferred solely by observing how many outputs the system produces. Two examples of programs with progress-flows are as follows:

Example 4. Programs containing progress-flows:

    // Program A             // Program B
    out 1 on a;              out 1 on a;
    while (x == 8) skip;     if (x != 8)
    out 2 on a;                out 2 on a;

Let σ and ρ differ only on the value of x: σ(x) = 4 and ρ(x) = 8. Note that, if started in σ, both programs produce a trace of length 2 (namely, the trace 1·2) whereas, if started in ρ, the maximum trace length is 1. Thus, for both programs, observing just the length of the trace produced can reveal information about x. Note that, since termination is not an observable event in the semantics, A and B are actually observably equivalent; we give the two variants to emphasise that progress-flows may occur even in the absence of loops.
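The progress-flow in Example 4 can be made concrete with a short Python sketch (an illustrative encoding of our own; we model Program B, whose traces are finite):

```python
def prog_b(store):
    """Example 4's Program B as the full trace it emits on channel a."""
    t = [1]
    if store["x"] != 8:
        t.append(2)
    return tuple(t)

sigma = {"x": 4}
rho = {"x": 8}

# The values observed never differ; only the number of outputs does.
print(prog_b(sigma))                             # (1, 2)
print(prog_b(rho))                               # (1,)
print(len(prog_b(sigma)) != len(prog_b(rho)))    # True: a progress-flow on x
```

An observer counting outputs on channel a thus learns whether x = 8, even though every value it sees is constant.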
In practice, most enforcement mechanisms only enforce progress-insensitive security. This is a pragmatic choice since (a) it is hard to enforce progress-sensitive security without being overly restrictive (typically, all programs which loop on protected data will be rejected), and (b) programs which leak solely via progress-flows leak slowly [AHSS08].

Recall that Knowledge-Based Security (Def. 7) places a bound on the increase in an attacker's knowledge which is allowed to arise from observation of the next output event. Askarov and Chong show how this can be weakened in a natural way to provide a progress-insensitive property, by artificially strengthening the supposed previous knowledge to already include progress knowledge. Their definition of progress knowledge is as follows:

Definition 8 (AC Progress Knowledge ⊲ §III.A [AC12]).

    k⁺(A,c,a,t) = {ρ | ⟨c,ρ⟩ --t′·v-->_a ∧ A(t′) = A(t)}

Substituting this (actually, its complement) in the "previous knowledge" position in Def. 7 provides Askarov and Chong's notion of progress-insensitive security:

Definition 9 (AC Progress-Insensitive (ACPI) Security ⊲ Def. 2 [AC12]). Command c is AC Progress-Insensitive secure for policy D against an attacker A on channel a for initial store σ if for all traces t and values v such that ⟨c,σ⟩ --t-->_aⁿ ⟨c′,σ′⟩ --v-->_a¹ we have

    ek(A,c,a,t·v) − ek⁺(A,c,a,t) ⊆ {ρ | ρ ≢ σ}

where ≡ = Dₐ(c,σ,n).

Now consider again programs A and B above. These are examples of programs where the only flows are progress-flows. In general, we say that a program is quasi-constant if there is some fixed (possibly infinite) trace t such that every trace produced by the program is a prefix of t, regardless of the choice of initial store. Thus, for a quasi-constant program, the only possible observable variation in observed behaviour is trace length, so all flows are progress-flows. Since PI security is intended explicitly to allow progress-flows, we should expect all quasi-constant programs to satisfy PI security, regardless of the choice of policy and for all possible attackers. But, for Def. 9, this fails to hold, as shown by the following counterexample.

Example 5.
Consider the program and attacker below. The attacker is a very simple bounded-memory attacker which remembers just the last output seen and nothing else (not even whether it has seen any previous outputs).

    // x ↛ a;
    out 1 on a;
    out 1 on a;
    while (x) skip;
    out 1 on a;
    out 2 on a;

    [Attacker automaton: initial state q0; from every state, output 1 leads to state q1 and output 2 leads to state q2.]

Clearly, the program is quasi-constant. However, it is not ACPI secure for the given attacker. To see this, suppose that x = 0 and consider the trace t = 1·1·1. The attacker has no knowledge at this point (k(t) is the set of all stores) since it does not know whether it has seen one, two or three 1's. It is easily verified that k⁺(t) is also the set of all stores for this attacker (intuitively, giving this attacker progress knowledge in the form k⁺ doesn't help it, since it still does not know which side of the loop has been reached). But k(t·2) is not the set of all stores, since in state q2 the attacker is able to exclude all stores for which x = 1; thus ACPI security is violated.

What has gone wrong here? The attacker itself seems reasonable. We argue that the real problem lies in the definition of k⁺(A,c,a,t). As defined, this is the knowledge that A would have in state A(t) if given just the additional information that c can produce at least one more output. But this takes no account of any previous progress knowledge which might have been forgotten by A. (Indeed, the above attacker forgets nearly all such previous progress knowledge.) As a consequence, the resulting definition of PI security mistakenly treats some increases in knowledge as significant, even though they arise purely because the attacker has forgotten previously available progress knowledge.

Our solution will be to re-define progress knowledge to include what the attacker would know if it had been keeping count. To this end, for any attacker A = (S, s₀, δ) we define a counting variant A^ω = (S^ω, s₀^ω, δ^ω), such that S^ω ⊆ S × ℕ, s₀^ω = (s₀, 0) and δ^ω((s,n), v) = (δ(s,v), n+1). In general, A^ω will be at least as strong an attacker as A:

Lemma 2. For all A, c, a, t:

1. k(A^ω,c,a,t) ⊆ k(A,c,a,t)
2. ek(A,c,a,t) ⊆ ek(A^ω,c,a,t)

Proof.
It is easily seen that A^ω(t) = (q,n) ⇒ A(t) = q. Thus A^ω(t′) = A^ω(t) ⇒ A(t′) = A(t), which establishes part 1. Part 2 is just the contrapositive of part 1.

Our alternative definition of progress knowledge is then:

Definition 10 (Full Progress Knowledge).

    k#(A,c,a,t) = {ρ | ⟨c,ρ⟩ --t′·v-->_a ∧ A^ω(t′) = A^ω(t)}

Our corresponding PI security property is:
