ebook img

DTIC ADA440802: Expressing and Enforcing Distributed Resource Sharing Agreements PDF

12 Pages·0.34 MB·English
Save to my drive
Quick download
Download
Most books are stored in the elastic cloud where traffic is expensive. For this reason, we have a limit on daily download.

Preview DTIC ADA440802: Expressing and Enforcing Distributed Resource Sharing Agreements

Expressing and Enforcing Distributed Resource Sharing Agreements TaoZhaoandVijayKaramcheti DepartmentofComputerScience CourantInstituteofMathematicalSciences NewYorkUniversity taozhao,vijayk @cs.nyu.edu f g Abstract hasresultedinnewermodelsofresourcesharingthatspan multiple domains. For example, an increasing number of Advancesincomputingandnetworkingtechnology,and companies find themselves having to share informationin anexplosionininformationsourceshasresultedinagrow- their databases with others. Companies with limited bud- ing number of distributed systems getting constructed out getsmaywanttosharehardware(e.g.,networkbandwidth of resources contributed by multiple sources. Use of such anddisk capacity)withothers. Thegrowingpopularityof resources is typicallygovernedby sharingagreementsbe- application-specific providers (ASPs) exemplifies a situa- tweenowningprincipals,whichlimitbothwhocanaccessa tion where the ASP is sharing its resources with several resourceandinwhatquantity. Despitetheirincreasingim- clientorganizations. portance,existingresourcemanagementinfrastructuresof- In eachoftheabovescenarios, resourcesareownedby feronlylimitedsupportfortheexpressionandenforcement differentorganizationsandspanmultipleadministrativedo- of sharing agreements, typically restricting themselves to mains.Consequently,resourcesharingisgovernedbyshar- identifyingcompatibleresources. Inthispaper,wepresent ingagreementsbetweenparticipatingentities.Forexample, a novel approach building on the concepts of tickets and organizationAandBhaveresourcesharingagreementsbe- currencies to express resource sharing agreements in an tween them so that A can use 30% of B’s network band- abstract, dynamic, and uniform fashion. We also formu- width, and in return B can use 20% of the CPU power latetheallocationproblemofenforcingtheseagreementsas of A’s supercomputer. These agreements specify both the a linear-programming model, automatically factoring the obligations and privileges of participants, and executions transitiveavailabilityofresourcesviachainedagreements. of applicationsshould follow or be forced to follow those A case study modeling resource sharing among ISP-level agreements. Moreover, agreements must be enforced in webproxiesshowsthebenefitsofenforcingtransitiveagree- the presence of heterogeneousresource types and dynam- ments: worst-case waitingtimes ofclientsaccessingthese ically changing user set and resource availability. Also, proxiesimprovesbyuptotwoordersofmagnitude. sharingagreementscanbetransitive,i. e., oneprincipalis able to share resources from another via a chain of shar- ingagreementseventhoughthetwoprincipalsdonothave 1 Introduction any direct sharing agreements. Unfortunately, most ex- isting resource management infrastructures such as Con- With advances in computing and networking technol- dor [11], Globus [6], Legion [5], and others [2, 13, 15] ogy,moreandmoredistributedcomputersarebeinginter- provideverylimitedsupportforexpressingandefficiently connected to provide a large collection of computing and enforcingsuchsharingagreements. Thissupporttypically communication resources. There is an increasing desire takes the formof “matching”up requestswith compatible to make this resource collection sharable by participating resourcetypes and does not factor in any capacity restric- entities. Although current uses of such sharing focus on tions implied by the agreements. The enforcement of ca- distributedsystemsbuiltfromcomponentsbelongingtoei- pacityconstraintsisusuallytheresponsibilityofendpoints ther a single or a small numberof administrativedomains inthosesystems. (e.g., US supercomputer sites), the growth of the internet In this paper, we present a novelapproachfor express- 0-7803-9802-5/2000/$10.00 c2000IEEE. ing resource sharing agreements, which extends the con- (cid:13) 1 Report Documentation Page Form Approved OMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. 1. REPORT DATE 3. DATES COVERED 2000 2. REPORT TYPE - 4. TITLE AND SUBTITLE 5a. CONTRACT NUMBER Expressing and Enforcing Distributed Resource Sharing Agreements 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) 5d. PROJECT NUMBER 5e. TASK NUMBER 5f. WORK UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) 8. PERFORMING ORGANIZATION Defense Advanced Research Projects Agency,3701 North Fairfax REPORT NUMBER Drive,Arlington,VA,22203-1714 9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSOR/MONITOR’S ACRONYM(S) 11. SPONSOR/MONITOR’S REPORT NUMBER(S) 12. DISTRIBUTION/AVAILABILITY STATEMENT Approved for public release; distribution unlimited 13. SUPPLEMENTARY NOTES 14. ABSTRACT see report 15. SUBJECT TERMS 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF 18. NUMBER 19a. NAME OF ABSTRACT OF PAGES RESPONSIBLE PERSON a. REPORT b. ABSTRACT c. THIS PAGE 11 unclassified unclassified unclassified Standard Form 298 (Rev. 8-98) Prescribed by ANSI Std Z39-18 cepts of tickets and currencies originally proposed in the 2 Expressing Sharing Agreements context of uniprocessor operating system scheduling [14]. Our approachabstracts out resourceheterogeneityby uni- Aresourcemanagementsystemrequirespreciseexpres- formlyrepresentingresourcecapacities in terms of tickets sionofthreekindsofinformation:resourceavailability,re- withdifferingvalues.Thus,diverseresourcessuchasanIn- sourcerequests,andagreementsbetweenusers. Resources telPentiumIICPUandanIntelPentiumIIICPUcanboth refer to both physical entities such as CPU and disk, as berepresentedasaCPUticketwithdifferentvalue. Agree- wellaslogicalentitiessuchasaccessrights. Bothresource ments themselves are captured in terms of other kinds of availability and resource requests are represented as vec- tickets that are issued by currencies. The value of a cur- tors,withentriesquantifyingthequantityorneedforeach rencyfluctuatesdynamicallyas a functionofchangingre- different kind of resource. Our approachexpresses actual sourceavailabilityandaffectstheactualvalueofthetickets. resourcecapacitiesandagreementsbetweenparticipantsus- Thisallowscapturingofbothabsoluteandrelativesharing ingauniformframework,permittingtheconvenientcompu- agreements. The schemeis complementedby a globalre- tation of dynamicresource availability for each individual source allocation algorithm that uses linear programming participant. techniquestoenforcetheseagreements,automaticallyfac- toring the transitive availability of resources via chained 2.1 ATaxonomyofAgreements agreements. When multiple scheduling choices are avail- able,thealgorithmchoosestheonethatperturbsglobalre- Sharingagreementsbetweentheownerofaresourceand sourceavailabilitytheleast. Byexposingresourcesharing itsusersdefinebothwhichuserscanaccesstheresourceand agreementstotheresourcescheduler,theschedulerisable anycapacityconstraintsgoverningsuchaccess. Forexam- tomakemoreinformeddecisionsthattakeboththeresource ple, an owner A might specify that its resources are avail- availabilityandresourcesharingagreementsintoaccount. ableforusebyusersBandCbutnotbyuserD. Moreover, AmightenableBtousealargerfractionoftheresourceas Toassess theadvantagesofsharingagreementsandthe comparedtoC. benefits from global allocation decisions, we describe a Agreementscanbeeithergrantingorsharinginnature, casestudymodelingresourcesharingamongISP-levelweb accordingtowhocanusetheresourcesaftertheagreement proxies. Our results, based on trace-driven simulation of isapplied. Theformerreferstothecasewherethegrantor client request streams, show that sharing agreements re- gives up the resource to the grantee, i.e., the grantor can- sultinhigherperformanceateachproxywithoutrequiring not use the resource itself after granting unless it revokes increased capacity investments, particularly when partici- theresourcefromthegrantee(agreementends). Withshar- pants exhibit resource utilization profiles that are skewed ingagreements,boththegrantorandgranteehavetherights intime(asmighthappenwithgeographically-distantprox- tousetheresource. Inthispaper,werestrictourattention ies). Moreover,enforcingtransitive agreementscan result toconsumableresourcessuchasCPUanddiskbandwidth, inperformanceimprovementsthatcompensateforthecost whichcanonlybeusedbyoneprincipalatatime. of redirecting requests to available resources: worst-case Anotherdimensionalongwhichagreementscanbeclas- client waiting times improve by up to two orders of mag- sifiedis theexpressionofthequantityconstraintsonshar- nitude. Simulation results also show that with global in- ingagreements.Agreementscanbeeitherabsoluteorrela- formationaboutresourceavailabilityand resourcesharing tive.Inabsoluteagreements,thequantityisaconstant(e.g. agreements,aresourceschedulercanmakebetterschedul- 10MBdiskspace),whileinrelativeagreements,thequan- ingdecisions. tityvariesasafunctionofavailableresources(e.g.50%of theavailableCPU).Relativeagreementsareusefulforen- Thetechniquesdescribedinthispaperarepartofaflex- suring that a certain fraction of a principal’s resourcesare ibleexecutionsubstratefordistributedapplicationsthatwe always available for its own use, even when the available areconstructingintheComputingCommunitiesproject[3] resourcesdecreasealotduetointensiveusage. at New York University. Other techniques (e.g., resource- constrainedsandboxing[4])beingdevelopedaspartofthis 2.2 TicketsandCurrencies project focus on the complementary problem of security concerns, e. g., ensuringthat a malicious applicationdoes notmisusearesourcethatithasbeenallocated. The primary challenge with expressing sharing agree- ments is to capture the heterogeneous nature and dynam- In the rest of this paper, we describe in turn agreement ically changing availability of resources that impact both expression(Section2),agreementenforcement(Section3), which resources are accessible using an agreement and in andthecasestudy(Section4). Section5discussesrelated what quantity. We use the concepts of tickets and cur- workandweconcludeinSection6. rencies, originally proposed in the context of a random- 2 ized uniprocessor scheduling mechanism called Lottery ticketbooststhevalueofcurrencyBto5 15 20,increas- Scheduling[14],toaddressthischallenge. ingtheamountofdiskresourcesavailabl+etoj=obssubmitted Ticketsareabstractentitiesthatdifferintypeandvalue. byprincipalB. PrincipalBinturnhasarelativeagreement Inourapproach,ticketsareusedtoencapsulatebothaccess with principal D, captured via relative ticket R-Ticket5 of andcapacityconstraintsgoverningresourceusage.Possess- facevalue60issuedbycurrencyBwithafacevalueof100. ingtherighttickettypepermitsaccesstotheresourceand Notethatthetruevalueofthisticketis20 60 100 12, theticketvaluedeterminestheresourcequantitythatcanbe whichimplicitlyintegratestheresourcesava(cid:2)ilabl=etop=rinci- accessed. Ticketscanbeeitherabsoluteorrelative. Anab- palDviaitsdirectagreementwithB,aswellasitstransitive soluteticketcapturestherighttouseanabsolutequantityof agreementwithA(viaB). resource(e.g.,10MBofdisk),whilethevalueofarelative Thus, theactualresourcecapacitiesin a systemare ex- ticketdependsontheissuingcurrency.Currenciesdenom- pressedusingabsoluteticketsfundingtheowner’scurrency inatetickets.Eachcurrencyisbacked(orfunded)bytickets andagreementsbetweenparticipantstaketheformofabso- andinturnissueitsowntickets. Agreementsbetweentwo luteorrelativeticketsissuedbyoneandbackingtheother’s entitiesAandBcanberepresentedbyA’scurrencyissuing currency. The quantity constraint of this sharing agree- atickettosupportB’scurrency,whichmeansBcanusepart ment is the value of the ticket. Notice that the real value oftheresourcesofA(seeFigure1). ofrelativeticketscanchangedynamicallyasmoresupport- The valueofan absoluteticket is its face value. Anda ingticketsjointheissuingcurrencyorassomesupporting relative ticket’s real value is computed by multiplying the ticketsleaveasaconsequenceofchangingresourcesharing value of the currency from which it is issued by its share agreements.Alsowecaninflateordeflatethecurrenciesby of all the amount issued by that currency. The value of a increasingordecreasingthenumberofunitsinthecurrency, currencyisdeterminedbythesummationofallthebacking similartoinflationcausedbythegovernmentprintingmore tickets(bothabsoluteonesandrelativeones). papermoney. Also,newcurrenciescanbecreatedinaddi- tiontothedefaultper-participantcurrencies:thesepermita A-Ticket3 participanttodecouplethequantityofresourcestransferred 3 alongsomesubsetofitsagreementsfromfluctuationsinan- othersubset. Currency A Currency B Currency C Currency D 1000 R-Ticket4 100 100 100 500 A-Ticket1 A-Ticket2 R-T6ic0ket5 R-T3i0ck0et6 10 15 R-Ticket7 40 Currency A1 Currency A2 10TB Disk User A 15TB Disk User B User C User D 500 100 R-Ticket8 60 R-Ticket3 R-Ticket4 Figure 1. Use of tickets and currencies to express 300 500 sharingagreements. Currency A Currency B Currency C Currency D 1000 100 100 100 R-Ticket5 A-Ticket1 A-Ticket2 50 10 15 Example1. Figure1showsanexampleoftheuseoftick- etsandcurrenciesforexpressingsharingagreements. This 10TB Disk User A 15TB Disk User B User C User D system has four principals, A, B, C, and D, and two disk resources of 10 TB and 15 TB capacity owned by princi- Figure2.Useofvirtualcurrencies. palsAandBrespectively. Theseresourcesarerepresented by two absolute tickets (A-Ticket1 and A-Ticket2), which fund currencies A and B, respectively. Principal A has an absoluteagreementwithprincipalCtoshare3TBofitsre- Example2. Figure2showsanexampleoftheuseofvir- sources,andarelativeagreementwithprincipalBtoshare tual currencies to isolate one subset of agreements from 50% of its available resources. The formeris capturedby fluctuationsofanothersubset.AsinExample1,thesystem anabsoluteticket(A-Ticket3)withvalue3andthelatterby hasfourprincipals,A,B,C,andD. Inadditiontofourde- arelativeticket(R-Ticket4)withafacevalueof500issued fault currenciesrepresentingtheir resources, two new cur- bycurrencyA, which has a face value of 1000. Thus, the rencies are created from tickets issued by currency A: A 1 realvalueofR-Ticket4is10 500 1000 5.Thisrelative and A , shown shaded in the figure. We call these cur- 2 (cid:2) = = 3 rencies virtual currencies, distinguished from default per- 3 Enforcing Sharing Agreements participant currencies. The value of virtual currencies is calculatedthesamewayasnormalcurrencies.SoinFigure Mostexistingresourcemanagementinfrastructurespro- 2,virtualcurrencyA hasthevalueofR-Ticket3,whichis3, vide only limited support for enforcing agreements, typi- 1 andvirtualcurrencyA hasthevalueofR-Ticket4,whichis cally providing a matching service that permits a request 2 5. Bothvirtualcurrenciesissuetickets,R-Ticket6fromA , tousecompatibleandaccessibleresources.Enforcingcon- 1 andR-Ticket7andR-Ticket8fromA .Theseticketsinturn straintsonresourcecapacityhastypicallybeentherespon- 2 fundcurrenciesC,D,andBrespectively,representingthree sibility of the end points. To enforce sharing agreements, agreementsbetweenAandeachofB,C,andD. Theagree- theschedulermustkeeptrackofdynamicallychangingre- mentsaredividedintotwosubsets: onecomprisingBand sourceavailabilityviabothdirectandtransitiveagreements, D,andtheotherC.Acanmakechangetooneoftheagree- and optimize a global criterion when faced with multiple ment subsets (e.g. by inflating the face value of currency choicesforhowtoallocateresources. A , or by issuing another ticket from A ) without having We have constructed a linear equation model for the 1 1 any influence on the other subset. Therefore, by creating problemofallocatingresourcestoarequestinadistributed virtual currencies, A can effectively decouple the quantity systeminthepresenceofsharingagreements. Thismodel ofresourcestransferredalongonesubsetofitsagreements works with the abstract notionof resources in the formof fromfluctuationsinanotherone. ticketsandcurrenciesdescribedearlier. Arequestforspe- As illustrated in the above two examples, using tick- cificquantitiesofoneormoreresourcesisviewedasare- ets and currencies permits us to express resource capaci- questforthesetoftickettypesrepresentingtheseresources ties and sharing agreements in an abstract, dynamic, and withticketvaluescapturingthedesiredquantities.Coupled uniform fashion. Abstract, because tickets and currencies resources,whichmustbeallocatedasagroup,canbebound quantifyresourcerightsindependentfromresourcedetails. togetherandviewedasanewtypeofresource. Thelinear Dynamic, since the fraction of a resource that tickets rep- equation model permits the scheduler to easily determine resentvariesdynamicallyinproportiontothecurrencythat (a) whether the principal submitting the job has sufficient issues the tickets. And uniform, because tickets homoge- resourcesavailable to it either directly or transitively (i.e., neouslyrepresentrightsforheterogeneousresources. sufficient-valued tickets of the requisite type), and (b) in theeventoftherebeingmultipleoptions,whichactualre- Although we have restricted our discussion here to the sourcestoallocatetheneededcapacityfrom. case where one resource is represented by a single ticket type, this mechanism can be extended to handle multiple 3.1 BasicModel viewsofthesameresourcesbyenablingresourcesbacking multiple ticket types. This is useful in several situations. Wefirstdescribeasimpleformofthismodel,whichonly For example, the disk bandwidth resource can be viewed considers a single resource type and only relative sharing astwokindsofresources: readbandwidthandwriteband- agreements. Supposetherearenusers,eachofwhomown width. Thismorecomplexcaseisasubjectoffuturework a resourcewith actual capacityV 0. The matrix S cap- andisnotdealtwithfurtherinthispaper. i turesalloftheagreementsbetween(cid:21)principals;S indicates Despitethegeneralityaffordedbyourexpressionmech- ij thevalueoftherelativeticketissuedbycurrencyiandback- anism, in practice, we expect most sharing agreements to ingcurrency j. Forexample,S 03signifiesthatuseri fallintooneofthefollowingstructures: agreesto share 30%of its availiajb=le r:esourceswith user j. ThreeconstraintsontheelementsofmatrixSare: S 0, ii Complete Each participant has sharing agreements with S 0and∑ S 1. = ij 1 k n ik everyotherparticipant.Thissituationismostlikelyto T(cid:21)o determin(cid:20)e(cid:20)wheth(cid:20)er or not a user has sufficient re- happenwhenthereareasmallnumberofparticipants. sourcesavailabletosatisfyitsjobrequest,wefirstcalculate the resource flow from one currencynode to another. Let I m denotethe resourceamountthat issues fromcurrency Sparse Everyparticipantonlyhassharingagreementswith i(j ) a relatively small number other participants. This nodeiandbackscurrencynode j throughatmostmlevels structure happens in situations where the number of oftransitiveagreements.Wehave(seeFigure3): participantsisrelativelarge. Ii(jm)=Ii(jm(cid:0)1)+Vi(cid:2) 1 k ∑k n Sik1Sk1k2(cid:1)(cid:1)(cid:1)Skm(cid:0)1j 1 m 1 Hierarchical In this structure, users are divided into (cid:20)kp ;k(cid:1)(cid:1)q(cid:1); p(cid:0) q(cid:20) groups. Insideagroup,usershavecompleteresource k6=p i(8j 6=p ) 6= ; (8 ) sharing. Betweengroupstherearehigherlevelsparse The constraints under the summation ensure that there is sharingagreements. nocyclealongthetransitiveagreementpath. 4 Flow along one path with length m: CA0 CA x (3) = (cid:0) k2 Vi x Sik1 x Sk1k2... x Skm-1j ∑0(cid:20)(Vi(cid:0)Vi0)(cid:20)Ii(An(cid:0)1) (4) V V x (5) k i i0 k1 m-2 km-1 (Ci((cid:0)θ(cid:0))(cid:20))C=i0(cid:20)Ci (6) Thevariableθistheglobalmetricwewanttominimize. i j Therearen n 1 variablesI n 10,nvariablesC,nvari- Flows along all paths with i(j(cid:0) ) i0 (cid:2)( (cid:0) ) V length <=m-1: I(m-1) ablesV and1variableθ,leadingtoatotalof n2 n 1 i ij i0 variables. Usinglinearprogramming[8],weca(nm+inim+ize) θ and thereby obtain the allocation schedule. The com- Figure3.Flowofresourcesbetweennodeiandnode plexity of the linear programming model can be reduced jviaatransitivechainofagreementsinvolvinginter- byexploitingadditionalstructureincommonlyencountered mediatenodesk k k . 1; 2;:::; m(cid:0)1 agreementgraphsasdescribedinthefollowingsection. 3.2 ExtensionstotheBasicModel Although the formula appears very involved, it can be Intheabovemodel,we onlyconsiderasingleresource rewrittenas: type in a resource request. In practice, resource requests Ii(jm) Vi Ti(jm) usuallyincludemultipleresourcetypesatthesametime.In = (cid:2) thiscase,arequestforktypesofresourcesisintheformofa whereT m isaconstantwhichinvolvesonlytermsofele- i(j ) vector r r r ,inwhichr istheamountrequested 1 2 k i mentsinagreementmatrixS. for res<ource; ty;p::e:;i. >To schedule this request,we need to The transitive closure of agreements is given by I n 1 i(j(cid:0) ) solveklinearsystems,oneforeachresourcerequested,and foralli j. Thus,userihasaccesstoaresourceamountC allocate resources according to the results. One problem i where ; with this scheme is that it does not handle resources that C V ∑I n 1 mustbeallocatedtogether.Forexample,CPUandmemory i i k(i(cid:0) ) = +k i resources need to be on the same machine and cannot be 6= Having determined that there are sufficient resources allocatedseparately. Onewaytosolvethelatteristobind available to the user to satisfy its job request, the second thesetypesofresourcesintoanewtypeofresourcesothat schedulingproblemistodeterminewhichactualresources theyarealwaysallocatedtogether. to obtain the requiredcapacityfrom. In generalthis deci- In the basic model, we have a restriction on the agree- siondependsonseveralfactorssuchasthecostofborrow- mentmatrix: ∑ S 1. Thisrestrictionis toprevent 1 k n ik ingresourcesfromadifferentsiteandconcernsoffairness. aparticipantfrom(cid:20)s(cid:20)haring(cid:20)moreresourceswithothersthan Here,werestrictourattentiontooptimizingaglobalmetric it possesses. For example, suppose A has 10 units of re- thatcapturesthenotionofperturbingfutureresourceavail- sources. If there were no such restriction, A could share abilityinthesystemtheleast. Intuitively,thegoalhereisto 60% of its resourceswith B and 60% withC. If B shared leavethesysteminastatewhereithassufficientresources 100%ofitsresourcewithC,Cwouldhave6unitsresources tosatisfyfuturerequestsindependentofwhichprincipalis directlyfromAandanother6viaB,whichexceedswhatA makingtherequest. hasintotal!Butinsomesituationsitisdesirabletohavethis Addingadditionalconstraintstothelinearprogramming kindof“overdraft”anditdoesnotconflictwiththeseman- model helps solve this problem. Let V be the amount tics of sharing. In order to lift the restriction, we need to i0 of resources left with principal i after the current allo- modifyourcalculationsslightly,namely,changetheequa- cation has taken place, changing the resource availability tion: for user i from C to C. Our objective is to minimize I m V T m i i0 i(j ) i i(j ) θ max C C . The additionalconstraintscan be = (cid:2) 1 i n i i0 to ex=pressed(cid:20)as(cid:20)be(low(cid:0),wh)ereI n 10 referstoI n 1 afterallo- I m V K m i(j(cid:0) ) i(j(cid:0) ) i(j ) i i(j ) cation,Areferstotheprincipalmakingthecurrentrequest, = (cid:2) where andxtotherequestedamount: T m : T m 1 K m i(j ) i(j )< Ii(jn(cid:0)1)0 =Vi0(cid:2)Ti(jn(cid:0)1) (1) i(j )=( 1 : Ti(jm)(cid:21)1 C V ∑I n 10 (2) Therefore,intheaboveexamplethequantityofresourcesC i0= i0+k i k(i(cid:0) ) canobtainislimitedto10insteadof12. 6= 5 The basic model considers only relative sharing agree- 4.1 SimulationModel ments. To take absolute sharing agreements into account, weneedanabsoluteagreementmatrixAinadditiontothe The simulation model is depicted in Figure 4. Each relativeagreementmatrixS. A indicates thevalueof the proxyserveswebcontenttoitslocalcustomers. Arequest ij absoluteticketissuedbycurrencyiandbackingcurrency j. from a customer consumes resources such as CPU, disk, Weextendformulasinthebasicmodeltodealwithabsolute memory and network bandwidth of the proxy server. To ticketsinthesystemasfollows. improvetheirservice(responsetimeisthemainconcernin Let this case) to customers, ISP-level resource sharing agree- mentspermitproxyserverstoshareeachothersresources. I n 1 A : I n 1 A V Uki= Vk(ki(cid:0) )+ ki : Ikk((iin(cid:0)(cid:0)1))++Akkii<(cid:21)Vkk Stopseactiifiscfyalalyll,itfhtehceulsotcoamlreersso’urercqeuseostfsainprtoimxye,atrheenportoexnyoucgahn ( redirectsomeoftheserequeststootherproxyservers. Theresourcecapacityofuserithenbecomes ∑ C V U i i ki Internet = +k i 6= In practice, the complexity of the linear programming modelcanbereducedbyexploitingadditionalstructurein the agreement graph. For example, one can use faster al- Agreement Agreement ISP 1 ISP 2 ISP 3 gorithmstodealwith sparsematriceswhenthe agreement Global Resource structureissparse. Inthecaseofahierarchicalagreement Front End Front End Front End Sheduler structure,wecanusetechniquesmotivatedbymulti-gridre- finement: oncearequestcomestoagroup,andthatgroup cannotsatisfytherequest,weuseLPtofindthedistribution ofresourcesamonggroups;basedonthedistributionresult, Customer Custormer Customer Customer werunLPinside eachgrouptofurtherrefinetheresource allocation,iteratingthisprocessasrequired. Figure4.Agroupofcooperatingwebproxyservers We are in the progress of implementing this model in withresourcesharingagreements a cluster-level resource management system built on top . of Globus, which would allows us to further validate our model. Theresourcemanagementsystemhastwocompo- Aglobalresourceschedulerstoresinformationaboutthe nents: a centralized global resource manager (GRM) and sharingagreementsandkeepstrackofavailableresourcesat multiple local resource managers (LRM). The GRM pro- eachproxyserverusingtheticketsandcurrenciesapproach videsservicestomanagesharingagreementsandtosched- describedinSection2. Forsimplicity,all proxyserverre- ule resources among local resource managers. LRMs are sources are collapsed together into a single “general” re- responsibleforprovidingresourceavailabilityinformation source,thatdeterminestheprocessingpoweroftheserver. totheGRMdynamically,andfulfillingresourceallocation Arequestarrivesattheserverfront-endandasksforaspe- according to the GRM’s decisions. The architecture also cificamountofthisresource.Givenourfocusoncharacter- permitssplittingoftheGRMsintomultiplelevels,eachre- izingthequalitativeadvantagesofsharingagreements,we sponsibleforasubsetoftheLRMs. assumeaprioriknowledgeofper-requestresourcerequire- ments using a simple linear model based on the response 4 CaseStudy: ResourceSharingamongISP- length: arequestproducingaresponseoflengthxrequires levelWebProxies server resources a bx (in the experiments reported here a 01 seconds an+d b 10 6 seconds; also to avoid ex- (cid:0) To assess the advantages of resource sharing under tre=mel:y long response l=engths from causing spikes in the agreementsandstudytheeffectsofvariousparameterssuch waitingtime,wesetthemaximumresourcesneededperre- as the density of agreements and the impact of transitive quest to be c 30 seconds). When the resource require- agreements, we report on results obtained using a trace- mentsofreque=stsqueuedupataproxy’sfront-endexceeda basedweb proxysimulator that simulates the workloadof threshold,theglobalschedulerisconsultedtodecidehowto a group of ISP-level web proxies that cooperate using re- redirectsomeoftheserequeststootherproxies.Thesched- source sharing agreements. This simulator models an in- ulermakesresourceallocationdecisionsenforcingtheshar- frastructureforsharedaccesstoweb-basedservicesthatis ingagreementsbysolvingthelinearsystemdescribedinthe beingcurrentlydevelopedbytheauthors. previoussection. 6 Theinputtothesimulatorisaper-proxystreamofHTTP d) 250 n ISP-4, Complete, Level=9, Share=0.1 requests, generated from the UC Berkeley Home IP Web o c e Traces[1]. Thetrace consists of18days’worthof HTTP e (s 200 tracescollectedinNovember1996andcomprisesmorethan m Ti Gap=900 9millionreferences. Intheresultsreportedhere,wefocus g 150 n Gap=1800 ona24-hourperiodobtainedbyaveragingthe18-daytrace. aiti Gap=3600 W 100 4.2 SimulationResults 50 ThesolidlineinFigure5showstheaveragenumberof 0 requestsduringeach10-minuteslotofa24-hourperiod.We 0 2 4 6 8 10 12 14 16 18 20 22 canseethattheproxyloadisheaviestaroundmidnightand Time of the Day (hour) lightestaroundtheearlymorninghours. Thedottedlinein Figure5showstheaveragewaitingtimeofarequestforthe Figure 6. Averagewaiting time per request with re- simulator parameters above; as expected the waiting time source sharing for different “gap” times (skew) be- peaksatmidnightwithsomerequestsseeingawaittimeof tween the proxyservers. The agreementstructureis asmuchas250seconds. a complete graph between 10 servers: each server shares10%ofitsresourceswitheveryotherserver. quests 56000000 235000 econd) e s # of R 4000 200 Time ( Figure 7 compares this performance with the average g waiting time obtained when sharing is disabled, but the 3000 # of Requests 150 aitin proxyserverhasmoreprocessingpower(correspondingto W 2000 Waiting Time 100 an increased capacity investment). We can see that 25%- 35%moreresourcesarerequiredtomatchtheperformance 1000 50 obtainedbyresourcesharing. 0 0 0 2 4 6 8 10 12 14Tim1e6 of1 t8he D20ay 2(h2our) ond) 4.5 Sharing with 100% capacity ec 4 No sharing with 125% capacity s e ( 3.5 No sharing with 135% capacity Figure5.Thenumberofrequestsandaveragewaiting m Ti 3 timeperrequestwithoutresourcesharing. g n 2.5 Waiti 2 1.5 1 0.5 Overall Benefits of Resource Sharing Because the re- 0 questtrafficfluctuateswithtime,ifdifferentISPsareindif- 0 2 4 6 8 10 12 14 16 18 20 22 ferentgeographicalregionssothattheyexhibit“rushhour” Time of the Day (hour) behavior at different times, busy servers can benefit from resourcesharingbyborrowingresourcesfromidleservers. Figure7.Averagewaitingtimeswithandwithoutre- Figure6showstheimpactofresourcesharingagreements sourcesharingfordifferentprocessingcapabilitiesof betweenagroupof10ISPsontheaveragewaitingtimeofa theproxyserver. clientrequestataparticularISP,parameterizedfordifferent amounts of time skew between the client request streams. The agreement structure is a complete graph where each servershares10%ofits resourceswitheveryotherserver. Figure6showsthatwithagapof3600seconds,theaverage Benefits from Transitive Agreements The formulation waitingtimedropsdramaticallyfrom250secondstobelow inSection3automaticallyfactorsintransitiveagreements. 2seconds,demonstratingtheabilityofsharingagreements Toassesstheadditionalperformancebenefitsthatcanresult tospreadoutpeakloads. fromdoing so, we comparethe average waiting times ob- 7 tainedfordifferentlevelsoftransitivityinthecontextoftwo When three or more levels of transitive agreementsare agreementstructures,acompletegraphandacyclicloop.In considered, the worst-case waiting time drops to about 2 thelatter,eachserveronlyhasasharingagreementwithone seconds in all three configurations. This shows that fac- otherserver. Consideringonlyasingleleveloftransitivity toring in transitive agreements yields robust performance. isequivalenttoenforcingonlydirectagreements. Note that considering longer chains of agreements yields small incremental benefit because of an exponential de- crease in the amount of resources accessible along the ond) 3 ISP-4, Complete, Gap=3600, Share=0.1 chain. c se2.5 Time ( 2 Level=1 ond)40 ISP-4, Loop Skip=1, Gap=3600, Share=0.8 Waiting 1.5 Level=9 me (sec3305 1 g Ti25 Level=1 0.5 Waitin20 LLeevveell==93 15 0 10 0 2 4 6 8 10 12 14 16 18 20 22 Time of the Day (hour) 5 0 0 2 4 6 8 10 12 14 16 18 20 22 Figure8.Averagewaitingtimewithresourcesharing Time of the Day (hour) for different levels of transitivity of indirect agree- ments for a complete graph agreement structure be- Figure9.Averagewaitingtimewithresourcesharing tween 10ISPs whereeach ISP shares10% ofits re- for different levels of transitivity of indirect agree- sourcewitheveryotherISP. ments where the agreement structure is a loop with each ISP sharing 80% of its resources with the next one. The sharing neighbors are one-hour time zone Figure8showsthatinthecompletegraphcase,resource away. sharinghelps butthe incrementalimprovementbyconsid- ering indirect transitive agreements is small. This is ex- plainedbythefactthatalloftheserversarealreadyreach- ablefromtherequestingserverusingdirectagreements.To furtherinvestigatethebenefitsoftransitiveagreements,we EffectofRedirectionCost Thepreviousresultsassumed ran the simulator on three different loop based agreement that the cost of redirection was negligible. Here we con- structures. Figures 9, 10 and 11 show the results of these sider the impact on waiting time when each redirectedre- simulations. questmustincurafixedoverheadthatiseither0.1seconds Although all three agreement structures are loops in or0.2seconds. Thesecostsareapproximatelythesameas which each ISP has sharing agreement with one neighbor or double the average processing time. Figure 12 shows ISP,thetimezonegap(skip)betweentwoneighborsaredif- that in the completeagreementgraphcase, the addedcost ferentinthesethreeconfigurations.Aswecanseefromthe hasnegligibleimpactontheaveragewaitingtime. Thisis figures,thisdifferenceisreflectedinthesimulationresults. because only a small numberof requests (less than 1.5%) Theworst-case waiting time whenconsideringonlydirect areredirected. Evenat peaktime,this amountis less than agreements (level=1) is 35 seconds in Figure 9 (skip=1), 6%.Althoughthewaitingtimeoftheserequestshassignif- anddroppedto7secondsinFigure10(skip=3)andfurther icantpenalty,thispenaltyhasnegligibleimpactonoverall to about3 secondsin Figure11(skip=7). This is because performance. Even for the particular request, the redirec- of the skip value of the loop structure. By enforcingonly tionpaysoffbecausewithoutredirectionthisrequestwould directagreements,eachISPcanonlyuseresourcesfromits suffer much longer delay. Thus, the benefits of resource neighbor. In Figure9, the neighboris only one-hourtime sharingcancompensateforfairlylargeoverheadsinusing zone away for each ISP, it is quite likely that once one of remoteresources. themisbusyanotheristoo.ThusoneISPmaynotgetmuch helpfromitsneighbor. Bypushingdirectneighborfurther Centralizedv.s. End-pointEnforcement Weuseacen- as in Figure 10 and Figure 11, we can get more resource tralized resource manager and linear programming model sharingbenefits. toscheduleresourcesinthebeliefthatwithglobalresource 8 d) 2.5 n ISP-4, Complete, Gap=3600, Level=9 o c e e (s 2 m Cost=0 ond) 8 ISP-4, Loop Skip=3, Gap=3600, Share=0.8 ng Ti 1.5 CCoosstt==00..12 e (sec 67 Waiti 1 m g Ti 5 LLeevveell==13 0.5 n aiti 4 Level=9 W 0 3 0 2 4 6 8 10 12 14 16 18 20 22 2 Time of the Day (hour) 1 0 Figure 12. Average waiting time of a request as a 0 2 4 6 8 10 12 14 16 18 20 22 functionofredirectioncost. Time of the Day (hour) Figure10.Averagewaitingtimeswithresourceshar- ingfordifferentlevelsoftransitivityofindirectagree- availabilityinformationandalltransitiveagreementsinfor- mentswheretheagreementstructureisaloopandthe mationinhand,thecentralizedresourceschedulercanmake sharingneighborsarethree-hourtimezoneaway. betterdecisionsandimprovetheresourceutilization.Inthis experiment,wecomparetheperformanceusingourscheme withaschemewhereagreementenforcementisperformed attheendpoints. Theagreementstructureisacompletegraphwhereeach ISP shares 20% of its resources with neighbors one-hour time zone away, 10% with neighbors two-hour time zone away,5%withthosethreehoursawayand3%withfurther neighbors.Thebasicschemeweusedredistributesrequests queuedupataproxy’sfront-endtoallotherISPs.Thenum- nd) 3.5 berofrequestsredistributedisproportionaltothequantity o ISP-4, Loop Skip=7, Gap=3600, Share=0.8 ec 3 of sharing agreements with other ISPs. Therefore, when s e ( anISPisbusy,ittendstoredirectmorerequeststonearby Tim 2.5 Level=1 ISPsthanfarawayISPs. Figure13showsacomparisonof ng 2 Level=3 thisschemewiththelinearprogrammingschemediscussed Waiti 1.5 Level=9 inSection3. As we can see fromthe figure, the linear programming 1 scheme reduces the average waiting time by more than 0.5 50% at traffic peak time. This is because the non-linear schemetendstoredistributerequeststonearbyISPsnomat- 0 terwhethertheyarebusyornot,whilelinearprogramming 0 2 4 6 8 10 12 14 16 18 20 22 Time of the Day (hour) schemetakesboththeresourceavailabilitystatusandshar- ingagreementsintoaccountwhenmakingdecisions. Figure11.Averagewaitingtimeswithresourceshar- ingfordifferentlevelsoftransitivityofindirectagree- 5 Related Work mentswheretheagreementstructureisaloopandthe sharingneighborsareseven-hourtimezoneaway. Mostlarge-scaleresourcemanagementinfrastructuresin currentuseprovideonlylimitedsupportforexpressingand enforcingresourcesharingagreements. Condor[11],adistributedbatchprocessingsystemrun- ning on a cluster of of workstations, permits agreements between entities to be specified as part of the requirement 9

See more

The list of books you might like

Most books are stored in the elastic cloud where traffic is expensive. For this reason, we have a limit on daily download.