CalvinFS: Consistent WAN Replication and Scalable Metadata Management for Distributed File Systems

Alexander Thomson (Google†) and Daniel J. Abadi (Yale University)
https://www.usenix.org/conference/fast15/technical-sessions/presentation/thomson

This paper is included in the Proceedings of the 13th USENIX Conference on File and Storage Technologies (FAST '15), February 16–19, 2015, Santa Clara, CA, USA. ISBN 978-1-931971-201. Open access to the Proceedings of the 13th USENIX Conference on File and Storage Technologies is sponsored by USENIX.

†This work was done while the author was at Yale.

Abstract

Existing file systems, even the most scalable systems that store hundreds of petabytes (or more) of data across thousands of machines, store file metadata on a single server or via a shared-disk architecture in order to ensure consistency and validity of the metadata.

This paper describes a completely different approach to the design of replicated, scalable file systems, which leverages a high-throughput distributed database system for metadata management. This results in improved scalability of the metadata layer of the file system, as file metadata can be partitioned (and replicated) across a shared-nothing cluster of independent servers, and operations on file metadata are transformed into distributed transactions.

In addition, our file system is able to support standard file system semantics—including fully linearizable random writes by concurrent users to arbitrary byte offsets within the same file—across wide geographic areas. Such high-performance, fully consistent, geographically distributed file systems do not exist today.

We demonstrate that our approach to file system design can scale to billions of files and handle hundreds of thousands of updates and millions of reads per second—while maintaining consistently low read latencies. Furthermore, such a deployment can survive entire datacenter outages with only small performance hiccups and no loss of availability.

1 Introduction

Today's web-scale applications store and process increasingly vast amounts of data, imposing high scalability requirements on cloud data storage infrastructure.

The most common mechanism for maximizing availability of data storage infrastructure is to replicate all data storage across many commodity machines within a datacenter, and then to keep hot backups of all critical system components on standby, ready to take over in case the main component fails.

However, natural disasters, configuration errors, hunters, and squirrels sometimes render entire datacenters unavailable for spans of time ranging from minutes to days [13, 15]. For applications with stringent availability requirements, replication across multiple geographically separated datacenters is therefore essential.

For certain classes of data storage infrastructure, significant strides have been made in providing vastly scalable solutions that also achieve high availability via WAN replication. For example, replicated block stores, where blocks are opaque, immutable, and entirely independent objects, are fairly easy to scale and replicate across datacenters, since they generally do not need to support multi-block operations or any kind of locality of access spanning multiple blocks. NoSQL systems such as Cassandra [12], Dynamo [8], and Riak [2] have also managed to achieve both scale and geographical replication, albeit through reduced replica consistency guarantees. Even some database systems, such as the F1 system [19] that Google built on top of Spanner [7], have managed to scalably process SQL queries and ACID transactions while replicating across datacenters.
Unfortunately, file systems have not achieved the same level of scalable, cross-datacenter implementation. While many distributed file systems have been developed to scale to clusters of thousands of machines, these systems do not provide WAN replication in a manner that allows continuous operation in the event of a full datacenter failure, due to the difficulties of providing expected file system semantics and tools (linearizable operations, hierarchical access control, standard command-line tools, etc.) across geographical distances.

In addition to lacking support for geographical replication, modern file systems—even those known for scalability—utilize a fundamentally unscalable design for metadata management in order to avoid the high synchronization costs necessary to maintain traditional file system semantics for file and directory metadata, including hierarchical access control and linearizable writes. Hence, while they are able to store hundreds of petabytes of data (or more) by leveraging replicated block stores to hold the contents of files, they rely on an assumption that the average file size is very large, while the number of unique files and directories is comparatively small. They therefore run into problems handling large numbers of small files, as the system becomes bottlenecked by the metadata management layer [20, 27].

In particular, most modern distributed file systems use one of two synchronization mechanisms to manage metadata access:

• A special machine dedicated to storing and managing all metadata. GFS, HDFS, Lustre, Gluster, Ursa Minor, Farsite, and XtreemFS are examples of file systems that take this approach [10, 21, 18, 1, 3, 4, 11]. The scalability of such systems is clearly fundamentally bottlenecked by the metadata management layer.

• A shared-disk abstraction that coordinates all concurrent access. File systems that rely on shared disk for synchronization include GPFS, PanFS, and xFS [17, 26, 22]. Such systems generally replicate data across multiple spindles for fault tolerance. However, they typically rely on extremely low (RAID-local or rack-local) synchronization latencies between replicated disks in order to efficiently expose a unified disk address space. Concurrent disk accesses by multiple clients are synchronized by locking, introducing performance limitations for hot files [17]. Introducing WAN-scale synchronization latencies into lock-hold durations would significantly increase the severity of these limitations.

In this paper, we describe the design of a distributed file system that is substantially different from any of the above-cited file systems. Our system is most distinguished by its metadata management layer, which horizontally partitions and replicates file system metadata across a shared-nothing cluster of servers spanning multiple geographic regions. File system operations that potentially span multiple files or directories are transformed into distributed transactions and processed via the transaction scheduling and replication management layer of an extensible distributed database system, in order to ensure proper coordination of linearizable updates.

Due to the uniqueness of our design, our system, which we call CalvinFS, has a different set of advantages and disadvantages relative to traditional distributed file systems. In particular, our system can handle a nearly unlimited number of files, and can support fully linearizable random writes by concurrent users to arbitrary byte offsets within a file that is consistently replicated across wide geographic areas—neither of which is possible in the above-cited file system designs. However, our system is optimized for operations on single files. Multiple-file operations require distributed transactions, and while our underlying database system can handle such operations at high throughput, the latency of such operations tends to be larger than in traditional distributed file systems.

2 Background: Calvin

As described above, we horizontally partition metadata for our file system across multiple nodes, and file operations that need to atomically edit multiple metadata elements are run as distributed transactions. We extended the Calvin database system to implement our metadata layer, since Calvin has proven to be able to achieve consistent geo-replication and linear distributed transaction scalability to hundreds of thousands of transactions per second across hundreds of machines per replica, even under relatively high levels of lock contention [24]. The remainder of this section provides a brief overview of Calvin's architecture and execution protocol.

A Calvin deployment consists of three main components: a transaction request log, a storage layer, and a scheduling layer. Each of these components provides a clean interface, with implementations that can be swapped in and out. The log stores a global, totally-ordered sequence of transaction requests. Each transaction request in the log represents a read-modify-write operation on the contents of the storage layer; the particular implementation of the storage layer, plus any arguments logged with the request, defines the semantics of the operation. The scheduling layer has the job of orchestrating the (concurrent) execution of logged transaction requests in a manner that is equivalent to a deterministic serial execution in exactly the order in which they appear in the log.

For each of these three components, we describe here the specific implementation that we used in the metadata subsystem of CalvinFS.

Log

The log implementation we used consists of a large collection of "front-end" servers, an asynchronously replicated distributed block store, and a small group of "meta-log" servers. Clients append requests to the log by sending them to a front-end server, which batches them with other incoming requests and writes the batch to the distributed block store. Once the batch is sufficiently replicated in the block store (say, 2 out of 3 replicas have acked the write), the front-end server sends the batch's unique block id to a meta-log server. The meta-log servers, which are typically distributed across multiple datacenters, maintain a Paxos-replicated "meta-log" containing a sequence of block ids referencing request batches. The state of the log at any time is considered to be the concatenation of batches in the order specified by the meta-log.
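To make the division of labor concrete, here is a minimal Python sketch of this append path. The block-store and meta-log client objects, the quorum size, and the batching policy are illustrative assumptions rather than CalvinFS's actual interfaces:

import time, uuid

BATCH_BYTES = 1 << 20          # flush a batch at ~1 MB (illustrative threshold)
BATCH_TIMEOUT_SECS = 0.01      # ...or after a short delay

class LogFrontEnd:
    """Batches client requests, durably stores each batch, then publishes its id."""

    def __init__(self, block_store, meta_log, replicas=3, write_quorum=2):
        self.block_store = block_store    # asynchronously replicated block store (assumed client)
        self.meta_log = meta_log          # Paxos-replicated meta-log (assumed client)
        self.replicas = replicas
        self.write_quorum = write_quorum
        self.batch = []
        self.batch_started = time.time()

    def append(self, request_bytes):
        self.batch.append(request_bytes)
        if (sum(len(r) for r in self.batch) >= BATCH_BYTES
                or time.time() - self.batch_started >= BATCH_TIMEOUT_SECS):
            self._flush()

    def _flush(self):
        batch_id = uuid.uuid4().hex                       # globally unique batch block id
        payload = b"".join(self.batch)
        # Write the batch to the block store and wait for a write quorum of acks.
        acks = self.block_store.write(batch_id, payload, replicas=self.replicas)
        assert acks >= self.write_quorum
        # Only the small batch id goes through Paxos; the bulk data does not.
        self.meta_log.append(batch_id)
        self.batch, self.batch_started = [], time.time()

def read_log(block_store, meta_log):
    """The log's state is the concatenation of batches in meta-log order."""
    for batch_id in meta_log.entries():
        yield block_store.read(batch_id)

The point of the two-tier structure is that only small batch ids pass through the Paxos-replicated meta-log, while the bulk request data is replicated by the cheaper, asynchronously replicated block store.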
Storage Layer

The storage layer encapsulates all knowledge about physical datastore organization and actual transaction semantics. It consists of a collection of "storage nodes", each of which runs on a different machine in the cluster and maintains a shard of the data. Valid storage layer implementations must include (a) read and write primitives that execute locally at a single node, and (b) a placement manager that determines, for given input arguments, at which storage nodes these primitives must be run. Compound transaction types may also be defined that combine read/write primitives with arbitrary deterministic application logic. Each transaction request that appears in the log corresponds to a primitive operation or a compound transaction. Primitives and transactions may return results to clients upon completion, but their behavior may not depend on any inputs other than the arguments logged with the request and the current state of the underlying data (as determined by read primitives) at execution time.

The storage layer for CalvinFS metadata consists of a multiversion key-value store at each storage node, plus a simple consistent hashing mechanism for determining data placement. The compound transactions implemented by this storage layer are described in Section 5.1.
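As a concrete illustration of this contract, here is a minimal Python sketch of a multiversion key-value shard plus a hash-based placement function. The class and function names are assumptions for illustration; the production storage layer is, of course, more elaborate:

import hashlib
from collections import defaultdict

class MultiversionKVShard:
    """One storage node's shard: every write creates a new version of a key,
    and reads can ask for the latest version at or before a given timestamp."""

    def __init__(self):
        self.versions = defaultdict(list)   # key -> list of (ts, value), in write order

    def write(self, key, value, ts):
        # Assumes timestamps are assigned in increasing order per key.
        self.versions[key].append((ts, value))

    def read(self, key, ts=float("inf")):
        latest = None
        for version_ts, value in self.versions[key]:
            if version_ts <= ts:
                latest = value
        return latest

def placement(key, node_ids):
    """Placement manager: deterministically map a key to one storage node
    (simplified hash partitioning rather than a full consistent-hash ring)."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return node_ids[h % len(node_ids)]

Keeping old versions around is what later allows the snapshot-read modes described in Section 3 to be served without blocking behind in-flight updates.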
Scheduler

Each storage node has a local scheduling layer component (called a "scheduler") associated with it, which drives local transaction execution.

The scheduling layer takes an unusual approach to pessimistic concurrency control. Traditional database systems typically schedule concurrent transaction execution by checking the safety of each read and write performed by a transaction immediately before that operation occurs, pausing execution as needed (e.g., until an earlier transaction releases an already-held lock on the target record). Each Calvin scheduler, however, examines a transaction before it begins executing at all, decides when it is safe to execute the whole transaction based on its read/write sets (which can be discovered automatically or annotated by the client), and then hands the transaction request to the associated storage node, which is free to execute it with no additional oversight.

Calvin's scheduler implementation uses a protocol called deterministic locking, which resembles strict two-phase locking, except that transactions are required to request all locks that they will need in their lifetimes atomically, and in the relative order in which they appear in the log. This protocol is deadlock-free and serializable, and furthermore ensures that execution is equivalent not only to some serial order, but to a deterministic serial execution in log order.

All lock management is performed locally by a scheduler, and schedulers track lock requests only for data that resides at the associated storage node (according to the placement manager). When transactions access records spanning multiple machines, Calvin forwards the entire transaction request to all schedulers guarding relevant storage nodes. At each participating scheduler, once the transaction has locked all local records, the transaction proceeds to execute using the following protocol:

1. Perform all local reads. Read all records in the transaction's read-set that are stored at the local storage node.

2. Serve remote reads. Forward each local read result from step 1 to every other participant.

3. Collect remote read results. Wait to receive all messages sent by other participants in step 2. (If all read results are not received within a specified timeframe, send additional requests to the participants to get the results. If there is still no answer, also send requests to other replicas of the unresponsive participant.)

4. Execute transaction to completion. Once all read results have been received, execute the transaction to completion, applying writes that affect records in the local storage node, and silently dropping writes to data that is not stored locally (since those writes will be applied by the other participants).

Upon completion, the transaction's locks are released and the results are sent to the client that originally submitted the transaction request.

A key characteristic of the above protocol is the lack of a distributed commit protocol for distributed transactions. This is a result of the deterministic nature of transaction processing—any failed node can recover its state by loading a recent data checkpoint and then replaying the log deterministically. Therefore, double-checking that no node failed over the course of processing the transaction is unnecessary. The lack of a distributed commit protocol, combined with the deadlock-free property of the scheduling algorithm, greatly improves the scalability of the system and reduces latency.
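The four steps above can be sketched from the point of view of a single participant. The messaging helpers (send_to_participants, receive_read_results), the txn object, and the local shard interface are hypothetical stand-ins, not Calvin's real API:

def execute_at_participant(txn, shard, local_keys,
                           send_to_participants, receive_read_results):
    """Run one participant's part of a distributed transaction (sketch).

    txn.read_set / txn.write_set: full key sets, as annotated in the log.
    local_keys: the subset of keys stored at this node, per the placement manager.
    """
    # Step 1: perform all local reads.
    local_reads = {k: shard.read(k) for k in txn.read_set if k in local_keys}

    # Step 2: serve remote reads by broadcasting local results to the other participants.
    send_to_participants(txn.id, local_reads)

    # Step 3: collect remote read results until the full read-set is available
    # (re-requesting or failing over to other replicas on timeout, as in the text).
    reads = dict(local_reads)
    while set(reads) != set(txn.read_set):
        reads.update(receive_read_results(txn.id))

    # Step 4: execute the deterministic logic; apply only locally stored writes.
    for key, value in txn.apply(reads).items():    # txn.apply returns the full write-set
        if key in local_keys:
            shard.write(key, value)
        # Writes to non-local keys are dropped; their owners apply them themselves.

Note that there is no commit round at the end: each participant applies its local writes as soon as it holds the full read-set, which is exactly the absence of a distributed commit protocol discussed in the preceding paragraph.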
2.1 OLLP

Certain file system operations—notably recursive moves, renames, deletes, and permission changes on non-empty directories—were implemented by bundling together many built-in transactions into a single compound transaction. It was not always possible to annotate these compound transaction requests with their full read- and write-sets (as required by Calvin's deterministic scheduler) at the time the recursive operation was initiated. In these cases, we made use of Calvin's Optimistic Lock Location Prediction (OLLP) mechanism [24], as we describe further in Section 5.2.

With OLLP, an additional step is added to the transaction execution pipeline: all transaction requests go through an Analyze phase before being appended to the log. The purpose of the Analyze phase is to determine the read- and write-sets of the transaction. Stores can implement custom Analyze logic for classes of transactions whose read- and write-sets can be statically computed from the arguments supplied by the client, or the Analyze function can simply do a "dry run" of the transaction execution without applying any writes. In general, this is done at no isolation, and only at a single replica, to make it as inexpensive as possible.

Once the Analyze phase is complete, the transaction is appended to the log, and it can then be scheduled and executed to completion. However, it is possible for a transaction's read- and write-sets to grow between the Analyze phase and the actual execution (called the Run phase) due to changes in the contents of the datastore. In this case, the worker executing the Run phase notices that the transaction is attempting to read or write a record that did not appear in its read- or write-set (and which was therefore not locked by the scheduler and cannot be safely accessed). It then aborts the transaction and returns an updated read-/write-set annotation to the client, who may then restart the transaction, this time skipping the Analyze phase.
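The resulting Analyze/Run retry loop can be sketched as follows; the names analyze, run, append, and ReadWriteSetGrew are hypothetical placeholders for whatever the real pipeline exposes:

class ReadWriteSetGrew(Exception):
    """Raised in the Run phase when a record outside the locked read-/write-set is touched."""
    def __init__(self, updated_annotation):
        self.updated_annotation = updated_annotation

def submit_with_ollp(txn, store, log):
    # Analyze phase: cheap dry run (no isolation, single replica) to predict lock locations.
    annotation = store.analyze(txn)            # predicted read- and write-sets
    while True:
        log.append(txn, annotation)            # scheduler locks exactly these keys, in log order
        try:
            return store.run(txn, annotation)  # Run phase
        except ReadWriteSetGrew as e:
            # The datastore changed between Analyze and Run (e.g., a file appeared in a
            # subtree); deterministically abort and retry with the corrected annotation,
            # skipping the Analyze phase this time.
            annotation = e.updated_annotation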
3 CalvinFS Architecture

CalvinFS was designed for deployments in which file data and metadata are both (a) replicated with strong consistency across geographically separated datacenters and (b) partitioned across many commodity servers within each datacenter. CalvinFS therefore simultaneously addresses the availability and scalability challenges described above—while providing standard, consistent file system semantics.

We engineered CalvinFS around certain additional goals and design principles:

Main-memory metadata store. Current metadata entries for all files and directories must be stored in main memory across a shared-nothing cluster of machines.

Potentially many small files. The system must handle billions of distinct files.

Scalable read/write throughput. Read and write throughput capacity must scale near-linearly and must not depend on the replication configuration.

Tolerating slow writes. High update latencies that accommodate WAN round-trips for the purposes of consistent replication are acceptable.

Linearizable and snapshot reads. When reading a file, clients must be able to specify one of three modes, each with different latency costs:

• Full linearizable read. If a client requires fully linearizable read semantics when reading a file, the read may be required to go through the same log-ordering process as any update operation.

• Very recent snapshot read. For many clients, very low-latency reads of extremely recent file system snapshots are preferable to higher-latency linearizable reads. We specifically optimize CalvinFS for this type of read operation, allowing for up to 400 ms of staleness. (Note that this applies only to read-only operations. Read-modify-write operations on metadata—such as permissions checks before writing to a file—are always linearizable.)

• Client-specified snapshot read. Clients can also specify explicit version/timestamp bounds on snapshot reads. For example, a client may choose to limit staleness to make sure that a recent write is reflected in a new read, even if this requires blocking until all earlier writes are applied at the replica at which the read is occurring. Or a client may choose to perform snapshot read operations at a historical timestamp for the purposes of auditing, restoring a backup, or other historical analyses. Since only current metadata entries for each file/directory are pinned in memory at all times, it is acceptable for historical snapshot reads to incur additional latency when digging up now-defunct versions of metadata entries.

Hash-partitioned metadata. Hash partitioning of file metadata based on the full file path is preferable to range- or subtree-partitioning, because it typically provides better load balancing and simplifies data placement tracking. Nonetheless, identifying the contents of a directory should only require reading from a single metadata shard (see the placement sketch after this list).

Optimize for single-file operations. The system should be optimized for operations that create, delete, modify, or read one file or directory at a time. (Note that this still involves many distributed transactions. For example, creating or deleting a file also updates its parent directory.) Recursive metadata operations such as directory copies, moves, deletes, and owner/permission changes must be fully supported (and should not require data block copying), but the metadata subsystem need not be optimized for such operations.
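A minimal sketch of the placement rule implied by the hash-partitioned-metadata principle, under an assumed shard count and hash function: the shard for any entry is a pure function of its full absolute path, while listing a directory still reads only that directory's own entry (because, as Section 5 describes, a directory entry names its immediate children).

import hashlib

NUM_METADATA_SHARDS = 64   # illustrative; real deployment sizes vary

def metadata_shard(path, num_shards=NUM_METADATA_SHARDS):
    """Hash-partition metadata entries by full absolute path."""
    digest = hashlib.sha1(path.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Listing a directory touches one entry, on one shard:
print(metadata_shard("/home/calvin/fs"))            # shard holding the directory entry
# Reading a child's own metadata may hit a different shard:
print(metadata_shard("/home/calvin/fs/paper.tex"))  # likely a different shard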
CalvinFS stores file contents in a non-transactional distributed block store analogous to the collection of chunkservers that make up a GFS deployment. We use our extension of Calvin described above to store all metadata, track directory namespaces, and map logical files to the blocks that store their contents.

Both the block store and the metadata store are replicated across multiple datacenters. In our evaluation, we used three physically separate (and geographically distant) datacenters, so our discussion below assumes this type of deployment and refers to each full system replica as being in its own datacenter. However, the replication mechanisms discussed here can just as easily be used in deployments within a single physical datacenter by dividing it into multiple logical datacenters.

As with GFS and HDFS, clients access a CalvinFS deployment not via kernel mounting, but via a provided client library, which provides standard file access APIs and file utilities [10, 21]. No technical shortcoming prevents CalvinFS from being fully mountable, but implementation of this functionality remains future work.

4 The CalvinFS Block Store

Although the main focus of our design is metadata management, certain aspects of CalvinFS's block store affect the metadata entry format and therefore warrant discussion. Most of these decisions were made to simplify the tasks of implementing, benchmarking, and describing the system; other designs of scalable block stores would also work with CalvinFS's metadata architecture.

4.1 Variable-Size Immutable Blocks

As in many other file systems, the contents of a CalvinFS file are stored in a sequence of zero or more blocks. Unlike most others, however, CalvinFS does not set a fixed block size—blocks may be anywhere from 1 byte to 10 megabytes. A 1-GB file may therefore legally consist of anywhere from one hundred to one billion blocks, although steps are taken to avoid the latter case.

Furthermore, blocks are completely immutable once written. When appending data to a file, CalvinFS does not append to the file's final block—rather, a new block containing the appended data (but not the original data) is written to the block store, and the new block's ID and size are added to the metadata entry for the file.

4.2 Block Storage and Placement

Each block is assigned a globally unique ID and is assigned to a block "bucket" by hashing its ID. Each bucket is then assigned to a certain number of block servers (analogous to GFS chunkservers [10]) at each datacenter, depending on the desired replication factor for the system. Each block server stores its blocks in files on its local file system.

The mapping of buckets to block servers is maintained in a global Paxos-replicated configuration file and changes only when needed due to hardware failures, load balancing, the addition of new machines to the cluster, and other global configuration changes. Every CalvinFS node also caches a copy of the bucket map. This allows any machine to quickly locate a particular block by hashing its GUID to find the bucket, then checking the bucket map to find which block servers store that bucket. In the event that a configuration change causes this cached table to return stale data, the node will fail to find the bucket at the specified server, query the configuration manager to update its cached table, and then retry.

In the event of a machine failure, each bucket assigned to the failed machine is reassigned to a new machine, which copies its blocks from a non-failed server that also stored the reassigned bucket.

To avoid excessive fragmentation, a background process periodically scans the metadata store and compacts files that consist of many small blocks. Once a compacted file is asynchronously re-written to the block store using larger blocks, the metadata is updated—as long as the file contents haven't changed since the compaction process began. If that part of the file has changed, the newly written block is discarded and the compaction process restarts for the file.
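The block-read path that this design implies might be sketched as follows; bucket_of mirrors the hash-to-bucket step, while the bucket map, configuration manager, and block-server client are hypothetical stand-ins for the Paxos-replicated configuration file and its consumers:

import hashlib

NUM_BUCKETS = 1024   # illustrative

def bucket_of(block_id: int, num_buckets: int = NUM_BUCKETS) -> int:
    """Hash a block's globally unique id to its bucket."""
    return int(hashlib.sha1(block_id.to_bytes(8, "big")).hexdigest(), 16) % num_buckets

def fetch_block(block_id, cached_bucket_map, config_manager, connect):
    """Locate and read a block, refreshing the cached bucket map if it is stale."""
    bucket = bucket_of(block_id)
    for attempt in range(2):                       # at most one refresh-and-retry
        for server in cached_bucket_map.servers_for(bucket):
            data = connect(server).read_block(block_id)
            if data is not None:                   # found it
                return data
        # None of the listed servers had the bucket: the cached map must be stale,
        # so ask the configuration manager for the current Paxos-replicated map.
        cached_bucket_map = config_manager.latest_bucket_map()
    raise KeyError(f"block {block_id} not found in bucket {bucket}")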
5 CalvinFS Metadata Management

The CalvinFS metadata manager logically contains an entry for every version (current and historical) of every file and directory to appear in the CalvinFS deployment. The metadata store is structured as a key-value store, where each entry's key is the absolute path of the file or directory that it represents, and its value contains the following:

• Entry type. Specifies whether the entry represents a file or a directory. (Although we see no major technical barrier to supporting linking in CalvinFS, adding support for soft and hard links remains future work.)

• Permissions. CalvinFS uses a mechanism to support POSIX hierarchical access control that avoids full file system tree traversal when checking permissions for an individual file: each metadata entry additionally stores the permissions of all ancestor directories (up through the / directory), in tree-ascending order.

• Contents. For directories, this is a list of the files and sub-directories immediately contained by the directory. For files, this is a mapping of byte ranges in the (logical) file to byte ranges within specific (physical) blocks in the block store. For example, if a file's contents are represented by the first 100 bytes of block X followed by the 28 bytes starting at byte offset 100 of block Y, then the contents would be represented as [(X, 0, 100), (Y, 100, 28)]. (This particular example might come about by the file being created containing 128 bytes in block Y, then having its first 100 bytes overwritten with the contents of block X.)

To illustrate this structure, consider a directory fs in user calvin's home directory, which contains the source files for, say, an academic paper. The calvinfs-ls utility (analogous to ls -lA) yields the following output:

$ calvinfs-ls /home/calvin/fs/
drwxr-xr-x calvin users ... figures
-rw-r--r-- calvin users ... ref.bib
-rw-r--r-- calvin users ... paper.tex

The CalvinFS metadata entry for this directory would be:

KEY:
  /home/calvin/fs
VALUE:
  type: directory
  permissions: rwxr-xr-x calvin users
  ancestor-permissions: rwxr-xr-x calvin users
                        rwxr-xr-x root root
                        rwxr-xr-x root root
  contents: figures ref.bib paper.tex

We see that the entry contains permissions for the directory, plus permissions for three ancestor directories: /home/calvin, /home, and /, respectively. Since the path (in the entry's key) implicitly identifies these directories, they need not be explicitly named in the value part of the field.

Since permissions checks need not access ancestor directory entries, and the contents field names all files and subdirectories contained in the directory, the calvinfs-ls invocation above only needed to read that one metadata entry. Note that unlike POSIX-style ls -lA, however, the command above did not show the sizes of each file. To output those, additional metadata entries have to be read. For example, the metadata entry for paper.tex looks like this:

KEY:
  /home/calvin/fs/paper.tex
VALUE:
  type: file
  permissions: rw-r--r-- calvin users
  ancestor-permissions: rwxr-xr-x calvin users
                        rwxr-xr-x calvin users
                        rwxr-xr-x root root
                        rwxr-xr-x root root
  contents: 0x3A28213A 0 65536
            0x6339392C 0 65536
            0x7363682E 0 34061

Since paper.tex is a file rather than a directory, its contents field contains block ids and byte offset ranges in those blocks. We see here that paper.tex is about 161 KB in total size, and its contents are a concatenation of byte ranges [0, 65536), [0, 65536), and [0, 34061) in three specified blocks in the block store.

Storing all ancestor directories' permissions in each metadata entry eliminates the need for distributed permissions checks when accessing individual files, but comes with a tradeoff: when modifying permissions for a nonempty directory, the new permission information has to be atomically propagated recursively to all descendants of the modified directory. We discuss our protocol for handling such large recursive operations in Section 5.2.
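A hypothetical in-memory rendering of such an entry, together with the arithmetic that maps a logical byte range of a file onto physical (block, offset, length) reads, is sketched below. The field names mirror the listing above; everything else is an illustrative assumption:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MetadataEntry:
    entry_type: str                       # "file" or "dir"
    permissions: str                      # e.g. "rw-r--r-- calvin users"
    ancestor_permissions: List[str]       # tree-ascending: parent, ..., "/"
    contents: List[Tuple[int, int, int]] = field(default_factory=list)
    # for files: (block_id, offset_in_block, num_bytes) pieces, in logical order
    children: List[str] = field(default_factory=list)   # for directories

def resolve(entry: MetadataEntry, start: int, length: int):
    """Translate a logical byte range of a file into block-store reads."""
    reads, logical = [], 0
    for block_id, block_off, size in entry.contents:
        piece_start, piece_end = logical, logical + size
        lo, hi = max(start, piece_start), min(start + length, piece_end)
        if lo < hi:
            reads.append((block_id, block_off + (lo - piece_start), hi - lo))
        logical = piece_end
    return reads

# paper.tex from the example: 65536 + 65536 + 34061 = 165133 bytes (about 161 KB).
paper = MetadataEntry("file", "rw-r--r-- calvin users",
                      ["rwxr-xr-x calvin users", "rwxr-xr-x calvin users",
                       "rwxr-xr-x root root", "rwxr-xr-x root root"],
                      contents=[(0x3A28213A, 0, 65536),
                                (0x6339392C, 0, 65536),
                                (0x7363682E, 0, 34061)])
print(resolve(paper, 65000, 1000))   # spans the first two blocks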
5.1 Metadata Storage Layer

As mentioned above, the metadata management system in CalvinFS is an instance of Calvin with a custom storage layer implementation that includes compound transactions as well as primitive read/write operations. It implements six transaction types:

• Read(path) returns the metadata entry for the specified file or directory.

• Create{File,Dir}(path) creates a new empty file or directory. This updates the parent directory's entry and inserts a new entry for the created file or directory.

• Resize(path, size) resizes a file. If a file grows as a result of a resize operation, all bytes past the previous file length are by default set to 0.

• Write(path, file_offset, source, source_offset, num_bytes) writes a specified number of bytes to a file, starting at a specified offset within the file. The source data written must be a subsequence of the contents of a block in the block store.

• Delete(path) removes a file (or an empty directory). As with the file creation operation, the parent directory's entry is again modified, and the file's entry is removed.

• EditPermissions(path, permissions) edits the permissions of a file or directory, which may include changing the owner and/or group.

Each of these operation types also takes as part of its input the user and group IDs of the caller, and performs the appropriate POSIX-style permissions checking before applying any changes. Any POSIX-style file system interaction can be emulated by composing multiple of these six built-in operations together in a single Calvin transaction.

Three of these six operations (read, resize, write) access only a single metadata entry. Creating or deleting a file or directory, however, touches two metadata entries: the newly created file/directory and its parent directory. Changing permissions of a directory may involve many entries, since all descendants must be updated, as explained above. Since entries are hash-partitioned across many metadata stores on different machines, the create, delete, and change-permissions (of a non-empty directory) operations necessarily constitute distributed transactions.

Other operations, such as appending to, copying, and renaming files, are constructed by bundling together multiple built-in operations to be executed atomically.

5.2 Recursive Operations on Directories

Recursive metadata operations (e.g., copying a directory, changing directory permissions) in CalvinFS use Calvin's built-in OLLP mechanism. The metadata store first runs the transaction at no isolation in Analyze mode to discover the read set without actually applying any mutations. This determines the entire collection of metadata entries that will be affected by the recursive operation by traversing the directory tree starting at the "root" of the operation—the metadata entry for the directory that was passed as the argument to the procedure.

Once the full read/write set is determined, it is added as an annotation to the transaction request, which is repeated in Run mode, during which the directory tree is re-traversed from the root of the operation to check that the read/write set of the operation has not grown (e.g., due to a newly inserted file in a subtree). If the read- and write-sets have grown between the Analyze and Run steps, OLLP (deterministically) aborts the transaction and restarts it again in Run mode with an appropriately updated annotation.
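For instance, the Analyze pass for a recursive permissions change might look like the following sketch, where store.read stands in for the Read primitive above and the traversal follows each directory entry's list of children (an illustration, not the production code):

def analyze_recursive_chmod(store, root_path):
    """Dry-run traversal: collect every metadata entry that a recursive
    permissions change would touch, without applying any mutations."""
    affected, stack = [], [root_path]
    while stack:
        path = stack.pop()
        entry = store.read(path)            # Read primitive; no isolation in Analyze mode
        if entry is None:
            continue                        # may have vanished; Run mode re-verifies
        affected.append(path)
        if entry.entry_type == "dir":
            prefix = path.rstrip("/") + "/"
            stack.extend(prefix + child for child in entry.children)
    return affected                         # becomes the read-/write-set annotation

# Run mode re-traverses the tree and deterministically aborts if this set has
# grown in the meantime, per the OLLP protocol described in Section 2.1.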
6 The Life of an Update

To illustrate how CalvinFS's various components work together in a scalable, fault-tolerant manner, we present the end-to-end process of executing a simple operation—creating a new file and writing a string to it:

echo "import antigravity" > /home/calvin/fly.py

The first step is for the client to submit the request to a CalvinFS front-end—a process that runs on every CalvinFS server and orchestrates the actual execution of client requests, then returns the results to the client.

Write File Data

After receiving the client request, the front-end begins by performing the write: it inserts into CalvinFS's block store a data block containing the data that will be written to the file. The first step here is to obtain a new, globally unique 64-bit block id β from a block store server. β is hashed to identify the bucket that the block will belong to; the front-end then looks up in its cached configuration file the set of block servers that store that bucket, and sends a block write request (β → import antigravity) to each of those block servers.

Once a quorum of the participating block servers (2 out of 3 in this case) have acknowledged to the front-end that they have created and stored the block, the next step is to update the metadata to reflect the newly created file.

Construct Metadata Operation

Since our system does not provide a single built-in operation that both creates a file and writes to it, this operation is actually a compound request specifying three mutations that should be bundled together:

• create file /home/calvin/fly.py
• resize the file to 18 bytes
• write β : [0, 18) to byte range [0, 18) of the file

Once this compound transaction request (let's call it α) is constructed, the front-end is ready to submit it to be applied to the metadata store.

Append Transaction Request to Log

The first step in applying a metadata mutation is for the CalvinFS front-end to append α to the log. The CalvinFS front-end sends the request to a Calvin log front-end, which appends α to its current batch of log entries, which has some globally unique id γ. When batch γ fills up with requests (or after a specified duration), it is written out to the asynchronously replicated block store. Again, the log front-end waits for a majority of block servers to acknowledge its durability, and then does two things: (a) it submits the batch id γ to be appended to the Paxos-replicated meta-log, and (b) it goes through the batch in order, forwarding each transaction request to all metadata shards that will participate in its execution.
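Putting the front-end's side of this together, a simplified sketch of the whole client path might read as follows. The block_store and metadata_log objects, their method names, and the operation encoding are assumptions; the quorum and hashing behavior follow Section 4.2:

def create_and_write(front_end, path, data):
    """Sketch of the front-end path for: echo "..." > path."""
    # 1. Write the file data as an immutable block, replicated three ways.
    beta = front_end.block_store.new_block_id()            # globally unique 64-bit id
    acks = front_end.block_store.write_block(beta, data)   # sent to the bucket's servers
    assert acks >= 2                                        # quorum of 2 out of 3

    # 2. Construct the compound metadata mutation (three bundled built-in ops).
    alpha = [("CreateFile", path),
             ("Resize", path, len(data)),
             ("Write", path, 0, beta, 0, len(data))]

    # 3. Hand the compound request to a Calvin log front-end; from there it is
    #    batched, made durable in the block store, sequenced via the Paxos
    #    meta-log, and forwarded to every participating metadata shard.
    return front_end.metadata_log.append(alpha, caller=front_end.user_ids())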
Apply Update to Metadata Store

Each Calvin metadata shard is constantly receiving transaction requests from various Calvin log front-ends—however, it receives them in a completely unspecified order. Therefore, it also reads new meta-log entries as they are successfully appended, and uses these to sort the transaction requests coming in from all of the log front-ends, forming the precise subsequence of the log containing exactly those transactions in whose execution the shard will participate. Now the sequencer at each metadata storage shard can process requests in the correct order.

Our example update α reads and modifies two metadata records: /home/calvin and /home/calvin/fly.py. Suppose that these are stored on shards P and Q, respectively. Note that each metadata shard is itself replicated multiple times—once in each datacenter in the deployment—but since no further communication is required between replicas to execute α, let us focus on the instantiations of P and Q in a single datacenter (P0 and Q0 in datacenter 0, say).

Both P0 and Q0 receive request α in its entirety and proceed to perform their parts of it. At P0, α requests a lock on record /home/calvin from the local scheduler; at Q0, α requests a lock on /home/calvin/fly.py. At each machine, α only starts executing once it has received its local locks.

Before we walk through the execution of α at P0 and Q0, let us first review the sequence of logical steps that the request needs to complete:

1. Check parent directory permissions. Abort the transaction if /home/calvin does not exist or is not writable.

2. Update parent directory metadata. If fly.py is not contained in /home/calvin's contents, add it.

3. Check file permissions. If the file exists and is not a writable file, abort the transaction.

4. Create file metadata entry. If no metadata entry exists for /home/calvin/fly.py, create one.

5. Resize file metadata entry. Update the metadata entry to indicate a length of 18 bytes. If it was previously longer than 18 bytes, this truncates it. If it was previously shorter (or empty), it is extended to 18 bytes, padded with zeros.

6. Update file metadata entry's contents. Write β : [0, 18) to the byte range [0, 18) of /home/calvin/fly.py, overwriting any previously existing contents in that range.

Note that steps 1 and 2 involve the parent directory metadata entry at P0, while steps 3, 4, 5, and 6 involve only the new file's metadata at Q0. However, steps 4 through 6 depend on the outcome of step 1 (as well as step 3), so P0 and Q0 do need to coordinate in their handling of this mutation request. The two shards therefore proceed as follows:

P0: (i) Check parent directory permissions (step 1). (ii) Send the result (OK or ABORT) to Q0. (iii) If the result was OK, update the parent directory metadata (step 2).

Q0: (i) Check file permissions; abort if not OK (step 3). (ii) Receive the parent-directory permissions check result from P0. (iii) If the received result is OK, perform steps 4 through 6.

Both shards begin with permissions checks (step 1 for P0 and step 3 for Q0). Suppose that both checks succeed. Now P0 sends an OK result message to Q0. Q0 receives the result message, and now both shards execute the remainder of the operation with no further coordination.

Note that we were discussing datacenter 0's P0 and Q0 metadata shards. Metadata shards (P1, Q1), (P2, Q2), etc., in the other datacenters independently follow these same steps. Since each shard deterministically processes the same request sequence from the log, metadata state remains strongly consistent: import antigravity is written to /home/calvin/fly.py identically at every datacenter.

7 Performance Evaluation

CalvinFS is designed to address the challenges of (a) distributing metadata management across multiple machines, and (b) wide-area replication for fault tolerance. In exploring the scalability and performance characteristics of CalvinFS, we therefore chose experiments that explicitly stressed the metadata subsystem to its limits.

WAN replication. All results shown here used deployments that replicated all data and metadata three ways—across datacenters in Oregon, Virginia, and Ireland.

Many small data blocks. In order to test the performance of CalvinFS's metadata store (as opposed to the more easily scalable block storage component), we focused mainly on update-heavy workloads in which 99.9% of files were 1 KB or smaller. Obviously, most real-world file systems typically deal with much larger files; however, by experimenting on smaller files we were able to test the ability of the metadata store to handle billions of files while keeping the cluster size affordably small enough for our experimental budget. Obviously, larger files would require additional horizontal scalability of the block store; however, this is not the focus of our work.

We use this setup to examine CalvinFS's memory usage, throughput capacity, latency, and fault tolerance.

7.1 Experimental Setup

All experiments were run on EC2 High-CPU Extra-Large instances. (Each EC2 High-CPU Extra-Large instance contains 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each, where a unit has the equivalent CPU capacity of a 1.0–1.2 GHz 2007 Opteron or 2007 Xeon processor), and 1690 GB of instance storage.) Each deployment was split equally between AWS's US-West (Oregon), US-East (Virginia), and EU (Ireland) regions. Block and metadata replication factors were set to 3, and buckets, metadata shards, and Paxos group members were placed such that each object (metadata entries, log blocks, data blocks, and Paxos log and meta-log entries) would be stored once in each datacenter.

Each machine served as (a) a block server (containing 30 buckets), (b) a log front-end, and (c) a metadata shard. In addition, one randomly selected machine from each datacenter participated in the Paxos group for the Calvin meta-log. We ran our client load-generation program on the same machines (but it did not use any knowledge about data or metadata placement when generating requests, so very few requests could be satisfied locally, especially in large deployments).

We ran each performance measurement on deployments of seven different sizes: 3, 6, 18, 36, 75, 150, and 300 total machines. As mentioned above, we had a limited budget for running experiments, so we could not exceed 300 machines. However, we were able to store billions of files across these 300 machines by limiting the file size. Our results can be translated directly to larger clusters that have more machines and larger files (and therefore the same total number of files to manage). We compare our findings directly to HDFS performance measurements published by Yahoo researchers [20].

7.2 File Counts and Memory Usage

After creating each CalvinFS deployment, we created 10 million files per machine. File sizes ranged from 10 bytes to 1 MB, with an average size of 1 kB. 90% of files contained only one block, and 99.9% of files had a total size of under 1 kB. Most file names (including full directory paths) were between 25 and 50 bytes long.

We found that total memory usage for metadata was approximately 140 bytes per metadata entry—which is closely comparable to the per-file metadata overhead of HDFS [28]. Unlike HDFS, however, the metadata shards did not store an in-memory table of block placement data, since Calvin uses a coarser-grained bucket placement mechanism instead. We would therefore expect an HDFS-like file system deployment (with ~1 block per file) to require approximately twice the amount of total memory to store metadata (assuming the same level of metadata replication). Of course, by partitioning metadata across machines, CalvinFS requires far less memory per machine.

Our largest deployment—300 machines—held 3 billion files (and therefore 9 billion total metadata entries) in a total of 1.3 TB of main memory. This number of files is far beyond what HDFS can handle [27].

7.3 Throughput Capacity

Next, we examined the throughput capacity (Figure 1) and latency distributions (Figure 2) of reading files, writing to files, and creating files in CalvinFS deployments of varying sizes. For each measurement, we created client applications that issued requests to read files, create files, and write to files—but with different frequencies. For experiments on read throughput, 98% of all client requests were reads, with 1% of operations being file creations and 1% being writes to existing files. Similarly, for write benchmarks, clients submitted 98% write requests, and for append benchmarks, clients submitted 98% append requests. For all workloads, clients chose which files to read, write, and create using a Gaussian distribution.

One key feature of CalvinFS is that throughput is totally unaffected by WAN replication (and the latencies of message passing between datacenters). This is because once a transaction is replicated to all datacenters by the Calvin log component (which happens before request execution begins), no further cross-datacenter communication is required to execute the transaction to completion. Therefore, we only experiment with the three-datacenter case of Oregon, Virginia, and Ireland for this set of experiments—changing datacenter locations (or even removing WAN replication entirely) has no effect on throughput results. Latency, however, is affected by the meta-log Paxos agreement protocol across datacenters, which we discuss in Section 7.4 below.

Read Throughput

For many analytical applications, extremely high read throughput is extremely important, even if it comes at the cost of occasionally poor latencies for reads of specific files.
On the other hand, being able to rely on consistent read latencies vastly simplifies the development of distributed applications that face end users. We therefore performed two separate read-throughput experiments: one in which we fully saturated the system with read requests, resulting in "flaky" latency, and one at only partial load that yields reliable (99.9th-percentile) latency (Figures 1a and 1b). Because each datacenter stores a full, consistent replica of all data and metadata, each read …
