Enabling the P2P JXTA Platform for High-Performance Networking Grid Infrastructures

Mathieu Jan, Gabriel Antoniu, David A. Noblet

To cite this version: Mathieu Jan, Gabriel Antoniu, David A. Noblet. Enabling the P2P JXTA Platform for High-Performance Networking Grid Infrastructures. Intl. Conf. on High Performance Computing and Communications (HPCC'05), Sep 2005, Sorrento, Italy. pp. 429-440. inria-00000976.

HAL Id: inria-00000976
https://hal.inria.fr/inria-00000976
Submitted on 9 Jan 2006

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Enabling the P2P JXTA platform for high-performance networking grid infrastructures

Gabriel Antoniu (1), Mathieu Jan (1), and David A. Noblet (2)

(1) IRISA/INRIA, Campus de Beaulieu, 35042 Rennes cedex, France, [email protected]
(2) University of New Hampshire, Department of Computer Science, Durham, New Hampshire 03824-3591, USA

Abstract. As grid sizes increase, the need for self-organization and dynamic reconfigurations is becoming more and more important, and therefore the convergence of grid computing and Peer-to-Peer (P2P) computing seems natural. Grid infrastructures are generally available as a federation of SAN-based clusters interconnected by high-bandwidth WANs. However, P2P systems usually run on the Internet, with a non-hierarchical network topology, which may raise the issue of the adequacy of P2P communication mechanisms on grid infrastructures. This paper evaluates the communication performance of the JXTA P2P platform over high-performance SANs and WANs, for both its J2SE and C bindings. We analyze these results and we evaluate solutions able to improve the performance of JXTA on such networking grid infrastructures.

Keywords: high performance networking, grid computing, P2P, JXTA.

1 Using P2P techniques to build grids

Nowadays, scientific applications require more and more resources, such as processors, storage devices, network links, etc. Grid computing provides an answer to this growing demand by aggregating resources made available by various institutions. As their sizes grow, grids express an increasing need for flexible distributed mechanisms allowing them to be efficiently managed. Such properties are exhibited by Peer-to-Peer (P2P) systems, which have proven their ability to efficiently handle millions of interconnected resources in a decentralized way. Moreover, these systems support a high degree of resource volatility. The idea of using P2P approaches for grid resource management has therefore emerged quite naturally [1,2].

The convergence of P2P and grid computing can be approached in several ways. For instance, P2P services can be implemented on top of building blocks based on current grid technology (e.g., by using grid services as a communication layer [3]). Conversely, P2P libraries can be used on physical grid infrastructures, for example, as an underlying layer for higher-level grid services [4]. This provides a way to leverage scalable P2P mechanisms for resource discovery, resource replication and fault tolerance. In this paper, we focus on this second approach.
Using P2P software on physical grid infrastructures is a challenging problem. Grid applications often have strong performance constraints that are generally not a usual requirement for P2P systems. One crucial issue in this context is the efficiency of data transfers. Using P2P libraries as building blocks for grid services, for instance, requires the efficient use of the capacities of the networks available on grid infrastructures: System-Area Networks (SANs) and Wide-Area Networks (WANs). Often, a grid is built as a cluster federation. SANs, such as Giga Ethernet or Myrinet (which typically provide Gb/s bandwidth and a few microseconds of latency), are used for connecting nodes inside a given high-performance cluster, whereas WANs, with a typical bandwidth of 1 Gb/s but higher latency (typically of the order of 10-20 ms), are used between clusters. This is clearly an unusual deployment scenario for P2P systems, which generally target the edges of the Internet (low-bandwidth, high-latency links such as Digital Subscriber Line, or DSL, connections). Therefore, it is important to ask: are P2P communication mechanisms adequate for use in such a context? Is it possible to adapt P2P communication systems in order to benefit from the high potential offered by these high-performance networks?

As an example, JXTA [5] is the open-source project on which, to the best of our knowledge, most of the few attempts at realizing the P2P-grid convergence have been based (see the related work below). In its 2.0 version, JXTA consists of a specification of six language- and platform-independent, XML-based protocols that provide basic services common to most P2P applications, such as peer group organization, resource discovery, and inter-peer communication. To our knowledge, this paper is the first attempt to discuss the appropriateness of using the JXTA P2P platform for high-performance computing on grid infrastructures, by evaluating to what extent its communication layers are able to leverage high-performance (i.e. Gigabit/s) networks. A detailed description of the communication layers of JXTA can be found in [6]. This paper focuses on the evaluation of JXTA-J2SE and JXTA-C (the only two bindings compliant with version 2.0 of the JXTA specifications) over both SANs and WANs. It also discusses ways to integrate some existing solutions for improving the raw performance.

The remainder of the paper is organized as follows. Section 2 introduces the related work: we discuss some JXTA-based attempts at using P2P mechanisms to build grid services and we mention some performance evaluations of JXTA. Section 3 describes in detail the experimental setup used for both the SAN and WAN benchmarks. Sections 4 and 5 present the benchmark results of JXTA over these two types of networks. Finally, Section 6 concludes the paper and discusses some possible future directions.

2 Related Work

Several projects have focused on the use of JXTA as a substrate for grid services. The CoG Kit JXTA Project [7] and the JXTA-Grid project [8] are two examples. However, none of these projects is still being actively developed and none has released any prototypes. The Service-oriented Peer-to-Peer Architecture (SP2A) project [4] aims at using P2P routing algorithms for publishing and discovering grid services. SP2A is based on two specifications: the Open Grid Service Infrastructure (OGSI) and JXTA. None of the projects above has published performance evaluations so far. Finally, JuxMem [9] proposes to use JXTA in order to build a grid data-sharing service. All of the projects mentioned above share the idea of using JXTA as a low-level interaction substrate over a grid infrastructure. Such an approach brings forth the importance of JXTA's communication performance.
JXTA’s communication layers have so far only been evaluated at a cluster level, or over the Internet via DSL connections, but not over grid architectures with high-performance clusters interconnected with high-bandwidth WANs. The perfor- mance of JXTA-J2SE communication layers has been the subject of many pa- pers [10,11,12,13,14,15] and has served as reference for comparisons with other P2P systems [16,17,18].ThemostrecentevaluationofJXTA-J2SEis[6],whichalsopro- videsanevaluationofJXTA-C,butonlyoverFastEthernetLANnetworks.However, it gives hints on how to use JXTA in order to get good performance on this kind of networks. 3 Description ofthe ExperimentalSetup Forallreportedmeasurementsweuseabidirectionalbandwidthbenchmark(between two peers), based on five subsequent time measurements of an exchange of 100 con- secutive message-acknowledgment pairs sampled at the application level. We chose thistestasitisawell-establishedmetricforbenchmarkingnetworkingprotocols,and because of its ability to yield information about important performance characteris- tics such as bandwidth and latency. Benchmarks were executed using versions 2.2.1 and2.3.2of theJ2SE bindingof JXTA. Forthe Cbinding, the CVSheadof JXTA-C from the 18th of January 2005 was used. Both bindings were configured to use TCP as the underlying transport protocol. When benchmarks are performed using JXTA- J2SE, the Sun Microsystems Java Virtual Machine (JVM) 1.4.2 is used and executed with -server -Xms256M -Xmx256M options. The use of other JVMs is explic- itly noted and executed with equivalent options. Also note that when the Java bind- ing of JXTA is benchmarked, an additional warm-up phase based on 1000 consecu- tivemessage-acknowledgmentpairsisperformed.Finally,theJXTA-Cbenchmarksare compiledusinggcc3.3.3withtheO2levelofoptimization.Testswereperformedon thefollowingtwotypesofnetworks,typicallyusedforbuildinggridinfrastructures. SANbenchmarks. ThenetworksusedfortheSANbenchmarksareGigaEthernetand Myrinet(GMdriver,version2.0.11).WhenthenetworklayerisMyrinet,nodesconsist ofmachinesusing2.4GHzIntelPentiumIVprocessors,outfittedwith1GBofRAM each,andrunninga2.4versionLinuxkernel.ForGigaEthernet,nodesconsistofma- chinesusingdual2.2GHzAMDOpteronprocessors,alsooutfittedwith1GBofRAM each, and running a 2.6 version Linux kernel. Since direct communication amongst nodes of a SAN-based cluster is available, direct communications between peers has also been configured. Note that this is allowed by JXTA specifications and therefore doesnotrequiretheuseofadditionalpeers. WAN benchmarks. The platform used for the WAN benchmarks is the Grid’5000 Frenchnationalgridplatform[19].TestswereperformedbetweentwooftheGrid’5000 4 160 120 Java Socket Java Socket JXTA socket 2.2.1 (512 KB) JXTA socket 2.2.1 (512 KB) 140 JXTA unicast pipe 2.2.1 JXTA unicast pipe 2.2.1 JXTA endpoint service 2.2.1 100 JXTA endpoint service 2.2.1 120 MB/s) 100 MB/s) 80 Throughput ( 6800 Throughput ( 4600 40 20 20 0 0 0.51 2 4 8 16 64 256 1MB 8MB 0.51 2 4 8 16 64 256 1MB 8MB Message size in KB Message size in KB Fig.1.BandwidthofeachlayerofJXTA2.2.1ascomparedtoJavasocketsoveraMyrinetnet- work(left)andaGigaEthernetnetwork(right). clusterslocatedinRennesandToulouse.Oneachside,nodesconsistofmachinesusing dual2.2GHzAMDOpteronprocessors,outfittedwith1GBofRAMeach,andrunning a2.6versionLinuxkernel.Thetwositesareinterconnectedthrougha1Gb/slink,with an average measured latency of 11.2 ms. 
Note that, as direct communication between nodes is possible within the Grid'5000 testbed, we configured the JXTA peers to enable direct exchanges. As for the SAN benchmarks, this is allowed by the JXTA specifications, even if it is clearly an unusual deployment scenario for P2P systems, where direct communication between peers is the exception rather than the rule (because of firewalls, etc.). However, let us stress that on some grids direct communication is only available between cluster front-ends. In that case, additional evaluations would be necessary.

Communication protocols. The JXTA communication layers provide three basic mechanisms for inter-peer communication, with different levels of abstraction. The endpoint service is JXTA's lowest, point-to-point communication layer, which provides an abstraction for the available underlying transport protocols. Messages sent by this layer are comprised of a series of named and typed message elements [20]. These elements may be required by higher communication layers or added by the application (e.g. the message payload). The pipe service, built on top of the endpoint layer, provides virtual communication channels (or pipes), which are dynamically bound to peer endpoints at runtime, thus allowing developers to abstract themselves from dynamic, runtime changes of physical network addresses. In this paper, we focus on point-to-point pipes, called unicast pipes. Finally, on top of pipes, the JXTA sockets add a data-stream interface and implement reliability guarantees. JXTA sockets extend the BSD socket API, while still preserving the main feature of pipes: independence from the physical network. However, it should be noted that this layer is not part of the core specifications of JXTA. It is currently only available in JXTA-J2SE.

4 Performance Evaluation of JXTA over System-Area Networks

This section analyzes the performance of JXTA's communication layers on SANs. Note that for Myrinet, the Ethernet emulation mode of GM 2.0.11 is used and configured with jumbo frames. This mode allows Myrinet to carry any packet traffic and protocols that can be transported by Ethernet, including TCP/IP. Although this capability is bought at the cost of losing the main advantage of a Myrinet network (i.e. its OS-bypass mode), it allows the same socket-based benchmarks to be run unmodified. On this configuration, the bandwidth and latency of plain sockets are around 155 MB/s and 60 µs respectively, whereas on Giga Ethernet they are around 115 MB/s for the bandwidth and 45 µs for the latency (average values between C and Java sockets). These values are used as a reference performance bound.

Fig. 2. Bandwidth of each layer of JXTA 2.3.2 as compared to Java sockets over a Myrinet network (left) and a Giga Ethernet network (right).

4.1 Analysis of JXTA-J2SE's Performance

                    JXTA-J2SE 2.2.1          JXTA-J2SE 2.3.2          JXTA-C
                    Myrinet  Giga Ethernet   Myrinet  Giga Ethernet   Myrinet  Giga Ethernet
  Endpoint service  890 µs   357 µs          624 µs   294 µs          635 µs   322 µs
  Unicast pipe      1.9 ms   834 µs          1.7 ms   711 µs          1.7 ms   727 µs
  JXTA socket       3.3 ms   1.3 ms          2.4 ms   977 µs          -        -

Table 1. Latency results for JXTA-J2SE and JXTA-C.

JXTA-J2SE endpoint service. Figure 1 shows that the endpoint service of JXTA 2.2.1 nearly reaches the bandwidth of plain sockets over SAN networks: 145 MB/s over Myrinet and 101 MB/s over Giga Ethernet.
However, Figure 2 also shows that the bandwidth of the JXTA 2.3.2 endpoint layer has decreased: drops of 32 MB/s over Myrinet and 20 MB/s over Giga Ethernet are observed. These lower bandwidths affect all versions of JXTA above release 2.2.1 and are explained by a new implementation of the endpoint layer that shipped with JXTA 2.3. The profiling of JXTA has pointed out that this drop of performance is due to the mechanism used for limiting the size of messages sent by the endpoint layer. Moreover, since JXTA 2.3, the limit has been lowered to 128 KB of application-level payload (larger messages are dropped). This limitation was introduced into JXTA in order to promote some fairness in resource sharing among peers on the network, for instance when messages must be stored on relay peers (the type of peer required to exchange messages through firewalls). However, as no relay peers are needed when using SANs, we removed this limit. Table 1 shows that the latency results of JXTA-J2SE have improved since version 2.2.1. The latency of the JXTA 2.3.2 endpoint service over Giga Ethernet reaches a value under 300 µs. Moreover, it goes down even further, to 268 µs and 229 µs, when using the Sun 1.5 and IBM 1.4.1 JVMs, respectively. The difference between the Myrinet and Giga Ethernet results is due to the hardware employed, as the Ethernet emulation mode is used for Myrinet.

JXTA-J2SE unicast pipe. In addition, Figures 1 and 2 demonstrate a bandwidth degradation for JXTA-J2SE unicast pipes. For example, while the JXTA 2.2.1 unicast pipe attains a good peak bandwidth of 136.8 MB/s over Myrinet, its 2.3.2 counterpart reaches a bandwidth of only 106.5 MB/s. A similar performance degradation can be observed on Giga Ethernet. However, the shape of the curve of unicast pipes 2.2.1 on Giga Ethernet has not been explained so far. We suspect the first drop is due to a scheduling problem. On the other hand, the reason for the drop at 128 KB of application payload is still unknown. At the same payload size, a smaller drop for JXTA unicast pipes 2.3.2 over Giga Ethernet can be observed, but no link has been established with the previously mentioned drop, as this drop also occurs at the endpoint level. Overall, the small performance degradation as compared to the endpoint layer is explained by the composition of a pipe message: the presence of an XML message element requiring costly parsing prevents this layer from reaching the performance of the endpoint layer. Moreover, as shown in Table 1, this extra parsing required for each pipe message also affects latency results: compared to the endpoint layer, latencies increase by more than 400 µs. However, unicast pipes are still able to achieve latencies in the sub-millisecond range, at least on Giga Ethernet.

JXTA-J2SE sockets. As opposed to the previous layers, JXTA sockets are far from reaching the performance of plain Java sockets. Indeed, in their default configuration (i.e. with an output buffer size of 16 KB), JXTA sockets 2.2.1, for instance, attain a peak bandwidth of 12 MB/s over a Myrinet network. As for unicast pipes, a similarly low bandwidth is reported on Giga Ethernet. We were able to significantly improve the bandwidth and achieve 92 MB/s by increasing the size of the output buffer to 512 KB, as shown in Figures 1 and 2 (a simple model of this effect is sketched below). As for the unicast pipes, the irregular shape of the JXTA sockets 2.2.1 curves has not been explained so far. Again, we suspect the first drop is due to a scheduling problem. The next drop may be due to some message losses when the message size is around the size of the output buffer, since many reliability issues have been fixed up to JXTA 2.3.2. Table 1 highlights the progress made by JXTA on latency, as only JXTA sockets 2.3.2 on Giga Ethernet are able to reach a latency under one millisecond.
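The impact of the output buffer size can be made concrete with a back-of-the-envelope model: roughly speaking, each filled buffer becomes one JXTA message, and each message pays an approximately fixed cost for its XML elements and reliability bookkeeping. The sketch below is ours; the per-message cost is an assumed, purely illustrative value, not a measurement from this evaluation.

```java
// Illustrative model: effective JXTA-socket throughput when an 8 MB transfer
// is split into buffer-sized messages, each paying a fixed per-message cost.
public class OutputBufferModel {
    public static void main(String[] args) {
        double linkMBps = 115.0;      // raw Giga Ethernet socket bandwidth (Section 4)
        double perMessageMs = 0.8;    // assumed fixed cost per JXTA message (illustrative)
        long transfer = 8L << 20;     // 8 MB transfer
        for (int buf : new int[]{16 << 10, 512 << 10}) {
            long messages = (transfer + buf - 1) / buf;
            double seconds = transfer / (linkMBps * 1e6) + messages * perMessageMs / 1e3;
            System.out.printf("buffer %3d KB -> %3d messages, ~%.0f MB/s effective%n",
                    buf >> 10, messages, transfer / seconds / 1e6);
        }
    }
}
```

With these assumed numbers, a 16 KB buffer yields only a small fraction of the raw bandwidth while a 512 KB buffer gets close to the raw socket bandwidth, which matches the qualitative behavior observed above; as noted in the discussion below, this buffer size has to be set explicitly by the JXTA socket programmer.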
Discussion. In conclusion, the JXTA-J2SE 2.2.1 communication layers are able to nearly saturate SANs, but only at the endpoint and pipe levels. The measurements revealed that the bandwidth of JXTA 2.2.1 is higher than that of JXTA 2.3.x. Latency results have largely improved since JXTA 2.2.1, but without reaching reasonably good performance for SANs. Finally, this evaluation has also highlighted that, in their default configuration, JXTA sockets achieve a very poor bandwidth. However, this result can be significantly improved by increasing the output buffer size. This requires the JXTA socket programmer to explicitly set this parameter in the user code. Based on these results, we can conclude that JXTA-J2SE can be adapted in order to benefit from the potential offered by SANs, at least on the bandwidth side.

4.2 Analysis of JXTA-C's Performance

Fig. 3. Bandwidth of each layer of JXTA-C as compared to C sockets over a Myrinet network (left) and a Giga Ethernet network (right).

Figure 3 shows the bandwidth measurements of all the communication layers of JXTA-C over SANs. Note that, as in the previous section, C sockets are used as an upper reference bound. The peak bandwidths of the endpoint service over Myrinet and Giga Ethernet are 100 MB/s and 92.5 MB/s, respectively. The upper layer (unicast pipe) reaches bandwidths of 97.6 MB/s and 87.7 MB/s over Myrinet and Giga Ethernet, respectively. These unsatisfactory results are due to the memory copies used in the implementation of the endpoint layer of JXTA-C. Table 1 highlights reduced latencies, especially on Giga Ethernet, as compared to the results published in [6]: 820 µs for the endpoint layer and 1.99 ms for the pipe layer. To achieve this improvement, we modified the implementation of the endpoint layer of JXTA-C by disabling the TCP packet aggregation mechanism, as it adds significantly to the latency. Consequently, the buffering mechanism is now performed within the endpoint layer, allowing one TCP packet to be sent for a single JXTA message with minimal latency. These modifications have been committed into the CVS repository of JXTA-C and are publicly available.

Based on this evaluation, we can conclude that, in their current implementation, the communication layers of JXTA-C are not able to saturate SANs. The non-zero-copy implementation of the endpoint layer prevents JXTA-C from approaching the Gb/s bandwidths available over SANs. Note, however, that JXTA-C is in the process of being revived; we therefore believe that the performance of JXTA-C will increase in the near future, such that it will be able to efficiently use SANs.
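The TCP packet aggregation mechanism mentioned above is Nagle's algorithm, which delays small segments in the hope of coalescing them and therefore directly inflates the latency of a message/acknowledgment ping-pong. For readers who want to reproduce the effect with the plain-socket sketch of Section 3, the equivalent knob in Java is shown below (host and port are illustrative); in C, the same effect is obtained with the TCP_NODELAY socket option.

```java
import java.net.Socket;

// Disable Nagle's algorithm so that each small write is sent immediately
// instead of being aggregated with subsequent writes.
public class DisableNagle {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";  // peer address (illustrative)
        try (Socket s = new Socket(host, 9000)) {               // port 9000 is arbitrary
            s.setTcpNoDelay(true);                              // send each message without delay
            s.getOutputStream().write(new byte[64]);            // small probe message
            s.getOutputStream().flush();
        }
    }
}
```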
4.3 Fully exploiting SAN capacities

In all previously reported evaluations based on Myrinet, the Ethernet emulation mode of GM is used. However, this removes the ability to bypass the IP stack of the OS and introduces unneeded overhead. Consequently, the communication layers are unable to fully exploit the capacities offered by Myrinet: full-duplex bandwidths of nearly 2 Gb/s and latencies of less than 7 µs, thanks to zero-copy communication protocols. PadicoTM [21] is a high-performance framework for networking and multi-threading which allows middleware systems to transparently take advantage of such features. In this section, we focus on the virtual sockets feature offered by PadicoTM, which provides a way to directly access GM network interfaces. This is achieved by dynamically mapping, at runtime, the standard socket functions on top of GM API functions, without going through the TCP/IP stack. Zero-copy is therefore possible and allows, for example, plain sockets to transparently reach a bandwidth of more than 230 MB/s and a latency of 15 µs on Myrinet, compared to 160 MB/s and 51 µs without PadicoTM.

We have successfully ported JXTA-C to PadicoTM without changing a single line of code of JXTA-C. We only performed some minor modifications inside the OS-independent layer used by JXTA-C, the Apache Portable Runtime (APR): we switched from the default POSIX thread library, on which APR is based, to the Marcel [22] thread library used by PadicoTM. These modifications could even be achieved automatically by a single sed command. An improvement of 32 MB/s in the bandwidth of JXTA-C's endpoint layer has been measured, resulting in a peak bandwidth of 140 MB/s, thus reaching over 1 Gb/s. On the latency side, no significant improvements have been observed, as the non-zero-copy communication layers prevent JXTA-C from fully benefiting from the OS-bypass feature of Myrinet. Note that we did not use PadicoTM with JXTA-J2SE, since PadicoTM currently supports only the open-source Kaffe JVM. Unfortunately, this JVM is not compliant with version 1.4 of the Java specification and is therefore unable to run the Java binding of JXTA.

In conclusion, our experiments with PadicoTM show that JXTA could fully benefit from the potential performance of SAN networks if: 1) the implementation of all JXTA communication layers respected a zero-copy policy, and 2) PadicoTM added support for JXTA-compatible JVMs (i.e. compliant with version 1.4 of the Java specification). However, we believe that these issues will be solved in the near future.

5 Performance Evaluation of JXTA over Wide-Area Networks

We performed the same type of measurements on WAN networks. Note that we had to tune the network settings of the nodes used for this benchmark. Our default maximum TCP buffer size, initially set to 131072 bytes, was limiting the bandwidth to only 7 MB/s. Based on the bandwidth × delay law, we computed a theoretical maximum buffer size of 1507328 bytes and increased this value by an arbitrary factor of 1.2. Therefore, we set the maximum TCP buffer sizes on each node to 1959526 bytes; ttcp configured with this value measured a raw TCP bandwidth of 107 MB/s, a reasonable level of performance.

JXTA-J2SE's performance. As for the SAN Giga Ethernet benchmarks, Figure 4 shows that the endpoint layer and the unicast pipes of JXTA-J2SE are able to perform similarly to plain sockets over a high-bandwidth WAN of 1 Gb/s. This level of performance was reached by modifying JXTA-J2SE's code in order to properly set the TCP buffer sizes to 1959526 bytes before binding the sockets on both sides (the sketch below shows the kind of change involved). Using the default setting, a bandwidth of only 6 MB/s was reached for JXTA 2.2.1 and less than 1 MB/s for JXTA 2.3.2.
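The following is a hedged sketch of this kind of tuning, written against plain java.net sockets rather than JXTA's internal classes (which we do not reproduce here). The buffer value is the one reported above; the main point is that both buffer sizes must be set before the socket is connected or bound, so that TCP can negotiate a sufficiently large window.

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: enlarge TCP buffers to (roughly) the bandwidth-delay product of the
// 1 Gb/s, 11.2 ms Rennes-Toulouse link before connecting/binding the sockets.
public class WanBufferTuning {
    static final int TCP_BUFFER = 1959526;      // value used for the WAN benchmarks

    static Socket openClient(String host, int port) throws Exception {
        Socket s = new Socket();                // create the socket unconnected
        s.setReceiveBufferSize(TCP_BUFFER);     // must precede connect()
        s.setSendBufferSize(TCP_BUFFER);
        s.connect(new InetSocketAddress(host, port));
        return s;
    }

    static ServerSocket openServer(int port) throws Exception {
        ServerSocket ss = new ServerSocket();   // create the server socket unbound
        ss.setReceiveBufferSize(TCP_BUFFER);    // must precede bind()/accept()
        ss.bind(new InetSocketAddress(port));
        return ss;
    }

    public static void main(String[] args) {
        // Rule-of-thumb bandwidth-delay product for the link described above:
        // (10^9 bits/s / 8) * 0.0112 s = 1.4e6 bytes, the same order of magnitude
        // as the buffer size actually configured.
        double bdpBytes = (1e9 / 8) * 11.2e-3;
        System.out.printf("bandwidth-delay product ~ %.0f bytes%n", bdpBytes);
    }
}
```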
As opposed to the SAN benchmarks, both versions of JXTA-J2SE achieve the same performance. This can be explained by the fact that the higher latency of WANs hides the cost of the mechanism implemented for limiting the size of JXTA messages. Figure 4 also points out the same performance degradation for JXTA sockets as in the SAN benchmarks. However, JXTA sockets 2.3.2 achieve a higher bandwidth than their 2.2.1 counterpart. The performance drops of unicast pipes and JXTA sockets for JXTA-J2SE 2.2.1 at a message size of 4 MB have not been explained so far.

Fig. 4. Bandwidth of each layer for JXTA-J2SE 2.2.1 (left) and 2.3.2 (right) compared to Java sockets over a high-bandwidth WAN.

JXTA-C's performance. Figure 5 shows results for the communication layers of JXTA-C over WANs that are similar to those of the SAN benchmarks: both layers reach a bandwidth slightly above 80 MB/s, for a message size of 2 MB. The performance degradation observed beyond this message size has not been explained.

Fig. 5. Bandwidth of each layer for JXTA-C compared to C sockets over a high-bandwidth WAN.

Discussion. Based on this evaluation, we can conclude that JXTA's communication layers, when used on high-bandwidth WANs, are able to reach the same bandwidths as in the SAN benchmarks. Both versions of JXTA-J2SE (and not only JXTA-J2SE 2.2.1) are able to efficiently use the bandwidth available on the links interconnecting the sites of a grid, whereas the non-zero-copy communication layers prevent JXTA-C from saturating this type of link.
