Lecture Notes in Computer Science 1769

Edited by G. Goos, J. Hartmanis and J. van Leeuwen

Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Singapore Tokyo

Günter Haring   Christoph Lindemann   Martin Reiser (Eds.)

Performance Evaluation: Origins and Directions

Series Editors

Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors

Günter Haring
Universität Wien
Institut für Angewandte Informatik und Informationssysteme
Lenaugasse 2/8, 1080 Wien, Austria
E-mail: [email protected]

Christoph Lindemann
Universität Dortmund, Informatik IV
August-Schmidt-Straße 12, 44227 Dortmund, Germany
E-mail: [email protected]

Martin Reiser
GMD Institut für Medienkommunikation, Schloß Birlinghoven
Postfach 1316, 53754 Sankt Augustin, Germany
E-mail: [email protected]

Cataloging-in-Publication Data applied for

Die Deutsche Bibliothek - CIP-Einheitsaufnahme

Performance evaluation: origins and directions / Günter Haring ... (ed.). -
Berlin; Heidelberg; New York; Barcelona; Hong Kong; London;
Milan; Paris; Singapore; Tokyo: Springer, 2000
(Lecture notes in computer science; Vol. 1769)
ISBN 3-540-67193-5

CR Subject Classification (1998): C.4, B.1-4, D.4, C.2, D.2, K.6

ISSN 0302-9743
ISBN 3-540-67193-5 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag is a company in the specialist publishing group BertelsmannSpringer

© Springer-Verlag Berlin Heidelberg 2000
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Steingraeber Satztechnik GmbH, Heidelberg
Printed on acid-free paper
SPIN: 10719732   06/3142   5 4 3 2 1 0

Preface

Performance evaluation has been a discipline of computer science for some thirty years. To us, it seemed to be time to take stock of what we - the performance evaluation community - were doing. Towards this end, we decided to organize a workshop on Performance Evaluation of Computer Systems and Communication Networks, which was held at the international conference and research center for computer science, Schloß Dagstuhl, Germany, September 15-19, 1997. The participants discussed, among other things, the following fundamental questions: What are the scientific contributions of performance evaluation? What is its relevance in industry and business? What is its standing in academia? Where is the field headed? What are its success stories and failures? What are its current burning questions?

During this workshop, working groups focused on areas like performance evaluation techniques and tools, communication networks, computer architecture, computer resource management, as well as performance of software systems (see http://www.ani.univie.ac.at/dagstuhl97/). The participants summarized the past of performance evaluation and projected future trends and needs in the field. It was identified that - as in many other sciences - at the beginning there was the observation of the behavior of systems, generally by measurements, followed by the development of theories to explain the observed behavior.
Especially in system modeling, based on these theories, methodologies have been developed for behavior prediction. At that stage, measurement changed its role from pure phenomenological observation to model-driven parameter estimation. Based on a series of highly successful case studies, tool methodology implemented in versatile software packages has been developed to make the theoretical results amenable to practitioners. Originally focused on computer systems and communications, the scope of performance evaluation broadened to include processor architecture, parallel and distributed architectures, operating systems, database systems, client-server computing, fault tolerant computing, and real-time systems.

With the growing size and complexity of current computer and communication systems, both hardware and software, the existing methodologies and software packages for performance modeling seem to be reaching their limits. It is interesting to observe that for these complex interoperable systems the methodological cycle starts again with measurements to describe the system behavior and understand it at a qualitative level. High performance parallel processing systems and Internet traffic are good examples of this pattern of scientific progress. An example is the discovery of self-similar traffic, which spurred a large number of observational and empirical studies.

During three decades of research, performance evaluation has built up a rich body of knowledge. However, the participants of the Dagstuhl workshop felt that the results are not as crisply focused as those in other prominent fields of computer science. Therefore, the project of writing a monograph on Performance Evaluation - Origins and Directions was begun with the following basic ideas. This monograph should document the history, the key ideas, and the important success stories of performance evaluation, and demonstrate the impact of performance evaluation on different areas through case studies. The book should generate interest in the field and point to future trends. It is aimed at senior undergraduate and graduate students (for seminar reading), faculty members (for course development), and engineers (as orientation about available methods and tools). The contributions should be on an academic level, have an essay character, and be highly readable. They should also be as lively and stimulating as possible, with mathematical topics presented at the appropriate level, concentrating on concepts and ideas and avoiding difficult technicalities.

As editors, we are proud that we have succeeded in recruiting top researchers in the field to write specific chapters. The spectrum of topical areas is quite broad, comprising work from an application as well as a methods point of view. Application topics include software performance, scheduling, parallel and distributed systems, mainframe and storage systems, the World Wide Web, wireless networks, availability and reliability, database systems, and polling systems. Methodological chapters treat numerical analysis methods, product-form queueing networks, simulation, mean-value analysis, and workload characterization. Altogether the volume contains 19 topical chapters. Additionally, it contains one contribution on the role of performance evaluation in industry, and personal accounts of four key contributors describing the genesis of breakthrough results.

We hope that the book will serve as a reference to the field of performance evaluation.
For the first time, it presents a comprehensive view of methods, applications, and case studies expressed by authors who have shaped the field in a major way. It is our expectation that the volume will be a catalyst for the further evolution of performance evaluation and will help to establish this exciting discipline in the curriculum, in industry, and in the IT world at large.

October 1999

Günter Haring
Christoph Lindemann
Martin Reiser

Foreword

The issue of computer performance evaluation must have concerned computer science and engineering researchers as well as system designers, system operators, and end users ever since computer systems came into existence. But it was not until around 1970 that "performance evaluation", as a clearly identifiable discipline within the computer science/engineering field, established itself. This was the period when IBM introduced its successful 360 series, and time-shared operating systems were the main focus of the computer community.

The IFIP Working Group 7.3 "Computer system modeling" was initiated by Paul Green (IBM Yorktown), Erol Gelenbe (INRIA, France), and myself (IBM Yorktown). The ACM Sigmetrics group was also established around that time. The 1970s was a really exciting period when the new computer performance community played a major role in advancing the state of the art in queuing network models: a major extension of networks was made with the "product-form" solutions by Baskett-Chandy-Muntz-Palacios (1975) and a number of other works on related subjects. Len Kleinrock's two volumes of Queuing Systems (1975, 1976) made major contributions by providing both the computer and network communities with state-of-the-art knowledge in queuing theory. Reiser-Sauer-McNair's efforts at IBM Yorktown in creating the QNET and RESQ modeling packages and disseminating them to both industrial and academic communities were a major landmark, and prompted similar efforts at AT&T, INRIA, and elsewhere. More recently, software packages for the user-friendly specification and automated analysis of stochastic Petri nets, like the DSPNexpress package by Christoph Lindemann (1995), have made a significant impact in both academia and industry.

The MVA algorithm devised by Martin Reiser and Steve Lavenberg (1980) made large-scale modeling computationally feasible. Jeff Buzen, who published a seminal paper on the central server model (1973), was among the first successful entrepreneurs to demonstrate a viable business opportunity in performance modeling. The introduction of generalized stochastic Petri nets by Marco Ajmone Marsan, Gianfranco Balbo, and Gianni Conte (1984) allowed the high-level specification of discrete-event systems with exponential events (i.e., Markovian models) and their automated solution by tool support. A recent landmark in performance modeling of communication networks and the Internet was the discovery of self-similar traffic by Walter Willinger and his co-workers at Bellcore (1995). Four personal accounts by these key contributors, describing where and how these research results were discovered, are included in this book.

In 1981, an international journal, "Performance Evaluation" (North Holland), was created with me as its founding editor-in-chief.
Together with the regularly held IFIP WG 7.3's Performance Symposium and ACM's Sigmetrics Conferences, the journal established the identity of our performance community, and thanks to the dedicated efforts of succeeding editors (Martin Reiser, and now Werner Bux) the journal has maintained its role as a major forum for the archival literature of our research community. Through this journal and the aforementioned symposia, and through more recent additions such as the Performance Tools conferences regularly organized by Günter Haring and others, we have been quite successful in embracing our outstanding colleagues from related fields, such as operations research and applied probability.

As the paradigm of computing has changed from time-shared mainframe computers to networks of personal computers and workstations, the focus of performance modeling and evaluation has shifted from CPUs to networks and distributed databases. With the rapidly growing dominance of the Internet, wireless access networks, and optical network backbones, we will witness profound changes and increasing complexities in the associated performance issues. For example, as traffic over the Internet expands exponentially, and end users' desire to access Web servers from anywhere and at any time grows rapidly, interactions among clients and web proxy servers should present major technical challenges to network service providers, and hence great opportunities for our performance community. Tom Leighton of MIT and his colleagues have seized such an opportunity by developing a network of proxy server caches called "free flow" and have turned their research project into a rapidly expanding business. I expect similar opportunities will abound in the coming decade, as mobile users and a variety of "information appliances" become interconnected, forming an ever expanding information network.

It is quite timely that this special volume on "Performance Evaluation", contributed by a number of outstanding colleagues, is being published in this exciting period, when the complexity and sophistication of information networks continue to multiply and challenge our ability to evaluate, predict, and improve system performance. I congratulate Günter Haring, Christoph Lindemann, and Martin Reiser on their success in compiling these important articles into an archival form.

November 1999

Hisashi Kobayashi

Table of Contents

Position Paper

Performance Evaluation in Industry: A Personal Perspective ............... 3
   Stephen S. Lavenberg, Mark S. Squillante

I Topical Area Papers

Mainframe Systems ....................................................... 17
   Jeffrey P. Buzen

Performance Analysis of Storage Systems ................................. 33
   Elizabeth Shriver, Bruce K. Hillyer, Avi Silberschatz

Ad Hoc, Wireless, Mobile Networks: The Role of Performance Modeling
and Evaluation .......................................................... 51
   Mario Gerla, Manthos Kazantzidis, Guangyu Pei, Fabrizio Talucci, Ken Tang

Trace-Driven Memory Simulation: A Survey ................................ 97
   Richard A. Uhlig, Trevor N. Mudge

Performance Issues in Parallel Processing Systems ...................... 141
   Luiz A. DeRose, Mario Pantano, Daniel A. Reed, Jeffrey S. Vetter

Measurement-Based Analysis of Networked System Availability ............ 161
   Ravishankar K. Iyer, Zbigniew Kalbarczyk, Mahesh Kalyanakrishnan

Performance of Client/Server Systems ................................... 201
   Daniel A. Menascé, Virgilio A. F. Almeida

Performance Characteristics of the World Wide Web ...................... 219
   Mark E. Crovella
Parallel Job Scheduling: A Performance Perspective ..................... 233
   Shikharesh Majumdar, Eric W. Parsons

Scheduling of Real-Time Tasks with Complex Constraints ................. 253
   Seonho Choi, Ashok K. Agrawala

Software Performance Evaluation by Models .............................. 283
   Murray Woodside

Performance Analysis of Database Systems ............................... 305
   Alexander Thomasian

Performance Analysis of Concurrency Control Methods .................... 329
   Alexander Thomasian

Numerical Analysis Methods ............................................. 355
   William J. Stewart

Product Form Queueing Networks ......................................... 377
   Simonetta Balsamo

Stochastic Modeling Formalisms for Dependability, Performance
and Performability ..................................................... 403
   Katerina Goševa-Popstojanova, Kishor Trivedi

Analysis and Application of Polling Models ............................. 423
   Hideaki Takagi

Discrete-Event Simulation in Performance Evaluation .................... 443
   David M. Nicol

Workload Characterization Issues and Methodologies ..................... 459
   Maria Calzarossa, Luisa Massari, Daniele Tessera

II Personal Accounts of Key Contributors

From the Central Server Model to BEST/1© ............................... 485
   Jeffrey P. Buzen

Mean Value Analysis: A Personal Account ................................ 491
   Martin Reiser

The Early Days of GSPNs ................................................ 505
   Marco Ajmone Marsan, Gianfranco Balbo, Gianni Conte

The Discovery of Self-similar Traffic .................................. 513
   Walter Willinger

Author Index ........................................................... 529