Richard Jones (Ed.)

ECOOP 2014 – Object-Oriented Programming
28th European Conference
Uppsala, Sweden, July 28 – August 1, 2014
Proceedings

Lecture Notes in Computer Science 8586
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, UK
Takeo Kanade, USA
Josef Kittler, UK
Jon M. Kleinberg, USA
Alfred Kobsa, USA
Friedemann Mattern, Switzerland
John C. Mitchell, USA
Moni Naor, Israel
Oscar Nierstrasz, Switzerland
C. Pandu Rangan, India
Bernhard Steffen, Germany
Doug Tygar, USA
Demetri Terzopoulos, USA
Gerhard Weikum, Germany

Advanced Research in Computing and Software Science
Subline of Lecture Notes in Computer Science

Subline Series Editors
Giorgio Ausiello, University of Rome 'La Sapienza', Italy
Vladimiro Sassone, University of Southampton, UK

Subline Advisory Board
Susanne Albers, University of Freiburg, Germany
Benjamin C. Pierce, University of Pennsylvania, USA
Bernhard Steffen, University of Dortmund, Germany
Deng Xiaotie, City University of Hong Kong
Jeannette M. Wing, Microsoft Research, Redmond, WA, USA

Volume Editor
Richard Jones
School of Computing, University of Kent
Canterbury, Kent, CT2 7NF, UK
E-mail: [email protected]

ISSN 0302-9743        e-ISSN 1611-3349
ISBN 978-3-662-44201-2        e-ISBN 978-3-662-44202-9
DOI 10.1007/978-3-662-44202-9
Springer Heidelberg New York Dordrecht London

Library of Congress Control Number: 2014943419
LNCS Sublibrary: SL 2 – Programming and Software Engineering

© Springer-Verlag Berlin Heidelberg 2014
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)

Preface

It is an honour and a pleasure to present the proceedings of the 28th European Conference on Object-Oriented Programming (ECOOP), the premier European conference on object-oriented programming and related theory and practice of software development.
As always, ECOOP is characterised by the strength of its technical programme, and this year was no exception. As a change, ECOOP 2014 followed the model of many other leading computer science conferences in using an External Review Committee (ERC) and Light Double-Blind Reviewing (LDBR). So why did we do this?

The ERC was recruited to provide additional expertise and to be the sole judge of submissions from Programme Committee (PC) members. This makes the process of judging PC papers transparent and also removes the awkwardness in the PC meeting of working with those whose papers one has just judged.

We all have biases. Some are conscious, others unconscious. It is just human nature. As reviewers we have a duty to do our best to prevent these biases colouring our assessments of the papers we review. We want to judge whether a paper should be accepted for publication on the basis of the paper alone. We need systems that put us in the best possible position to do that, not only in order to be fair to authors but to ensure that it is the best papers that are accepted. The evidence that non-blind assessments disadvantage some authors is accepted in many fields; examples can be found in Kathryn McKinley's persuasive case [SIGPLAN Notices 43(8), 2008] for double-blind reviewing. Why should computer science reviewing be any different? Richard Snodgrass's analysis [SIGMOD Record 35(3), 2006] of single- v. double-blind reviewing concludes that Rebecca Blank's 1991 summary [American Economic Review 81(5):1041–1067, 1991] remains true: "If not fully convincing, however, there is at least a disturbing amount of evidence in these studies that is consistent with the hypothesis of referee bias in single-blind reviewing."

Suppose I am reviewing a paper. I see from the front page that it is by Alan Turing. I have tremendous respect for Alan's previous work. Straight away I have what psychologists call an 'anchor point'. The Harvard Law School Program on Negotiation defines anchoring as "a cognitive bias; it is the common human tendency to rely too heavily on the first piece of information offered (the 'anchor') when making decisions. Once an anchor is set, other judgments are made by adjusting away from that anchor, and there is a bias toward interpreting other information around the anchor" [http://www.pon.harvard.edu/tag/anchor/]. Suppose another paper on my pile is from an author and an institution neither of which I recognise. Again, I have an anchor, this time a negative one. I have no doubt that reviewers usually overcome these anchors to make sound judgements. But often this is at the cost of additional reviewing time. Before I conclude that Alan's paper is actually poor, I will probably have spent much longer than normal to ensure that I really have understood it. Equally, because I fear that I might be prejudiced against the unknown author, I am likely to spend extra time bending over backwards to ensure that I am really being fair.

ECOOP 2014 used light double-blind reviewing, whereby authors' names were withheld from a reviewer until they had submitted their initial review. At that point, the authors' identities were revealed and the reviewer was free to investigate their work further, update their review, etc. The prime aim of LDBR is to remove the initial anchor point; it is not to strive for perfection. But why not use fully blind reviewing, whereby authors' identities are not revealed at all during the review process?
In his report [SIGPLAN Notices 47(4a), 2012] as Programme Chair of POPL 2012, Mike Hicks argues that LDBR helps with mistaken judgements based on identity, and avoids potential abuses such as arguing for a friend's paper. It also helps to check that any author-supplied conflicts are valid.

ECOOP 2014 received 101 submissions, with authors from 29 countries; 11% of authors were women. Each paper was reviewed by at least four reviewers; where necessary, further reviews were solicited from PC, ERC or external reviewers. In all, 411 reviews were produced. Authors were given an opportunity to respond to reviews, after which there was an intensive period of discussion through CyberChairPRO. The ERC met online to determine the fate of PC submissions shortly before the PC meeting in Canterbury. The 27 accepted papers (only one of which was a PC submission) were written by authors from 13 countries. 30% of accepted papers included at least one female author (matching the 31% of submissions). The PC made two distinguished paper awards: to Safely Composable Type-Specific Languages by Cyrus Omar, Darya Kurilova, Ligia Nistor, Benjamin Chung, Alex Potanin and Jonathan Aldrich; and to Stream Processing with a Spreadsheet by Mandana Vaziri, Olivier Tardieu, Rodric Rabbah, Philippe Suter and Martin Hirzel.

The final programme included four keynote talks: two from the winners of the 2014 Dahl-Nygaard Senior Award, Robert France and William Cook; one from the winner of the 2014 Dahl-Nygaard Junior Award, Tudor Gîrba; and the fourth from Luca Cardelli, who was invited by the PC.

Every conference depends on the quality of the research it presents. I would like to thank all the authors who submitted their work to ECOOP 2014 (well, all except the 'author' who submitted a paper generated by SCIgen). I would also like to pay tribute to the members of the PC and the ERC. I was truly impressed with the care and time they put into producing reviews of very high quality, and doing so on time. It was an honour to work with all of you. I would also like to thank the ECOOP 2014 Organising Chair, Tobias Wrigstad; the Artifact Evaluation Co-Chairs, Camil Demetrescu and Erik Ernst; and Richard van de Stadt for his excellent support through CyberChairPRO.

May 2014
Richard Jones

Artifacts

This is the second year in which Artifact Evaluation (AE) has been part of the ECOOP publication process, and similar processes are being adopted at several other top conferences. AE is a process in which artifacts associated with the published papers (software, data, proofs, videos, etc.) are submitted, reviewed, and accepted or rejected by an Artifact Evaluation Committee (AEC). The long-term goal is to foster a culture of reproducibility of experimental results by considering software artifacts as first-class citizens, a perspective that has long been missing at software conferences. Following the AE tradition, the ECOOP 2014 AEC was formed entirely of outstanding junior researchers.

The ECOOP 2014 AE process introduced two novelties. First, authors were invited to include in their papers a one-page appendix describing the artifact, its goals, and the requirements for installing and running it. Second, accepted artifacts were collected as supplementary material on the publisher's digital library for permanent and durable storage.

The aim of artifact evaluation is to enhance and deepen the information provided to the community about the research results described in the associated papers, thus improving the prospects for confirming those research results under similar or different conditions, and for creating derived results.
Artifacts are reviewed and accepted even if they cannot be made available to the public, e.g., because of confidentiality requirements or intellectual property difficulties, but it is certainly the intention that they should be made available if possible.

The Artifact Evaluation process was similar in complexity to the paper reviewing process, but not identical. Each artifact was independently evaluated by three AEC members. First, each reviewer would 'kick the tires' of the artifact in order to check that it could be reviewed at all; this ruled out corrupt artifact archive files and similar low-level problems that ought not to cause a bad review for the artifact and could easily be resolved. The approach used was to go through the 'Getting Started Guide' for the artifact, which was a mandatory part of the submission, and then get feedback from the artifact submitters to eliminate any low-level problems.

In the second phase, the reviewers evaluated the artifact and wrote the reviews. Each reviewer read the paper and wrote a summary providing a brief characterization of the context for the artifact. In the artifact evaluation, reviewers focused on four key questions: (1) Is the artifact consistent with the paper? (2) Is the artifact complete? (3) Is the artifact well documented? and (4) Is the artifact easy to reuse? The AEC members decided on acceptance or rejection, and provided the review text itself, containing characterizations of the strong and weak sides of the artifact as well as advice about potential improvements. Many updates were applied to the reviews, reflecting that the discussions gave rise to new insights and changed evaluations. During the discussions, all AEC members not conflicted with an artifact could see all of its reviews and discussions, thus allowing for a calibration of the reviews across different artifacts.

Among the 27 papers accepted at ECOOP 2014, we received 13 artifacts for evaluation. Of those, the AEC accepted 11 and rejected 2. It should be noted that a high acceptance rate is natural for the AE process, because it only included artifacts related to papers that had already been accepted for publication at the conference. The reason for having a firewall between paper acceptance and artifact evaluation was that the latter was not supposed to influence the former. As the AE process evolves, it is possible that this will change in the future, but currently a strict separation is intended, and it was enforced by postponing the entire AE process until decisions about paper acceptance had been reached.

The papers with accepted artifacts in these proceedings are marked with a rosette representing the seal of approval by the AEC, and the table of contents carries a similar but smaller mark on these papers. We were glad to note that this year all accepted artifacts were collected on SpringerLink.

The AE process is still under development, and we learned a lot from former AE organizers. In particular, we relied on the guidelines by Shriram Krishnamurthi, Matthias Hauswirth, Steve Blackburn, and Jan Vitek published in the foundational online article Artifact Evaluation for Software Conferences, available at http://www.artifact-eval.org. The Artifact Evaluation Artifact effort by Steve Blackburn and Matthias Hauswirth, available at http://evaluate.inf.usi.ch/artifacts/aea, was also an inspiration. A warm acknowledgement goes to Jan Vitek and to Shriram Krishnamurthi for many useful suggestions and comments.
We wish to thank the Programme Committee Chair Richard Jones and the Organising Chair Tobias Wrigstad for a fruitful cooperation. We acknowledge Anna Kramer from Springer for endorsing the idea of making artifacts available free of charge on the SpringerLink digital library, and Stephan Brandauer for efficiently handling the AE pages of the ECOOP 2014 website. We are also indebted to Richard van de Stadt for his help with the CyberChair conference management system, which was tailored to support this year's AE process. We warmly acknowledge the impressive effort of the AEC members: they did the hardest part of the job with dedication and enthusiasm. Finally, we deeply thank all authors for packaging and documenting their artifacts for ECOOP 2014 and for making them publicly available on SpringerLink; we believe that this is an invaluable service to the community that deserves to be commended.

We hope that readers will enjoy the published artifacts and will find them useful for their future work.

May 2014
Camil Demetrescu
Erik Ernst

Organization

ECOOP 2014 was organized by Uppsala Universitet and the University of Kent, under the auspices of AITO (Association Internationale pour les Technologies Objets) and in cooperation with ACM SIGPLAN and ACM SIGSOFT.

Organising Chair
Tobias Wrigstad, Uppsala Universitet, Sweden

Programme Chair
Richard Jones, University of Kent, UK

Workshop Chair
Nate Nystrom, University of Lugano, Switzerland

Poster and Demo Chair
Wolfgang Ahrendt, Chalmers University of Technology, Sweden

Artifact Evaluation Chairs
Camil Demetrescu, Sapienza University of Rome, Italy
Erik Ernst, Aarhus University, Denmark