Studies in Computational Intelligence 415

Editor-in-Chief: Prof. Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warsaw, Poland. E-mail: [email protected]
For further volumes: http://www.springer.com/series/7092

Francisco Fernández de Vega, José Ignacio Hidalgo Pérez, and Juan Lanchares (Eds.)

Parallel Architectures and Bioinspired Algorithms

Editors:
Francisco Fernández de Vega, Centro Universitario de Mérida, Universidad de Extremadura, Mérida, Spain
José Ignacio Hidalgo Pérez, Facultad de Informática, Universidad Complutense de Madrid, Calle del Profesor José García, Madrid, Spain
Juan Lanchares, Facultad de Informática, Universidad Complutense de Madrid, Calle del Profesor José García, Madrid, Spain

ISSN 1860-949X, e-ISSN 1860-9503
ISBN 978-3-642-28788-6, e-ISBN 978-3-642-28789-3
DOI 10.1007/978-3-642-28789-3
Springer Heidelberg New York Dordrecht London
Library of Congress Control Number: 2012933085
© Springer-Verlag Berlin Heidelberg 2012

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.springer.com)

Contents

Introduction
  Francisco Fernández de Vega, J. Ignacio Hidalgo, Juan Lanchares
Creating and Debugging Performance CUDA C
  W. B. Langdon
Optimizing Shape Design with Distributed Parallel Genetic Programming on GPUs
  Simon Harding, W. Banzhaf
Characterizing Fault-Tolerance in Evolutionary Algorithms
  Daniel Lombraña González, Juan Luis Jiménez Laredo, Francisco Fernández de Vega, Juan Julián Merelo Guervós
Comparison of Frameworks for Parallel Multiobjective Evolutionary Optimization in Dynamic Problems
  Mario Cámara, Julio Ortega, Francisco de Toro
An Empirical Study of Parallel and Distributed Particle Swarm Optimization
  Leonardo Vanneschi, Daniele Codecasa, Giancarlo Mauri
The Generalized Island Model
  Dario Izzo, Marek Ruciński, Francesco Biscani
Evolutionary Associative Memories through Genetic Programming
  Juan Villegas-Cortez, Gustavo Olague, Humberto Sossa, Carlos Avilés
Parallel Architectures for Improving the Performance of a GA Based Trading System
  Iván Contreras, J. Ignacio Hidalgo, Laura Núñez-Letamendía, Yiyi Jiang
A Knowledge-Based Operator for a Genetic Algorithm which Optimizes the Distribution of Sparse Matrix Data
  Una-May O'Reilly, Nadya Bliss, Sanjeev Mohindra, Julie Mullen, Eric Robinson
Evolutive Approaches for Variable Selection Using a Non-parametric Noise Estimator
  Alberto Guillén, Dušan Sovilj, Mark van Heeswijk, Luis Javier Herrera, Amaury Lendasse, Héctor Pomares, Ignacio Rojas
A Chemical Evolutionary Mechanism for Instantiating Service-Based Applications
  Maurizio Giordano, Claudia Di Napoli
Author Index

Introduction

Francisco Fernández de Vega, J. Ignacio Hidalgo, and Juan Lanchares

For many years, computer performance improvement was based on technological innovations that allowed the chip's transistor count to increase dramatically. Moreover, architectural progress aimed at organizing processor structure has made it possible to overcome the traditional sequential execution of programs by exploiting instruction-level parallelism. Yet the last decade has shown that Moore's Law is reaching its natural breaking point, and maintaining the rate of performance improvement by shrinking transistors will no longer be possible. The main manufacturers have thus decided to offer several processor cores in a single chip, opening the way to the multicore era. Examples are the Intel Core i3 (2 cores), i5 (4 cores) and i7 (4 cores) architectures, and the AMD Zambezi (8 cores) and Phenom II (6 cores) processors.

But this is not the only effort coming from the hardware industry. Another clear example is the Graphics Processing Unit (GPU). Initially conceived for speeding up image processing, these systems have become a standard platform for the parallel execution of applications in the last few years.
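The appeal of these many-core platforms for the algorithms discussed in this book comes from a simple property: each individual's fitness can usually be computed independently of all the others. A minimal master-worker sketch (not from the book; the fitness function, population shape and worker count are illustrative) shows the pattern:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(individual):
    # Illustrative fitness: sum of squares of the genes.
    return sum(x * x for x in individual)

def evaluate_population(population, workers=4):
    # Master-worker evaluation: each fitness call is independent,
    # so the map parallelizes with no synchronization between tasks.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

random.seed(0)
population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(8)]
scores = evaluate_population(population)
```

The same map-style decomposition underlies the GPU and grid implementations described in later chapters; only the execution substrate changes.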
On the other hand, the development of the Internet has led to the emergence of the cloud concept and cloud computing technology, which provide distributed computing and storage resources that can be effortlessly accessed through the web. A number of issues must be considered when deploying cloud applications: remote computer availability, application dependability and fault tolerance, to name but a few. Summarizing, using parallel architectures is common practice today, not only for the most complex systems but also when the simplest ones are deployed.

Bioinspired algorithms are also being influenced by this paradigm shift: research is moving from sequential implementations to parallel and distributed versions of these typically population-based techniques, which inherently exploit the parallel nature of the underlying models. Although the benefit of using structured populations was foreseen by the pioneers, several decades have been necessary for the industry to provide accessible commodities for parallel and distributed implementations (including GPUs, multi- and many-cores, clouds, etc.), thus producing a growing trend in the field. The combination of parallel architectures and bioinspired algorithms is attracting an attention that will continue to grow in the coming years.

We are thus editing this book with the goal of gathering examples of best practices when combining bioinspired algorithms with parallel architectures. Leading researchers in the field have contributed: some of the chapters summarize work that has been ongoing for several years, while others describe more recent exploratory work.
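The structured-population idea mentioned above, developed at length in the island-model chapters, can be sketched in a few lines. This is a generic illustration, not code from the book; the OneMax fitness, mutation-only evolution, island count and ring-migration interval are all arbitrary choices made for brevity:

```python
import random

def fitness(bits):
    # OneMax: count of 1-bits; the optimum is the all-ones string.
    return sum(bits)

def evolve_island(pop, rate=0.05):
    # One generation of a minimal EA: keep the best half as parents,
    # refill the population with bit-flip-mutated copies of them.
    parents = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
    children = [[b ^ (random.random() < rate) for b in p] for p in parents]
    return parents + children

def island_model(n_islands=4, pop_size=10, length=20,
                 generations=60, migrate_every=10):
    random.seed(1)
    islands = [[[random.randint(0, 1) for _ in range(length)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for gen in range(1, generations + 1):
        islands = [evolve_island(pop) for pop in islands]
        if gen % migrate_every == 0:
            # Ring migration: each island's best individual replaces
            # the worst individual of the next island in the ring.
            bests = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                worst = min(range(len(pop)), key=lambda j: fitness(pop[j]))
                pop[worst] = bests[(i - 1) % n_islands]
    return max((ind for pop in islands for ind in pop), key=fitness)

best = island_model()
```

Each island evolves mostly in isolation, so the islands can run on separate cores or machines, with only the occasional migrant crossing between them.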
The book thus offers a map of the main paths already explored and new ways towards the future.

We hope this volume will be of value both to specialists in bioinspired algorithms and in parallel and distributed computing, and to computer science students trying to understand the present and the future of parallel architectures and bioinspired algorithms.

This book is a collective effort, and we must thank all the contributing authors, whose effort and dedication have given rise to the present work. We also thank the institutions that have funded our effort: the Extremadura Government and FEDER under project GR10029, and the Spanish Ministry of Science and Technology, project TIN2011-28627-C04-03.

Last but not least, we appreciate the encouragement, support and patience offered by Professor Janusz Kacprzyk, as well as by Springer, during the editing process.

Road Map

This book is organized in chapters that show some of the best-known efforts for exploiting the parallel nature of bioinspired algorithms in combination with parallel computer architectures. The chapters are logically grouped around a number of topics: hardware, algorithms and applications. Although no explicit sections have been established, readers can follow this path, selecting those chapters that best fit their interests. On the other hand, a sequential reading will provide a general view of the field, going from hardware to software and applications. The remainder of this section includes brief summaries of each chapter.

Chapter 1. Creating and Debugging Performance CUDA C, by W. B. Langdon

General-purpose computation on graphics hardware has attracted the attention of researchers who routinely apply evolutionary algorithms to hard real-life problems. The large number of processing cores included in standard GPUs allows us to obtain large speedups when parallel applications are run on them. Nevertheless, the new model requires extra skills from programmers.
Although manufacturers provide frameworks and languages specifically devoted to programming GPGPU applications, a number of issues must be considered to properly develop parallel EAs that profit from GPUs. This chapter presents various practical ways of testing, locating and removing bugs in parallel general-purpose computation on graphics hardware (GPGPU) applications, with attention to the relationship with stochastic bioinspired techniques such as genetic programming. The author presents software engineering lessons learnt during CUDA C programming and ways to obtain high performance from nVidia GPU and Tesla cards, including examples of both successful and less successful recent applications.

Chapter 2. Optimizing Shape Design with Distributed Parallel Genetic Programming on GPUs, by Simon Harding and W. Banzhaf

This chapter applies a special version of Cartesian Genetic Programming to optimize shape design. Optimized shape design is used for applications such as wing design in aircraft, hull design in ships and, more generally, rotor optimization in turbomachinery such as that of aircraft, ships and wind turbines. By applying self-modifying Cartesian Genetic Programming (SMCGP), which is well suited for distributed parallel systems, the authors evolve shapes with specific criteria, such as minimized drag or maximized lift. GPUs are employed for fitness evaluation, using an implementation of a fluid dynamics solver.

Chapter 3. Characterizing Fault-Tolerance in Genetic Algorithms and Programming, by D. Lombraña González, Juan L. Laredo, F. Fernández de Vega and J. J. Merelo

Genetic Algorithms (GAs) and Genetic Programming (GP) are sub-classes of Evolutionary Algorithms (EAs). In both classes, when complexity is a key problem, a large amount of computing resources, and time, is required. In order to reduce execution time, both GAs and GP can benefit from parallel and distributed computing infrastructures. One of the most popular distributed infrastructures is the Desktop Grid System (DGS). The term desktop grid refers to distributed networks of heterogeneous single systems that contribute idle processor cycles for computing. In DGSs, computers join the system, contribute some resources and leave afterwards, causing a collective effect known as churn. Churn is an inherent property of DGSs and has to be taken into account when designing applications, as these interruptions (computer powered off, busy CPUs, etc.) are interpreted by the application as failures. To cope with failures, researchers have studied different mechanisms to circumvent them or to restore the system once a failure occurs. These techniques are known as fault-tolerance mechanisms and ensure that an application behaves in a well-defined manner when a failure occurs. This chapter summarizes the results obtained for parallel GAs and GP, presenting a study of fault tolerance in PGAs and PGP in order to determine whether it is feasible to run them on parallel or distributed systems without having to implement any fault-tolerance mechanism.

Chapter 4. Comparison of Frameworks for Parallel Multiobjective Evolutionary Optimization in Dynamic Problems, by Mario Cámara, Julio Ortega and Francisco de Toro

The main feature of dynamic multi-objective optimization (DMO) problems is that the optimization is performed in a dynamic environment, so the cost function and constraints are time-dependent. The main interest in this kind of problem lies in the wide range of real-world applications with socio-economic relevance that share this feature. In this chapter the authors present two frameworks for dynamic multi-objective evolutionary algorithms (MOEAs).
The first is a generic master-worker framework called parallel dynamic MOEA (pdMOEA), which allows the execution of the distributed fitness computation model and the concurrent execution model. The second, a fully distributed framework called pdMOEA+, is an improvement that avoids the bottleneck caused by the master processor in pdMOEA. Both approaches operate under time constraints for reaching the solutions. These frameworks are used to compare the performance of four algorithms: SFGA, SFGA2, SPEA2 and NSGA-II. The authors also propose a model to interpret the advantages of parallel processing in MOEAs.

Chapter 5. An Empirical Study of Parallel and Distributed Particle Swarm Optimization, by Leonardo Vanneschi, Daniele Codecasa and Giancarlo Mauri

Particle swarm optimization (PSO) is a bioinspired heuristic based on the social behavior of flocks of birds or shoals of fish. Its features include easy implementation, intrinsic parallelism and few parameters to adjust, which is why researchers have recently been focusing their interest on these algorithms. In this chapter the authors present four parallel and distributed PSO methods that are variants of multi-swarm and attractive/repulsive PSO, with different features added in order to study the algorithms' performance. In the Particle Swarm Evolver (PSE), the authors use a genetic algorithm in which the individuals are swarms. Next, the authors present the Repulsive PSE (RPSE), which adds a repulsive factor. The third proposal is the Multi-swarm PSO (MPSO), which uses an island model where the swarms interact by means of particle migration at regular time steps. Finally, a variation of MPSO that incorporates a repulsive component is named the Multi-swarm Repulsive PSO (MRPSO). To study the different algorithms, the authors used a set of theoretical hand-tailored test functions and five complex real-life applications, showing that the best proposal is MRPSO.

Chapter 6.
The Generalized Island Model, by Dario Izzo, Marek Ruciński and Francesco Biscani

In this chapter the authors introduce the generalized island model, studying its effects on several well-known optimization metaheuristics: Differential Evolution, Genetic Algorithms, Harmony Search, Artificial Bee Colony, Particle Swarm Optimization and Simulated Annealing. A number of benchmark problems are employed to compare multi-start schemes with migration. A heterogeneous model is also analyzed, which includes several "archipelagos" with different optimization algorithms on different islands.

Chapter 7. Genetic Programming for the Evolution of Associative Memories, by J. Villegas-Cortez, G. Olague, H. Sossa and C. Avilés

Natural systems apply learning during the process of adaptation as a way of developing strategies that help them succeed in highly complex scenarios. In particular, the plans developed by natural systems are seen as a fundamental aspect of survival. Today, there is huge interest in attempting to replicate some of their characteristics by imitating the processes of evolution and genetics in artificial systems, using the well-known ideas of evolutionary computing. For example, some models of adaptive learning processes are based on the emulation of neural networks that are further evolved by the application of an evolutionary algorithm. This chapter presents the evolution of Associative Memories (AMs), which prove useful for addressing learning tasks in pattern recognition problems. AMs can be considered part of artificial neural networks (ANNs), although their mathematical formulation allows them to reach specific goals. A sequential implementation has been applied; nevertheless, the underlying coevolutionary approach will make it easy to benefit from parallel architectures, thus emulating the natural parallel behavior of associative memories.

Chapter 8.
Parallel Architectures for Improving the Performance of a GA Based Trading System, by Iván Contreras, J. Ignacio Hidalgo, Laura Núñez-Letamendía and Yiyi Jiang

The use of automatic trading systems is becoming more frequent, as they offer a high potential for predicting market movements. The use of computer systems makes it possible to manage the huge amount of data related to the factors that affect investment performance (macroeconomic variables, company information, industry indicators, market variables, etc.), while avoiding the psychological reactions of traders when investing in financial markets. Movements in stock markets are continuous throughout each day, so trading systems must be supported by ever more powerful engines, since the amount of data to process grows while the response time required to support operations shortens. This chapter explains two parallel implementations of a trading system based on evolutionary computation: a grid volunteer system based on BOINC, and an implementation using a Graphics Processing Unit (GPU) in combination with a CPU.

Chapter 9. A Knowledge-Based Operator for a Genetic Algorithm which Optimizes the Distribution of Sparse Matrix Data, by Una-May O'Reilly, Nadya Bliss, Sanjeev Mohindra, Julie Mullen and Eric Robinson

A framework for optimizing the distributed performance of sparse matrix computations is presented in this chapter. An optimal distribution of
