Robot Localization and Map Building
Edited by Hanafiah Yussof

Published by In-Teh
Olajnica 19/2, 32000 Vukovar, Croatia

Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained in the book. After this work has been published by In-Teh, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

© 2010 In-teh
www.intechweb.org
Additional copies can be obtained from: [email protected]

First published March 2010
Printed in India

Technical Editor: Sonja Mujacic
Cover designed by Dino Smrekar

Robot Localization and Map Building, Edited by Hanafiah Yussof
p. cm.
ISBN 978-953-7619-83-1

Preface

Navigation of mobile platforms is a broad topic, covering a large spectrum of different technologies and applications. As one of the important technologies of the 21st century, autonomous navigation is currently used across a broad spectrum, ranging from basic mobile platforms operating on land, such as wheeled robots, legged robots, automated guided vehicles (AGV) and unmanned ground vehicles (UGV), to newer underwater and airborne applications, such as underwater robots, autonomous underwater vehicles (AUV), unmanned maritime vehicles (UMV), flying robots and unmanned aerial vehicles (UAV). Localization and mapping are the essence of successful navigation in mobile platform technology.
Localization is a fundamental task for achieving high levels of autonomy in robot navigation and robustness in vehicle positioning. Robot localization and mapping are commonly related to cartography, combining science, technique and computation to build a trajectory map so that reality can be modelled in ways that communicate spatial information effectively. The goal is for an autonomous robot to be able to construct (or use) a map or floor plan and to localize itself in it. This technology enables a robot platform to analyze its motion and build a map so that the robot's locomotion is traceable for humans, and to ease future motion trajectory generation in the robot control system. At present, we have robust methods for self-localization and mapping within environments that are static, structured, and of limited size. Localization and mapping within unstructured, dynamic, or large-scale environments remains largely an open research problem. Localization and mapping in outdoor and indoor environments are challenging tasks in autonomous navigation technology. The well-known Global Positioning System (GPS), based on satellite technology, may be the best choice for localization and mapping in outdoor environments. Since this technology is not applicable in indoor environments, the problem of indoor navigation is rather complex. Nevertheless, the introduction of the Simultaneous Localization and Mapping (SLAM) technique has become the key enabling technology for mobile robot navigation in indoor environments. SLAM addresses the problem of acquiring a spatial map of a mobile robot's environment while simultaneously localizing the robot relative to this model. The solution methods for the SLAM problem introduced in this book fall into three basic categories. The first is known as extended Kalman filter (EKF) SLAM. The second uses sparse nonlinear optimization methods based on graphical representations.
The final method uses nonparametric statistical filtering techniques known as particle filters. Nowadays, the application of SLAM has been extended to outdoor environments, for use in outdoor robots, autonomous vehicles and aircraft. Several interesting works related to this issue are presented in this book. The recent rapid progress in sensor and fusion technology has also benefited mobile platforms performing navigation, in terms of improving environment recognition quality and mapping accuracy. As one of the important elements in robot localization and map building, this book presents interesting reports related to sensing fusion and networks for optimizing environment recognition in autonomous navigation. This book provides a comprehensive introduction to the theories and applications related to localization, positioning and map building in mobile robot and autonomous vehicle platforms. It is organized in twenty-seven chapters. Each chapter is rich with different degrees of detail and approaches, supported by unique and current resources that make it possible for readers to explore and learn the up-to-date knowledge in robot navigation technology. Understanding the theory and principles described in this book requires a multidisciplinary background in robotics, nonlinear systems, sensor networks, network engineering, computer science, physics, etc. The book first explores SLAM problems through extended Kalman filters, sparse nonlinear graphical representations and particle filter methods. Next, the fundamental theory of motion planning and map building is presented to provide a useful platform for applying SLAM methods in real mobile systems. This is followed by the application of high-end sensor network and fusion technology, which gives useful inputs for realizing autonomous navigation in both indoor and outdoor environments. Finally, some interesting results of map building and tracking can be found in 2D, 2.5D and 3D models.
The actual motion of robots and vehicles when the proposed localization and positioning methods are deployed to the system can also be observed, together with tracking maps and trajectory analysis. Since SLAM techniques mostly deal with static environments, this book provides a good reference for understanding the interaction of moving and non-moving objects in SLAM, which still remains an open research issue in autonomous navigation technology.

Hanafiah Yussof
Nagoya University, Japan
Universiti Teknologi MARA, Malaysia

Contents

Preface V
1. Visual Localisation of quadruped walking robots 001
   Renato Samperio and Huosheng Hu
2. Ranging fusion for accurating state of the art robot localization 027
   Hamed Bastani and Hamid Mirmohammad-Sadeghi
3. Basic Extended Kalman Filter – Simultaneous Localisation and Mapping 039
   Oduetse Matsebe, Molaletsa Namoshe and Nkgatho Tlale
4. Model based Kalman Filter Mobile Robot Self-Localization 059
   Edouard Ivanjko, Andreja Kitanov and Ivan Petrović
5. Global Localization based on a Rejection Differential Evolution Filter 091
   M.L. Muñoz, L. Moreno, D. Blanco and S. Garrido
6. Reliable Localization Systems including GNSS Bias Correction 119
   Pierre Delmas, Christophe Debain, Roland Chapuis and Cédric Tessier
7. Evaluation of aligning methods for landmark-based maps in visual SLAM 133
   Mónica Ballesta, Óscar Reinoso, Arturo Gil, Luis Payá and Miguel Juliá
8. Key Elements for Motion Planning Algorithms 151
   Antonio Benitez, Ignacio Huitzil, Daniel Vallejo, Jorge de la Calleja and Ma. Auxilio Medina
9. Optimum Biped Trajectory Planning for Humanoid Robot Navigation in Unseen Environment 175
   Hanafiah Yussof and Masahiro Ohka
10. Multi-Robot Cooperative Sensing and Localization 207
    Kai-Tai Song, Chi-Yi Tsai and Cheng-Hsien Chiu Huang
11. Filtering Algorithm for Reliable Localization of Mobile Robot in Multi-Sensor Environment 227
    Yong-Shik Kim, Jae Hoon Lee, Bong Keun Kim, Hyun Min Do and Akihisa Ohya
12. Consistent Map Building Based on Sensor Fusion for Indoor Service Robot 239
    Ren C. Luo and Chun C. Lai
13. Mobile Robot Localization and Map Building for a Nonholonomic Mobile Robot 253
    Songmin Jia and Akira Yasuda
14. Robust Global Urban Localization Based on Road Maps 267
    Jose Guivant, Mark Whitty and Alicia Robledo
15. Object Localization using Stereo Vision 285
    Sai Krishna Vuppala
16. Vision based Systems for Localization in Service Robots 309
    Paulraj M.P. and Hema C.R.
17. Floor texture visual servo using multiple cameras for mobile robot localization 323
    Takeshi Matsumoto, David Powers and Nasser Asgari
18. Omni-directional vision sensor for mobile robot navigation based on particle filter 349
    Zuoliang Cao, Yanbin Li and Shenghua Ye
19. Visual Odometry and mapping for underwater Autonomous Vehicles 365
    Silvia Botelho, Gabriel Oliveira, Paulo Drews, Mônica Figueiredo and Celina Haffele
20. A Daisy-Chaining Visual Servoing Approach with Applications in Tracking, Localization, and Mapping 383
    S. S. Mehta, W. E. Dixon, G. Hu and N. Gans
21. Visual Based Localization of a Legged Robot with a topological representation 409
    Francisco Martín, Vicente Matellán, José María Cañas and Carlos Agüero
22. Mobile Robot Positioning Based on ZigBee Wireless Sensor Networks and Vision Sensor 423
    Wang Hongbo
23. A WSNs-based Approach and System for Mobile Robot Navigation 445
    Huawei Liang, Tao Mei and Max Q.-H. Meng
24. Real-Time Wireless Location and Tracking System with Motion Pattern Detection 467
    Pedro Abreu, Vasco Vinhas, Pedro Mendes, Luís Paulo Reis and Júlio Garganta
25. Sound Localization for Robot Navigation 493
    Jie Huang
26. Objects Localization and Differentiation Using Ultrasonic Sensors 521
    Bogdan Kreczmer
27. Heading Measurements for Indoor Mobile Robots with Minimized Drift using a MEMS Gyroscopes 545
    Sung Kyung Hong and Young-sun Ryuh
28. Methods for Wheel Slip and Sinkage Estimation in Mobile Robots 561
    Giulio Reina

1. Visual Localisation of quadruped walking robots

Renato Samperio and Huosheng Hu
School of Computer Science and Electronic Engineering, University of Essex
United Kingdom

1. Introduction

Recently, several solutions to the robot localisation problem have been proposed in the scientific community. In this chapter we present the localisation of a visually guided quadruped walking robot in a dynamic environment. We investigate the quality of robot localisation and landmark detection, in which robots perform the RoboCup competition (Kitano et al., 1997). The first part presents an algorithm to determine any entity of interest in a global reference frame, where the robot needs to locate landmarks within its surroundings. In the second part, a fast and hybrid localisation method is deployed to explore the characteristics of the proposed localisation method, such as processing time, convergence and accuracy.

In general, visual localisation of legged robots can be achieved by using artificial and natural landmarks. The landmark modelling problem has already been investigated using predefined landmark matching and real-time landmark learning strategies, as in (Samperio & Hu, 2010). Also, by following the pre-attentive and attentive stages of the previously presented work of (Quoc et al., 2004), we propose a landmark model for describing the environment with "interesting" features, as in (Luke et al., 2005), and for measuring the quality of landmark description and selection over time, as shown in (Watman et al., 2004). Specifically, we implement visual detection and matching phases of a pre-defined landmark model, as in (Hayet et al., 2002) and (Sung et al., 1999), and for real-time recognised landmarks in the frequency domain (Maosen et al., 2005), where they are addressed by a similarity evaluation process presented in (Yoon & Kweon, 2001).
Furthermore, we have evaluated the performance of the proposed localisation methods, Fuzzy-Markov (FM), Extended Kalman Filter (EKF) and a combined Fuzzy-Markov-Kalman (FM-EKF) solution, as in (Samperio et al., 2007) and (Hatice et al., 2006).

The proposed hybrid method integrates probabilistic multi-hypothesis and grid-based maps with EKF-based techniques. As presented in (Kristensen & Jensfelt, 2003) and (Gutmann et al., 1998), some methodologies require extensive computation but offer a reliable positioning system. By incorporating a Markov-based method into the localisation process (Gutmann, 2002), EKF positioning can converge faster with an inaccurate grid observation. Also, Markov-based techniques and grid-based maps (Fox et al., 1998) are classic approaches to robot localisation, but their computational cost is huge when the grid size in a map is small, as noted in (Duckett & Nehmzow, 2000) and (Jensfelt et al., 2000) for a high-resolution solution. Even though the problem has been partially solved by the Monte Carlo localisation (MCL) technique (Fox et al., 1999), (Thrun et al., 2000) and (Thrun et al., 2001), it still has difficulties handling environmental changes (Tanaka et al., 2004). In general, EKF maintains a continuous estimation of the robot position, but cannot manage multi-hypothesis estimations, as in (Baltzakis & Trahanias, 2002).

Moreover, traditional EKF localisation techniques are computationally efficient, but they may also fail with quadruped walking robots, which present poor odometry in situations such as leg slippage and loss of balance. Furthermore, we propose a hybrid localisation method to eliminate inconsistencies and fuse inaccurate odometry data with low-cost visual data. The proposed FM-EKF localisation algorithm makes use of a fuzzy cell to speed up convergence and to maintain an efficient localisation. Subsequently, the performance of the proposed method was tested in three experimental comparisons: (i) simple movements along the pitch, (ii) localising and playing combined behaviours and (iii) kidnapping the robot.

The rest of the chapter is organised as follows.
Following the brief introduction of Section 1, Section 2 describes the proposed observer module as an updating process of a Bayesian localisation method. Also, robot motion and measurement models are presented in this section for real-time landmark detection. Section 3 investigates the proposed localisation methods. Section 4 presents the system architecture. Some experimental results on landmark modelling and localisation are presented in Section 5 to show the feasibility and performance of the proposed localisation methods. Finally, a brief conclusion is given in Section 6.

2. Observer design

Fig. 1. The proposed motion model for the Aibo walking robot

This section describes a robot observer model for processing motion and measurement phases. These phases, also known as Predict and Update, involve a state estimation in a time sequence for robot localisation. Additionally, at each phase the state is updated by sensing information and modelling noise for each projected state.

2.1 Motion Model

The state-space process requires a state vector as processing and positioning units in an observer design problem. The state vector contains three variables for robot localisation, i.e., 2D position (x, y) and orientation (θ). Additionally, the prediction phase incorporates noise from robot odometry, as presented below:

\begin{bmatrix} x_t \\ y_t \\ \theta_t \end{bmatrix} =
\begin{bmatrix} x_{t-1} \\ y_{t-1} \\ \theta_{t-1} \end{bmatrix} +
\begin{bmatrix}
(u_t^{lin} + w_t^{lin})\cos\theta_{t-1} + (u_t^{lat} + w_t^{lat})\sin\theta_{t-1} \\
(u_t^{lin} + w_t^{lin})\sin\theta_{t-1} - (u_t^{lat} + w_t^{lat})\cos\theta_{t-1} \\
u_t^{rot} + w_t^{rot}
\end{bmatrix} \quad (4.9)

where u^{lat}, u^{lin} and u^{rot} are the lateral, linear and rotational components of odometry, and w^{lat}, w^{lin} and w^{rot} are the lateral, linear and rotational error components of odometry. Also, t−1 refers to the time of the previous time step and t to the time of the current step.

In general, state estimation is a weighted combination of noisy states using both prior and posterior estimations. Likewise, assuming that v is the measurement noise and w is the process noise, the expected values of the measurement noise covariance matrix R and the process noise covariance matrix Q are expressed separately as in the following equations:

R = E[v v^T] \quad (4.10)

Q = E[w w^T] \quad (4.11)

The measurement noise in matrix R represents sensor errors, and the Q matrix is also a confidence indicator for the current prediction, which increases or decreases the state uncertainty. An odometry motion model, u_{t-1}, is adopted as shown in Figure 1. Moreover, Table 1 describes all variables for three-dimensional (linear, lateral and rotational) odometry information, where (\hat{x}, \hat{y}) are the estimated values and (x, y) the actual states.

According to the empirical experimental data, the odometry system presents a deviation of 30% on average, as shown in Equation (4.12). Therefore, by applying a transformation matrix W_t from Equation (4.13), noise can be addressed as robot uncertainty, where θ points to the robot heading.

Q_t = \begin{bmatrix}
(0.3\, u_t^{lin})^2 & 0 & 0 \\
0 & (0.3\, u_t^{lat})^2 & 0 \\
0 & 0 & \left(0.3\, u_t^{rot} + \sqrt{(u_t^{lin})^2 + (u_t^{lat})^2}\right)^2
\end{bmatrix} \quad (4.12)

W_t = f_w = \begin{bmatrix}
\cos\theta_{t-1} & \sin\theta_{t-1} & 0 \\
\sin\theta_{t-1} & -\cos\theta_{t-1} & 0 \\
0 & 0 & 1
\end{bmatrix} \quad (4.13)

2.2 Measurement Model

In order to relate the robot to its surroundings, we make use of a landmark representation. The landmarks in the robot environment require a notational representation of a measured vector f_t^i for each i-th feature, as described in the following equation:

f(z_t) = \{f_t^1, f_t^2, \ldots\} = \left\{
\begin{bmatrix} r_t^1 \\ b_t^1 \\ s_t^1 \end{bmatrix},
\begin{bmatrix} r_t^2 \\ b_t^2 \\ s_t^2 \end{bmatrix}, \ldots \right\} \quad (4.14)

where landmarks are detected by an onboard active camera in terms of range r_t^i, bearing b_t^i and a signature s_t^i for identifying each landmark. A landmark measurement model is defined by a feature-based map m, which consists of a list of signatures and coordinate locations as follows:

m = \{m_1, m_2, \ldots\} = \{(m_{1,x}, m_{1,y}), (m_{2,x}, m_{2,y}), \ldots\} \quad (4.15)
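To make the observer design concrete, the odometry motion model of Equation (4.9), the process noise covariance of Equation (4.12) and the range-bearing-signature measurement of Equation (4.14) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' Aibo implementation; the function names are hypothetical, the noise terms w are omitted from the deterministic prediction for clarity, and NumPy is assumed to be available.

```python
import numpy as np

def predict_pose(pose, u_lin, u_lat, u_rot):
    """Odometry motion model of Eq. (4.9): propagate the pose
    (x, y, theta) using linear, lateral and rotational odometry
    (noise terms omitted in this deterministic sketch)."""
    x, y, theta = pose
    x_new = x + u_lin * np.cos(theta) + u_lat * np.sin(theta)
    y_new = y + u_lin * np.sin(theta) - u_lat * np.cos(theta)
    theta_new = theta + u_rot
    return np.array([x_new, y_new, theta_new])

def odometry_noise_cov(u_lin, u_lat, u_rot):
    """Process noise covariance Q_t of Eq. (4.12): a 30% average
    deviation on each odometry component."""
    return np.diag([
        (0.3 * u_lin) ** 2,
        (0.3 * u_lat) ** 2,
        (0.3 * u_rot + np.hypot(u_lin, u_lat)) ** 2,
    ])

def measure_landmark(pose, landmark, signature):
    """Feature vector f_t^i of Eq. (4.14): range, bearing and
    signature of one landmark (m_x, m_y) from the map of Eq. (4.15)."""
    x, y, theta = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    r = np.hypot(dx, dy)            # range to the landmark
    b = np.arctan2(dy, dx) - theta  # bearing relative to robot heading
    return np.array([r, b, signature])
```

For example, a robot at the origin facing along the x-axis that commands one unit of linear odometry ends up at (1, 0, 0), and a landmark at (3, 4) is then observed at range 5 with a positive bearing, matching the geometry of Figure 1.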