Semantic Mapping using Virtual Sensors and Fusion of Aerial Images with Sensor Data from a Ground Vehicle

Örebro Studies in Technology 30

Martin Persson

© Martin Persson, 2008

Title: Semantic Mapping using Virtual Sensors and Fusion of Aerial Images with Sensor Data from a Ground Vehicle
Publisher: Örebro University 2008, www.publications.oru.se
Editor: Maria Alsbjer ([email protected])
Printer: Intellecta DocuSys, V Frölunda 04/2008
ISSN 1650-8580
ISBN 978-91-7668-593-8

Abstract

Persson, Martin (2008). Semantic Mapping using Virtual Sensors and Fusion of Aerial Images with Sensor Data from a Ground Vehicle. Örebro Studies in Technology 30, 170 pp.

In this thesis, semantic mapping is understood to be the process of putting a tag or label on objects or regions in a map. This label should be interpretable by and have a meaning for a human. The use of semantic information has several application areas in mobile robotics. The largest area is human-robot interaction, where semantics is necessary for a common understanding between robot and human of the operational environment. Other areas include localization through connection of human spatial concepts to particular locations, improving 3D models of indoor and outdoor environments, and model validation.

This thesis investigates the extraction of semantic information for mobile robots in outdoor environments and the use of semantic information to link ground-level occupancy maps and aerial images. The thesis concentrates on three related issues: i) recognition of human spatial concepts in a scene, ii) the ability to incorporate semantic knowledge in a map, and iii) the ability to connect information collected by a mobile robot with information extracted from an aerial image.

The first issue deals with a vision-based virtual sensor for classification of views (images). The images are fed into a set of learned virtual sensors, where each virtual sensor is trained for classification of a particular type of human spatial concept. The virtual sensors are evaluated with images from both ordinary cameras and an omni-directional camera, showing robust properties that can cope with variations such as changing seasons.

In the second part a probabilistic semantic map is computed based on an occupancy grid map and the output from a virtual sensor. A local semantic map is built around the robot for each position where images have been acquired. This map is a grid map augmented with semantic information in the form of probabilities that the occupied grid cells belong to a particular class. The local maps are fused into a global probabilistic semantic map covering the area along the trajectory of the mobile robot.

In the third part information extracted from an aerial image is used to improve the mapping process. Region and object boundaries taken from the probabilistic semantic map are used to initialize segmentation of the aerial image. Algorithms for both local segmentation related to the borders and global segmentation of the entire aerial image, exemplified with the two classes ground and buildings, are presented. Ground-level semantic information allows focusing the segmentation of the aerial image on desired classes, and generating a semantic map that covers a larger area than can be built using only the onboard sensors.

Keywords: semantic mapping, aerial image, mobile robot, supervised learning, semi-supervised learning.
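To make the virtual-sensor idea above concrete: the thesis trains its classifiers with AdaBoost over image features (see Sections 4.2-4.3 in the contents below), so a virtual sensor can be pictured as a weighted vote of weak classifiers, each thresholding one feature. The following is a minimal sketch under that reading; the decision-stump weak learner, the class "building", and all parameter values are illustrative assumptions, not the thesis's trained sensors.

```python
# Minimal sketch of a virtual sensor as an AdaBoost-style strong classifier.
# Decision stumps, the feature-vector layout, and all parameters are
# illustrative assumptions, not the trained sensors from the thesis.

class DecisionStump:
    """Weak classifier: thresholds a single feature of the feature vector."""
    def __init__(self, feature_index, threshold, polarity=1):
        self.feature_index = feature_index
        self.threshold = threshold
        self.polarity = polarity  # +1 or -1, direction of the inequality

    def predict(self, features):
        fires = features[self.feature_index] >= self.threshold
        return self.polarity if fires else -self.polarity


class VirtualSensor:
    """Strong classifier: sign of the alpha-weighted vote of the stumps."""
    def __init__(self, stumps, alphas):
        self.stumps = stumps  # weak classifiers selected during training
        self.alphas = alphas  # their weights, learned by AdaBoost

    def classify(self, features):
        score = sum(a * s.predict(features)
                    for a, s in zip(self.alphas, self.stumps))
        return 1 if score >= 0 else -1  # +1: concept present (e.g. "building")


# Hypothetical usage: the features could be, say, edge-orientation statistics.
sensor = VirtualSensor(
    stumps=[DecisionStump(0, 0.3), DecisionStump(2, 0.6, polarity=-1)],
    alphas=[0.9, 0.4],
)
print(sensor.classify([0.5, 0.1, 0.2]))  # -> 1 (concept judged present)
```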
Acknowledgements

If it had not been for Stefan Forslund, this thesis would never have been written. When Stefan, my former superior at Saab, gets an idea he believes in, he usually finds a way to see it through. He persuaded our management to start financing my Ph.D. studies. I am therefore deeply indebted to you, Stefan, for believing in me and making all of this possible.

I would like to thank my two supervisors, Achim Lilienthal and Tom Duckett, for their guidance and encouragement throughout this research project. You both have the valuable experience needed to pinpoint where performed work can be improved in order to reach a higher standard.

Most of the data used in this work have been collected using the mobile robot Tjorven, the Learning Systems Lab's most valuable partner. Of the members of the Learning Systems Lab, I would particularly like to thank Henrik Andreasson, Christoffer Valgren, and Martin Magnusson for helping with data collection and keeping Tjorven up and running. Special thanks to: Henrik, for support with Tjorven and Player, and for reading this thesis; Christoffer, for providing implementations of the flood fill algorithm and the transformation of omni-images to planar images; and Pär Buschka, who knew everything worth knowing about Rasmus, the outdoor mobile robot I first used.

The stay at AASS, Centre of Applied Autonomous Sensor Systems, has been both educational and pleasant. Present and former members of AASS, you'll always be on my mind.

This work could not have been performed without access to aerial images. My appreciation to Jan Eriksson at the Örebro Community Planning Office, and Lena Wahlgren at the Karlskoga ditto, for providing the high quality aerial images used in this research project. And thanks to Håkan Wissman for the implementation of the coordinate transformations that connected the GPS positions to the aerial images. The financial support from FMV (Swedish Defence Material Administration), Explora Futurum and the Graduate School of Modelling and Simulation is gratefully acknowledged. I would also like to express gratitude to my employer, Saab, for supporting my part-time Ph.D. studies.

Finally, to my beloved family, who has coped with a distracted husband and father for the last years, thanks for all your love and support.

Contents

I Preliminaries 13

1 Introduction 15
  1.1 Motivation 15
  1.2 Objectives 18
  1.3 System Overview 19
  1.4 Main Contributions 20
  1.5 Thesis Outline 20
  1.6 Publications 21

2 Experimental Equipment 23
  2.1 Navigation Sensors for Mobile Robots 23
  2.2 Mobile Robot Tjorven 25
  2.3 Mobile Robot Rasmus 27
  2.4 Handheld Cameras 28
  2.5 Aerial Images 29

II Ground-Based Semantic Mapping 31

3 Semantic Mapping 33
  3.1 Mobile Robot Mapping 33
    3.1.1 Metric Maps 34
    3.1.2 Topological Maps 34
    3.1.3 Hybrid Maps 35
  3.2 Indoor Semantic Mapping 35
    3.2.1 Object Labelling 35
    3.2.2 Space Labelling 37
    3.2.3 Hierarchies for Semantic Mapping 38
  3.3 Outdoor Semantic Mapping 40
    3.3.1 3D Modelling of Urban Environments 41
  3.4 Applications Using Semantic Information 43
  3.5 Summary and Conclusions 45

4 Virtual Sensor for Semantic Labelling 47
  4.1 Introduction 47
    4.1.1 Outline 47
  4.2 The Feature Set 48
    4.2.1 Edge Orientation 50
    4.2.2 Edge Combinations 52
    4.2.3 Gray Levels 53
    4.2.4 Camera Invariance 54
    4.2.5 Assumptions 57
  4.3 AdaBoost 58
    4.3.1 Weak Classifiers 59
  4.4 Bayes Classifier 60
  4.5 Evaluation of a Virtual Sensor for Buildings 60
    4.5.1 Image Sets 61
    4.5.2 Test Description 61
    4.5.3 Analysis of the Training Results 62
    4.5.4 Results 63
  4.6 A Building Pointer 70
  4.7 Evaluation of a Virtual Sensor for Windows 73
    4.7.1 Image Sets and Training 73
    4.7.2 Result 74
  4.8 Evaluation of a Virtual Sensor for Trucks 76
    4.8.1 Image Sets and Training 76
    4.8.2 Result 78
  4.9 Summary and Conclusions 81

5 Probabilistic Semantic Mapping 83
  5.1 Introduction 83
    5.1.1 Outline 84
  5.2 Probabilistic Semantic Map 84
    5.2.1 Local Semantic Map 85
    5.2.2 Global Semantic Map 86
  5.3 Experiments 88
    5.3.1 Virtual Planar Cameras 88
    5.3.2 Image Datasets 90
    5.3.3 Occupancy Maps 91
    5.3.4 Used Parameters 93
  5.4 Result 95
    5.4.1 Evaluation of the Handmade Map 96
    5.4.2 Evaluation of the Laser-Based Maps 96
    5.4.3 Robustness Test 97
  5.5 Summary and Conclusions 99
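The central data structure of Chapter 5, a grid map whose occupied cells carry probabilities of belonging to a semantic class, fused over many virtual-sensor observations, can be sketched roughly as follows. The normalized per-class product used here is a generic Bayes-style update assumed for illustration; the actual local and global map construction is defined in Sections 5.2.1 and 5.2.2, and the cell key, class set, and numbers are hypothetical.

```python
# Rough sketch of fusing virtual-sensor outputs into a probabilistic
# semantic grid map (an illustrative assumption, not the thesis algorithm).
from collections import defaultdict

CLASSES = ("building", "nonbuilding")

class SemanticGridMap:
    def __init__(self):
        # cell (x, y) -> probability distribution over semantic classes,
        # initialized to a uniform prior
        self.cells = defaultdict(
            lambda: {c: 1.0 / len(CLASSES) for c in CLASSES})

    def update(self, cell, likelihoods):
        """Fuse one observation: multiply the cell's current distribution
        with the per-class likelihoods reported for it, then renormalize."""
        posterior = {c: self.cells[cell][c] * likelihoods[c] for c in CLASSES}
        total = sum(posterior.values())
        self.cells[cell] = {c: p / total for c, p in posterior.items()}


# Hypothetical usage: two observations of the same occupied cell, both
# suggesting it belongs to a building.
m = SemanticGridMap()
m.update((12, 7), {"building": 0.8, "nonbuilding": 0.2})
m.update((12, 7), {"building": 0.7, "nonbuilding": 0.3})
print(m.cells[(12, 7)])  # probability mass shifts toward "building"
```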

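For the third part described in the abstract, boundaries from the probabilistic semantic map are used to seed segmentation of the aerial image, and the acknowledgements mention a flood-fill implementation. A generic region-growing sketch under assumed names might look like this; the seed, the 4-connectivity, and the intensity-threshold similarity test are all assumptions for illustration, not the thesis's segmentation algorithms.

```python
# Generic flood-fill region growing from a seed pixel, as one might use to
# expand a ground-level building boundary into an aerial image.
from collections import deque

def flood_fill(image, seed, similar):
    """Grow a region from `seed` = (x, y), adding 4-connected pixels for
    which `similar(image[y][x])` holds. Returns the set of region pixels."""
    h, w = len(image), len(image[0])
    region, frontier = set(), deque([seed])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) in region or not (0 <= x < w and 0 <= y < h):
            continue  # already labelled, or outside the image
        if not similar(image[y][x]):
            continue  # pixel does not match the region model
        region.add((x, y))
        frontier.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return region


# Hypothetical usage: grow a bright "roof" region over a toy intensity image,
# seeded where the ground-level semantic map marked a building boundary.
img = [[200, 210, 50],
       [205, 195, 40],
       [ 60,  55, 45]]
print(flood_fill(img, seed=(0, 0), similar=lambda v: v > 150))
# -> {(0, 0), (1, 0), (0, 1), (1, 1)}
```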