UNIVERSITAT POLITÈCNICA DE CATALUNYA
Doctoral Programme: Automatic Control, Robotics and Computer Vision

Ph.D. Thesis

VISUAL GUIDANCE OF UNMANNED AERIAL MANIPULATORS

Angel Santamaria-Navarro
Advisor: Juan Andrade-Cetto
March 2017

Visual Guidance of Unmanned Aerial Manipulators
by Angel Santamaria-Navarro

A thesis submitted to the Universitat Politècnica de Catalunya for the degree of Doctor of Philosophy.
Doctoral Programme: Automatic Control, Robotics and Computer Vision
This thesis has been completed at: Institut de Robòtica i Informàtica Industrial, CSIC-UPC
Advisor: Juan Andrade-Cetto
Dissertation Committee: Prof. Aníbal Ollero Baturone (Chairman), Prof. Alberto Sanfeliu Cortés, Prof. Gianluca Antonelli
The latest version of this document and a video of the thesis presentation are available through http://www.angelsantamaria.eu.

Abstract

The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics was mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms.

The main objective of this thesis is to formalize the concept of aerial manipulator and present guidance methods, using visual information, to provide them with autonomous functionalities.

A key competence to control an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods with on-board sensors, exported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap for vehicles where size, load, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight and high-rate sensors.

Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundancy in degrees of freedom, they offer the possibility to accomplish not only mobility requirements but also other tasks, simultaneously and hierarchically, prioritized depending on their impact on the overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, and improve platform stability or arm operability.

The main contributions of this research work are threefold: (1) a localization technique that allows autonomous navigation, specifically designed for aerial platforms with size, load and computational burden restrictions; (2) control commands to drive the vehicle using visual information (visual servo); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided.

All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
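Chapter 2 develops the proposed estimator as an error-state Kalman filter (ESKF) fed by an IMU and lightweight exteroceptive sensors. As a rough illustration of the ESKF structure only (nominal-state integration at IMU rate, error-covariance propagation, injection of the observed error, and reset), the following is a minimal single-axis Python sketch with a biased accelerometer and a range sensor. It is a hypothetical simplification, not the thesis implementation, which estimates full position, orientation, velocity and acceleration:

```python
# Minimal single-axis ESKF sketch (illustration only, not the thesis code):
# a nominal state [p, v, ab] integrated from accelerometer readings at high
# rate, with an error state corrected at low rate by a range measurement.
import numpy as np

class Eskf1D:
    def __init__(self):
        self.x = np.zeros(3)              # nominal state: [pos, vel, accel bias]
        self.P = np.eye(3) * 0.1          # error-state covariance
        self.Qi = np.diag([1e-3, 1e-6])   # accel noise, bias random walk

    def predict(self, a_meas, dt):
        p, v, ab = self.x
        a = a_meas - ab                   # bias-compensated acceleration
        # Integrate the nominal state.
        self.x = np.array([p + v*dt + 0.5*a*dt**2, v + a*dt, ab])
        # Propagate the error-state covariance, error = [dp, dv, dab].
        F = np.array([[1.0, dt, -0.5*dt**2],
                      [0.0, 1.0, -dt],
                      [0.0, 0.0, 1.0]])
        G = np.array([[0.5*dt**2, 0.0],   # maps [accel noise, bias-walk noise]
                      [dt,        0.0],   # into the error state
                      [0.0,       dt]])
        self.P = F @ self.P @ F.T + G @ self.Qi @ G.T

    def correct(self, r_meas, r_var):
        H = np.array([[1.0, 0.0, 0.0]])   # range sensor observes position only
        z = r_meas - self.x[0]            # innovation
        S = H @ self.P @ H.T + r_var
        K = self.P @ H.T / S              # Kalman gain (S is 1x1)
        dx = (K * z).ravel()              # observed error
        self.x += dx                      # inject the error into the nominal state
        self.P = (np.eye(3) - K @ H) @ self.P
        # ESKF reset: the error mean returns to zero; for this linear error
        # state the reset Jacobian is identity, so P needs no further change.
```

In use, predict() would run at the IMU rate (hundreds of Hz) and correct() whenever a range reading arrives, which is what keeps the computational load compatible with small platforms.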
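For contribution (2), Chapter 3 derives position-based, image-based and uncalibrated image-based visual servo laws. As context, here is a textbook sketch of the classical IBVS law for point features, assuming calibrated normalized image coordinates and known point depths; the uncalibrated variant of Chapter 3 removes precisely these assumptions by estimating the image Jacobian online:

```python
# Classical image-based visual servo for point features:
# camera twist v = -lambda * L^+ * (s - s*), with L the interaction
# (image) Jacobian stacked over the observed points. Textbook form,
# shown for context only.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0/Z, 0.0, x/Z, x*y, -(1.0 + x**2), y],
        [0.0, -1.0/Z, y/Z, 1.0 + y**2, -x*y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """6-DoF camera twist driving image features toward their targets."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()  # feature error s - s*
    return -lam * np.linalg.pinv(L) @ e
```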
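Contribution (3) builds on task-priority redundancy resolution: lower-priority task velocities are projected into the null space of the higher-priority task Jacobian so they cannot disturb it. The following generic two-level sketch illustrates the null-space projection idea under a damped least-squares inverse; it is not the specific hierarchical control law derived in Chapter 4, which handles algorithmic task singularities and dynamic priority changes:

```python
# Generic two-level task-priority resolution via null-space projection
# (illustrative sketch; Chapter 4 derives the thesis-specific variants).
import numpy as np

def damped_pinv(J, lam=1e-2):
    """Damped least-squares pseudoinverse, robust near singularities."""
    return J.T @ np.linalg.inv(J @ J.T + lam**2 * np.eye(J.shape[0]))

def task_priority_velocities(J1, e1, J2, e2, k1=1.0, k2=1.0):
    """Velocities executing task 1, with task 2 confined to its null space."""
    J1p = damped_pinv(J1)
    q1 = J1p @ (k1 * e1)                  # primary task velocities
    N1 = np.eye(J1.shape[1]) - J1p @ J1   # null-space projector of task 1
    # Secondary task, compensated for the motion already commanded by task 1
    # and projected so it cannot interfere with the primary task.
    q2 = N1 @ damped_pinv(J2 @ N1) @ (k2 * e2 - J2 @ q1)
    return q1 + q2
```

With e1 a visual-servo error for the end effector and e2 a secondary objective such as centering the arm's weight or improving manipulability, motion for the lower-priority task never disturbs the task above it, which is the behavior the abstract refers to as hierarchical prioritization.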
Resum

The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Even so, it is only in recent years that research in aerial robotics has become mature enough to begin allowing active interactions with the environment. The robots built for this purpose are called aerial manipulators and usually combine a multirotor platform and a robotic arm. The objective of this thesis is to formalize the concept of aerial manipulator and to present guidance methods, using visual information, to provide this type of vehicle with autonomy.

A key competence to control an aerial manipulator is the ability to localize itself in the environment. Traditionally, this localization has required external sensor infrastructure (GPS, IR cameras, etc.), thus limiting real applications. In contrast, localization systems exported from other robotics fields and based on on-board sensors, such as simultaneous localization and mapping (SLAM) methods, require great computational capacity, a characteristic that heavily penalizes vehicles where size, weight and power consumption are major restrictions. In this sense, this thesis proposes a method to estimate the robot state (position, velocity, orientation and acceleration) from on-board, low-cost sensors with low computational cost that provide measurements at high frequency.

Due to the physical complexity of these robots, the use of advanced control techniques is necessary. Thanks to their redundancy in degrees of freedom, these robots offer the possibility of meeting the mobility requirements and, simultaneously, performing other tasks in a hierarchical manner, ordering them according to their impact on mission accomplishment. In this work these control laws are presented, together with the description of tasks to visually guide the vehicle, guarantee the integrity of the robot during flight, improve the stability of the vehicle or increase the manipulability of the arm.

This thesis focuses on three fundamental aspects: (1) presenting a localization technique to provide the robot with autonomy, a method specially designed for platforms with computational capacity, size and weight restrictions; (2) obtaining the control commands needed to guide the vehicle from visual information; and (3) integrating these actions into a hierarchical control structure that uses the redundancy of the robot to fulfill other tasks during flight. These tasks are specific to aerial manipulators and are also defined in this document.

All the techniques presented in this thesis have been evaluated experimentally with real robotic platforms.

Acknowledgements

I would like to thank my supervisor, Juan Andrade-Cetto, for his valuable guidance and support, and Joan Solà, whose help and advice has shaped many parts of this thesis. I am grateful and indebted to Prof. Vijay Kumar and Dr. Giuseppe Loianno (University of Pennsylvania), who welcomed me to work with them during several months in what became a very fruitful research stay. I also wish to express my gratitude to the members of the dissertation committee, whose comments have helped to polish some of the ideas we wanted to put across. I very much appreciate the advice, assistance and encouragement that I received from IRI colleagues during my work.
Finally, I would also like to thank my family for the support they have provided me throughout my entire life, and in particular Núria, without whom this thesis would not have been completed.

This work has been partially supported by an FPI-UPC grant (87-915) of the Universitat Politècnica de Catalunya under the EU project ARCAS (FP7-ICT-287617), and also funded from the EU project AEROARMS (H2020-ICT-2014-1-644271).

Contents

1 Introduction
  1.1 Motivation
  1.2 Objectives and scope
  1.3 Thesis overview and reading guide
2 Robot State Estimation
  2.1 Introduction
  2.2 Related work
  2.3 Basic concepts and notation
    2.3.1 Quaternion conventions and properties
  2.4 Observation models
    2.4.1 Inertial measurement unit (IMU)
    2.4.2 Smart camera: sonar range with 2D linear velocity
    2.4.3 Smart camera: 2D flow
    2.4.4 Infrared (IR) range sensor
  2.5 System kinematics
    2.5.1 System kinematics in discrete time
    2.5.2 Filter transition matrices
  2.6 Error-state Kalman filter
    2.6.1 ESKF: Filter innovation and correction
    2.6.2 ESKF: Injection of the observed error into the nominal-state
    2.6.3 ESKF: Reset operation
  2.7 Extended Kalman filter (EKF)
    2.7.1 EKF: Prediction
    2.7.2 EKF: Innovation and correction
    2.7.3 EKF: Reset operation
  2.8 Observability analysis
  2.9 Control and planning
    2.9.1 Dynamic model
    2.9.2 Position and attitude controllers
    2.9.3 Trajectory planning
  2.10 Validation and experimental results
    2.10.1 Simulation results
    2.10.2 Experimental results
  2.11 Summary and main contributions
3 Visual Servo
  3.1 Introduction
  3.2 Related work
  3.3 Control law
    3.3.1 Stability analysis
  3.4 Position-based visual servo (PBVS)
    3.4.1 PBVS interaction Jacobian
  3.5 Image-based visual servo (IBVS)
    3.5.1 Image Jacobian
  3.6 Uncalibrated image-based visual servo (UIBVS)
    3.6.1 Background
    3.6.2 Uncalibrated image Jacobian
  3.7 Validation
  3.8 Summary and main contributions
4 Task Control
  4.1 Introduction
  4.2 Related work
  4.3 Robot kinematics
    4.3.1 State vector and coordinate frames
    4.3.2 Onboard-eye-to-hand kinematics
    4.3.3 Eye-in-hand kinematics
  4.4 Task control
    4.4.1 Motion distribution
  4.5 Hierarchical task priority control (HTPC)
    4.5.1 HTPC using full least squares
    4.5.2 HTPC decoupling algorithmic task singularities
    4.5.3 Dynamic change of task priorities
  4.6 Optimization-based trajectory generation
    4.6.1 Optimization principles
    4.6.2 Quadratic problem formulation
    4.6.3 Position, velocity and acceleration bounds
    4.6.4 Interface with the control algorithm
  4.7 Task definitions
    4.7.1 Collision avoidance
    4.7.2 Onboard-eye-to-hand: global end effector tracking
    4.7.3 Onboard-eye-to-hand: camera field of view
    4.7.4 Eye-in-hand: end effector tracking using visual servo
    4.7.5 Center of gravity
    4.7.6 Desired arm configuration
    4.7.7 Manipulability
    4.7.8 Velocity minimization
    4.7.9 Quadratic programming specific tasks
  4.8 Validation and experimental results
    4.8.1 HTPC using full least squares
    4.8.2 HTPC decoupling algorithmic task singularities
    4.8.3 Optimization-based trajectory generation
  4.9 Summary and main contributions
5 Closing remarks
  5.1 Conclusions
  5.2 Summary of contributions
  5.3 Future research directions
