State Estimation for Robotics (PDF)

393 pages · 2018 · 4.26 MB · English


STATE ESTIMATION FOR ROBOTICS

Timothy D. Barfoot

Copyright © 2018
Cambridge University Press is the Official Publisher
This Unofficial Version Compiled on August 13, 2018
Send errata to <[email protected]>

Revision History

13 May 2017  Version best matching published first edition
12 Aug 2017  Equation (4.47a): Σ_x changed to Σ_xx
12 Aug 2017  Page 111, bullet 4: Σ_y changed to Σ_yy
12 Aug 2017  Page 117, bullet (ii): changed 4(a) to 3
12 Aug 2017  Equation (4.87): P̂⁻ changed to P̌_k
12 Aug 2017  Equation (4.89): P̂_k changed to P̌_k
12 Aug 2017  Equation (4.102d): x_{op,k,0} changed to x_{op,k,i}
12 Aug 2017  Equation (7.102): removed negative sign
22 Nov 2017  Fixed typo in Jacobi identity (page 218)
10 Dec 2017  Equation (2.52): Σ_yy⁻¹ Σ_yx changed to Σ_xy Σ_yy⁻¹
10 Dec 2017  Inline above (8.2): r_i^{vi} changed to r_i^{v_k i}
10 Dec 2017  Equation (6.26): 0 changed to 0^T
10 Dec 2017  Inline below (4.132): e(x_op) = L u(x_op) changed to e(x_op) = L⁻¹ u(x_op)
10 Dec 2017  Angular acceleration: corrected arrow notation on ω^{◦21}
10 Dec 2017  Equation (4.31): corrected a double comma
10 Dec 2017  Equation (4.92a): n_{k,j} changed to y̌_{k,j}
5 Jan 2018   Page 227, definition of Ad(SE(3)): removed extra comma before final bracket
5 Jan 2018   Equation (9.55): corrected placement of ⋆ on ζ_j
5 Jan 2018   Equation (8.47): added missing V^T in an intermediate step
5 Jan 2018   Equation (4.207): changed ⋆ to ∗
5 Jan 2018   Equation (4.165a): changed subscript ℓ to j
5 Jan 2018   Equation (4.166): changed G_{kℓ} to G_{jk}
16 Feb 2018  Pages 267-8: corrected J_ℓ⁻¹ and J_r⁻¹ to J_ℓ and J_r
25 Mar 2018  Page 108: corrected 'mean of output is' to be 'mean of input is'
25 Mar 2018  Equation (6.129): corrected r_b^{ba} to be r_b^{ab}
13 Apr 2018  Equation (8.105): corrected j − 1 to j = 1
22 Apr 2018  Equation (2.20): added missing −1 in definition of η
12 May 2018  Equations (7.234) and (7.239): fixed two negative signs each to repair proof
12 May 2018  Identity page: fixed summations in series definitions of T, 𝒯, and their inverses to start at n = 0, not n = 1
27 May 2018  Started an appendix with new material since first edition
13 Jul 2018  Equation (7.150): changed T_21 = T_1 T_2⁻¹ to T_21 = T_2 T_1⁻¹ just below this equation in the text
13 Jul 2018  Equation (3.99a): removed H_22^T from front of both terms to match (3.98)
13 Jul 2018  Equation (3.99b): removed H_32^T from front of both terms to match (3.98); made second term negative
13 Aug 2018  Equation (10.18): corrected J_u to J_v
13 Aug 2018  Exercise 6.6.6: adjusted wording due to poor use of the term 'affine'

Contents

Acronyms and Abbreviations  xi
Notation  xiii
Foreword  xv

1 Introduction  1
  1.1 A Little History  1
  1.2 Sensors, Measurements, and Problem Definition  3
  1.3 How This Book Is Organized  4
  1.4 Relationship to Other Books  5

Part I  Estimation Machinery  7

2 Primer on Probability Theory  9
  2.1 Probability Density Functions  9
    2.1.1 Definitions  9
    2.1.2 Bayes' Rule and Inference  10
    2.1.3 Moments  11
    2.1.4 Sample Mean and Covariance  12
    2.1.5 Statistically Independent, Uncorrelated  12
    2.1.6 Normalized Product  13
    2.1.7 Shannon and Mutual Information  14
    2.1.8 Cramér-Rao Lower Bound and Fisher Information  14
  2.2 Gaussian Probability Density Functions  15
    2.2.1 Definitions  15
    2.2.2 Isserlis' Theorem  16
    2.2.3 Joint Gaussian PDFs, Their Factors, and Inference  18
    2.2.4 Statistically Independent, Uncorrelated  20
    2.2.5 Linear Change of Variables  20
    2.2.6 Normalized Product of Gaussians  22
    2.2.7 Sherman-Morrison-Woodbury Identity  23
    2.2.8 Passing a Gaussian through a Nonlinearity  24
    2.2.9 Shannon Information of a Gaussian  28
    2.2.10 Mutual Information of a Joint Gaussian PDF  30
    2.2.11 Cramér-Rao Lower Bound Applied to Gaussian PDFs  30
  2.3 Gaussian Processes  32
  2.4 Summary  33
  2.5 Exercises  33

3 Linear-Gaussian Estimation  37
  3.1 Batch Discrete-Time Estimation  37
    3.1.1 Problem Setup  37
    3.1.2 Maximum A Posteriori  39
    3.1.3 Bayesian Inference  44
    3.1.4 Existence, Uniqueness, and Observability  46
    3.1.5 MAP Covariance  50
  3.2 Recursive Discrete-Time Smoothing  51
    3.2.1 Exploiting Sparsity in the Batch Solution  52
    3.2.2 Cholesky Smoother  53
    3.2.3 Rauch-Tung-Striebel Smoother  55
  3.3 Recursive Discrete-Time Filtering  58
    3.3.1 Factoring the Batch Solution  59
    3.3.2 Kalman Filter via MAP  63
    3.3.3 Kalman Filter via Bayesian Inference  68
    3.3.4 Kalman Filter via Gain Optimization  69
    3.3.5 Kalman Filter Discussion  70
    3.3.6 Error Dynamics  71
    3.3.7 Existence, Uniqueness, and Observability  72
  3.4 Batch Continuous-Time Estimation  74
    3.4.1 Gaussian Process Regression  74
    3.4.2 A Class of Exactly Sparse Gaussian Process Priors  77
    3.4.3 Linear Time-Invariant Case  83
    3.4.4 Relationship to Batch Discrete-Time Estimation  87
  3.5 Summary  88
  3.6 Exercises  88

4 Nonlinear Non-Gaussian Estimation  91
  4.1 Introduction  91
    4.1.1 Full Bayesian Estimation  92
    4.1.2 Maximum a Posteriori Estimation  94
  4.2 Recursive Discrete-Time Estimation  96
    4.2.1 Problem Setup  96
    4.2.2 Bayes Filter  97
    4.2.3 Extended Kalman Filter  100
    4.2.4 Generalized Gaussian Filter  103
    4.2.5 Iterated Extended Kalman Filter  105
    4.2.6 IEKF Is a MAP Estimator  106
    4.2.7 Alternatives for Passing PDFs through Nonlinearities  107
    4.2.8 Particle Filter  116
    4.2.9 Sigmapoint Kalman Filter  118
    4.2.10 Iterated Sigmapoint Kalman Filter  123
    4.2.11 ISPKF Seeks the Posterior Mean  126
    4.2.12 Taxonomy of Filters  127
  4.3 Batch Discrete-Time Estimation  127
    4.3.1 Maximum A Posteriori  128
    4.3.2 Bayesian Inference  135
    4.3.3 Maximum Likelihood  137
    4.3.4 Discussion  142
  4.4 Batch Continuous-Time Estimation  143
    4.4.1 Motion Model  143
    4.4.2 Observation Model  146
    4.4.3 Bayesian Inference  146
    4.4.4 Algorithm Summary  147
  4.5 Summary  148
  4.6 Exercises  149

5 Biases, Correspondences, and Outliers  151
  5.1 Handling Input/Measurement Biases  152
    5.1.1 Bias Effects on the Kalman Filter  152
    5.1.2 Unknown Input Bias  155
    5.1.3 Unknown Measurement Bias  157
  5.2 Data Association  159
    5.2.1 External Data Association  160
    5.2.2 Internal Data Association  160
  5.3 Handling Outliers  161
    5.3.1 RANSAC  162
    5.3.2 M-Estimation  163
    5.3.3 Covariance Estimation  166
  5.4 Summary  168
  5.5 Exercises  168

Part II  Three-Dimensional Machinery  171

6 Primer on Three-Dimensional Geometry  173
  6.1 Vectors and Reference Frames  173
    6.1.1 Reference Frames  174
    6.1.2 Dot Product  174
    6.1.3 Cross Product  175
  6.2 Rotations  176
    6.2.1 Rotation Matrices  176
    6.2.2 Principal Rotations  177
    6.2.3 Alternate Rotation Representations  178
    6.2.4 Rotational Kinematics  184
    6.2.5 Perturbing Rotations  188
  6.3 Poses  192
    6.3.1 Transformation Matrices  193
    6.3.2 Robotics Conventions  194
    6.3.3 Frenet-Serret Frame  196
  6.4 Sensor Models  199
    6.4.1 Perspective Camera  199
    6.4.2 Stereo Camera  206
    6.4.3 Range-Azimuth-Elevation  208
    6.4.4 Inertial Measurement Unit  209
  6.5 Summary  211
  6.6 Exercises  212

7 Matrix Lie Groups  215
  7.1 Geometry  215
    7.1.1 Special Orthogonal and Special Euclidean Groups  215
    7.1.2 Lie Algebras  217
    7.1.3 Exponential Map  219
    7.1.4 Adjoints  226
    7.1.5 Baker-Campbell-Hausdorff  230
    7.1.6 Distance, Volume, Integration  237
    7.1.7 Interpolation  240
    7.1.8 Homogeneous Points  246
    7.1.9 Calculus and Optimization  246
    7.1.10 Identities  254
  7.2 Kinematics  255
    7.2.1 Rotations  255
    7.2.2 Poses  258
    7.2.3 Linearized Rotations  261
    7.2.4 Linearized Poses  265
  7.3 Probability and Statistics  266
    7.3.1 Gaussian Random Variables and PDFs  267
    7.3.2 Uncertainty on a Rotated Vector  271
    7.3.3 Compounding Poses  273
    7.3.4 Fusing Poses  280
    7.3.5 Propagating Uncertainty through a Nonlinear Camera Model  285
  7.4 Summary  292
  7.5 Exercises  293

Part III  Applications  295

8 Pose Estimation Problems  297
  8.1 Point-Cloud Alignment  297
    8.1.1 Problem Setup  298
    8.1.2 Unit-Length Quaternion Solution  298
    8.1.3 Rotation Matrix Solution  302
    8.1.4 Transformation Matrix Solution  316
  8.2 Point-Cloud Tracking  319
    8.2.1 Problem Setup  319
    8.2.2 Motion Priors  320
    8.2.3 Measurement Model  321
    8.2.4 EKF Solution  322
    8.2.5 Batch Maximum a Posteriori Solution  325
  8.3 Pose-Graph Relaxation  329
    8.3.1 Problem Setup  329
    8.3.2 Batch Maximum Likelihood Solution  330
    8.3.3 Initialization  333
    8.3.4 Exploiting Sparsity  333
    8.3.5 Chain Example  334

9 Pose-and-Point Estimation Problems  337
  9.1 Bundle Adjustment  337
    9.1.1 Problem Setup  338
    9.1.2 Measurement Model  338
    9.1.3 Maximum Likelihood Solution  342
    9.1.4 Exploiting Sparsity  345
    9.1.5 Interpolation Example  348
  9.2 Simultaneous Localization and Mapping  352
    9.2.1 Problem Setup  352
    9.2.2 Batch Maximum a Posteriori Solution  353
    9.2.3 Exploiting Sparsity  354
    9.2.4 Example  355

10 Continuous-Time Estimation  357
  10.1 Motion Prior  357
    10.1.1 General  357
    10.1.2 Simplification  361
  10.2 Simultaneous Trajectory Estimation and Mapping  362
    10.2.1 Problem Setup  363
    10.2.2 Measurement Model  363
    10.2.3 Batch Maximum a Posteriori Solution  364
    10.2.4 Exploiting Sparsity  365
    10.2.5 Interpolation  366
    10.2.6 Postscript  367

References  369

Appendix A  Supplementary Material  375
