Statistical Methods in Photogrammetry and Image-Lidar Fusion

by

Kyle Andrew Holland

A dissertation submitted in partial satisfaction of the requirements for the degree of
Doctor of Philosophy
in
Environmental Science, Policy, and Management
in the
Graduate Division
of the
University of California, Berkeley

Committee in charge:
Professor Gregory S. Biging, Chair
Professor Peng Gong
Professor John Radke

Fall 2013

Statistical Methods in Photogrammetry and Image-Lidar Fusion
Copyright © 2013 by Kyle Andrew Holland
All Rights Reserved

Abstract

Statistical Methods in Photogrammetry and Image-Lidar Fusion
by Kyle Andrew Holland
Doctor of Philosophy in Environmental Science, Policy, and Management
University of California, Berkeley
Professor Gregory S. Biging, Chair

A primary application of photogrammetry is to process, discriminate and classify optical imagery into 3D objects. Light detection and ranging (lidar) is an active sensor that measures the surfaces of 3D objects as discrete points in a point cloud. Because both describe 3D objects in a common scene, point clouds and photogrammetry are related, and through this relationship there are numerous photogrammetric applications of lidar. This dissertation is concerned with three specific applications of lidar: modeling radiometric properties as a basis for comparison with imagery, estimating camera pose, and data fusion. It focuses on the problems of object discrimination and classification, and presents solutions in a statistical context.

A reflectance image is derived from reflectance, shadow and projection models. The reflectance image model is applied to compare the point cloud and imagery. The collinearity equations of imaging are reparameterized as an object to image space transformation and estimated using maximum likelihood. Reflectance images are applied to quantify errors in this transformation across multiple images and to study the convergence properties of estimates. Finally, the process of image-lidar fusion is discussed in the context of uncertainty and probability. An estimator is specified for image-lidar fusion, derived from a generalized theory of the process. The estimator is shown to be unbiased and relatively efficient compared to the sample mean.

Table of Contents

List of Figures
List of Tables
List of Equations
Acknowledgement
Chapter One: Introduction
    1.1 Background and Motivation
        1.1.1 Modern Photogrammetry
        1.1.2 The Point Cloud
    1.2 Contribution
    1.3 Structure of Thesis
        1.3.1 Notation
Chapter Two: Reflectance Image Model from Lidar Data
    2.1 Overview
    2.2 Reflectance Model
        2.2.1 Parameters
            2.2.1.1 Reflectance
        2.2.2 Time Integral Applications
    2.3 Shadow Model
        2.3.1 Parameters
    2.4 Image Model
    2.5 Constructing the Reflectance Image
    2.6 Application
        2.6.1 Study Site
        2.6.2 Data
        2.6.3 Modeling Reflectance
        2.6.4 Modeling Shadow
    2.7 Results
        2.7.1 Accuracy Assessment
    2.8 Discussion
        2.8.1 Contribution
Chapter Three: Image Ray Solutions to Camera Pose
    3.1 Overview
    3.2 Object to Image Space Transformation
        3.2.1 Definition
        3.2.2 Derivation
        3.2.3 Reparameterization
    3.3 Existing Methods
    3.4 Exploratory Data Analysis
        3.4.1 Study Site
        3.4.2 Yellowstone Data
        3.4.3 Error Distributions
            3.4.3.1 Vector Normed Difference
            3.4.3.2 Bivariate Difference
        3.4.4 Model Selection
    3.5 Estimating Camera Pose
        3.5.1 A Statistical Model for the Imaging Vector
        3.5.2 Likelihood Function
            3.5.2.1 Constraints on the Rotation Matrix
        3.5.3 Maximum Likelihood Estimators
            3.5.3.1 Effects Matrix
            3.5.3.2 Covariance Matrix
            3.5.3.3 Projective Center
            3.5.3.4 Scaled Rotation Matrix
        3.5.4 Scaled Optical Depth
        3.5.5 Finding the Trajectory Parameters
        3.5.6 Finding the Projective Center
        3.5.7 Iterative Method
    3.6 Results
        3.6.1 Convergence Properties
            3.6.1.1 Frame Step
            3.6.1.2 Model Step
        3.6.2 Yellowstone Data
        3.6.3 Parameter Distributions
    3.7 Discussion
        3.7.1 Contribution
Chapter Four: Probability Model for Image-Lidar Fusion
    4.1 Overview
    4.2 Vector Space Definitions
    4.3 Image-Lidar Fusion
        4.3.1 Motivation
        4.3.2 Related Work
    4.4 Theory
        4.4.1 Sources of Uncertainty as Events
        4.4.2 Fusion Probability
    4.5 Probability Fusion Model
        4.5.1 Pose Probability
        4.5.2 Attribute Probability
        4.5.3 Transformation Probability
            4.5.3.1 Optical Depth Covariance Ratio
        4.5.4 Reduction Probability
        4.5.5 Statistical Properties
            4.5.5.1 Bias
            4.5.5.2 Standard Error
            4.5.5.3 Relative Efficiency
    4.6 Simulation
        4.6.1 Approximate Density Function for Fusion Transformation
        4.6.2 Methods
        4.6.3 Results
    4.7 Discussion
        4.7.1 Contribution
Chapter Five: Conclusion
    5.1 Future Works
References
Appendix A: Convergence Paths for Camera Pose Estimation
Appendix B: Key to Variables

List of Figures

Figure 1: A DSM raster of objects in a scene (a) viewed in the 2D orthographic plane and (b) in the full 3D space (low z-values in red relative to high z-values in green).
Figure 2: A point cloud of objects in a 3D scene (low z-values in red relative to high z-values in green).
Figure 3: Relationship of solar position parameters (N is north).
Figure 4: Relationship of surface slope angle and aspect relative to the normal vector (a) of the surface (b) (N is north).
Figure 5: The solar ray (dotted line, N is north).
Figure 6: Relationship of the solar position parameters to the solar ray (N is north).
Figure 7: The effect of the direction vector on the orientation of the focal plane in object space (N is north). Given a fixed focal length, the focal plane without rotation is (a) and a focal plane with rotation is (b) relative to the origin.
Figure 8: The focal plane depicted with square pixels showing the image principal point as (a), a coordinate as (b) and a coordinate with additive lens distortion as (c).
Figure 9: Aerial image (left) and enhanced image (right) obtained from the first principal component.
Figure 10: Modeled reflectance for the DSM at the given time and location of the aerial image (north is up).
Figure 11: The pseudo-reflectance map in grayscale, combining the shadow and reflectance models (north is up).
Figure 12: Modeled reflectance values (left) compared to pseudo-reflectance values (right) in grayscale (north is up).
Figure 13: Enhanced image (left) compared to modeled reflectance image (right) in grayscale.
Figure 14: Identified tie points (+) overlaid on the original aerial image.
Figure 15: Identified tie points (+) overlaid on the enhanced aerial image.
Figure 16: Identified tie points (+) overlaid on the DSM presented in grayscale (north is up, darker values are lower elevation than lighter values).
Figure 17: Geometry of the image ray and image plane in the object space, showing the yaw, pitch and roll angles. The principal point is depicted as (a), image frame on the image plane as (b), principal axis as (c) and arbitrary image ray as (d).
Figure 18: Geometry of the image frame in the camera frame. The principal point is depicted as (a), image as (b) and principal axis as (c).
Figure 19: Geometry of the image frame as it relates to (28). The space spanning all image coordinates is the image plane.
Figure 20: Histogram of vector normed difference in pixels on the horizontal axis (see Table 1 for variable declarations, n=434).
Figure 21: Histograms of bivariate difference in pixels on the horizontal axis (see Table 1 for variable declarations, n=434).
Figure 22: Covariate plots showing general trends in the bivariate error in pixels on the horizontal and vertical axes (see Table 1 for variable declarations).
Figure 23: Quantile plots of residuals showing heavy tails at the extremes in pixels on the horizontal and vertical axes (see Table 1 for variable declarations).
Figure 24: Convergence in lambda by iteration.
Figure 25: Histograms of estimated camera pose parameters by bootstrap with n=24 (X, Y and Z in meters; Yaw, Pitch and Roll in degrees; dashed lines indicate true values).
Figure 26: Histograms of estimated camera pose parameters by bootstrap with n=48 (X, Y and Z in meters; Yaw, Pitch and Roll in degrees; dashed lines indicate true values).
Figure 27: Histograms of estimated camera pose parameters by bootstrap with n=64 (X, Y and Z in meters; Yaw, Pitch and Roll in degrees; dashed lines indicate true values).
Figure 28: Potential occlusion of the (a) image ray passing through a point in the two-dimensional object subspace (empty dots represent points in the cloud, arbitrary units on axes).
Figure 29: Uncertainty region about the (a) image ray passing through a point in the two-dimensional object subspace (empty dots represent points in the cloud, dotted lines are bounds on the uncertainty region, arbitrary units on axes).
Figure 30: Multiple image rays (a-c) to a point in the two-dimensional object subspace (empty dots represent points in the cloud, arbitrary units on axes).
Figure 31: Directed Acyclic Graph (DAG) of events corresponding to sources of uncertainty in the case of a single sampling unit.
Figure 32: Directed Acyclic Graph (DAG) of events corresponding to sources of uncertainty in the case of duplicity: (a) pixels in image frames and (b) image frames.
Figure 33: Effect of power parameter on probability weights by increasing power (a-d).
Figure 34: Directed Acyclic Graph (DAG) of the fusion probability where shaded nodes represent observed events: (a) pixels in image frames and (b) image frames.
Figure 35: Probability levels of an example distribution in an image frame relative to pixels (+).
Figure 36: Example probability levels as a function of distance variance by increasing optical depth (a-d) in image frames relative to (+).
Figure 37: Geometry of lidar point occlusion in object space.
Figure A.1: Convergence path of lambda in X-Y subspace for n=400: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.2: Convergence path of lambda in X-Y subspace for n=80: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.3: Convergence path of lambda in X-Y subspace for n=40: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.4: Convergence path of lambda in X-Y subspace for n=8: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.5: Convergence path of lambda in X-Z subspace for n=400: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.6: Convergence path of lambda in X-Z subspace for n=80: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.7: Convergence path of lambda in X-Z subspace for n=40: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.8: Convergence path of lambda in X-Z subspace for n=8: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.9: Convergence path of lambda in Y-Z subspace for n=8: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.10: Convergence path of lambda in Y-Z subspace for n=40: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.11: Convergence path of lambda in Y-Z subspace for n=80: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.12: Convergence path of lambda in Y-Z subspace for n=400: (a) starting point, (b) point of convergence, X denotes true value.
Figure A.13: Convergence in lambda by iteration for n=8.
Figure A.14: Convergence in lambda by iteration for n=40.
Figure A.15: Convergence in lambda by iteration for n=80.
Figure A.16: Convergence in lambda by iteration for n=400.