
Manhattan and Piecewise-Planar Constraints for Dense Monocular Mapping

48 pages · 2015 · 4.38 MB · English

Preview: Manhattan and Piecewise-Planar Constraints for Dense Monocular Mapping

Mid and high-level features for dense monocular SLAM
Javier Civera
Qualcomm Augmented Reality Lecture Series, Nov. 19th, 2015

Index
• Introduction/motivation
• Point-based monocular SLAM
  • Keypoint-based monocular SLAM
  • Dense monocular SLAM
• Mid-level features
  • Superpixels
  • Data-driven primitives
• High-level features
  • Room layout
  • Objects

Robotic Vision
• Robotic Vision is making a robot "see".**
• Now… what does it mean for a robot to see?
• Data input:
  • Image sequences.
  • Multi-sensor.
  • Active sensing.
• Problem constraints:
  • Real-time.
  • Hardware limits.
• Goals:
  • Self-localization.
  • 3D scene models.
  • Temporal models.
  • Local short-term accuracy.
  • Long-term models.
  • Semantics.
** Paraphrasing Olivier Faugeras in Hartley & Zisserman's book.

Other applications
• The robotics constraints are shared with other applications:
  • AR/VR.
  • Wearable/mobile devices.
  • Laparoscopic surgery.
  • …
Grasa et al., Visual SLAM for Hand-Held Monocular Endoscope, IEEE TMI, 2014.

Point-based features (low-level)
• Point-based features are accurate in high-texture image regions and for high-parallax motions.
• The typical approach has been to use salient point features, discarding low-texture parts.
• SfM and visual SLAM datasets are biased toward high-parallax motions.

Camera Geometry
• A camera is a bearing-only sensor: it measures only angles.
• The depth of the scene is estimated by triangulation, from the parallax angle between the viewing rays of two camera positions.
• The larger the parallax, the more accurate the depth estimate.
[Figure: a scene point p_i = (X, Y, Z)^T observed from two camera centres C1 and C2 separated by the translation t_{C1C2}; the angle between the two viewing rays is the parallax angle.]

Low-Parallax Points
• Low parallax is due to:
  • distant points, or
  • small camera translation.
• Depth cannot be estimated for zero-parallax points…
• …but such points still provide rich orientation information.
A scene point i is coded with the six-parameter inverse depth parametrization
\[
\mathbf{y}_i = \left(x_i,\, y_i,\, z_i,\, \theta_i,\, \phi_i,\, \rho_i\right)^\top,
\]
which models the 3D point as
\[
\mathbf{x}_i = \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} + \frac{1}{\rho_i}\,\mathbf{m}(\theta_i, \phi_i),
\qquad
\mathbf{m}(\theta_i, \phi_i) = \left(\cos\phi_i \sin\theta_i,\; -\sin\phi_i,\; \cos\phi_i \cos\theta_i\right)^\top,
\]
where (x_i, y_i, z_i)^T is the camera optical centre at the first observation, (θ_i, φ_i) are the azimuth and elevation of the viewing ray in the world frame W, and ρ_i = 1/d_i is the inverse of the point depth d_i along that ray. The camera pose is (r^{WC}, q^{WC}).

Inverse Depth Point Initialization
New points are added from their first observation:
1) {x, y, z, θ, φ} are initialized from the first observation and the current state vector (camera pose r^{WC}, q^{WC}).
2) ρ_0 and its standard deviation σ_{ρ_0} are initialized so that the 95% interval [ρ_0 − 2σ_{ρ_0}, ρ_0 + 2σ_{ρ_0}] includes infinity (ρ = 0):
\[
\rho_0 = \frac{1}{2\, d_{\min}}, \qquad \sigma_{\rho_0} = \frac{\rho_0}{2},
\]
so that the interval covers every depth from d_min out to infinity.
Once enough parallax has been observed, the point can be converted to Euclidean space:
\[
\mathbf{x}_i = \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} + \frac{1}{\rho_i}\,\mathbf{m}(\theta_i, \phi_i).
\]

Inverse Depth Point Measurement
Projection model: the point is first expressed in the camera reference frame C,
\[
\mathbf{h}^C = R^{CW} \left( \rho_i \left( \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} - \mathbf{r}^{WC} \right) + \mathbf{m}(\theta_i, \phi_i) \right),
\]
which remains well defined even at zero parallax (ρ_i → 0). Pinhole camera model:
\[
u_u = C_u - \frac{f}{d_x}\,\frac{h^C_x}{h^C_z}, \qquad
v_u = C_v - \frac{f}{d_y}\,\frac{h^C_y}{h^C_z}.
\]
Two-parameter radial distortion, relating distorted pixel coordinates (u_d, v_d) to undistorted ones (u_u, v_u):
\[
u_u = C_u + (u_d - C_u)\left(1 + \kappa_1 r_d^2 + \kappa_2 r_d^4\right), \qquad
v_u = C_v + (v_d - C_v)\left(1 + \kappa_1 r_d^2 + \kappa_2 r_d^4\right),
\]
\[
r_d = \sqrt{\left(d_x (u_d - C_u)\right)^2 + \left(d_y (v_d - C_v)\right)^2},
\]
where (C_u, C_v) is the principal point, f the focal length, (d_x, d_y) the pixel sizes, and (κ_1, κ_2) the radial distortion coefficients.

Minimal numeric sketches of the triangulation, inverse-depth, projection, and distortion models above follow below.
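To make the parallax argument concrete, here is a minimal NumPy sketch (not from the slides) of midpoint triangulation of two bearing rays; the function name triangulate_midpoint and the toy numbers are illustrative assumptions. With a fixed bearing error, shrinking the baseline (and hence the parallax angle) makes the recovered depth wildly unstable.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Triangulate a point from two bearing rays c + s*d by taking the
    midpoint of the shortest segment between the rays."""
    A = np.stack([d1, -d2], axis=1)                  # solve s*d1 - t*d2 = c2 - c1
    s, t = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Toy setup: a point at depth 10, observed with a ~0.1 deg bearing error.
p = np.array([0.0, 0.0, 10.0])
for baseline in (1.0, 0.1, 0.01):                    # camera translation t_c1c2
    c1, c2 = np.zeros(3), np.array([baseline, 0.0, 0.0])
    d1 = (p - c1) / np.linalg.norm(p - c1)
    d2 = (p - c2) / np.linalg.norm(p - c2)
    d2 += np.deg2rad(0.1) * np.array([1.0, 0.0, 0.0])  # bearing noise
    d2 /= np.linalg.norm(d2)
    z = triangulate_midpoint(c1, d1, c2, d2)[2]
    print(f"baseline {baseline:5.2f} -> estimated depth {z:9.2f}")
```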
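The inverse depth bookkeeping is easy to check in code. Below is a minimal sketch, assuming the ray-direction convention m(θ, φ) reconstructed above; helper names such as init_inverse_depth are hypothetical, not from the slides.

```python
import numpy as np

def m(theta, phi):
    """Unit ray direction for azimuth theta and elevation phi."""
    return np.array([np.cos(phi) * np.sin(theta),
                     -np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

def init_inverse_depth(d_min):
    """Prior so the 95% interval [rho0 - 2*sigma, rho0 + 2*sigma]
    equals [0, 1/d_min]: every depth from d_min to infinity (rho = 0)."""
    rho0 = 1.0 / (2.0 * d_min)
    return rho0, rho0 / 2.0                      # (rho0, sigma_rho0)

def to_euclidean(y):
    """Convert y = (x, y, z, theta, phi, rho) to a 3D world point."""
    centre, (theta, phi, rho) = y[:3], y[3:]
    return centre + m(theta, phi) / rho

rho0, sigma = init_inverse_depth(d_min=1.0)      # rho0 = 0.5, sigma = 0.25
y = np.array([0.0, 0.0, 0.0, 0.0, 0.0, rho0])   # ray along +z from the origin
print(to_euclidean(y))                           # [0. 0. 2.] -> depth 1/rho0
```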
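As a sketch of the measurement model, the snippet below projects an inverse-depth point into an undistorted image using the h^C and pinhole equations reconstructed above; the intrinsic values (f, d_x, d_y, C_u, C_v) are illustrative placeholders, not from the slides.

```python
import numpy as np

def m(theta, phi):
    return np.array([np.cos(phi) * np.sin(theta),
                     -np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

def project(y, R_cw, r_wc, f, dx, dy, Cu, Cv):
    """Project inverse-depth point y = (x, y, z, theta, phi, rho) to
    undistorted pixel coordinates (u_u, v_u)."""
    centre, (theta, phi, rho) = y[:3], y[3:]
    # Ray in the camera frame, scaled by rho: finite even as rho -> 0.
    h_c = R_cw @ (rho * (centre - r_wc) + m(theta, phi))
    u = Cu - (f / dx) * h_c[0] / h_c[2]          # sign convention of the slides
    v = Cv - (f / dy) * h_c[1] / h_c[2]
    return np.array([u, v])

# Camera at the origin looking down +z; the point was initialized from it.
y = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.5])
print(project(y, R_cw=np.eye(3), r_wc=np.zeros(3),
              f=500.0, dx=1.0, dy=1.0, Cu=320.0, Cv=240.0))
# -> [320. 240.]: a point on the optical axis hits the principal point.
```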
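Finally, a sketch of the two-parameter radial model reconstructed above. The forward map (distorted to undistorted) is closed form; the inverse is computed here by fixed-point iteration on the distorted radius, a standard trick that is my own choice rather than anything stated in the slides.

```python
import numpy as np

def undistort(ud, vd, Cu, Cv, dx, dy, k1, k2):
    """Distorted pixel (u_d, v_d) -> undistorted pixel (u_u, v_u)."""
    rd = np.hypot(dx * (ud - Cu), dy * (vd - Cv))
    g = 1.0 + k1 * rd**2 + k2 * rd**4
    return Cu + (ud - Cu) * g, Cv + (vd - Cv) * g

def distort(uu, vu, Cu, Cv, dx, dy, k1, k2, iters=20):
    """Inverse map: iterate r_d = r_u / g(r_d), then rescale the offset."""
    ru = np.hypot(dx * (uu - Cu), dy * (vu - Cv))
    rd = ru
    for _ in range(iters):
        rd = ru / (1.0 + k1 * rd**2 + k2 * rd**4)
    g = 1.0 + k1 * rd**2 + k2 * rd**4
    return Cu + (uu - Cu) / g, Cv + (vu - Cv) / g

params = dict(Cu=320.0, Cv=240.0, dx=0.01, dy=0.01, k1=0.1, k2=0.01)
ud, vd = distort(400.0, 300.0, **params)
print(undistort(ud, vd, **params))   # ~(400.0, 300.0): round trip recovers input
```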

Description:
David F. Fouhey, Abhinav Gupta, and Martial Hebert. Data-driven 3D primitives for single image understanding. ICCV, 2013.
