
Plenoptic Imaging and Vision using Angle Sensitive Pixels (PDF)

182 pages · 2017 · 6.78 MB · English
by Suren Jayasuriya

Preview Plenoptic Imaging and Vision using Angle Sensitive Pixels

PLENOPTIC IMAGING AND VISION USING ANGLE SENSITIVE PIXELS

A Dissertation
Presented to the Faculty of the Graduate School of Cornell University
in Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy

by
Suren Jayasuriya
January 2017

© 2017 Suren Jayasuriya
ALL RIGHTS RESERVED
This document is in the public domain.

PLENOPTIC IMAGING AND VISION USING ANGLE SENSITIVE PIXELS
Suren Jayasuriya, Ph.D.
Cornell University 2017

Computational cameras with sensor hardware co-designed with computer vision and graphics algorithms are an exciting recent trend in visual computing. In particular, most of these new cameras capture the plenoptic function of light, a multidimensional function of radiance for light rays in a scene. Such plenoptic information can be used for a variety of tasks including depth estimation, novel view synthesis, and inferring physical properties of a scene that the light interacts with.

In this thesis, we present multimodal plenoptic imaging, the simultaneous capture of multiple plenoptic dimensions, using Angle Sensitive Pixels (ASPs), custom CMOS image sensors with embedded per-pixel diffraction gratings. We extend ASP models for plenoptic image capture, and showcase several computer vision and computational imaging applications.

First, we show how high resolution 4D light fields can be recovered from ASP images, using both a dictionary-based machine learning method and deep learning. We then extend ASP imaging to include the effects of polarization, and use this new information to image stress-induced birefringence and remove specular highlights from light field depth mapping. We explore the potential for ASPs to perform time-of-flight imaging, and introduce the depth field, a combined representation of time-of-flight depth with plenoptic spatio-angular coordinates, which is used for applications in robust depth estimation. Finally, we leverage ASP optical edge filtering as a low power front end for an embedded deep learning imaging system.

We also present two technical appendices: a study of using deep learning with energy-efficient binary gradient cameras, and a design flow to enable agile hardware design for computational image sensors in the future.
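As standard background for the plenoptic terminology in the abstract (the notation below is chosen for illustration and is not quoted from the dissertation): following Adelson and Bergen, the plenoptic function assigns a radiance to every light ray observable in a scene,

\[
P = P(x,\, y,\, z,\, \theta,\, \phi,\, \lambda,\, t),
\]

where (x, y, z) is the position from which the ray is viewed, (θ, φ) its direction, λ its wavelength, and t time. The 4D light fields of Chapter 3 are the usual two-plane reduction of this function: in free space, at a fixed wavelength and time, a ray can be indexed by its intersections (x, y) and (u, v) with two parallel reference planes,

\[
L = L(x,\, y,\, u,\, v),
\]

and each sensor measurement (e.g., an ASP pixel response) can then be modeled as a linear projection of L, which is what lets dictionary-based and deep learning methods pose recovery of L from ASP images as a compressive reconstruction problem.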
BIOGRAPHICAL SKETCH

The author was born on July 19, 1990. From 2008 to 2012 he studied at the University of Pittsburgh, where he received a Bachelor of Science in Mathematics with departmental honors and a Bachelor of Arts in Philosophy. He then moved to Ithaca, New York to study at Cornell University from 2012 to 2017, where he earned his doctorate in Electrical and Computer Engineering in 2017.

To my parents and brother.

ACKNOWLEDGEMENTS

I am grateful to my advisor Al Molnar for all his mentoring over these past years. It was a risk to accept a student whose background was mathematics and philosophy, with no experience in ECE or CS, but Al was patient enough to allow me to make mistakes and grow as a researcher. I also want to thank my committee members Alyssa Apsel and Steve Marschner for their advice throughout my graduate career.

Several people made important contributions to this thesis. Sriram Sivaramakrishnan was my go-to expert on ASPs, and a co-author on many of my imaging papers. Matthew Hirsch and Gordon Wetzstein, under the supervision of Ramesh Raskar, collaborated on the dictionary-based learning for recovering 4D light fields in Chapter 3. Arjun Jauhari, Mayank Gupta, and Kuldeep Kulkarni, under the supervision of Pavan Turaga, performed deep learning experiments to speed up light field reconstructions in Chapter 3. Ellen Chuang and Debashree Guruaribam performed the polarization experiments in Chapter 4. Adithya Pediredla and Ryuichi Tadano helped create the depth fields experimental prototype and worked on applications in Chapter 5. George Chen realized ASP Vision from conception with me, and Jiyue Yang and Judy Stephen collected and analyzed the real digit and face recognition experiments in Chapter 6. Ashok Veeraraghavan helped supervise the projects in Chapters 5 and 6.

The work in Appendix A was completed during a summer internship at NVIDIA Research under the direction of Orazio Gallo, Jinwei Gu, and Jan Kautz, with the face detection results compiled by Jinwei Gu, the gesture recognition results run by Pavlo Molchanov, and the gradient images for intensity reconstruction captured by Orazio Gallo. The design flow in Appendix B was developed in collaboration with Chris Torng, Moyang Wang, Nagaraj Murali, Bharath Sudheendra, Mark Buckler, Einar Veizaga, Shreesha Srinath, and Taylor Pritchard under the supervision of Chris Batten. Jiyue Yang also helped with the design and layout of ASP depth field pixels, and Gerd Kiene designed the amplifier presented in Appendix B.

Special thanks go to Achuta Kadambi for many long discussions about computational imaging. I also enjoyed being a fly on the wall of the Cornell Graphics/Vision group, especially chatting with Kevin Matzen, Scott Wehrwein, Paul Upchurch, Kyle Wilson, and Sean Bell.

My heartfelt gratitude goes to my friends and family who supported me while I undertook this journey. I couldn't have done it without Buddy Rieger, Crystal Lee Morgan, Brandon Hang, Chris Hazel, Elly Engle, Steve Miller, Kevin Luke, Ved Gund, Alex Ruyack, Jayson Myers, Ravi Patel, and John McKay. Finally, I'd like to thank my brother Gihan and my parents for their love and support for all these years. This thesis is dedicated to them.

TABLE OF CONTENTS

Biographical Sketch . . . . . . . . . . iii
Dedication . . . . . . . . . . iv
Acknowledgements . . . . . . . . . . v
Table of Contents . . . . . . . . . . vii
List of Tables . . . . . . . . . . x
List of Figures . . . . . . . . . . xi

1 Introduction . . . 1
  1.1 Dissertation Overview . . . 2

2 Background . . . 5
  2.1 Plenoptic Function of Light . . . 5
  2.2 Applications of Plenoptic Imaging . . . 6
  2.3 Computational Cameras . . . 11
  2.4 Angle Sensitive Pixels . . . 12
    2.4.1 Background . . . 13
    2.4.2 Applications . . . 15
    2.4.3 Our Contribution . . . 15

3 Angle . . . 17
  3.1 Introduction . . . 17
  3.2 Related Work . . . 19
  3.3 Method . . . 20
    3.3.1 Light Field Acquisition with ASPs . . . 21
    3.3.2 Image and Light Field Synthesis . . . 24
  3.4 Analysis . . . 26
    3.4.1 Frequency Analysis . . . 26
    3.4.2 Depth of Field . . . 27
    3.4.3 Resilience to Noise . . . 31
  3.5 Implementation . . . 31
    3.5.1 Angle Sensitive Pixel Hardware . . . 31
    3.5.2 Software Implementation . . . 33
  3.6 Results . . . 35
  3.7 Compressive Light Field Reconstructions using Deep Learning . . . 38
    3.7.1 Related Work . . . 39
    3.7.2 Deep Learning for Light Field Reconstruction . . . 40
    3.7.3 Experimental Results . . . 45
    3.7.4 Discussion . . . 52
    3.7.5 Limitations . . . 53
    3.7.6 Future Directions . . . 54

