Axis-aligned Filtering for Interactive Physically-based Rendering

Soham Uday Mehta

Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2015-66
http://www.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-66.html

May 12, 2015

Copyright © 2015, by the author(s). All rights reserved.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley.

Committee in charge:
Professor Ravi Ramamoorthi, Chair
Professor James O'Brien
Professor Marty Banks

Spring 2015

Abstract

Computer graphics rendering is undergoing a renaissance, with physically-based rendering methods based on accurate Monte-Carlo image synthesis replacing ad-hoc techniques in a variety of applications, including movie production. In interactive applications like product visualization or video games, physically-based lighting effects are increasingly popular. However, producing photo-realistic images at interactive speeds still remains a challenge.

In Monte-Carlo rendering, a pixel's color is computed by sampling and integrating over a high-dimensional space. This includes effects like (1) motion blur, due to objects moving during the time the camera shutter is open; (2) defocus blur, due to camera lens optics; (3) area and environment map lighting, which is direct illumination coming from many directions; and (4) global illumination, due to light reflected from one surface to another. The color is sampled through ray- or path-tracing. With insufficient rays, the image looks noisy because the integrand has high variance, and thousands of rays are needed per pixel for a pleasing image. Previous work has shown a Fourier analysis for some of these effects, deriving a compact double-wedge spectrum and a sheared filter that aligns with the slope of the spectrum. This filter can remove noise from a very sparsely sampled Monte-Carlo image, but is very slow. In this thesis, we extend the Fourier analysis to more general cases, and propose a less compact axis-aligned filter that aligns with the frequency axes. The resulting spatial bandwidths are then used for image-space filtering that is orders of magnitude faster than sheared filtering. The packing of the Fourier spectra also provides adaptive sampling rates that minimize noise in conjunction with the adaptive filter. These algorithms improve speed relative to converged ground truth by about 30-60x, and we are able to demonstrate interactive speed with a GPU ray-tracer. We also demonstrate an application of our method to mixed reality with a Kinect camera.

Contents

1 Introduction
  1.1 Motivation
  1.2 Contributions
  1.3 Overview
2 Background
  2.1 Radiometry
  2.2 The Rendering Equation
  2.3 Fourier Analysis and Filtering
3 Soft Shadows
  3.1 Introduction
  3.2 Previous Work
  3.3 Background
  3.4 Axis-Aligned Filtering
  3.5 Adaptive Sampling Rates
  3.6 Implementation
  3.7 Results
4 Diffuse Indirect Illumination
  4.1 Introduction
  4.2 Previous Work
  4.3 Fourier Analysis of Indirect Illumination
  4.4 Axis-Aligned Filtering
  4.5 Implementation
  4.6 Results
5 Multiple Effects
  5.1 Introduction
  5.2 Previous Work
  5.3 Overview
  5.4 Defocus Blur
  5.5 Direct Illumination with Defocus Blur
  5.6 Indirect Illumination
  5.7 Sampling Rates
  5.8 Implementation
  5.9 Results
  5.10 Limitations
6 Environment Illumination and Application to Mixed Reality
  6.1 Introduction
  6.2 Previous Work
  6.3 Differential Rendering
  6.4 Overview
  6.5 Two-mode Sampling Algorithm
  6.6 Fourier Analysis for Environment Lighting
  6.7 Practical Filtering
  6.8 Results
7 Conclusion
  7.1 Future Work
A Bandlimits for Glossy BRDFs
B Motion Blur with Secondary Effects
C Two-mode Sampling Algorithm
D Derivation of Chapter 6, Equation 11
E Verification of Chapter 6, Equation 17
Bibliography

Acknowledgments

I want to thank my advisor Prof. Ravi Ramamoorthi for providing invaluable guidance in the progress of my work. His deep insights in the field of rendering were indispensable for the completion of this work.

I thank Prof. Fredo Durand from MIT, who was a co-advisor on many of my projects. In particular, his expertise in Fourier analysis was extremely helpful. I also want to thank Prof. James O'Brien for providing advice at various points, and for having me as a GSI for the undergraduate Computer Graphics course, which was a great learning experience for me.
I would also like to thank Prof. Carlo Sequin and Prof. Maneesh Agrawala for welcoming me into their classes and helping my knowledge of graphics grow. I also thank Prof. Alyosha Efros and Prof. Marty Banks for providing constructive criticism as my qualifying exam and dissertation committee members.

I want to thank my colleagues Dr. Dikpal Reddy, Dr. Jiamin Bai, and Dr. Michael Tao, who provided great advice on graduate school. I want to thank my collaborators and friends Ling-Qi Yan, Brandon Wang and Eric Yao for their inputs and contributions.

This work was supported in part by NSF grant CGV 1115242, an NVIDIA PhD fellowship, GPU donations from NVIDIA, and funding from the Intel Science and Technology Center for Visual Computing.

Chapter 1

Introduction

The central focus of this thesis is speeding up photo-realistic rendering in 3-dimensional computer graphics. Rendering refers to converting a 3-dimensional virtual collection of objects with different reflective properties, viewed through a camera and illuminated by some light sources, into a 2-dimensional red-green-blue image. The scene, i.e. the objects, materials, lights and camera, is assumed to be known. Rendering a single image requires computing the light reaching the camera at each pixel, and each image typically contains about a million pixels.

1.1 Motivation

Computing the color at each pixel requires evaluating a complicated integral involving light energy bouncing around the scene in the steady state. Specifically, we deal with photo-realistic rendering effects, commonly termed "distribution effects", such as

1. Soft Shadows (integrating over an area light),
2. Indirect Illumination (integrating over light reflected from other objects or surfaces),
3. Depth of Field (integrating over all points on the camera lens),
4. Motion Blur (integrating over the time for which the camera shutter remains open),
5. Environment Lighting (integrating over distant lighting coming from all directions).
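Each of these effects amounts to a Monte-Carlo estimate of a per-pixel integral. As an illustrative sketch only (not the renderer used in this thesis; the scene here is a hypothetical sphere blocker under a square area light), the soft-shadow case averages binary visibility over random shadow rays toward the light:

```python
import random

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray-sphere intersection test; returns True on a forward hit.
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    return disc >= 0 and (-b - disc ** 0.5) > 1e-6

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def soft_shadow(point, light_center, light_size, blocker_center,
                blocker_radius, n_samples):
    # Average binary visibility over random points on a square area light.
    hits = 0
    for _ in range(n_samples):
        lx = light_center[0] + (random.random() - 0.5) * light_size
        ly = light_center[1] + (random.random() - 0.5) * light_size
        d = normalize([t - p for t, p in
                       zip([lx, ly, light_center[2]], point)])
        if not ray_hits_sphere(point, d, blocker_center, blocker_radius):
            hits += 1
    return hits / n_samples  # fraction of the light that is visible

random.seed(0)
# A shaded point at the origin; a sphere blocker halfway to the light.
v = soft_shadow([0, 0, 0], [0, 0, 10], 4.0, [0, 0, 5], 1.0, 256)
print(v)  # a partial (penumbra) visibility, strictly between 0 and 1
```

The noise in such an estimate falls only as one over the square root of the sample count, which is what motivates sharing samples between pixels rather than simply increasing `n_samples`.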
The integrand consists of a high-dimensional "light field" that is a function of coordinates in camera, light, and/or directional space. In most scenarios, the integrand light fields at adjacent or nearby pixels overlap across multidimensional domains. But in traditional rendering, computational results are not shared between pixels. These multidimensional effects are expensive, but also crucial for realism in high-quality offline rendering. Many approximate techniques have been proposed for each effect, but they introduce bias and fail to produce the accurate result. The correct, physically-based solution is only possible through Monte-Carlo path-tracing.

As the complexity within a pixel's multidimensional domain increases, the computation time required to render an image using Monte-Carlo path-tracing also increases. For example, for shadows cast by an area light, as the light source becomes larger, each pixel sees more complex geometry and the occlusion signal has more variance. Capturing this variance, i.e. reducing the noise, with previous methods requires computing more samples for each pixel, which increases render times. Another observation is that as the complexity of these effects increases, the final image content is often smooth. Intuitively, for shadows, the larger the size of the light source, the blurrier (softer) the shadows. This blur in turn removes high frequencies from the final image. Putting these two observations together, we come to an ironic conclusion. Using simple Monte-Carlo ray-tracing, as the light size increases, more rays are required, but the complexity and spatial frequencies in the final image actually decrease due to the blurring or filtering from the light source, and we end up devoting more and more resources to computing a simpler and simpler result. One of the key insights is that as complexity increases inside a single pixel, it is often true that there is a corresponding increase in overlap between the integral domains of nearby pixels.
For example, nearby pixels see the same blockers casting shadows from the light source. Similar arguments can be applied to other distribution effects. At an intuitive level, it seems obvious that we should be able to share information to reduce the total computation in these cases. However, robustly deriving how much information to share, and how to share it quickly, is a more difficult problem, and it is the problem addressed by this thesis.

1.2 Contributions

A large body of recent work has indicated that the number of rays in Monte-Carlo rendering can be dramatically reduced if we share samples between pixels. Previous work has extended image denoising techniques to Monte-Carlo images. Other work has shown a Fourier analysis for some of these effects, deriving a compact filter that aligns with the slope of the spectrum. Although these methods based on sheared filtering, statistical denoising or light field reconstruction can denoise very sparsely sampled images, their practical filtering algorithms are very slow, making them unsuitable for interactive use. In this thesis, we make a different set of tradeoffs. We use a simple filter to reduce the number of Monte-Carlo samples considerably compared to brute force, but less than some previous methods. However, we benefit from having an extremely simple filtering step, which reduces to a spatially-varying image-space blur. This can be performed extremely fast, and enables our method to be integrated into a GPU-accelerated ray-tracer. The final algorithm is essentially an image denoiser that is very simple to implement and can operate with minimal overhead. Our analysis also provides adaptive sampling rates that reduce noise throughout the image. Therefore, we are able to achieve interactive frame rates while obtaining the benefits of high-quality ray-traced distribution effects.
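The spatially-varying image-space blur mentioned above can be sketched as a per-pixel Gaussian whose width is driven by a per-pixel bandwidth. The code below is a minimal illustration under assumed inputs: the `sigma` array stands in for the bandwidths that the Fourier analysis in later chapters would compute, and the sketch omits the edge-aware neighbor rejection the real algorithm uses:

```python
import math

def spatially_varying_blur(image, sigma, radius=5):
    # image: 2D list of floats (a noisy grayscale Monte-Carlo result).
    # sigma: per-pixel Gaussian standard deviations in pixels;
    #        sigma[y][x] <= 0 means "do not filter this pixel"
    #        (e.g. a pixel whose bandwidth demands full sharpness).
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = sigma[y][x]
            if s <= 0:
                out[y][x] = image[y][x]
                continue
            total, weight = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wgt = math.exp(-(dx * dx + dy * dy) / (2 * s * s))
                        total += wgt * image[ny][nx]
                        weight += wgt
            out[y][x] = total / weight  # normalized Gaussian average
    return out
```

Because the filter is a plain weighted average over a small image window, each output pixel is independent, which is what makes a fast GPU implementation straightforward; the chapters that follow derive the bandwidths that replace the placeholder `sigma`.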