Image-Based Techniques for Digitizing Environments and Artifacts

Paul Debevec
University of Southern California Institute for Creative Technologies, Graphics Laboratory
13274 Fiji Way, 5th floor, Marina del Rey, California, USA, 90292
[email protected]

Abstract

This paper presents an overview of techniques for generating photoreal computer graphics models of real-world places and objects. Our group's early efforts in modeling scenes involved the development of Façade, an interactive photogrammetric modeling system that uses geometric primitives to model the scene, and projective texture mapping to produce the scene appearance properties. Subsequent work has produced techniques to model the incident illumination within scenes, which we have shown to be useful for realistically adding computer-generated objects to image-based models. More recently, our work has focussed on recovering lighting-independent models of scenes and objects, capturing how each point on an object reflects light. Our latest work combines three-dimensional range scans, digital photographs, and incident illumination measurements to produce lighting-independent models of complex objects and environments.

1. Introduction

A basic goal of three-dimensional modeling and imaging has been to digitize photorealistic models of real-world places and objects. The potential benefits of these techniques to cultural heritage are significant: objects and places can be archived in digital forms less susceptible to decay and plunder; scholars can test and communicate theories of reconstruction without disturbing original artifacts; and the world's artistic and architectural heritage can be vividly and widely shared among all those with access to a computer. It will likely be a long time before an exact computer representation of an artifact – a record, one could suppose, of the object's atomic structure – will be able to be nondestructively recorded, stored, and rendered. Instead, most digitizing techniques have focussed on recording an artifact's surface geometry and its photometric properties – the information necessary to render it from different viewing angles and seen under various forms of illumination. Despite much insightful work in this area, no fully general technique exists for faithfully digitizing objects having arbitrarily complex geometry and spatially-varying reflectance properties.

The computer graphics research my group has undertaken in recent years has involved a variety of object and scene modeling methods, ranging from techniques based principally on recovering geometry to those based more significantly on acquiring images. In this paper I will discuss several of these projects with a particular regard to the nature of the models acquired, why the models were appropriate for the particular applications being explored, and which aspects of the artifacts have been left unrecorded. While none of these projects on its own presents a general solution to the scene and object digitizing problem, evaluating their strengths and weaknesses together suggests new avenues for combining geometric and photometric measurements of objects for digitizing a more general set of complex objects.

2. Immersion '94: Modeling an Environment from Stereo

In 1994 I worked in a group led by media artist Michael Naimark working to analyze a sequence of forward-looking stereo images he had shot at one-meter intervals along several miles of the trails in Banff National Forest. Our goal for the summer was to turn these sets of stereo image pairs into a photorealistic virtual environment. The technique we used was to determine stereo correspondences, and thus depth, between left-right pairs of images, and then to project the corresponded pixels forward into the 3D world. For this we used a stereo algorithm developed by John Woodfill and Ramin Zabih [18].
To create virtual renderings, we projected a supersampled version of the points onto a virtual image plane displaced from the original point of view, using a Z-buffer to resolve the occlusions. With just one stereo pair, we could realistically re-render the scene from anywhere up to a meter away from the original camera positions, except for artifacts resulting from areas that were unseen in the original images, such as the areas originally hidden behind tree trunks. To fill in these occluded areas for novel views, our system would pick the two closest stereo pairs to the desired virtual point of view, and render both to the desired novel point of view. These images were then optically composited so that wherever one lacked information, the other would fill it in. In areas where both images had information, the data was linearly blended according to which original view the novel view was closer to. The result was the ability to realistically move through the forest, as long as one kept the virtual viewpoint within about a meter of the original path. Naimark presented this work at the SIGGRAPH 95 panel "Museums without Walls: New Media for New Museums" [1], and the animations may be seen at the Immersion project website [15].

The results that were achieved were exciting at the time; a scene that at the beginning seemed impossible to model had been "digitized" in the sense that nearly photoreal rendered images could be created of flying down the forest trail.
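The core of this renderer is a point-splatting step: each pixel is cast out to its stereo-estimated depth and drawn into the displaced view, with a Z-buffer keeping only the nearest sample at each destination pixel. The following is a minimal sketch of that reprojection step; the function name, camera conventions, and array layout are assumptions for illustration, not the project's actual code, which additionally supersampled the points.

```python
import numpy as np

def reproject_with_zbuffer(depth, image, K, R, t):
    """Splat pixels with known depth into a displaced virtual camera,
    resolving occlusions with a Z-buffer.

    depth: (h, w) per-pixel depth from stereo; image: (h, w, 3) colors;
    K: 3x3 intrinsics; (R, t): motion from the source camera frame into
    the virtual camera frame. Illustrative conventions only, not the
    Immersion '94 implementation.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel along its viewing ray to its stereo depth.
    rays = np.linalg.inv(K) @ np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])
    pts = rays * depth.ravel()              # 3 x N points in the source frame
    proj = K @ (R @ pts + t[:, None])       # project into the virtual camera
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)
    z2 = proj[2]
    colors = image.reshape(-1, 3)
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)
    valid = (z2 > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    for i in np.flatnonzero(valid):
        if z2[i] < zbuf[v2[i], u2[i]]:      # keep the nearest surface sample
            zbuf[v2[i], u2[i]] = z2[i]
            out[v2[i], u2[i]] = colors[i]
    return out, zbuf
```

Filling the disoccluded holes then amounts to running this for the two stereo pairs nearest the virtual viewpoint and linearly blending the two results by proximity, as the system's compositing step did.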
Nonetheless, the model of the scene obtained by the technique did not allow the virtual camera to move more than a meter or so from the original path, and the scene could only be seen in the original lighting in which it was photographed. The next project I will discuss aimed to free the camera to fly around the scene.

Figure 1. The Immersion '94 image-based modeling and rendering project. The top images are a stereo pair (reversed for cross-eyed stereo viewing) taken in Banff National Forest. The middle left photo is a stereo disparity map produced by John Woodfill's parallel implementation of the Zabih-Woodfill stereo algorithm [18]. To its right the map has been processed using a left-right consistency check to invalidate regions where running stereo based on the left image and stereo based on the right image did not produce consistent results. Below are two virtual views generated by casting each pixel out into space based on its computed depth estimate, and reprojecting the pixels into novel camera positions. On the left is the result of virtually moving one meter forward, on the right is the result of virtually moving one meter backward. Note the dark disoccluded areas produced by these virtual camera moves; these areas were not seen in the original stereo pair. In the Immersion '94 animations (available at http://www.debevec.org/Immersion), these regions were automatically filled in from neighboring stereo pairs.

3. Photogrammetric Modeling and Rendering with Façade

My thesis work [6] at Berkeley, done in collaboration with C.J. Taylor, presented a system for modeling and rendering architectural scenes from photographs. Architectural scenes are an interesting case of the general modeling problem since their geometry is typically very structured, and they are also one of the most common types of environment to model. The goal of the research was to model architecture in a way that is convenient, requires relatively few photographs, and produces freely navigable and photorealistic models.

The product of this research was Façade [8], an interactive computer program that enables a user to build photorealistic architectural models from a small set of photographs. In Façade, the user builds a 3D model of the scene by specifying a collection of geometric primitives such as boxes, arches, and surfaces of revolution. However, unlike in a traditional modeling program, the user does not need to specify the dimensions or the locations of these pieces. Instead, the user corresponds edges in the model to edges marked in the photographs, and the computer uses an optimization process to determine the shapes and positions of the primitives that make the model agree with the photographed geometry (Fig. 2).
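To make the nature of this optimization concrete, the sketch below poses a drastically reduced version of it: a single box primitive parameterized by its width, height, and depth, camera poses taken as known, and a least-squares objective on the distance between projected model edge endpoints and user-marked image points. Façade's real formulation is richer (it also recovers the camera poses and measures point-to-line distances to marked edges); every name here is illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def project(K, R, t, X):
    """Project 3D points X (N x 3) into a camera with intrinsics K and pose R, t."""
    x = (K @ (R @ X.T + t[:, None])).T
    return x[:, :2] / x[:, 2:3]

def box_edges(dims):
    """Endpoints of several edges of an axis-aligned box with dimensions
    (w, h, d); a stand-in for Facade's parameterized blocks."""
    w, h, d = dims
    c = np.array([[0, 0, 0], [w, 0, 0], [w, h, 0], [0, h, 0],
                  [0, 0, d], [w, 0, d], [w, h, d], [0, h, d]], dtype=float)
    pairs = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (0, 4), (1, 5)]
    return np.array([[c[a], c[b]] for a, b in pairs])   # (n_edges, 2, 3)

def residuals(dims, cameras, marked):
    """Stacked image-space distances between projected model edge endpoints
    and the user-marked edge endpoints, over all photographs."""
    errs = []
    for (K, R, t), observations in zip(cameras, marked):
        edges = box_edges(dims)
        for edge_index, uv in observations:   # uv: 2x2 marked endpoints
            errs.append((project(K, R, t, edges[edge_index]) - uv).ravel())
    return np.concatenate(errs)

# Recover the block dimensions that best explain the marked edges:
# fit = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0]),
#                     args=(cameras, marked))
```

The point of the reduction is visible in the parameter vector: three architectural dimensions stand in for what would otherwise be dozens of free vertex coordinates.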
Façade solves directly for the architectural dimensions of the scene: the lengths of walls, the widths of doors, and the heights of roofs, rather than the multitude of vertex coordinates that a standard photogrammetric approach would try to recover. As a result, the reconstruction problem becomes simpler by orders of magnitude, both in computational complexity and in the number of image features that it is necessary for the user to mark. The technique also allows the user to exploit architectural symmetries, modeling repeated structures and computing redundant dimensions only once. Like any structure-from-multiple-views algorithm, Façade solves for where the original cameras were in the scene. With the camera positions known, any one of the photographs can be projected back onto the reconstructed geometry using projective texture mapping, which allows Façade to generate photorealistic views of the scene by using all of the available photographs.

Some additional research done in the context of the Façade system enables the computer to automatically refine a basic recovered model to conform to more complicated architectural geometry. The technique, called model-based stereo, displaces the surfaces of the model to make them maximally consistent with their appearance across multiple photographs. Thus, a user can model a bumpy wall as a flat surface, and the computer will compute the relief. This technique was employed in modeling the West façade of the gothic Rouen cathedral for the interactive art installation Rouen Revisited shown at the SIGGRAPH 96 art show. Most of the area between the two main towers seen in Fig. 3 was originally modeled as a single polygon. The Rouen project also motivated the addition of new features to Façade to solve for unknown focal lengths and centers of projection in order to make use of historic photographs of the cathedral.

Figure 2. A screen snapshot from Façade. The windows include the image viewers at the left, where the user marks architectural edge features, and model viewers, where the user instantiates geometric primitives (blocks) and corresponds model edges to image features. Façade's reconstruction feature then determines the camera parameters and the positions and dimensions of all the blocks that make the model conform to the photographs. The other windows include the toolbar, the camera parameter dialog, the block parameter/constraint dialog, and the main image list window. See also http://www.debevec.org/Thesis/.

Figure 3. Rouen Revisited. Synthetic views of the Rouen cathedral from the Rouen Revisited art installation. Left: a synthetic view created from photographs taken in January, 1996. Middle: a synthetic view created from historic postcards showing the cathedral at the time Monet executed his series of paintings (1892-1894). Right: a synthetic view of one of Monet's twenty-eight paintings of the cathedral projected onto its historic geometry, rendering it from a novel viewpoint.

4. The Campanile Movie: Rendering in Real Time

The Campanile Movie project aimed to use Façade to reconstruct a model of the UC Berkeley bell tower and its entire surrounding campus, to increase the photorealism of the renderings by showing both a building and its context. For data capture, I took photographs from the ground, from the tower, and (thanks to Berkeley professor of architecture Cris Benton) from above the tower using a kite. The final model we built in Façade contained forty of the campus buildings; the buildings further away appeared only as textures projected onto the ground. There were a few thousand polygons in the model, and the sixteen images (Fig. 4) used in rendering the scene fit precisely into the available texture memory of the Silicon Graphics RealityEngine we used for rendering. Using OpenGL and a hardware-accelerated view-dependent texture-mapping technique, selectively blending between the original photographs depending on the user's viewpoint [9], made it possible to render the scene convincingly in real time.

Figure 4. The Campanile Movie. At top are the original sixteen photographs used for rendering; four additional aerial photographs were used in modeling the campus geometry. In the middle is a rendering of the campus buildings reconstructed from the photographs using Façade; the final model also included photogrammetrically recovered terrain extending out to the horizon. At bottom are two computer renderings of the Berkeley campus model obtained through view-dependent texture mapping from the SIGGRAPH 97 animation. The film can be seen at http://www.debevec.org/Campanile/.
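As a sketch of the view-dependent blending idea, reduced to its core (the weighting scheme and names below are illustrative simplifications of the technique in [9], not its hardware-accelerated implementation), each photograph can be weighted by how closely its view of a surface point matches the virtual view:

```python
import numpy as np

def vdtm_weights(surface_point, photo_centers, virtual_center, k=2):
    """Blending weights for view-dependent texture mapping at one point.

    Each photograph is scored by the angle between its view of the surface
    point and the virtual camera's view; the k best-aligned photographs
    receive weights that fall smoothly to zero at the angle of the next-best
    photo, so the blend changes continuously as the viewpoint moves.
    """
    def view_dir(center):
        d = surface_point - center
        return d / np.linalg.norm(d)

    v = view_dir(virtual_center)
    angles = np.array([np.arccos(np.clip(view_dir(c) @ v, -1.0, 1.0))
                       for c in photo_centers])
    order = np.argsort(angles)
    cutoff = angles[order[k]]               # angle of the (k+1)-th best photo
    weights = np.maximum(cutoff - angles, 0.0)
    return weights / weights.sum()
```

The smooth falloff matters in practice: a hard switch between photographs would make the texture pop as the camera flies past the crossover viewpoint.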
5. Fiat Lux: Capturing Lighting and Adding Objects

We most recently used Façade to model and render the interior of St. Peter's Basilica for the animation Fiat Lux (Fig. 5), which was shown at the SIGGRAPH 99 Electronic Theater. In Fiat Lux, our goal was to not only create virtual renderings of moving through St. Peter's, but to augment the space with animated computer-generated objects in the service of an abstract interpretation of the conflict between Galileo and the church.

The key to making the computer-generated objects appear to be truly present in the scene was to illuminate the CG objects with the actual illumination from the Basilica. To record the illumination we used a high dynamic range photography method [7] we had developed in which a series of pictures taken with differing exposures are combined into a radiance image; without the technique, cameras do not have nearly the range of brightness values to accurately record the full range of illumination in the real world. We then used an image-based lighting [4] technique to illuminate the CG objects with the images of real light using a global illumination rendering system. In addition, we used an inverse global illumination [17] technique to derive lighting-independent reflectance properties of the floor of St. Peter's, allowing the objects to cast shadows on and appear in reflections in the floor. Having the full range of illumination was additionally useful in producing a variety of realistic effects of cinematography, such as soft focus, glare, vignetting, and lens flare.

Figure 5. Fiat Lux. The animation Fiat Lux shown at the SIGGRAPH 99 Electronic Theater used Façade [8] to model and render the interior of St. Peter's Basilica from a single panorama assembled from a set of ten images. Each image was acquired using high dynamic range photography [7], in which each image is taken with a range of different exposure settings and then assembled into a single image that represents the full range of illumination in the scene. The recovered lighting was used to illuminate the synthetic objects added to the scene, giving them the correct shading, shadows, reflections, and highlights. The film can be seen at http://www.debevec.org/FiatLux/.
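The radiance-image assembly of [7] reduces to a weighted average in the log domain once the camera's inverse response curve is known. A condensed sketch follows, assuming g is a 256-entry lookup table mapping pixel value to log exposure (recovering g itself is the other half of [7], omitted here):

```python
import numpy as np

def assemble_radiance(images, exposure_times, g):
    """Merge differently exposed photographs into one radiance map.

    images: list of (h, w) uint8 arrays for one channel; exposure_times:
    shutter time of each photograph in seconds; g: 256-entry array with
    the recovered inverse camera response, g[z] = log exposure for pixel
    value z. Condensed from the weighting scheme of [7].
    """
    num = np.zeros(images[0].shape)
    den = np.zeros(images[0].shape)
    for img, dt in zip(images, exposure_times):
        z = img.astype(int)
        w = np.minimum(z, 255 - z)            # hat weights: trust mid-range pixels
        num += w * (g[z] - np.log(dt))        # per-pixel log-radiance estimates
        den += w
    return np.exp(num / np.maximum(den, 1e-6))
```

The hat-shaped weighting is the key idea: each exposure contributes most where its pixels are neither clipped to black nor saturated to white, so the short exposures record the bright windows and the long exposures record the dark vaults.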
6. The Light Stage: A Photometric Approach to Digitizing Cultural Artifacts

The best techniques presented to date for modeling real-world artifacts involve two stages. First, the geometry of the artifact is measured using a range scanning device and some sort of mesh merging algorithm [13, 16, 12]. Second, photographs taken under controlled lighting conditions are analyzed to determine the color of each point on the object's surface to produce texture maps for the object. A notable exception to this process is the trichromatic laser scanner device presented in [3, 2], which is able to measure object surface color in the same process as obtaining the geometric laser scan. All of these techniques have produced 3D models with accurate models of surface geometry as well as lighting-independent diffuse texture maps.

These techniques represent significant advances in the field of artifact digitization, and have shown themselves to work particularly well on objects with close to Lambertian reflectance properties such as aged marble sculptures. However, these existing techniques for model acquisition are difficult to apply to a large class of artifacts exhibiting spatially varying specular reflection properties, complex BRDFs, translucency, and subsurface scattering. As a result, artifacts featuring polished silver, gold, glass, fur, cloth, jewels, jade, leaves, or feathers remain very challenging to accurately digitize and to convincingly render. Since a large class of cultural artifacts are specifically designed to have intricate geometry and to reflect light in interesting ways, developing new techniques for such objects is an important avenue for research.

Our recent work involving the Light Stage [5, 11] (Figure 6) has explored a technique for creating re-lightable computer models of artifacts and people without explicitly modeling their geometric or reflectance properties. Instead, the artifact is illuminated from a dense array of incident illumination directions and a set of digital images is taken to record the artifact's reflectance field, a function that records how the object would appear seen from any angle as illuminated by any lighting environment. The data within a reflectance field can be quickly combined together in order to produce images of an artifact under any form of illumination, including lighting environments captured from the real world.

In [11], our group used an improved light stage device to digitize several Native American cultural artifacts exhibiting many of the geometric and reflectance properties which have traditionally been difficult to model and render. We furthermore described an interactive lighting tool that allows artifacts to be re-illuminated by a user in real time, and proposed image-based rendering techniques that allow an artifact to be manipulated in 3D as well as being arbitrarily illuminated. An advantage of this technique for capturing and rendering objects is that the object need not have well-defined surfaces or easy-to-model reflectance properties; the object can have arbitrary translucency, self-shadowing, interreflection, subsurface scattering, and fine surface detail. This is helpful for modeling and rendering subjects such as human faces, jewelry, and clothing, which exhibit all of these properties.

The significant drawback to this approach is the large amount of data that needs to be captured and stored for the object: on the order of at least a thousand lighting directions for each object viewing direction. As a result, we have yet to capture a model with both a dense array of viewing and lighting directions, so the changes in viewpoint we have been able to obtain thus far have been limited.

A recent project at MIT and MERL [14] used a device similar to our light stage to capture several objects from moderately dense arrays of viewing directions and approximately 64 lighting directions, and used a silhouette-based volumetric intersection technique to obtain rough geometry for the digitized objects. As we saw earlier with our Façade work, using basic scene geometry as a structure for projecting images allows sparsely sampled viewing directions to be effectively extrapolated to new viewing directions, at least for relatively diffuse objects where the light reflecting from a surface remains nearly constant under changing viewing directions. Something that still has not been leveraged in the digitizing process is that the way a particular point on an object surface reflects light (the point's four-dimensional BRDF) can often be well-approximated by a low-parameter reflectance model encompassing, for example, diffuse and specular parameters. The next project I will describe exploits this characteristic.

Figure 6. Digitizing objects in the Light Stage. The Light Stage [5, 11] is designed to illuminate an artifact from every direction light can come from in a short period of time. In (a), a Native American headdress is being digitized. This allows a digital video camera to directly capture the subject's reflectance field, how it transforms incident illumination into radiant illumination, as an array of images (b). As a result, we can then synthetically illuminate the subject under any form of complex illumination directly from this captured data. The Native American headdress is shown as if illuminated by light from a cathedral (c) and a forest (d).
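The combination step that makes reflectance fields practical rests on the linearity of light transport: an image of the object under any lighting is a weighted sum of the images taken under each light stage direction. A minimal sketch, assuming the reflectance field for one viewpoint is stored as an array of basis photographs (array shapes are assumptions for illustration):

```python
import numpy as np

def relight(reflectance_field, env_weights):
    """Relight an object from its captured reflectance field.

    reflectance_field: (n_lights, h, w, 3) array holding one photograph of
    the object per light stage direction; env_weights: (n_lights, 3) RGB
    intensity of the novel lighting environment sampled at those same
    directions.
    """
    # Linearity of light transport: the relit image is a weighted sum of
    # the basis photographs, one term per lighting direction.
    return (reflectance_field * env_weights[:, None, None, :]).sum(axis=0)
```

The environment weights can come from an HDR lighting measurement of a real space resampled at the light stage's directions, which is how captured real-world illumination such as the cathedral and forest environments of Figure 6 (c) and (d) is applied.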
7. Linear Light Source Reflectometry

An important aspect of many cultural artifacts is that they have spatially-varying reflectance properties: they are made of different materials, each of which reflects light in a different way. An example is an illuminated manuscript, in which different colors of inks and different types of precious metals are applied to the surface of parchment to produce textual designs and illustrations. We have found the case of digitizing an illuminated manuscript to be particularly good for developing algorithms to digitize spatially-varying reflectance properties since the geometry of a manuscript is relatively flat, making the geometry recovery and reflectometry processes simpler.

Digitizing the manuscript within the light stage would be possible, but since the manuscript contains both diffuse and sharply specular regions it would need to be illuminated by a large number of lighting and viewing directions to be faithfully captured. To avoid this problem, we developed a new digitizing technique described in [10] based on using a translating linear light source as the illuminant (see Fig. 7). We found that in passing the light source over the artifact and viewing it from an oblique angle, we could independently observe the diffuse and specular reflection of the light at each surface point. To turn these observations into digitized surface reflectance properties, we fit the observed diffuse and specular reflections to a standard physically-motivated reflectance model with diffuse, specular, and specular roughness coefficients. Furthermore, we found that by passing the light source across the object twice at different angles, we can estimate surface normals at every point on the object as well. Simple extensions to the technique, adding a laser stripe and a back light, make it possible to capture per-pixel surface height as well as per-pixel surface translucency in addition to the other reflectance parameters.

Figure 7. Linear Light Source Reflectometry. The spatially-varying surface reflectance parameters of an illuminated manuscript are recorded using a simple apparatus based on a translating linear light source. The per-pixel estimates of diffuse and specular parameters, as well as surface normals, can be used to create photoreal real-time renderings (below) of the artifact from any viewing angle and in any lighting environment.
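Per pixel, the reflectometry amounts to fitting a few parameters to the intensity profile observed as the light source translates: a broad diffuse component plus a narrow specular peak where the source crosses the mirror direction. The sketch below fits a generic diffuse-plus-Gaussian-lobe stand-in for the physically-motivated model actually fit in [10]; the parameterization and names are assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def profile_model(x, kd, ks, width, x_spec):
    """Pixel intensity as the linear light source passes position x: a
    diffuse level kd plus a specular peak of strength ks and width 'width'
    centered where the source crosses the mirror direction x_spec."""
    return kd + ks * np.exp(-((x - x_spec) / width) ** 2)

def fit_pixel(source_positions, intensities):
    """Least-squares recovery of per-pixel reflectance parameters."""
    p0 = [float(intensities.min()),                       # diffuse level
          float(np.ptp(intensities)),                     # specular peak height
          0.05 * float(np.ptp(source_positions)),         # initial peak width
          float(source_positions[np.argmax(intensities)])]  # peak location
    params, _ = curve_fit(profile_model, source_positions, intensities, p0=p0)
    return dict(zip(("kd", "ks", "width", "x_spec"), params))
```

Run over every pixel, such a fit yields the diffuse, specular, and roughness maps; the recovered peak position x_spec from two passes at different angles is what over-constrains the per-pixel surface normal, as described above.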
8. Discussion

Looking at these projects together, we can see that each one of them can be characterized as acquiring or estimating a portion of the reflectance field of an environment or artifact: a dataset of how an environment or artifact would look from any angle under any form of incident illumination. The problems of digitizing environments and objects are related but have differences. Environments are larger, and often more geometrically complex, making them harder to photograph from a dense sampling of all possible angles. Environments are also harder to illuminate with controlled lighting; outdoor scenes are most easily illuminated by natural lighting rather than controlled sources, and indoor scenes tend to exhibit a great deal of mutual illumination between surfaces. As a result, many computer models which have been made of real-world environments are lighting-dependent, showing the scene under just one form of illumination, such as in the Banff forest, Berkeley campus, and St. Peter's Basilica models. Nonetheless, these models can be very effective digitizations, perhaps because for environments we are able to consider the illumination conditions to be part of the environment.

For digitizing artifacts, being able to capture their lighting-independent reflectance properties is much more important. Part of the reason is that when we view a virtual artifact from different angles, we most naturally assume that the lighting stays in our own frame of reference rather than rotating with the artifact. Also, in order to place a virtual artifact into a virtual museum environment, it is necessary to illuminate the artifact with the light present in the appropriate part of the museum. Fortunately, artifacts are in many respects easier to digitize than environments. They are smaller, and thus can be brought into object scanning facilities to be photographed from many angles and illuminated under controlled forms of illumination. As a result, most artifact modeling projects, such as the Native American artifacts and the illuminated manuscript, have been successful at recovering lighting-independent models of the artifacts. However, the native artifacts could only be viewed from a limited set of viewing angles, and the manuscript, being a flat object, could be digitized without concern for self-occlusion and mutual illumination effects.

9. Conclusion

The projects surveyed in this paper each present a different approach to digitizing artifacts and environments. While each aims to produce photoreal results, the methods significantly differ in the nature of the measurements taken, the amount of user input required, the amount of geometric detail recovered, and the degree to which the models can be seen under novel lighting and from new viewpoints. Methods for environment and object digitizing in the future will likely draw upon several of these techniques to improve their results, though techniques for digitizing objects as compared to environments will likely continue to be significantly different.

Acknowledgements

The projects described in this paper are joint work with Michael Naimark, John Woodfill, Leo Villareal, Golan Levin, C.J. Taylor, Jitendra Malik, George Borshukov, Yizhou Yu, Tim Hawkins, Jonathan Cohen, Andrew Gardner, and Chris Tchou. Abhijeet Ghosh and Chris Tchou prepared the illuminated manuscript renderings in Fig. 7, and Maya Martinez arranged for the use of the cultural artifacts used in this work. This work has been funded by Interval Research Corporation, the National Science Foundation, the California MICRO program, Interactive Pictures Corporation, JSEP contract F49620-93-C-0014, U.S. Army contract DAAD19-99-D-0046, and the University of Southern California. The content of this information does not necessarily reflect the position or policy of the sponsors and no official endorsement should be inferred.
References

[1] A. C. Addison, D. MacLeod, G. Margolis, M. Naimark, and H.-P. Schwartz. Museums without walls: New media for new museums. In R. Cook, editor, Computer Graphics annual Conference Series (SIGGRAPH 95), pages 480–481, August 1995.
[2] R. Baribeau, L. Cournoyer, G. Godin, and M. Rioux. Colour three-dimensional modelling of museum objects. Imaging the Past, Electronic Imaging and Computer Graphics in Museum and Archaeology, pages 199–209, 1996.
[3] R. Baribeau, M. Rioux, and G. Godin. Color reflectance modeling using a polychromatic laser sensor. IEEE Trans. Pattern Anal. Machine Intell., 14(2):263–269, 1992.
[4] P. Debevec. Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In SIGGRAPH 98, July 1998.
[5] P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar. Acquiring the reflectance field of a human face. Proceedings of SIGGRAPH 2000, pages 145–156, July 2000.
[6] P. E. Debevec. Modeling and Rendering Architecture from Photographs. PhD thesis, University of California at Berkeley, Computer Science Division, Berkeley CA, 1996. http://www.debevec.org/Thesis.
[7] P. E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In SIGGRAPH 97, pages 369–378, August 1997.
[8] P. E. Debevec, C. J. Taylor, and J. Malik. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In SIGGRAPH 96, pages 11–20, August 1996.
[9] P. E. Debevec, Y. Yu, and G. D. Borshukov. Efficient view-dependent image-based rendering with projective texture-mapping. In 9th Eurographics Workshop on Rendering, pages 105–116, June 1998.
[10] A. Gardner, C. Tchou, T. Hawkins, and P. Debevec. Linear light source reflectometry. In Proceedings of SIGGRAPH 2003, Computer Graphics Proceedings, Annual Conference Series, July 2003.
[11] T. Hawkins, J. Cohen, and P. Debevec. A photometric approach to digitizing cultural artifacts. In Proc. 2nd International Symposium on Virtual Reality, Archaeology, and Cultural Heritage (VAST 2001), pages 333–342, December 2001.
[12] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk. The digital Michelangelo project: 3D scanning of large statues. Proceedings of SIGGRAPH 2000, pages 131–144, July 2000.
[13] S. Marschner. Inverse Rendering for Computer Graphics. PhD thesis, Cornell University, August 1998.
[14] W. Matusik, H. Pfister, A. Ngan, P. Beardsley, R. Ziegler, and L. McMillan. Image-based 3D photography using opacity hulls. ACM Transactions on Graphics, 21(3):427–437, July 2002.
[15] M. Naimark, J. Woodfill, P. Debevec, and L. Villareal. Immersion '94. http://www.debevec.org/Immersion/, 1994.
[16] H. Rushmeier, F. Bernardini, J. Mittleman, and G. Taubin. Acquiring input for rendering at appropriate levels of detail: Digitizing a pietà. Eurographics Rendering Workshop 1998, pages 81–92, June 1998.
[17] Y. Yu, P. Debevec, J. Malik, and T. Hawkins. Inverse global illumination: Recovering reflectance models of real scenes from photographs. In SIGGRAPH 99, August 1999.
[18] R. Zabih and J. Woodfill. Non-parametric local transforms for computing visual correspondence. In European Conference on Computer Vision, pages 151–158, May 1994.
