Depth from Optical Turbulence

Yuandong Tian, Carnegie Mellon University ([email protected])
Srinivasa G. Narasimhan, Carnegie Mellon University ([email protected])
Alan J. Vannevel, U.S. Naval Air Warfare Center ([email protected])

Abstract

Turbulence near hot surfaces, such as desert terrains and roads during the summer, causes shimmering, distortion and blurring in images. While recent works have focused on image restoration, this paper explores what information about the scene can be extracted from the distortion caused by turbulence. Based on the physical model of wave propagation, we first study the relationship between the scene depth and the amount of distortion caused by homogeneous turbulence. We then extend this relationship to more practical scenarios such as finite-extent and height-varying turbulence, and present simple algorithms to estimate depth ordering, depth discontinuity and relative depth from a sequence of short-exposure images. In the case of general non-homogeneous turbulence, we show that a statistical property of turbulence can be used to improve long-range structure-from-motion (or stereo). We demonstrate the accuracy of our methods in both laboratory and outdoor settings and conclude that turbulence (when present) can be a strong and useful depth cue.

Figure 1. Random fluctuations in the refractive index of a medium cause the perturbation of a light wave radiating from a scene point. The resulting image projections of the scene point over time are also random. The longer the distance of a scene point from the camera, the greater the variance of its image projection. (Panel labels: near scene point, high variance; far scene point, low variance; pinhole; turbulence region.)

1. Introduction

The visual manifestations of clear-air turbulence occur often in our daily lives: from hot kitchen appliances like toasters and ovens, to plumes of airplanes, to desert terrains, to roads on hot summer days, to the twinkling of stars at night. The shimmering and distortion observed are caused by random fluctuations of temperature gradients near warm surfaces. In this case, the image projection of a scene point viewed through turbulence is no longer a deterministic process, which often leads to poor image quality.

Several works in remote sensing and astronomical imaging have focused on image correction through turbulence. For atmospheric turbulence, the distorted wavefronts arriving from stars can be optically corrected using precisely controlled deforming mirror surfaces, beyond the angular resolution limit of telescopes [16]. For terrestrial imaging applications, recent works have proposed to digitally post-process the captured images to correct for distortions and to deblur images [6, 3, 4, 9, 26]. Optical-flow-based methods have further been used to register the image sequences to achieve modest super-resolution [18].

While previous works have focused on what turbulence does to vision, this article addresses the question of what turbulence can do for vision. In other words, what information about the scene can be extracted when viewed through turbulence? Based on the physical model of wave propagation, we study the relationship between the scene depth and the amount of distortion caused by homogeneous turbulence over time (see an intuitive illustration in Fig. 1). Then, we extend this relationship to more practical scenarios of finite-extent and height-varying turbulence, and show how and in what scenarios we can estimate depth ordering, depth discontinuity and relative depths. Although general non-homogeneous turbulence does not directly yield depth information, its statistical property can be used along with a stereo camera pair to improve long-range depth estimation.

The input to our techniques is a sequence of short-exposure images captured from a stationary camera (or camera pair). Depth cues are obtained by first tracking image features and then computing the variances of tracker displacements over time. Any feature-tracking algorithm can be applied, such as one based on template matching. We verify our approaches in both laboratory and outdoor settings by comparing against known (ground-truth) distances of the scene from the camera.
We also analyze how depth cue estimation is influenced by the parameters of the imaging system, such as aperture, exposure time and the number of frames. The depth information computed is surprisingly accurate, even when the scene and camera are not within the turbulent region. Hence, we believe that turbulence should not only be viewed as "noise" that an imaging system must overcome, but also as an additional source of information about the scene that can be readily extracted.¹

¹ While not the focus of this work, the short-exposure (noisy) and distorted input images can be combined using a dense image alignment approach [21] to improve image quality (see supplementary material).
1.1. Related Work

Characterizing the structure of turbulence is one of the open problems in physics, with a long research history starting from the early methods of Kolmogorov [10]. For this work, we reference multiple textbooks by Kopeika [12], Tatarskii [20], Ishimaru [8] and Roggemann [16]. To our knowledge, the key physical model (Eqn. 3) in these texts has not been exploited by the computer vision community.

Direct measurement of turbulent media has received much attention in fluid dynamics. Shadowgraph and Schlieren imaging [17, 24] techniques are often used to capture the complex airflow around turbines, car engines and airplane wings. Image displacement observed in turbulent media has been shown to be proportional to the integral of the refractive index gradient field. This property is exploited in a tomographic approach [5] with many views to compute the density field of the medium from image displacements of known backgrounds. This approach, called Background Oriented Schlieren (BOS) imaging [23, 22, 15], has emerged as a new technique for flow visualization of density gradients in fluids. Such approaches have also been used to render refractive volumes of gas flow [2]. Similarly, there has been work [13] that aims to estimate the shape of a curvy refractive interface between two media (water and air, for example) using stereo and known backgrounds. In contrast, our work exploits image displacements to extract the depth cues of an unknown scene using an image sequence captured from a single viewpoint.

Figure 2. The phase difference of an incident wave (e.g., the phase of point B leads that of A) at the aperture determines the angle-of-arrival α (AoA) and, in turn, the center of the diffraction kernel, i.e., the location of the projected scene point in the image plane.

2. Characterization of Turbulence

Turbulence causes random fluctuations of the refractive index n(r, t) at each location r in the medium and at time t. From Kolmogorov's seminal work [10, 11], n(r, t) forms a random field in space-time and can be characterized by a structure function D(r_1, r_2, t) that computes the expected squared difference of the refractive index at two distinct spatial locations r_1 and r_2:
D(r_1, r_2, t) = \langle |n(r_1, t) - n(r_2, t)|^2 \rangle.   (1)

For stationary turbulence, the structure function is constant over t, i.e., D(r_1, r_2, t) = D(r_1, r_2). Stationary turbulence is homogeneous if D(r_1, r_2) = D(r), where r = r_1 - r_2. This means that the structure function depends only on the relative displacement of the locations. Homogeneous turbulence is isotropic if the structure function is spherically symmetric, i.e., D(r) = D(r), where r = ||r||. From dimensional analysis, Kolmogorov shows that the structure function follows a 2/3 power law [20]:

D(r) = C_n^2 \, r^{2/3},   (2)

where the constant C_n^2 reflects the strength of turbulence. For non-homogeneous turbulence, C_n^2 is a function of absolute location. A non-turbulent atmosphere has C_n^2 = 0.

In general, the strength C_n^2 of turbulence depends on a variety of environmental and physical factors, such as temperature, pressure, humidity and the wavelength of light, which in turn depend on the time of day (less during sunset and sunrise, more at mid-day), cloud cover (less during cloudy days and more during cloudy nights), and wind patterns. An empirical relationship between these factors and refractive index changes can be found in Kopeika's textbook [12].

3. Image Formation through Turbulence

When an electromagnetic wave propagates through a turbulent medium, it undergoes random fluctuations in both amplitude and phase. The perturbed phase determines the angle-of-arrival (AoA) of the light incident at the camera, which in turn fixes the projected location of the scene point in the image (Fig. 2). Mathematically, the propagation of an electric field under the influence of the turbulence structure function in Eqn. 2 can be obtained by solving Maxwell's equations. Since most surfaces and sources of interest to computer vision are at finite distances from the camera and produce divergent waves, we will consider the propagation of spherical waves. Then, following the derivations in [12, 8], the variance \langle \alpha^2 \rangle of the angles-of-arrival of the waves from a scene point at distance L from the camera is obtained by integrating along the line of sight:

\langle \alpha^2 \rangle = 2.914 \, D^{-1/3} \int_0^L C_n^2(z) \left( \frac{z}{L} \right)^{5/3} dz,   (3)

where D is the diameter of the aperture. The actual fluctuation \langle \delta^2 \rangle of the projected image location can be computed using the relation \delta = f \tan\alpha, where f is the focal length of the camera. For small angles, \delta \approx f\alpha.

In the following, we will discuss three important special cases of the above image formation model. We will address the general case of non-homogeneous turbulence in Section 7.
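As a concrete illustration of Eqn. 3 (not from the paper; the turbulence profile, aperture, focal length and pixel pitch below are made-up values), the following sketch numerically integrates an arbitrary C_n^2(z) profile and converts the resulting angle-of-arrival variance to an image-plane variance via \delta \approx f\alpha:

```python
# A sketch (not from the paper) that numerically evaluates Eqn. 3 for an arbitrary
# turbulence profile Cn2(z) and converts the angle-of-arrival variance to an
# image-plane variance via the small-angle relation delta ~ f * alpha.
# The profile and camera numbers below are illustrative assumptions only.
from math import exp
from scipy.integrate import quad

def aoa_variance(cn2_profile, L, aperture_D):
    """<alpha^2> = 2.914 * D^(-1/3) * integral_0^L Cn2(z) * (z/L)^(5/3) dz   (Eqn. 3).
    Following the finite-slab case of Eqn. 5, z is measured from the scene point."""
    integrand = lambda z: cn2_profile(z) * (z / L) ** (5.0 / 3.0)
    integral, _ = quad(integrand, 0.0, L)
    return 2.914 * aperture_D ** (-1.0 / 3.0) * integral

def pixel_variance(alpha2, focal_length_m, pixel_pitch_m):
    """Variance of the projected image location in pixels^2, using delta ~ f * alpha."""
    return alpha2 * (focal_length_m / pixel_pitch_m) ** 2

if __name__ == "__main__":
    # Hypothetical smooth bump of turbulence centered 30 m from the scene point.
    cn2 = lambda z: 1e-13 * exp(-((z - 30.0) / 15.0) ** 2)   # Cn^2 in m^(-2/3), assumed
    a2 = aoa_variance(cn2, L=100.0, aperture_D=0.03)          # 100 m path, 30 mm aperture
    print(pixel_variance(a2, focal_length_m=1.5, pixel_pitch_m=6.45e-6))
```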
First, consider a scenario where both the camera and the scene of interest are immersed in a homogeneous turbulence medium (for example, a road scene with vehicles on a hot summer day), as illustrated in Fig. 3(a). Since C_n^2 is a constant, we can integrate Eqn. 3 to obtain:

\langle \alpha^2 \rangle = 2.914 \, D^{-1/3} C_n^2 \int_0^L \left( \frac{z}{L} \right)^{5/3} dz = \frac{3}{8} K_n^2 L,   (4)

where K_n^2 = 2.914 \, D^{-1/3} C_n^2. So, the variance of projected positions of the scene point in the image plane over time is directly proportional to the distance L between the scene point and the camera. Setting aside the issue of spatial resolution, this linear relationship determines depth with constant precision for all distances within the turbulence region. By comparison, in stereo, the depth precision falls as the square of the distance from the camera to the scene.

Figure 3. Image formation through turbulence. (a): Both the camera and the scene are immersed within a homogeneous turbulence region. (b): The camera and/or scene are outside the turbulence region (path segments L_c, L_t, L_s along the total camera-to-scene distance L).

In many scenarios, like the plume of an aircraft or a steaming kettle, the source of turbulence may not extend over the entire line of sight from the camera to the scene. In this case, we will assume local homogeneity, i.e., C_n^2 is a constant within a short range and zero elsewhere. For convenience, we decompose L into three parts: L = L_s + L_t + L_c, as illustrated in Fig. 3. L_s is the distance between the scene point and the turbulence region, L_t is the path length within the turbulence region, and L_c is the distance between the camera and the turbulence region. Once again, we can integrate Eqn. 3 to obtain the analytic form:

\langle \alpha^2 \rangle = \frac{K_n^2}{L^{5/3}} \int_{L_s}^{L_t + L_s} z^{5/3} \, dz = K_n^2 \, \frac{3}{8 L^{5/3}} \left( (L_t + L_s)^{8/3} - L_s^{8/3} \right).   (5)

Letting \langle \alpha^2_{\mathrm{relative}} \rangle = \frac{3}{8 L^{5/3}} \left( (L_t + L_s)^{8/3} - L_s^{8/3} \right) allows us to write in short:

\langle \alpha^2 \rangle = K_n^2 \, \langle \alpha^2_{\mathrm{relative}} \rangle.   (6)

If we fix the camera location L_c and the turbulence region L_t, and move the scene point away from the camera, the variance is a monotonically increasing function with respect to L, as shown in Fig. 4. From this, we observe that the variance increases even if the scene moves away from the turbulence region. This is a counter-intuitive result that cannot be explained by ray optics (hence the usage of "waves" in this article). The variance, however, converges to a fixed value \langle \alpha^2_\infty \rangle when the scene point is infinitely far away from the camera (e.g., a distant star). This can be seen by taking the limit L_s \to \infty in Eqn. 6 to obtain:

\langle \alpha^2_\infty \rangle = K_n^2 \lim_{L_s \to \infty} \frac{3}{8 L^{5/3}} \left( (L_t + L_s)^{8/3} - L_s^{8/3} \right) = K_n^2 L_t.   (7)

In this case, the light emitted by the scene point can be modeled as a plane wave.

Figure 4. The variance of the angle-of-arrival predicted by Eqn. 6 under different experimental settings. For each curve, the camera and the extent of turbulence (L_c and L_t) are fixed, while the scene depth (L_s) is varied. Each curve represents a monotonically increasing function of scene depth that converges to a fixed variance (at infinity). The dashed black line is the linear relation in the special case when L_c = L_s = 0.

Height-varying turbulence. In practice, the air turbulence may not be homogeneous in the entire field of view. For example, on an asphalt road the turbulence is more severe near the road surface than away from it. We model this effect by writing the strength of turbulence as a smoothly varying function of height h, yielding a separable model:

\langle \alpha^2 \rangle = K_n^2(h) \, \langle \alpha^2_{\mathrm{relative}} \rangle.   (8)

Typically, C_n^2(h) (or K_n^2(h)) decreases with respect to h.
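To make the behavior plotted in Fig. 4 concrete, the short sketch below (illustrative only; K_n^2 is taken as 1 and the distances are arbitrary) evaluates the closed form of Eqns. 5 and 6 and shows the monotone rise toward the plane-wave limit K_n^2 L_t of Eqn. 7:

```python
# A small sketch (not the authors' code) of the finite-extent model in Eqns. 5-7:
# with K_n^2 = 1 for illustration, the relative variance rises monotonically with
# scene depth and saturates at L_t, the plane-wave limit of Eqn. 7.

def relative_aoa_variance(Ls, Lt, Lc):
    """<alpha^2_relative> of Eqns. 5-6 for a slab of length Lt, a scene at distance
    Ls behind the slab, and a camera at distance Lc in front of it."""
    L = Ls + Lt + Lc   # total camera-to-scene distance
    return 3.0 / (8.0 * L ** (5.0 / 3.0)) * ((Lt + Ls) ** (8.0 / 3.0) - Ls ** (8.0 / 3.0))

if __name__ == "__main__":
    Lt, Lc = 40.0, 20.0   # illustrative slab length and camera-to-slab distance
    for Ls in [0.0, 10.0, 100.0, 1e4, 1e7]:
        print(Ls, relative_aoa_variance(Ls, Lt, Lc))
    # The printed values increase with Ls and approach Lt = 40 (Eqn. 7), while
    # setting Ls = Lc = 0 reproduces the linear case of Eqn. 4: (3/8) * L.
```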
4. Depth cues from an Image Sequence

In this section, we investigate what depth cues can be obtained from the observed variance of image displacements of scene points. The input to all our algorithms is a sequence of images of a stationary scene viewed through turbulence by a fixed camera. Once the images are captured, we track a sparse set of distinctive feature points on the scene. While any feature-tracking algorithm may be used, we adopt a simple frame-to-frame template matching approach. To handle image blurring caused by turbulence, we also add blurred versions of the templates. The variance of the image location of each tracked point is then computed.
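A minimal sketch of this measurement step is given below (our own illustration, not the authors' implementation), assuming OpenCV and NumPy, grayscale input frames, and arbitrary choices for the patch size, search window and blur level:

```python
# A minimal sketch of the tracking-and-variance pipeline described above (not the
# authors' implementation): each feature is tracked frame-to-frame by normalized
# cross-correlation template matching, and the variance of its image location over
# time is the depth cue. Patch size, search window and blur level are assumptions.
import cv2
import numpy as np

def track_point_variance(frames, x0, y0, patch=15, search=25, blur_sigma=1.5):
    """frames: list of grayscale (uint8) images from a fixed camera.
    Returns the variance (pixels^2) of the tracked location of point (x0, y0)."""
    half_p, half_s = patch // 2, search // 2
    template = frames[0][y0 - half_p:y0 + half_p + 1, x0 - half_p:x0 + half_p + 1]
    # Also match against a blurred template to cope with turbulence blur.
    templates = [template, cv2.GaussianBlur(template, (0, 0), blur_sigma)]
    displacements = []
    for frame in frames:
        roi = frame[y0 - half_s:y0 + half_s + 1, x0 - half_s:x0 + half_s + 1]
        best = None
        for t in templates:
            res = cv2.matchTemplate(roi, t, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            if best is None or score > best[0]:
                best = (score, loc)
        dx = best[1][0] + half_p - half_s   # displacement relative to (x0, y0)
        dy = best[1][1] + half_p - half_s
        displacements.append((dx, dy))
    d = np.array(displacements, dtype=np.float64)
    return d[:, 0].var() + d[:, 1].var()    # total 2D variance of the tracked location
```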
For a fixed configuration of the camera and the extent of the homogeneous turbulence region, the model (Eqn. 6) is a monotonic smooth function of scene depth. Thus, both depth ordering and discontinuities (like two buildings far apart) of the scene can be readily obtained from variances. In particular, detecting such discontinuities can be useful to segment the scene into different depth layers (planes).

On the other hand, a more quantitative measurement, such as relative depth between scene points, requires additional assumptions. Note that absolute depth cannot be computed without knowing the turbulence strength, C_n^2. Thus, without loss of generality, we will assume L_t = 1. When the camera and scene are immersed in turbulence (L_c = L_s = 0), the linear variance-depth relationship (Eqn. 4) allows us to obtain relative depth by taking variance ratios to eliminate the unknown constant K_n^2. In general, if L_c, L_s and K_n^2 are known, depth can be obtained by inverting Eqn. 6. By monotonicity of the model, only a unique depth can be obtained from a given variance.

However, in practice, these constants are usually unknown. Thus, for N scene points we have N + 3 unknowns (N depths plus L_c, L_s and K_n^2) but only N equations. Consider a scene with repetitive patterns (windows on a building, street lamps, cars parked on a street); then the depths \{L_i\}_{i=1}^N of the N points follow an arithmetic sequence:

L_i = L_0 + i \Delta L.   (9)

Thus, the N depths \{L_i\}_{i=1}^N are parameterized by 2 variables, L_0 and \Delta L. As a result, only 2 + 3 = 5 scene points suffice to estimate (using numerical optimization) both the relative depths and the extent of the turbulence region.
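One possible way to set up this numerical optimization is sketched below (our own formulation, not the authors' solver): L_t is fixed to 1, the per-point distances to the turbulence region follow the arithmetic sequence of Eqn. 9, and SciPy's least-squares routine estimates the remaining unknowns.

```python
# Fit the unknowns (L0, dL, Lc, Kn2) of the finite-extent model (Eqns. 5, 6, 9) to
# measured per-point variances. A sketch under an assumed parameterization:
# Lt = 1 and the i-th point sits at Ls_i = L0 + i*dL behind the turbulence region.
import numpy as np
from scipy.optimize import least_squares

def model_variance(params, idx, Lt=1.0):
    L0, dL, Lc, Kn2 = params
    Ls = L0 + idx * dL
    L = Ls + Lt + Lc
    rel = 3.0 / (8.0 * L ** (5.0 / 3.0)) * ((Lt + Ls) ** (8.0 / 3.0) - Ls ** (8.0 / 3.0))
    return Kn2 * rel

def fit_relative_depths(measured_var, x0=(1.0, 1.0, 1.0, 1.0)):
    measured_var = np.asarray(measured_var, dtype=np.float64)
    idx = np.arange(len(measured_var), dtype=np.float64)
    residual = lambda p: model_variance(p, idx) - measured_var
    # Keep all parameters non-negative; at least 4 points are needed for 4 unknowns
    # (the paper counts 5 because it also treats Ls as a separate unknown).
    sol = least_squares(residual, x0, bounds=(0.0, np.inf))
    L0, dL, Lc, Kn2 = sol.x
    return L0 + idx * dL, (L0, dL, Lc, Kn2)   # relative depths (in units of Lt) and parameters
```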
In the case of height-varying turbulence, we need to estimate the height-varying function K_n^2(h) as well as the scene depth. Fortunately, if the height is aligned with the y-axis of the image, then separating depth from height can be achieved by treating each scan-line individually.

5. Laboratory Experiments

We performed several experiments in a controlled laboratory environment to validate the theory. A flat cooking griddle of size 52 cm x 26 cm is used to produce and maintain uniform heat of up to 400 degrees Fahrenheit across the flat surface. In the experimental setup (Fig. 5), multiple such griddles are placed side by side to increase the path length of turbulence. The three griddles set at maximum temperature produce roughly the same shimmering as a kilometer of natural turbulence in the desert. By controlling the number of griddles and the temperature, a wide range of turbulence strengths seen outdoors can be emulated. In all experiments, variances are estimated by capturing a 20-30 second long video sequence of the scene at 30 fps.

Figure 5. Experimental setup: Three adjacent electric cooking griddles are heated up to 400 degrees Fahrenheit to create hot-air turbulence. A camera observes a scene through the turbulence. By varying the temperature, we can emulate a variety of outdoor turbulence strengths and path lengths of several kilometers.

5.1. Quantitative Evaluation

Variance-depth linearity within the turbulence region. 50 equally spaced LEDs are placed 5 cm above the hot griddle (in the turbulence region). One end of the stick is closer to the camera while the other is farther away. Fig. 6 shows the variances computed for each LED projection onto the image plane, averaged over 3 experimental trials. Consistent with the model (Eqn. 6) when L_s = L_c = 0, the relationship between the depth (represented by the indices of the LEDs) and the variances is indeed linear, with a high correlation coefficient of 0.987. A similar experiment that estimates the depth of a curvy line on a sphere is shown in Fig. 7.

Figure 6. Left: LEDs are immersed in the turbulence region. Right: The relationship between the variance of LED projections and their ground truth depths is very close to linear (correlation coefficient is 0.987), and is consistent with the model (Eqn. 6).

Figure 7. Ellipse fitting on a video sequence capturing a curve on the sphere through turbulence. Ideally the projection of the LEDs forms an ellipse on the image plane. Left: A sample frame of the captured video sequence. Right: Average error in fitting is 12.6%. The average fitting error between a covariate x and dependent variable y is computed using \sqrt{\sum_i (\hat{y}_i - y_i)^2} / \sqrt{\sum_i (\bar{y} - y_i)^2}, where \hat{y}_i is the fitted value of point x_i and \bar{y} is the mean of y_i.

Identifying depth discontinuity. In this experiment, we place two checker-board patterns vertically at two distinct depths (L_near and L_far) and measure the variances of the key points on the scene. We conducted four experiments with different settings of L_near and L_far (Table 1). All were captured in the same setting of f/11 with exposure 1/2000 s, while the zoom was varied to include the entire scene within the field of view. Fig. 8 illustrates the variance discontinuity by two separate parametric fittings of the key points on the two checker-boards in one experiment. Clearly, the depth discontinuity can be detected from the variance discontinuity.

Figure 8. Experiments with two planar checker-boards placed at different distances from the camera. Due to space limits, we only show one (Exp. 2) of the 4 experiments and leave the remaining ones in the supplementary material. The experimental setting can be found in Table 1. The first column shows a sample distorted frame; the second and third columns show two views of the variance distribution of the corners of the checker-boards. From the figures, the variance changes due to depth discontinuity and height are obvious. We detect the discontinuity and fit smooth surfaces to the variances. The ratio of variances of the two depth planes is then computed and quantitatively compared to the ground truth (Table 1).

Validation of the physics model (Eqn. 6). As shown in Fig. 8, due to height variations of the turbulence, the variance changes smoothly over the y-axis. However, the variance ratio computed from two points on the two checker-boards at the same scan-line is independent of the height h, the amount of turbulence C_n^2 and the aperture diameter D. On the other hand, we can compute the theoretical variance ratios using the ground truth values of L, L_c, L_t and the model of Eqn. 6. The measurement is consistent with the theory, validating the model in all four settings (Table 1), which cover both cases where the scene is within and outside the turbulence region.

No.    L_c   L_t   L_near   L_far   Measured ratio   Predicted ratio
Exp1   54    171   163      225     1.77             1.79
Exp2   74    173   183      382     3.60             3.52
Exp3   74    173   247      382     1.67             1.94
Exp4   74    173   247      320     1.56             1.55

Table 1. Columns 1-4 show the ground truth measurements (in centimeters) for the four checker-board experiments. Columns 5-6 show the comparison between the measured (5th column) variance ratio and that predicted by the model (6th column). In all but one case, the measurements are very accurate.

Depth estimation of equally spaced scene points. We estimate the depths of equally spaced points using the method in Section 4. A horizontally slanted plane with a texture of a building facade is placed behind the turbulence region. Fig. 9 shows the two views of the computed variances at key points and the surface fits that demonstrate the near-planar geometry. Assuming L_t = 1 (length of the turbulence region), relative depths of scene points can be computed. For validation, the estimated relative depths are converted to absolute ones using the actual length (L_t = 196 cm) of the turbulence region. The depth slope of the plane (\Delta L in Section 4) is estimated as 0.517 cm per 10 pixels in the horizontal direction, compared to the ground truth value of 0.529 cm per 10 pixels. Please see the supplementary material for videos.

Figure 9. Depth estimation of equally spaced points on an inclined planar scene. A sample distorted frame due to turbulence is shown on the left. On the right are two views of the variance distribution of a sparse set of key points and a smooth function fit illustrating the near-planar geometry. From this, it is possible to predict the relative depths of the scene points (assuming the length of the turbulence region to be 1). For evaluation, the relative depth estimated is converted to actual depth by using the actual length (196 cm) of the turbulence region. The estimated slope of the plane is 0.517 cm per 10 pixels in the horizontal direction, compared to the ground truth value of 0.529 cm per 10 pixels.

5.2. Influence of Imaging Parameters

The accuracy of the measured variance depends on many imaging parameters. A larger aperture reduces the depth of field, a longer exposure time adds unwanted motion blur, and low image magnification causes greater quantization of the variance. Here we present an empirical study.

The estimate of the variance converges as the number of captured frames increases. Fig. 10(a) shows the variance of the 10 key points in Exp. 1 with different numbers of frames used. In our experiments, stable estimates are achieved using frames captured over 30 seconds with a 30 fps camera. To study the effects of aperture, we vary the f-number from f/3.7 to f/11, fix the exposure time at 1/8000 s, and zoom at the highest level. Fig. 10(c) shows a significant decrease in variance, as predicted by the model in Eqn. 6. The plot in Fig. 10(b) shows the variances computed by changing only the exposure time. Fig. 10(d) shows the effect of varying the focal length. Here we normalize the variances by the pixel size of the checker-board patterns to remove the effects of image magnification. In these experiments, we have tried to maintain the same noise level in the camera by maintaining similar image brightness (by varying illumination intensity). From these plots, aperture size is the main factor that affects the estimation quality. However, since we take the variance ratio as a depth measure, the effect of aperture size is reduced (in theory, the ratio is independent of it). Also, for very low magnification (spatial resolution) or long exposure time (1/30 s), the variance estimate shows large degradation.

In addition, camera shaking can cause false displacements leading to poor variance estimates. In general, this is a hard problem and we will set it aside for future work.

Figure 10. Influence of imaging parameters on the variance of the corners of the checker-board pattern. (a) The variance estimates converge with sufficient frames, showing that the turbulence is stationary during measurement. (b) The measured variance is similar for different exposures, except for very long exposures where the tracking performance degrades due to motion blur. (c) Consistent with the model, the measured variance decreases significantly with aperture size. (d) Insufficient image resolution results in much lower variance estimates.
6. Outdoor Experiments

Besides the indoor experiments in a controlled environment, we also conducted experiments outdoors in a desert region. The imaging setup used for the experiments consists of a Prosilica GC1380H camera and a Celestron C6 telescope. The focal length is 1500 mm. A one-to-one ratio optical relay is used between the telescope and the camera, without changing the focal length of the main telescope. We placed two standard contrast targets 110 meters and 160 meters away from the camera and captured sequences of 300-400 images during mild turbulence (morning) and strong turbulence (afternoon). We used a 30 mm aperture in the morning and a 10 mm aperture in the afternoon, and varied the exposure times between 0.5 ms and 1 ms.

Figure 11. Sample frames of the outdoor experiments in the morning and afternoon. The targets are placed at different distances (110 meters and 160 meters) from the camera.

We tracked a sparse set of points through each image sequence and rejected outliers such as static trackers stuck on dirt on the CCD and high-variance erroneous trackers near locally repetitive textures. The computed variances of the trackers converge quickly (see supplementary material).

Table 2 shows the mean variance computed from all the trackers of each image sequence, for each depth and imaging setting. Since the amount of variance is relatively invariant to exposure change, we further averaged the variance over different exposures. If we take the variance ratio between 110 m and 160 m, we obtain 1.6857 for the 30 mm aperture in the morning and 1.5785 for the 10 mm aperture in the afternoon. Both are close to the ratio of the two distances, 160/110 = 1.4545, verifying the dependence of the turbulence model on depth. Table 2 also shows the standard deviation of variances in each video sequence. The low standard deviation shows that the turbulence is indeed homogeneous on surfaces that are perpendicular to the optical axis. Note that we do not consider the height variation of turbulence here, since compared to the laboratory setting the target occupies a much narrower field of view.

Morning capture, 30 mm aperture, target 110 meters away:
  0.50 ms exposure: 4.32 +/- 0.49;  0.75 ms: 4.89 +/- 0.46;  1.00 ms: 4.79 +/- 0.45;  average: 4.67
Morning capture, 30 mm aperture, target 160 meters away:
  0.50 ms exposure: 8.23 +/- 0.88;  0.75 ms: 7.61 +/- 0.67;  1.00 ms: 7.75 +/- 0.64;  average: 7.86
Afternoon capture, 10 mm aperture, 5 ms exposure:
  target 110 meters away: 43.68 +/- 7.50;  target 160 meters away: 69.37

Table 2. Average variance and its standard deviation for the trackers in the outdoor experiments. The variance ratio between the 160 m and 110 m targets is 7.86/4.67 (30 mm aperture, captured in the morning) and 69.37/43.68 = 1.5785 (10 mm aperture, captured in the afternoon), close to the distance ratio (160 m / 110 m = 1.4545), verifying our model.

Figure 12. Distribution of the x and y displacements of trackers in an outdoor image sequence, computed over all trackers and all frames from a sequence captured in the afternoon. Consistent with our assumption, they follow a zero-mean distribution.

7. Jitter-stereo in Non-homogeneous Turbulence

Until now, we have addressed depth estimation under homogeneous and simple height-varying turbulence. However, due to unpredictable temperature and humidity fluctuations, turbulence cannot be guaranteed to be homogeneous over a large area and a long period of time. In this case, it is impossible to estimate depth using the model without knowing the turbulence structure function. Instead, we will exploit a general statistical property of turbulence along with a stereo camera pair to improve long-range depth estimation.

Recall that binocular stereo estimates the depth of a scene point by computing scene disparity across two views. If a scene point is far away compared to the stereo baseline, the disparity may be less than one pixel and the depth estimation may fail. However, in the presence of turbulence, the location of the scene point "jitters" around its true location in the image. But how do we estimate the true location of a scene point without turbulence? It has been observed that the distribution of (even non-homogeneous) turbulence distortions is close to zero-mean. This is true even if the variance of each scene point is different. Thus, the mean positions of the tracked scene points are their most likely positions when there is no turbulence. Furthermore, estimating mean locations of trackers allows us to obtain the disparity, possibly with sub-pixel accuracy, helping in long-range depth estimation with a short baseline. This approach is also similar in spirit to Swirski et al. [19], who estimate correspondences in stereo using underwater caustic patterns.

In order to experimentally verify our approach, we captured two image sequences of a planar scene 110 m away from different viewpoints, with a baseline of less than 1 m. The two sequences were captured at different times when the turbulence was significantly different. We tracked a common sparse set of scene points in the two views and computed the mean locations. We also verified that the distributions of tracker displacements are zero-mean in all our experiments (see Fig. 12). The disparities between the mean locations of corresponding trackers are then estimated. Note that the disparity of a scene point on a plane is a linear function with respect to its x and y coordinates on the image. The linear fit is strong, with a correlation coefficient of 0.976, and is significantly better than computing disparities on a per-frame basis, as shown in Fig. 13.

Figure 13. Jitter stereo in turbulence. (a) (Uncalibrated) disparity computed from two video frames at a certain time; due to turbulence, the disparity is noisy. Over the 319 frames, the correlation coefficient varies from 0.632 to 0.954, with mean 0.884 and standard deviation 0.054. (b) (Uncalibrated) disparity using mean tracker locations. The disparity is clearly linear (correlation coefficient is 0.976).
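The following sketch illustrates the jitter-stereo idea (our own illustration, not the paper's code), assuming the same points have already been tracked in both views: average each tracker's position over time, take the difference of the means as a sub-pixel disparity, and fit the planar-scene linear model.

```python
# A sketch of jitter-stereo (not the authors' code): the mean of each tracker's
# jittering positions approximates its turbulence-free location, the difference of
# the means gives a sub-pixel disparity, and for a planar scene the disparity is
# fit as a linear function of the image coordinates.
import numpy as np

def jitter_stereo_disparity(tracks_left, tracks_right):
    """tracks_left / tracks_right: arrays of shape (num_frames, num_points, 2)
    holding tracked (x, y) positions of corresponding points in the two views."""
    mean_left = tracks_left.mean(axis=0)             # (num_points, 2) most-likely positions
    mean_right = tracks_right.mean(axis=0)
    disparity = mean_left[:, 0] - mean_right[:, 0]   # horizontal disparity, sub-pixel

    # For a planar scene, disparity = a*x + b*y + c in the image coordinates.
    A = np.column_stack([mean_left, np.ones(len(disparity))])
    coeffs, *_ = np.linalg.lstsq(A, disparity, rcond=None)
    fitted = A @ coeffs
    corr = np.corrcoef(fitted, disparity)[0, 1]      # quality of the linear fit
    return disparity, coeffs, corr
```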
8. Comparisons with Depth-from-X

This work introduces turbulence as a new cue for scene depth. So, it is instructive to discuss the parallels and differences between depth from turbulence and other depth-from-X approaches in computer vision.

Structure from motion (SFM): SFM relies on estimating the pixel disparity across different views of the scene, which reduces with depth. In the case of turbulence, the variance of the projected scene point is measured from the same camera position over time and monotonically increases with scene distance. Thus, while SFM is suited for shorter distances (for a fixed baseline), depth from turbulence is in general better for a longer range and/or stronger turbulence. But if the scene is outside the turbulence region, the depth precision degrades in the asymptotic region of the variance curve (Fig. 4). At the same time, both approaches share the same issues with finding and tracking corresponding features.

Depth from defocus or diffusion (DFD): In both cases, the point-spread function varies across the scene. The extent of the observed blur monotonically increases with distance from the sensor in the case of turbulence, and with distance from the focal/diffuser plane in the case of DFD [7, 25]. Depth from turbulence requires the capture of a temporal sequence of images, and is similar in spirit to moving a pinhole across the aperture of the lens to compute depth [1].

Structure from bad weather: Perhaps depth from fog or haze [14] is most similar in spirit to depth from turbulence. These approaches also use a single viewpoint, provide measures that are more reliable for scenes with long distances, and are (mostly) independent of the scene reflectance properties. That said, there are also fundamental differences. Turbulence is a statistical and temporally varying phenomenon, where depth cues are due to phase variations of the incident light rather than intensity variations as in fog. The environmental illumination (air light) provides a strong cue for depth from fog, whereas the specific illumination geometry of the environment plays little or no part in depth from turbulence.
1 distances, and are (mostly) independent of the scene re- [7] T.Hwang, J.Clark, andA.Yuille. Adepthrecoveryalgo- flectance properties. That said, there are also fundamental rithmusingdefocusinformation. InCVPR,1989. 7 differences. Turbulenceisastatisticalandtemporallyvary- [8] A.Ishimaru. WavePropagationandScatteringinRandom Media. Wiley-IEEEPress,1999. 2 ingphenomenon,wheredepthcuesareduetophasevaria- [9] N.JoshiandM.Cohen. Seeingmt.rainier: Luckyimaging tionsoftheincidentlightratherthantheintensityvariations formulti-imagedenoising,sharpeningandhazeremoval. In as in fog. The environmental illumination (air light) pro- ICCP,2010. 1 videsastrongcuefordepthfromfog,whereasthespecific [10] A. Kolomogorov. Dissipation of energy in the locally illuminationgeometryoftheenvironmentplayslittleorno isotropicturbulence. Comptesrendusdel’AcadmiedesSci- partindepthfromturbulence. encesdel’U.R.S.S.32:16C18,1941. 2 [11] A. Kolomogorov. The local structure of turbulence in in- compressibleviscousfluidforverylargeReynold’snumbers. 9.ConclusionandFutureWork Comptesrendusdel’AcadmiedesSciencesdel’U.R.S.S.32: This article is an initial attempt at understanding what 16C18,1941. 2 depth cues can be extracted from optical turbulence. We [12] N.S.Kopeika. ASystemEngineeringApproachtoImaging. SPIEPublications,1999. 2 derived a simple relation between scene depths and vari- [13] N.MorrisandK.Kutulakos. Dynamicrefractionstereo. In ance of the projected scene points under turbulence. Our ICCV,2005. 2 experiments showed, somewhat surprisingly, that accurate [14] S. Narasimhan and S. Nayar. Vision and the atmosphere. depthcuescanbeobtainedfromopticalturbulence. There IJCV,48(3),2002. 8 are several avenues of future work including dense scene [15] H. Richard, M. Raffel, M. Rein, J. Kompenhans, and reconstruction and image super-resolution from the image G.Meier.Demonstrationoftheapplicabilityofabackground sequence under turbulence. We wish to also study several orientedschlierenmethod. InIntl.Symp.onApplicationsof other related physical phenomena. The twinkling of stars LaserTechniquestoFluidMechanics,2000. 2 is caused primarily due to the changes in amplitude of the [16] M.C.RoggemannandB.Welsh. ImagingThroughTurbu- lence. CRCpress,1edition,Mar.1996. 1,2 incidentwavethataredistance-relatedaswell. Asidefrom [17] G.Settles.ShlierenandShadowgraphTechniques.Springer- temperature gradients, the chaotic movement of a medium Verlag,2001. 2 itself can cause turbulence. This type of phenomenon can [18] M. Shimizu, S. Yoshimura, M. Tanaka, and M. Okutomi. occurduetounderwatercurrents,duetostrongwindflow Super-resolution from image sequence under influence of intheupperatmosphere,andduetoairflowaroundengines. hot-airopticalturbulence. InCVPR,2008. 1 Wewishtobuilduponthisworktoapplytothesescenarios. [19] Y. Swirski, Y. Schechner, B. Herzberg, and S. Negah- daripour. Stereo from flickering caustics. In ICCV, 2009. Acknowledgement. This research was supported in 7 partsbyanNSFCAREERawardIIS-0643628andanONR [20] V. Tatarskii. Wave Propagation in a Turbulent Medium. grantN00014-11-1-0295. YuandongTianissupportedbya McGraw-HillBooks,1961. 2 Microsoft Research Fellowship. The authors thank Randy [21] Y.TianandS.Narasimhan. Agloballyoptimaldata-driven Dewees and Kevin Johnson at the U.S. Naval Air Warfare approachforimagedistortionestimation. InCVPR,2010. 1 Center at China Lake, for help with outdoor experiments, [22] G. Vasudeva, D. R. Honnery, and J. Soria. Non-intrusive andDr. 
References

[1] E. Adelson and J. Wang. Single lens stereo with a plenoptic camera. IEEE Trans. on PAMI, 14(2), 1992.
[2] B. Atcheson, I. Ihrke, W. Heidrich, A. Tevs, D. Bradley, M. Magnor, and H.-P. Seidel. Time-resolved 3D capture of non-stationary gas flows. ACM TOG, 27(5), 2008.
[3] J. Gilles, T. Dagobert, and C. Franchis. Atmospheric turbulence restoration by diffeomorphic image registration and blind deconvolution. In ACIVS, 2008.
[4] S. Harmeling, M. Hirsch, S. Sra, and B. Schölkopf. Online blind deconvolution for astronomy. In ICCP, 2009.
[5] G. Herman. Image Reconstruction from Projections. Academic Press, New York, 1980.
[6] M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling. Efficient filter flow for space-variant multiframe blind deconvolution. In CVPR, 2010.
[7] T. Hwang, J. Clark, and A. Yuille. A depth recovery algorithm using defocus information. In CVPR, 1989.
[8] A. Ishimaru. Wave Propagation and Scattering in Random Media. Wiley-IEEE Press, 1999.
[9] N. Joshi and M. Cohen. Seeing Mt. Rainier: Lucky imaging for multi-image denoising, sharpening and haze removal. In ICCP, 2010.
[10] A. Kolmogorov. Dissipation of energy in the locally isotropic turbulence. Comptes rendus de l'Académie des Sciences de l'U.R.S.S., 32:16-18, 1941.
[11] A. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Comptes rendus de l'Académie des Sciences de l'U.R.S.S., 32:16-18, 1941.
[12] N. S. Kopeika. A System Engineering Approach to Imaging. SPIE Publications, 1999.
[13] N. Morris and K. Kutulakos. Dynamic refraction stereo. In ICCV, 2005.
[14] S. Narasimhan and S. Nayar. Vision and the atmosphere. IJCV, 48(3), 2002.
[15] H. Richard, M. Raffel, M. Rein, J. Kompenhans, and G. Meier. Demonstration of the applicability of a background oriented schlieren method. In Intl. Symp. on Applications of Laser Techniques to Fluid Mechanics, 2000.
[16] M. C. Roggemann and B. Welsh. Imaging Through Turbulence. CRC Press, 1st edition, Mar. 1996.
[17] G. Settles. Schlieren and Shadowgraph Techniques. Springer-Verlag, 2001.
[18] M. Shimizu, S. Yoshimura, M. Tanaka, and M. Okutomi. Super-resolution from image sequence under influence of hot-air optical turbulence. In CVPR, 2008.
[19] Y. Swirski, Y. Schechner, B. Herzberg, and S. Negahdaripour. Stereo from flickering caustics. In ICCV, 2009.
[20] V. Tatarskii. Wave Propagation in a Turbulent Medium. McGraw-Hill, 1961.
[21] Y. Tian and S. Narasimhan. A globally optimal data-driven approach for image distortion estimation. In CVPR, 2010.
[22] G. Vasudeva, D. R. Honnery, and J. Soria. Non-intrusive measurement of a density field using the background oriented schlieren method. In 4th Australian Conf. on Laser Diagnostics in Fluid Mechanics and Combustion, 2005.
[23] L. Venkatakrishnan and G. E. A. Meier. Density measurements using the background oriented schlieren technique. Experiments in Fluids, 37:237-247, 2004.
[24] G. Wetzstein, W. Heidrich, and R. Raskar. Hand-held schlieren photography with light field probes. In ICCP, 2011.
[25] C. Zhou, O. Cossairt, and S. Nayar. Depth from diffusion. In CVPR, 2010.
[26] X. Zhu and P. Milanfar. Stabilizing and deblurring atmospheric turbulence. In ICCP, 2011.