Motion Analysis for Image Enhancement: Resolution, Occlusion, and Transparency

Michal Irani    Shmuel Peleg
Institute of Computer Science
The Hebrew University of Jerusalem
91904 Jerusalem, ISRAEL

(This research was supported by the Israel Academy of Sciences. M. Irani was partially supported by a fellowship from the Leibniz Center. Email address of authors: {michalb, peleg}@cs.huji.ac.il.)

Abstract

Accurate computation of image motion enables the enhancement of image sequences. In scenes having multiple moving objects the motion computation is performed together with object segmentation by using a unique temporal integration approach. After computing the motion for the different image regions, these regions can be enhanced by fusing several successive frames covering the same region. Enhancements treated here include improvement of image resolution, filling-in occluded regions, and reconstruction of transparent objects.

1 Introduction

We describe methods for enhancing image sequences using the motion information computed by a multiple motions analysis method. The multiple moving objects are first detected and tracked, using both a large spatial region and a large temporal region, and without assuming any temporal motion constancy. The motion models used to approximate the motions of the objects are 2-D parametric motions in the image plane, such as affine and projective transformations. The motion analysis is presented in a previous paper [16, 17], and will only be briefly described here.

Once an object has been tracked and segmented, it can be enhanced using information from several frames. Tracked objects can be enhanced by filling-in occluded regions, and by improving the spatial resolution of their images. When the scene contains transparent moving objects, they can be reconstructed separately.

Section 2 includes a brief description of a method used for segmenting the image plane into differently moving objects, computing their motions, and tracking them throughout the image sequence. Sections 3, 4, and 5 describe the algorithms for image enhancement using the computed motion information: Section 3 presents a method for improving the spatial resolution of tracked objects, Section 4 describes a method for reconstructing occluded segments of tracked objects, and Section 5 presents a method for reconstructing transparent moving patterns. An initial version of this paper appeared in [15].

2 Detecting and Tracking Multiple Moving Objects

In this section we briefly describe a method for detecting and tracking multiple moving objects in image sequences, which is presented in detail in [17]. Any other good motion computation method can be used as well.

In this approach for detecting differently moving objects, a single motion is first computed, and a single object which corresponds to this motion is identified and tracked. We call this motion the dominant motion, and the corresponding object the dominant object.
Once a dominant object has been detected and tracked, it is excluded from the region of analysis, and the process is repeated on the remaining image regions to find other objects and their motions.

When the image motion can be described by a 2-D parametric motion model, and this model is used for motion analysis, the results are very accurate, to within a fraction of a pixel. This accuracy results from two features:

1. The use of large regions when computing the 2-D motion parameters.

2. Segmentation of the image into regions, each containing only a single 2-D motion.

2.1 2-D Motion Models

2-D parametric transformations are used to approximate the projected 3-D motions of objects on the image plane. This assumption is valid when the differences in depth caused by the motions are small relative to the distances of the objects from the camera. Given two grey level images of an object, $I(x,y,t)$ and $I(x,y,t+1)$, it is assumed that

$$I\big(x + p(x,y,t),\; y + q(x,y,t),\; t+1\big) = I(x,y,t), \qquad (1)$$

where $(p(x,y,t), q(x,y,t))$ is the displacement induced on pixel $(x,y)$ by the motion of the planar object between frames $t$ and $t+1$. It can be shown [10] that the desired motion $(p,q)$ minimizes the following error function at frame $t$ in the region of analysis $R$:

$$\mathrm{Err}^{(t)}(p,q) = \sum_{(x,y)\in R} \big(p I_x + q I_y + I_t\big)^2. \qquad (2)$$

We perform the error minimization over the parameters of one of the following motion models (a least-squares sketch for the affine model follows the list):

1. Translation: 2 parameters, $p(x,y,t) = a$, $q(x,y,t) = d$. In order to minimize $\mathrm{Err}^{(t)}(p,q)$, its derivatives with respect to $a$ and $d$ are set to zero. This yields two linear equations in the two unknowns $a$ and $d$. These are the two well-known optical flow equations [4, 20], where every small window is assumed to have a single translation. In this translation model, the entire object is assumed to have a single translation.

2. Affine: 6 parameters, $p(x,y,t) = a + bx + cy$, $q(x,y,t) = d + ex + fy$. Differentiating $\mathrm{Err}^{(t)}(p,q)$ with respect to the motion parameters and setting the derivatives to zero yields six linear equations in the six unknowns $a, b, c, d, e, f$ [4, 5].

3. Moving planar surface (a pseudo projective transformation): 8 parameters [1, 4], $p(x,y,t) = a + bx + cy + gx^2 + hxy$, $q(x,y,t) = d + ex + fy + gxy + hy^2$. Differentiating $\mathrm{Err}^{(t)}(p,q)$ with respect to the motion parameters and setting the derivatives to zero yields eight linear equations in the eight unknowns $a, b, c, d, e, f, g, h$.
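To make the minimization concrete, the following NumPy sketch assembles the per-pixel linear constraints of Equation (2) for the affine model and solves them by least squares. It is a minimal, single-resolution illustration under our own naming (the function name and the boolean mask standing for the region $R$ are assumptions); the framework cited above is multiresolution and iterative.

```python
import numpy as np

def estimate_affine_motion(I0, I1, mask):
    """Fit the 6-parameter affine motion p = a + b*x + c*y,
    q = d + e*x + f*y by minimizing Equation (2) over the masked
    region R. A single-level sketch, not the authors' code."""
    Iy, Ix = np.gradient(I0.astype(float))     # spatial derivatives
    It = I1.astype(float) - I0.astype(float)   # temporal derivative
    ys, xs = np.nonzero(mask)                  # pixels in the region R
    ix, iy, it = Ix[ys, xs], Iy[ys, xs], It[ys, xs]
    # p*Ix + q*Iy + It = 0 gives one linear equation per pixel
    # in the parameter vector (a, b, c, d, e, f).
    A = np.stack([ix, ix * xs, ix * ys, iy, iy * xs, iy * ys], axis=1)
    params, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return params                              # (a, b, c, d, e, f)
```

Setting the derivatives of Equation (2) to zero yields exactly the normal equations of this least-squares problem, so the two formulations agree.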
2.2 Detecting the First Object

When the region of support of a single object in the image is known, its motion parameters can be computed using a multiresolution iterative framework [3, 4, 5, 6, 16, 17]. Motion estimation is more difficult in the common case when the scene includes several moving objects and the region of support of each object in the image is not known. It was shown in [7, 16, 17] that in this case the motion parameters of a single object can be recovered accurately by applying the same motion computation framework (with some iterative extensions [16, 17]) to the entire region of analysis.

This procedure computes a single motion (the dominant motion) between two images. A segmentation procedure is then used (see Section 2.5) in order to detect the corresponding object (the dominant object) in the image. An example of a dominant object detected using an affine motion model between two frames is shown in Figure 2.c. In this example, noise has strongly affected the segmentation and motion computation. The problem of noise is overcome once the algorithm is extended to handle longer sequences using temporal integration (Section 2.3).

2.3 Tracking Detected Objects Using Temporal Integration

The algorithm for the detection of multiple moving objects discussed in Section 2.2 can be extended to track detected objects throughout long image sequences. This is done by temporal integration, where for each tracked object a dynamic internal representation image is constructed. This image is constructed by taking a weighted average of recent frames, registered with respect to the tracked motion. After a few frames, this image contains a sharp image of the tracked object, and a blurred image of all the other objects. Each new frame in the sequence is compared to the internal representation image of the tracked object rather than to the previous frame [16, 17].

Following is a summary of the algorithm for detecting and tracking an object in an image sequence (a code sketch follows below). For each frame in the sequence (starting at t = 0) do:

1. Compute the dominant motion parameters between the internal representation image of the tracked object Av(t) and the new frame I(t+1), in the region M(t) of the tracked object (see Section 2.2). Initially, M(0) is the entire region of analysis.

2. Warp the current internal representation image Av(t) and the current segmentation mask M(t) towards the new frame I(t+1) according to the computed motion parameters.

3. Identify the stationary regions in the registered images (see Section 2.5), using the registered mask M(t) as an initial guess. This will be the segmented region M(t+1) of the tracked object in frame I(t+1).

4. Compute the updated internal representation image Av(t+1) by warping Av(t) towards I(t+1) using the computed dominant motion, and averaging it with I(t+1).

Figure 1: An example of the evolution of an internal representation image of a tracked object. a) Initially, the internal representation image is the first frame in the sequence. The scene contains four moving objects. The tracked object is the ball. b) The internal representation image after 3 frames. c) The internal representation image after 5 frames. The tracked object (the ball) remains sharp, while all other regions blur out.
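The loop below sketches steps 1-4 in Python. The helpers estimate_motion, warp, and segment_stationary are assumed interfaces corresponding to Sections 2.1-2.2, image registration, and Section 2.5 respectively, and the averaging weight alpha is an illustrative choice; the paper specifies a weighted average of recent frames but not this exact recursion.

```python
import numpy as np

def track_object(frames, estimate_motion, warp, segment_stationary, alpha=0.3):
    """Temporal-integration tracking loop (Section 2.3); a sketch
    under assumed helper interfaces, not the authors' code."""
    av = frames[0].astype(float)             # internal representation Av(0)
    mask = np.ones(av.shape, dtype=bool)     # M(0): entire region of analysis
    for frame in frames[1:]:
        frame = frame.astype(float)
        motion = estimate_motion(av, frame, mask)            # step 1
        av_reg = warp(av, motion)                            # step 2: register Av(t)
        mask_reg = warp(mask.astype(float), motion) > 0.5    # step 2: register M(t)
        mask = segment_stationary(av_reg, frame, mask_reg)   # step 3: M(t+1)
        av = (1 - alpha) * av_reg + alpha * frame            # step 4: Av(t+1)
        yield motion, mask, av
```

Because the update in step 4 keeps averaging frames registered to the tracked motion, the tracked object reinforces itself in av while everything else blurs out, which is exactly the bias described next.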
When the motion model approximates the temporal changes of the tracked object well enough, shape changes relatively slowly over time in the registered images. Therefore, temporal integration of registered frames produces a sharp and clean image of the tracked object, while blurring regions having other motions. Figure 1 shows an example of the evolution of an internal representation image of a tracked rolling ball. Comparing each new frame to the internal representation image rather than to the previous frame gives the algorithm a strong bias to keep tracking the same object. Since additive noise is reduced in the average image of the tracked object, and since image gradients outside the tracked object decrease substantially, both segmentation and motion computation improve significantly.

In the example shown in Figure 2, temporal integration is used to detect and track the dominant object. Comparing the segmentation shown in Figure 2.c to the segmentation in Figure 2.d emphasizes the improvement in segmentation using temporal integration.

Figure 2: Detecting and tracking the dominant object using temporal integration. a-b) Two frames in the sequence. Both the background and the helicopter are moving. c) The segmented dominant object (the background) using the dominant affine motion computed between the first two frames. Black regions are those excluded from the dominant object. d) The segmented tracked object after a few frames using temporal integration.

Another example of detecting and tracking objects using temporal integration is shown in Figure 3. In this sequence, taken by an infrared camera, the background moves due to camera motion, while the car has another motion. It is evident that the tracked object is the background, as all other regions in the image are blurred by their motion.

Figure 3: Detecting and tracking the dominant object in an infrared image sequence using temporal integration. a-b) Two frames in the sequence. Both the background and the car are moving. c) The internal representation image of the tracked object (the background). The background remains sharp with less noise, while the moving car blurs out. d) The segmented tracked object (the background) using an affine motion model. White regions are those excluded from the tracked region.

2.4 Tracking Other Objects

After detecting and tracking the first object, attention is directed at other objects. This is done by applying the tracking algorithm once more, this time to the rest of the image, after excluding the first detected object from the region of analysis. The scheme is repeated recursively, until no more objects can be detected (a sketch of this recursion follows the figure caption below). In the example shown in Figure 4, the second dominant object is detected and tracked. The detection and tracking of several moving objects can be performed in parallel, with a delay of one or more frames between the computations for different objects.

Figure 4: Detecting and tracking the second object using temporal integration. a) The initial region of analysis after excluding the first dominant object (from Figure 3.d). b) The internal representation image of the second tracked object (the car). The car remains sharp while the background blurs out. c) Segmentation of the tracked car after 5 frames.
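A minimal sketch of the recursive scheme, assuming a helper track_dominant that runs the tracking loop of Section 2.3 restricted to the current region of analysis and returns the final object mask; the stopping threshold is our assumption.

```python
import numpy as np

def detect_all_objects(frames, track_dominant, min_pixels=500):
    """Recursive object detection of Section 2.4 (a sketch): track the
    dominant object, exclude it from the region of analysis, repeat."""
    region = np.ones(frames[0].shape, dtype=bool)  # current region of analysis
    objects = []
    while region.sum() >= min_pixels:
        obj = track_dominant(frames, region)       # assumed helper
        if obj is None or obj.sum() < min_pixels:  # no further object found
            break
        objects.append(obj)
        region &= ~obj                             # exclude the detected object
    return objects
```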
2.5 Segmentation

Once a motion has been determined, we would like to identify the region having this motion. To simplify the problem, the two images are registered using the detected motion. The motion of the corresponding region is canceled by the registration, and the tracked region is stationary in the registered images. The segmentation problem therefore reduces to identifying the stationary regions in the registered images.

Pixels are classified as moving or stationary using local analysis. The measure used for the classification is the average of the normal flow magnitudes over a small neighborhood of each pixel (typically a 3x3 neighborhood). In order to classify large regions of uniform intensity correctly, a multi-resolution scheme is used, since at low resolution levels the uniform regions are small. The lower resolution classification is projected onto the higher resolution level, and is updated according to higher resolution information when that information conflicts with the classification from the lower resolution level.
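The following sketch implements the per-pixel test at a single resolution level, with the normal flow magnitude computed as |I_t| / |grad(I)|; the threshold and the gradient regularizer are illustrative assumptions, and the paper wraps this test in the multi-resolution scheme described above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def classify_stationary(ref_reg, frame, thresh=0.5, eps=1e-3):
    """Moving/stationary classification (Section 2.5), single level.
    ref_reg is the registered reference image, frame the new frame."""
    Iy, Ix = np.gradient(ref_reg.astype(float))
    It = frame.astype(float) - ref_reg.astype(float)
    normal_flow = np.abs(It) / (np.hypot(Ix, Iy) + eps)  # |It| / |grad I|
    avg = uniform_filter(normal_flow, size=3)   # average over 3x3 neighborhood
    return avg < thresh                         # True = stationary (tracked) pixel
```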
3 Improvement of Spatial Resolution

Once good motion estimation and segmentation of a tracked object are obtained, it becomes possible to enhance the images of this object.

Restoration of degraded images when a model of the degradation process is given is an ill-conditioned problem [2, 9, 11, 13, 19, 24]. The resolution of an image is determined by the physical characteristics of the sensor: the optics, the density of the detector elements, and their spatial response. Resolution improvement by modifying the sensor can be prohibitive. An increase in the sampling rate could, however, be achieved by obtaining more samples of the imaged object from a sequence of images in which the object appears moving. In this section we present an algorithm for processing image sequences to obtain improved resolution of differently moving objects. This is an extension of our earlier method, which was presented in [14]. While earlier research on super-resolution [12, 14, 18, 25] treated only static scenes and pure translational motion in the image plane, we treat dynamic scenes and more complex motions. The segmentation of the image plane into the differently moving objects and their tracking, using the algorithm mentioned in Section 2, enables processing of each object separately.

The Imaging Model. The imaging process, yielding the observed image sequence $\{g_k\}$, is modeled by:

$$g_k(m,n) = \sigma_k\big(h(T_k(f(x,y))) + \eta_k(x,y)\big),$$

where

- $g_k$ is the sensed image of the tracked object in the $k$th frame.
- $f$ is a high resolution image of the tracked object in a desired reconstruction view. Finding $f$ is the objective of the super-resolution algorithm.
- $T_k$ is the 2-D geometric transformation from $f$ to $g_k$, determined by the computed 2-D motion parameters of the tracked object in the image plane (not including the decrease in sampling rate between $f$ and $g_k$). $T_k$ is assumed to be invertible.
- $h$ is a blurring operator, determined by the Point Spread Function (PSF) of the sensor. When lacking knowledge of the sensor's properties, it is assumed to be a Gaussian.
- $\eta_k$ is an additive noise term.
- $\sigma_k$ is a downsampling operator which digitizes and decimates the image into pixels and quantizes the resulting pixel values.

The receptive field (in $f$) of a detector whose output is the pixel $g_k(m,n)$ is uniquely defined by its center $(x,y)$ and its shape. The shape is determined by the region of support of the blurring operator $h$, and by the inverse geometric transformation $T_k^{-1}$. Similarly, the center $(x,y)$ is obtained by $T_k^{-1}((m,n))$.

An attempt is made to construct a higher resolution image $\hat f$, which approximates $f$ as accurately as possible, and surpasses the visual quality of the observed images in $\{g_k\}$. It is assumed that the acceleration of the camera while imaging a single image frame is negligible.
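The imaging model can be simulated directly. The SciPy sketch below realizes $T_k$ as an affine warp, $h$ as a Gaussian PSF (the paper's default when the sensor PSF is unknown), $\eta_k$ as additive Gaussian noise, and $\sigma_k$ as decimation by the factor s; all parameter values and the function name are our assumptions.

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter

def simulate_imaging(f, A, b, s=2, sigma=1.0, noise_std=0.0, seed=0):
    """Forward model g_k = sigma_k(h(T_k(f)) + eta_k), sketched with
    SciPy. Note affine_transform uses (A, b) as the output-to-input
    mapping, so T_k must be supplied in that convention."""
    warped = affine_transform(f, A, offset=b)    # T_k(f)
    blurred = gaussian_filter(warped, sigma)     # h applied to T_k(f)
    if noise_std > 0:                            # eta_k: additive noise
        rng = np.random.default_rng(seed)
        blurred = blurred + rng.normal(0.0, noise_std, blurred.shape)
    return blurred[::s, ::s]                     # sigma_k: decimation by s
```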
Figure 5: Schematic diagram of the super resolution algorithm. A reconstructed image is sought such that after simulating the imaging process, the simulated low-resolution images are closest to the observed low-resolution images. (The diagram shows the original image passing through the imaging process and the reconstructed image passing through the simulated imaging process, with the resulting observed and simulated low-resolution images being compared.) The simulation of the imaging process is expressed by Equation (3).

The Super-Resolution Algorithm. The presented algorithm for creating higher resolution images is iterative. Starting with an initial guess $f^{(0)}$ for the high resolution image, the imaging process is simulated to obtain a set of low resolution images $\{g_k^{(0)}\}_{k=1}^{K}$ corresponding to the observed input images $\{g_k\}_{k=1}^{K}$. If $f^{(0)}$ were the correct high resolution image, then the simulated images $\{g_k^{(0)}\}_{k=1}^{K}$ should be identical to the observed images $\{g_k\}_{k=1}^{K}$. The difference images $\{g_k - g_k^{(0)}\}_{k=1}^{K}$ are used to improve the initial guess by "backprojecting" each value in the difference images onto its receptive field in $f^{(0)}$, yielding an improved high resolution image $f^{(1)}$. This process is repeated iteratively to minimize the error function

$$e^{(n)} = \sqrt{\frac{1}{K}\sum_{k=1}^{K}\left\|g_k - g_k^{(n)}\right\|_2^2}.$$

The algorithm is described schematically in Figure 5.

The imaging process of $g_k$ at the $n$th iteration is simulated by:

$$g_k^{(n)} = \big(T_k(f^{(n)}) * h\big) \downarrow s, \qquad (3)$$

where $\downarrow s$ denotes a downsampling operator by a factor $s$, and $*$ is the convolution operator. The iterative update scheme of the high resolution image is expressed by:

$$f^{(n+1)} = f^{(n)} + \frac{1}{K}\sum_{k=1}^{K} T_k^{-1}\Big(\big((g_k - g_k^{(n)}) \uparrow s\big) * p\Big), \qquad (4)$$

where $K$ is the number of low resolution images, $\uparrow s$ is an upsampling operator by a factor $s$, and $p$ is a "backprojection" kernel, determined by $h$ and $T_k$ as explained below. The averaging in Equation (4) reduces additive noise. The algorithm is numerically similar to common iterative methods for solving sets of linear equations [19], and therefore has similar properties, such as rapid convergence (see next paragraph). In Figure 6, the resolution of a car's license plate was improved using 15 frames.
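A compact sketch of the resulting iteration, with Equation (3) as the simulation step and Equation (4) as the update. Choosing the backprojection kernel p = h is one admissible option consistent with Condition (6) below, not the paper's prescription, and boundary handling, sub-pixel sampling, and the scale coupling between $T_k$ and the high resolution grid are all simplified here.

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter, zoom

def super_resolve(gs, Ts, Ts_inv, s=2, sigma=1.0, n_iter=20):
    """Iterative backprojection sketch of Equations (3)-(4). Ts and
    Ts_inv hold (matrix, offset) pairs for T_k and its inverse, in
    SciPy's output-to-input convention; p = h is a Gaussian."""
    f = zoom(gs[0].astype(float), s, order=1)          # initial guess f^(0)
    for _ in range(n_iter):
        corr = np.zeros_like(f)
        for g, (A, b), (Ai, bi) in zip(gs, Ts, Ts_inv):
            warped = affine_transform(f, A, offset=b)            # T_k(f^(n))
            g_sim = gaussian_filter(warped, sigma)[::s, ::s]     # Eq. (3)
            diff = np.zeros_like(f)
            diff[::s, ::s] = g - g_sim                 # upsample by s (zero-fill)
            back = gaussian_filter(diff, sigma)        # convolve with p
            corr += affine_transform(back, Ai, offset=bi)        # T_k^{-1}
        f = f + corr / len(gs)                         # Eq. (4): averaged update
    return f
```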
Analysis and Discussion. We introduce an exact analysis of the superresolution algorithm in the case of deblurring: restoring an image from $K$ blurred images (taken from different viewing positions of the object), with 2-D affine transformations $\{T_k\}_{k=1}^{K}$ between them and the reconstruction viewing position, and without increasing the sampling rate. This is a special case of superresolution, which is simpler to analyze. In this case the imaging process is expressed by:

$$g_k^{(n)} = T_k(f^{(n)}) * h,$$

and the restoration process in Equation (4) becomes:

$$f^{(n+1)} = f^{(n)} + \frac{1}{K}\sum_{k=1}^{K} T_k^{-1}\big((g_k - g_k^{(n)}) * p\big). \qquad (5)$$

The following theorems show that the iterative super resolution scheme is an effective deblurring operator (proofs are given in the appendix).

Theorem 3.1 The iterations of Equation (5) converge to the desired deblurred image $f$ (i.e., an $f$ that fulfills: $\forall k,\; g_k = T_k(f) * h$) if the following condition holds:

$$\|\delta - h * p\|_2 < \frac{1}{\frac{1}{K}\sum_{k=1}^{K}\|T_k\|_2}, \qquad (6)$$

where $\delta$ denotes the unity pulse function centered at $(0,0)$.

Remark: When the 2-D image motions of the tracked object consist of only 2-D translations and rotations, then Condition (6) reduces to $\|\delta - h * p\|_2 < 1$.

Proof: see appendix.

Theorem 3.2 Given Condition (6), the algorithm converges at an exponential rate (the norm of the error converges to zero faster than $q^n$ for some $0 < q < 1$), regardless of the choice of the initial guess $f^{(0)}$.
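The condition is easy to check numerically for a concrete kernel pair. The sketch below (our illustration, not from the paper) evaluates $\|\delta - h*p\|_2$ for a Gaussian $h$ with $p = h$, the translation-and-rotation case covered by the Remark.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import convolve2d

def deblurring_margin(sigma=1.0, size=25):
    """Check of the Remark's bound ||delta - h*p||_2 < 1 for a
    Gaussian PSF h and the assumed choice p = h."""
    delta = np.zeros((size, size))
    delta[size // 2, size // 2] = 1.0
    h = gaussian_filter(delta, sigma)          # discrete Gaussian PSF
    hp = convolve2d(h, h, mode='full')         # h * p with p = h
    d_big = np.zeros_like(hp)                  # unity pulse on the same support
    d_big[hp.shape[0] // 2, hp.shape[1] // 2] = 1.0
    return float(np.linalg.norm(d_big - hp))   # must be below 1 to converge

print(deblurring_margin())  # roughly 0.94 for sigma = 1.0: the condition holds
```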
