Efficient Robust Mean Value Calculation of 1D Features

Erik Jonsson and Michael Felsberg
Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Sweden
[email protected], [email protected]

Abstract

A robust mean value is often a good alternative to the standard mean value when dealing with data containing many outliers. An efficient method for samples of one-dimensional features and the truncated quadratic error norm is presented and compared to the method of channel averaging (soft histograms).

1 Introduction

In a lot of applications in image processing we are faced with the problem of estimating a low-level image feature from data containing outliers. The problem also occurs in high-level operations like object recognition and stereo vision. A wide range of robust techniques for different applications have been presented, where RANSAC [5] and the Hough transform [7] are two classical examples.

In this paper, we focus on the particular problem of calculating a mean value which is robust against outliers. An efficient method for the special case of one-dimensional features is presented and compared to the channel averaging [3] approach.

2 Problem Formulation

Given a sample set X = [x^{(1)}, ..., x^{(n)}], we seek to minimize an error function given by

    E(x) = \sum_{k=1}^{n} \rho(\|x^{(k)} - x\|)    (1)

If we let ρ be a quadratic function, the minimizing x is the standard mean value. To achieve the desired robustness against outliers, ρ should be a function that saturates for large argument values. Such functions are called robust error norms. Some popular choices are the truncated quadratic and Tukey's biweight, shown in figure 1. A simple 1D data set together with its error function is shown in figure 2. The x which minimizes (1) belongs to a general class of estimators called M-estimators [8], and will in this text be referred to as the robust mean value.

Figure 1: Error norms: truncated quadratic (left), Tukey's biweight (right).

Figure 2: A simple 1D data set together with the error function generated using the truncated quadratic error norm with cutoff distance 1.

3 Previous Work

Finding the robust mean is a non-convex optimization problem, and a unique global minimum is not guaranteed. The problem is related to clustering, and the well-known mean shift iteration has been shown to converge to a local minimum of a robust error function [1].

Another approach is to use the channel representation (soft histograms) [2, 3, 4, 6]. Each sample x can be encoded into a channel vector c by the nonlinear transformation

    c = [K(\|x - \xi_1\|), ..., K(\|x - \xi_m\|)]    (2)

where K is a localized kernel function and ξ_k are the channel centers, typically located uniformly and such that the kernels overlap (figure 3). By averaging the channel representations of the samples, we get something which resembles a histogram, but with overlapping and "smooth" bins. Depending on the choice of kernel, the representation can be decoded to obtain an approximate robust mean. The distance between neighboring channels corresponds to the scale of the robust error norm.

Figure 3: Example of channel kernel functions located at the integers.
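To make the encoding in (2) concrete, the following NumPy sketch builds a soft histogram from a small 1D sample set. The cos² kernel with support width 3 and unit channel spacing is an assumption made for illustration (a common choice in the channel literature; the paper does not fix K here), and the decoding step is omitted.

    import numpy as np

    def encode_channels(x, centers):
        """Channel encoding of a scalar sample, cf. eq. (2).
        Assumed kernel: cos^2 with support width 3 and unit channel
        spacing; the paper leaves the choice of K open."""
        d = np.abs(x - centers)
        return np.where(d < 1.5, np.cos(np.pi * d / 3.0) ** 2, 0.0)

    # A soft histogram is the average of the samples' channel vectors.
    samples = np.array([2.8, 3.0, 3.1, 3.2, 7.0])  # inliers near 3.0, one outlier
    centers = np.arange(0.0, 10.0)                 # channel centers at the integers (cf. figure 3)
    soft_hist = np.mean([encode_channels(x, centers) for x in samples], axis=0)

Decoding the averaged vector back to an approximate robust mean (typically a local weighted average over a few neighboring channels) depends on the chosen kernel and is not shown here.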
4 Efficient 1D Method¹

This section will cover the case where the x's are one-dimensional, e.g. intensities in an image, and the truncated quadratic error norm is used. In this case, there is a very efficient method, which we have not found in the literature. For clarity, we describe the case where all samples have equal weight, but the extension to weighted samples is straightforward.

¹ This section has been slightly revised since the original SSBA paper, as it contained some minor errors.

First, some notation. We assume that our data is sorted in ascending order and numbered from 1 ... n. Since the x's are one-dimensional, we drop the vector notation and write simply x_k. The error norm is truncated at c, and can be written as

    \rho(x) = \min\{x^2, c^2\}    (3)

The method works as follows: We keep track of indices a, b and a window w = [a, b] of samples [x_a, ..., x_b]. The window [a, b] is said to be

- feasible if |x_b - x_a| < 2c and
- maximal if the samples are contained in a continuous window of length 2c, i.e. if [a, b] is feasible and [a-1, b+1] is infeasible.

Now define for a window w = [a, b]

    \mu_w = \frac{1}{b-a+1} \sum_{k=a}^{b} x_k    (4)
    n_o = (a-1) + (n-b)    (5)
    q_w = \sum_{k=a}^{b} (\mu_w - x_k)^2    (6)
    \hat{E}_w = q_w + n_o c^2    (7)

Note that n_o is the number of samples outside the window. Consider the global minimum x_0 of the error function and the window w of samples x_k that fall within the quadratic part of the error function centered around x_0, i.e. the samples x_k such that |x_k - x_0| ≤ c. Either this window is located close to the boundary (a = 1 or b = n) or constitutes a maximal window. In both cases, x_0 = μ_w and Ê_w = E(μ_w). This is not necessarily true for an arbitrary window, e.g. if μ_w is located close to the window boundary. However, for an arbitrary window w, we have

    \hat{E}_w = \sum_{k=a}^{b} (\mu_w - x_k)^2 + n_o c^2    (8)
              \geq \sum_{k=1}^{n} \min\{(\mu_w - x_k)^2, c^2\}    (9)
              = \sum_{k=1}^{n} \rho(\mu_w - x_k) = E(\mu_w)    (10)

The strategy is now to enumerate all maximal and boundary windows, evaluate Ê_w for each and take the minimum, which is guaranteed to be the global minimum of E. Note that it does not matter if some non-maximal windows are included, since we always have Ê_w ≥ E(μ_w).

The following iteration does the job: Assume that we have a feasible window [a, b], not necessarily maximal. If [a, b+1] is feasible, take this as the new window. Otherwise, [a, b] was the largest maximal window starting at a, and we should go on looking for maximal windows starting at a+1. Take [a+1, b] as the first candidate, then keep increasing b until the window becomes infeasible, etc. If proper initialization and termination of the loop are provided, this iteration will generate all maximal and boundary windows.
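The enumeration strategy is easy to validate against a direct implementation of (1) and (3): evaluate E at the mean of every window of the sorted data and keep the best. The sketch below (the function names are ours, not from the paper) is cubic in n and intended only as a testing reference.

    import numpy as np

    def robust_error(x, samples, c):
        """E(x) = sum_k min{(x - x_k)^2, c^2}, cf. eqs. (1) and (3)."""
        return float(np.sum(np.minimum((samples - x) ** 2, c ** 2)))

    def robust_mean_bruteforce(samples, c):
        """Exhaustive reference: try the mean of every window of the
        sorted samples and return the one with the smallest true error."""
        xs = np.sort(np.asarray(samples, dtype=float))
        n = len(xs)
        best_x, best_e = xs[0], np.inf
        for a in range(n):
            for b in range(a, n):
                mu = xs[a:b + 1].mean()
                e = robust_error(mu, xs, c)
                if e < best_e:
                    best_x, best_e = mu, e
        return best_x

This search is exact: the global minimizer must equal the mean of the samples falling within the quadratic part of the error function around it, and that sample set is always a contiguous window of the sorted data, so its mean is among the candidates tried.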
The last point to make is that we do not need to recompute q_w from scratch as the window size is changed. Similar to the treatment of mean values and variances in statistics, we get by expanding the quadratic expression

    q_w = \sum_{k=a}^{b} (\mu_w - x_k)^2 = \sum_{k=a}^{b} x_k^2 - (b-a+1)\mu_w^2 = S_2 - (b-a+1)^{-1} S_1^2    (11)

where we have defined

    S_1 = \sum_{k=a}^{b} x_k = (b-a+1)\mu_w    (12)
    S_2 = \sum_{k=a}^{b} x_k^2    (13)

S_1 and S_2 can easily be updated in constant time as the window size is increased or decreased, giving the whole algorithm complexity O(n). The algorithm is summarized as follows:

Algorithm 1 Fast 1D robust mean calculation

    Initialize a ← 1, b ← 1, S_1 ← x_1, S_2 ← x_1^2
    while a ≤ n do
        if a ≤ b then²
            Calculate candidate Ê_w and μ_w:
                μ_w ← (b-a+1)^{-1} S_1
                Ê_w ← S_2 - μ_w S_1 + n_o c^2
            If Ê_w is the smallest so far, store Ê_w, μ_w.
        end if
        if b < n and |x_{b+1} - x_a| < 2c then
            b ← b + 1
            S_1 ← S_1 + x_b
            S_2 ← S_2 + x_b^2
        else
            S_1 ← S_1 - x_a
            S_2 ← S_2 - x_a^2
            a ← a + 1
        end if
    end while

The μ_w corresponding to the smallest Ê_w is now the robust mean.

² The check a ≤ b is required to avoid zero division if a was increased beyond b in the previous iteration.

Note that it is straightforward to introduce a weight w_k for each sample, such that a weighted mean value is produced. We should then let n_o be the total weight of the samples outside the window, μ_w the weighted mean value of the window w, S_1 and S_2 weighted sums, etc.
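For concreteness, here is a direct Python transcription of Algorithm 1 (our sketch, using 0-based indexing and equal weights; the initial sort makes the total cost O(n log n), with O(n) remaining for pre-sorted input):

    import numpy as np

    def robust_mean_1d(samples, c):
        """Fast 1D robust mean under the truncated quadratic error
        norm (Algorithm 1). Requires c > 0 and at least one sample."""
        xs = np.sort(np.asarray(samples, dtype=float))
        n = len(xs)
        a, b = 0, 0                    # current window [a, b], inclusive
        S1, S2 = xs[0], xs[0] ** 2     # running sums over the window
        best_E, best_mu = np.inf, xs[0]
        while a < n:
            if a <= b:                 # skip the candidate if the window is empty
                m = b - a + 1
                mu = S1 / m
                E_hat = S2 - mu * S1 + (n - m) * c ** 2   # eqs. (7) and (11)
                if E_hat < best_E:
                    best_E, best_mu = E_hat, mu
            if b + 1 < n and abs(xs[b + 1] - xs[a]) < 2 * c:
                b += 1                 # grow the window while feasible
                S1 += xs[b]
                S2 += xs[b] ** 2
            else:
                S1 -= xs[a]            # otherwise shrink it from the left
                S2 -= xs[a] ** 2
                a += 1
        return best_mu

    print(robust_mean_1d([2.8, 3.0, 3.1, 3.2, 7.0], c=1.0))  # 3.025, outlier rejected

On the five-sample set above, the feasible inlier window {2.8, 3.0, 3.1, 3.2} wins with Ê_w = 1.0875, matching the brute-force reference from section 4.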
5 Properties of the Robust Mean Value

In this section, some properties of the robust mean values generated by the truncated quadratic method and the channel averaging will be examined. In figure 4, we show the robust mean of a sample set consisting of some values (inliers) with mean value 3.0 and an outlier at varying positions. As the outlier moves sufficiently far away from the inliers, it is completely rejected, and when it is close to 3.0, it is treated as an inlier. As expected, the truncated quadratic method makes a hard decision about whether the outlier should be included or not, whereas the channel averaging implicitly assumes a smoother error norm.

Figure 4: The influence of an outlier on the mean value (estimated robust mean against outlier position, for the exact truncated quadratic method and channel averaging).

Another effect is that the channel averaging overcompensates for the outlier at some positions (around x = 6.0 in the plot). Also, the exact behavior of the method can vary at different absolute positions due to the grid effect illustrated in figure 5. We calculated the robust mean of two samples x_1, x_2, symmetrically placed around some point x_0 with |x_1 - x_0| = |x_2 - x_0| = d. The channels were placed with unit distance, and the displacement of the estimated mean m compared to the desired value x_0 is shown for varying x_0's in the range between two neighboring channel centers. The figure shows that the method makes some (small) systematic errors depending on the position relative to the channel grid. No such grid effect occurs using the method from section 4.

Figure 5: The grid effect (displacement m - x_0 for d = 0, 0.1, 0.2, 0.3, 0.4).

When the robust mean algorithm is applied on sliding spatial windows of an image, we get an edge-preserving image smoothing method. In figure 6, we show the 256 x 256 Lenna image smoothed with the truncated quadratic method using a spatial window of 5 x 5 and c = 0.1 in the intensity domain, where intensities are in the range [0, 1]. The pixels are weighted with a Gaussian function.

Figure 6: Lenna, robustly smoothed with the truncated quadratic method.

6 Discussion

We have shown an efficient way to calculate the robust mean value for the special case of one-dimensional features and the truncated quadratic error. The advantage of this method is that it is simple, exact and global. The disadvantage is of course its limitation to one-dimensional feature spaces.

One example of data for which the method could be applied is image features like intensity or orientation. If the number of samples is high, e.g. in robust smoothing of a high resolution image volume, the method might be suitable. If a convolution-like operation is to be performed, the overhead of sorting the samples could be reduced significantly, since the data is already partially sorted when moving to a new spatial window, leading to an efficient edge-preserving smoothing algorithm.

Acknowledgment

This work has been supported by EC Grant IST-2003-004176 COSPAL.

References

[1] Y. Cheng. Mean shift, mode seeking, and clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(8):790-799, 1995.
[2] M. Felsberg. Auto-associative feature processing. In Early Cognitive Vision Workshop, Isle of Skye, Scotland, 2004.
[3] M. Felsberg and G. Granlund. Anisotropic channel filtering. In Proc. 13th Scandinavian Conference on Image Analysis, LNCS 2749, pages 755-762, Gothenburg, Sweden, 2003.
[4] M. Felsberg and G. H. Granlund. POI detection using channel clustering and the 2D energy tensor. In 26. DAGM Symposium Mustererkennung, Tübingen, 2004.
[5] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2001.
[6] H. Scharr, M. Felsberg, and P.-E. Forssén. Noise adaptive channel smoothing of low-dose images. In CVPR Workshop: Computer Vision for the Nano-Scale, 2003.
[7] M. Sonka, V. Hlavac, and R. Boyle. Image Processing, Analysis, and Machine Vision. Brooks/Cole, 1999.
[8] G. Winkler and V. Liebscher. Smoothers for discontinuous signals. Nonparametric Statistics, 14:203-222, 2002.
