NASA Technical Reports Server (NTRS) 19950005348: Improved image decompression for reduced transform coding artifacts
(NASA-CR-196805) IMPROVED IMAGE DECOMPRESSION FOR REDUCED TRANSFORM CODING ARTIFACTS (Notre Dame Univ.) 10 p Unclas G3/61 0022363 N95-11761

Improved image decompression for reduced transform coding artifacts

Thomas P. O'Rourke, Student Member, IEEE, and Robert L. Stevenson, Member, IEEE

This paper has been submitted to IEEE Transactions on Circuits and Systems for Video Technology. This work was supported in part by NASA Lewis Research Center under contract NASA-NAG 3-154. This work was presented in part at the 1994 SPIE/IS&T Symposium on Electronic Imaging Science & Technology, San Jose, CA, February 1994. The authors are with the Laboratory for Image and Signal Analysis in the Department of Electrical Engineering at the University of Notre Dame, Notre Dame, IN 46556.

Abstract—The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of a subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

I. INTRODUCTION

Source coding of image data has been a very active area of research for many years. The goal is to reduce the number of bits needed to represent an image while making as few perceptible changes to the image as possible. Many algorithms have been developed which can successfully compress a grayscale image to around 0.8 bits per pixel (bpp) with almost no perceptible effects [1]. A problem arises, however, as these compression techniques are pushed beyond this rate. For higher compression ratios (below 0.4 bpp for grayscale) most algorithms start to generate artifacts which severely degrade the perceived quality of the image. The type of artifacts generated depends on the compression technique and on the particular image. For block encoded images, the most noticeable artifact is generally the discontinuities present at the block boundaries. For subband encoded images, it is the ringing at image edges which is usually first perceived.

This paper proposes a technique which addresses this issue through a postprocessing algorithm which greatly reduces the artifacts introduced by the coding technique. It is based on a stochastic framework in which probabilistic models are used both for the noise introduced by the coding and for a "good" image. The restored image is the maximum a posteriori (MAP) estimate based on these models [2, 3]. Previous techniques which have tried to address this issue have various problems which limit their ability to produce high quality image estimates. Some techniques propose changes in the way the image is coded [4, 5]; this, however, reduces the efficiency of the source coder and thus reduces the compression ratio. The proposed postprocessing technique only requires modification of the image decoder. Linear estimators [6], while removing some artifacts, usually degrade edge information in the original image. Several techniques try to overcome this smoothing of the edges by first estimating the edge information in the compressed image data [7, 8]. This, however, is a very difficult task at very high compression ratios, where the actual edge information is somewhat scrambled.

This paper will first describe a generic model for image compression. For the purpose of the reconstruction algorithm, this model is descriptive enough to cover many compression techniques. A decompression algorithm is then described based on a previously proposed image model [9, 10]. The computational algorithm is also described. Experimental results are shown for two different transform coding methods. The first method is the Joint Photographic Experts Group (JPEG) image compression standard [11, 1]. The second method uses a subband wavelet transform with the vector quantization (VQ) proposed in [12]. It can be seen that images reconstructed with this new postprocessing technique show a reduction in many of the most noticeable artifacts and thus allow higher compression ratios.

II. DECOMPRESSION ALGORITHM

To decompress the compressed image representation, a MAP technique is proposed. Let the compressed image data be represented by y while the decompressed full resolution image is represented by z. For MAP estimation, the decompressed image estimate \hat{z} is given by

    \hat{z} = \arg\max_{z} L(z \mid y),    (1)

where L(\cdot) is the log likelihood function, L(\cdot) = \log \Pr(\cdot), which in this case is a measure of how likely it is that the decompressed image z resulted in the compressed representation y.
Using Bayes' rule,

    \hat{z} = \arg\max_{z} \log \frac{\Pr(y \mid z)\,\Pr(z)}{\Pr(y)}    (2)
            = \arg\max_{z} \{\log \Pr(y \mid z) + \log \Pr(z) - \log \Pr(y)\}    (3)
            = \arg\max_{z} \{\log \Pr(y \mid z) + \log \Pr(z)\},    (4)

where the \Pr(y) term is dropped because it is a constant with respect to the optimization parameter z. The conditional probability \Pr(y \mid z) is based on the image compression method while the probability \Pr(z) is based on prior information about the image data.

A. Image compression model

In a transform coding compression technique, a unitary transformation H is applied to the original image x. The compressed representation y is obtained by applying a quantization Q to the transform coefficients, which can be written as

    y = Q[Hx].    (5)

Quantization partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The indices of these cells are usually entropy coded and then transmitted as the compressed representation y. In the standard image decompression method, the reconstructed image is given by

    \hat{x} = H^{-1} Q^{-1}[y],    (6)

where the inverse quantization maps the indices to the reconstruction points.

Since quantization is a many-to-one operation, many images map into the same compressed representation. The operation of the quantizer is assumed to be noise free; that is, a given image z will be compressed to the same compressed representation y every time. The conditional probability for the noise free quantizer can be described by

    \Pr(y \mid z) = \begin{cases} 1, & y = Q[Hz], \\ 0, & y \neq Q[Hz]. \end{cases}    (7)

Since

    \log \Pr(y \mid z) = \begin{cases} 0, & y = Q[Hz], \\ -\infty, & y \neq Q[Hz], \end{cases}    (8)

the MAP estimation in (4) can be written as the constrained optimization problem

    \hat{z} = \arg\min_{z \in Z} \{-\log \Pr(z)\},    (9)

where Z is the set of images which compress to y (i.e. Z = \{z : y = Q[Hz]\}).

B. Image model

For a model of a "good" image (i.e. \Pr(z)) a non-Gaussian Markov random field (MRF) model is used [9, 10]. This model has been shown to successfully model both the smooth regions and the discontinuities present in images. A Gibbs distribution is used to write the distribution of an MRF explicitly. A Gibbs distribution is any distribution which can be expressed in the form

    \Pr(x) = \frac{1}{Z} \exp\Big\{ -\sum_{c \in C} V_c(x) \Big\},    (10)

where Z is a normalizing constant, V_c(\cdot) is any function of a local group of points c, and C is the set of all such local groups. Note that the Gaussian model is a special case of the MRF. To understand how to include discontinuities in the statistical model, it is important to first understand what the model represents and how to define the model for a particular application. For a particular source signal z_1, the value of the probability measure \Pr(z_1) is related to how closely z_1 matches our prior information about the source. So a z_1 which closely matches our prior information should have a higher probability than one that does not. For this to be true, the function V_c(\cdot) should provide a measure of the consistency of a particular z, where a z more consistent with the prior information will have smaller values of V_c(\cdot). The situation which is important in this work occurs when the prior information is mostly true but a limited amount of inconsistency is allowable (e.g. a piecewise smooth surface; that is, a surface which is mostly smooth but in which a few discontinuities are allowable).

In this research a special form of the MRF is used which has this very desirable property. This model is characterized by a special form of the Gibbs distribution

    \Pr(x) = \frac{1}{Z} \exp\Big\{ -\frac{1}{k} \sum_{c \in C} \rho_T(d_c^t x) \Big\},    (11)

where k is a scalar constant greater than zero, d_c is a collection of linear operators, and the function \rho_T(\cdot) is given by

    \rho_T(u) = \begin{cases} u^2, & |u| \le T, \\ T^2 + 2T(|u| - T), & |u| > T; \end{cases}    (12)

see Figure 1. Since \rho_T(\cdot) is convex, this particular form of the MRF results in a convex optimization problem when used in the MAP estimation formulation (4). Therefore, such MAP estimates will be unique, stable, and can be computed efficiently. The function \rho_T(\cdot) is known as the Huber minimax function [13] and for that reason this statistical model is called the Huber-Markov random field (HMRF) model.
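As a concrete sketch (not code from the paper), the Huber minimax function (12) and an eight-neighbor HMRF energy built from the exponent of (11) can be written in Python; the periodic boundary handling via np.roll is an assumption made for brevity.

```python
import numpy as np

def huber(u, T):
    """Huber minimax function rho_T(u) from Eq. (12):
    quadratic for |u| <= T, linear with slope 2T beyond."""
    u = np.asarray(u, dtype=float)
    quad = u ** 2
    lin = T ** 2 + 2 * T * (np.abs(u) - T)
    return np.where(np.abs(u) <= T, quad, lin)

def hmrf_energy(z, T):
    """Sum of rho_T over the differences between each pixel and its
    eight nearest neighbors; a sketch of the exponent in the Gibbs
    form (11), up to the 1/k scaling (periodic boundaries assumed)."""
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(z, dy, axis=0), dx, axis=1)
            total += huber(z - shifted, T).sum()
    return total
```

For a constant image the energy is zero, while a step edge much larger than T is penalized only linearly rather than quadratically, which is what lets the model keep sharp discontinuities.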
For this distribution, the linear operators d_c provide the mechanism for incorporating what is considered consistent most of the time, while the function \rho_T(\cdot) is the mechanism for allowing some inconsistency. The parameter T controls the amount of inconsistency allowable. The function \rho_T(\cdot) allows some inconsistency by reducing the importance of the consistency measure when the value of the consistency measure exceeds the threshold T.

For the measure of consistency, the fact that the difference between a pixel and its local neighbors should be small is used; that is, there should be little local variation in the image. For this assumption, an appropriate set of consistency measures is

    \{d_c^t z\}_{c \in C} = \{z_{m,n} - z_{k,l}\}_{k,l \in N_{m,n},\; 1 \le m,n \le N},    (13)

where N_{m,n} consists of the eight nearest neighbors of the pixel located at (m, n) and N is the dimension of the image. Across discontinuities this measure is large, but the relative importance of the measure at such a point is reduced because of the use of the Huber function.

Fig. 1. Huber minimax function \rho_T(\cdot) superimposed on a quadratic function.

The MAP estimate can now be written as

    \hat{z} = \arg\min_{z \in Z} \sum_{c \in C} \rho_T(d_c^t z)    (14)
            = \arg\min_{z \in Z} \sum_{1 \le m,n \le N} \sum_{k,l \in N_{m,n}} \rho_T(z_{m,n} - z_{k,l}).    (15)

As a result of the choice of image model [9, 10], this is a convex (but not quadratic) constrained optimization which can be solved using iterative techniques.

III. RECONSTRUCTION ALGORITHM

An iterative approach is used to find \hat{z} in the constrained minimization of (15). An initial estimate z^{(0)} is improved by successive iterations until the difference between z^{(k)} and z^{(k+1)} is below a given threshold \epsilon. The rate of convergence of the iteration is affected by the choice of the initial estimate; a better initial estimate will result in faster convergence. The initial estimate used here is formed by the standard decompression

    z^{(0)} = H^{-1} Q^{-1}[y].    (16)

Given the estimate at the k-th iteration, z^{(k)}, the gradient descent method is used to find the estimate at the next iteration, z^{(k+1)}. The gradient of \sum \rho_T(d_c^t z) is used to find the steepest direction g^{(k)} towards the minimum:

    g^{(k)} = \nabla \Big( \sum_{c \in C} \rho_T(d_c^t z^{(k)}) \Big) = \sum_{c \in C} \rho_T'(d_c^t z^{(k)})\, d_c,    (17)

where \rho_T'(u) is the first derivative of the Huber function. The size of the step \alpha^{(k)} is chosen as

    \alpha^{(k)} = \frac{g^{(k)t} g^{(k)}}{g^{(k)t} \big( \sum_{c \in C} \rho_T''(d_c^t z^{(k)})\, d_c d_c^t \big) g^{(k)}}.    (18)

This choice of step size is based on selecting the optimal step size for a quadratic approximation to the non-quadratic function in (15). Since this is an approximation, the value of the objective function may increase if the step size is too large. To avoid this potential problem, the value of \alpha^{(k)} is divided by 2 until the step size is small enough that the value of the objective function is decreased. Since the updated estimate w^{(k+1)},

    w^{(k+1)} = z^{(k)} + \alpha^{(k)} g^{(k)},    (19)

may fall outside the constraint space Z, w^{(k+1)} is projected onto Z to give the image estimate at the (k+1)-th iteration:

    z^{(k+1)} = P_Z(w^{(k+1)}).    (20)

In projecting the image w^{(k+1)} onto the constraint space Z, we are finding the point z^{(k+1)} \in Z for which \|z^{(k+1)} - w^{(k+1)}\| is a minimum. If w^{(k+1)} \in Z, then z^{(k+1)} = w^{(k+1)} and \|z^{(k+1)} - w^{(k+1)}\| = 0. Since H is unitary,

    \|H z^{(k+1)} - H w^{(k+1)}\| = \|z^{(k+1)} - w^{(k+1)}\|    (21)

and the projection can be carried out in the transform domain. The form of P_Z depends on the quantizer Q. The projection operators P_Z for scalar and vector quantization are described in the following subsections.

A. Scalar quantization

The scalar quantizer is a partition of the real number line R and is defined by its breakpoints. The breakpoints are determined by the amount of compression desired. The boundaries or breakpoints of cell i are l[i] and h[i]; see Figure 2.

Fig. 2. Illustration of projection operator for scalar quantization.
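For concreteness, a minimal scalar quantizer can be sketched as follows. The uniform breakpoints l[i] = i*delta, h[i] = (i+1)*delta and the midpoint reconstruction are illustrative assumptions; in practice the breakpoints come from the coder's quantization tables and the reconstruction point is the cell centroid.

```python
import numpy as np

def quantize(w, delta):
    """Return the index i of the cell [l[i], h[i]) = [i*delta, (i+1)*delta)
    containing each coefficient."""
    return np.floor(np.asarray(w, dtype=float) / delta).astype(int)

def dequantize(indices, delta):
    """Standard decompression: map each index back to a representative
    reconstruction point, here taken as the midpoint of the cell."""
    return (np.asarray(indices) + 0.5) * delta

w = np.array([-1.2, 0.3, 2.7])       # transform coefficients
idx = quantize(w, 1.0)               # the indices actually transmitted
w_hat = dequantize(idx, 1.0)         # the points standard decoding uses
```

Note that quantize is many-to-one: every coefficient in a cell produces the same index, which is exactly why the constraint set Z in (9) contains many images.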
If a coefficient \omega falls in cell i,

    l[i] \le \omega < h[i],    (22)

then index i is transmitted. The compressed representation y is the set of indices for all the transform coefficients.

Projection to the constraint space is rather simple for scalar quantization. To find the transform coefficient \mu in H z^{(k+1)}, find the corresponding coefficient \omega in H w^{(k+1)}. From the compressed representation y, find the cell in which \mu should fall. Assume \mu should fall in cell i. The standard decompression method would use \mu = r[i], where r[i] is the centroid of the cell. The coefficient \mu which falls in cell i can be determined by

    \mu = \begin{cases} l[i], & \omega < l[i], \\ \omega, & l[i] \le \omega < h[i], \\ h[i], & \omega \ge h[i]. \end{cases}    (23)

This will minimize \|H z^{(k+1)} - H w^{(k+1)}\| while assuring that y = Q[H z^{(k+1)}]. The image estimate z^{(k+1)} is obtained by performing the inverse transformation on H z^{(k+1)}.

B. Vector quantization

The vector quantizer is a partition of R^n and is defined by its codebook vectors r[i]. The size of the codebook is determined by the desired bit rate for that subband. A coefficient vector \omega is considered to be in cell i if it is closer to codevector r[i] than to any other codevector:

    \|\omega - r[i]\| < \|\omega - r[j]\|, \quad j \neq i.    (24)

The distance metric used here is the squared Euclidean distance. Although the illustrations in Figure 3 represent a 2-D quantizer, the method described below for finding \mu directly generalizes to higher dimensional spaces.

Fig. 3. Illustration of projection operator for vector quantization.

Projection to the constraint space is more complicated for VQ than for the scalar case.
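Before turning to the vector case, note that the scalar projection (23) amounts to a clamp of each coefficient of Hw^{(k+1)} into the cell recorded in y. A sketch, again assuming uniform cells l[i] = i*delta, h[i] = (i+1)*delta:

```python
import numpy as np

def project_scalar(w_coeffs, indices, delta):
    """Projection (23): keep each coefficient if it already lies in the
    cell given by its transmitted index; otherwise clamp it to the
    nearest cell boundary (staying strictly below h[i])."""
    lo = np.asarray(indices) * delta                 # l[i]
    hi = (np.asarray(indices) + 1.0) * delta         # h[i]
    just_below_hi = np.nextafter(hi, -np.inf)        # keep mu < h[i]
    return np.minimum(np.maximum(w_coeffs, lo), just_below_hi)
```

This is the minimum-distance point within the cell, so by (21) applying it coefficient-by-coefficient in the transform domain realizes P_Z for a scalar quantizer.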
The standard decompression method would use \mu = r[i] for the reconstruction point. To project the vector \omega to cell i, it is necessary to find the point \mu in cell i which is closest to \omega. The easy case occurs when \omega itself is in cell i, since then \mu = \omega. In the more difficult case where \omega is closer to some other codevector, the projection point \mu will be on the boundary of cell i.

The first step in projecting \omega to \mu is to move from \omega to the boundary of cell i. The next step is to move along the boundary to \mu. The points \zeta on the boundary of cell i all satisfy

    \|\zeta - r[i]\| = \|\zeta - r[j]\|    (25)

for some j \neq i. For a particular value of j, (25) defines a hyperplane of points equally distant from the i-th and j-th codevectors. Since \omega is outside cell i, the line segment from \omega to r[i] described by

    \zeta = t\omega + (1 - t)\, r[i], \quad 0 \le t \le 1,    (26)

will intersect the boundary of cell i. For each codevector r[j], j \neq i, it is easy to determine whether the intersection of the hyperplane (25) with the line segment (26) exists. The value of t in (26) which defines the point of intersection is given by

    t = \frac{(r[i] - r[j]) \cdot (r[i] - r[j])}{2\, (r[i] - \omega) \cdot (r[i] - r[j])},    (27)

where t \in [0, 1] if the intersection exists. Among all j \neq i, the smallest positive value of t defines the point \zeta_1 on the cell boundary. \zeta_1 will be equally distant from r[i] and r[k], where k indexes the intersected hyperplane.

The next step involves traveling along the boundary of the cell to \mu. The idea is to move towards \omega without leaving the cell boundary. Let \zeta_2 be the perpendicular projection of \omega onto the k-th hyperplane (\|\zeta - r[i]\| = \|\zeta - r[k]\|). \zeta_2 is the point in this hyperplane which is closest to \omega and can also be found as the intersection of the line

    \zeta = \omega + t\,(r[i] - r[k])    (28)

with the hyperplane. Both \zeta_1 and \zeta_2 will be in this hyperplane. Next, move along the line segment from \zeta_1 towards \zeta_2, stopping at the first intersecting hyperplane. The intersection point becomes the new \zeta_1 and this hyperplane becomes the new k-th hyperplane. \zeta_2 now becomes the projection of \omega onto this new hyperplane, and the process is repeated until one of the stopping conditions is met. This process is stopped if

- \zeta_1 travels all the way to \zeta_2 without being intercepted by another hyperplane,
- \zeta_1 would leave the cell boundary by moving towards \zeta_2, or
- \zeta_1 does not make progress towards \zeta_2.

When this process is completed, \mu = \zeta_1.

IV. EXPERIMENTAL RESULTS

In this section, the results of using the proposed postprocessing method are shown for images compressed using scalar quantization of block DCT and using vector quantization of a subband wavelet transform.

A. Scalar quantization

The standard "Lenna" image shown in Figures 4(a) and 5(a) was compressed to 0.264 bpp by the JPEG algorithm [11, 1]. The result of standard decompression of this image is shown in Figure 4(b) and enlarged to show detail in Figure 5(b). At this compression ratio, the coding artifacts are very noticeable. Most noticeable are the blocking effects of the coding algorithm, but aliasing artifacts along high contrast edges are also noticeable.
The result of postprocessing this image is shown in Figure 4(c) and enlarged to show detail in Figure 5(c). The number of iterations needed to sufficiently reduce the artifacts depends on the severity of the degradation; more iterations are required for convergence with more severe degradation. Notice that the blocking effects have been removed in the postprocessed image. This can be seen most easily in the shoulder and background regions. Notice that while the discontinuities due to the blocking effects have been smoothed, the sharp discontinuities in the original image, such as along the hat brim, have been preserved.

For comparison, the method for reducing blocking artifacts proposed in the JPEG standard [1] is shown in Figures 4(d) and 5(d). For a given block, the five lowest frequency AC coefficients are predicted from the DC coefficients of the given block and the 8 nearest blocks. The image is modeled as a 2-D second order polynomial over these 9 blocks. This polynomial is defined by requiring that the mean of the polynomial over each block match the DC value for that block. To reduce the blocking, the estimated AC coefficients corresponding to a DCT of this polynomial are used if they do not contradict the transmitted information. See [1] for details. Note that while the blocking artifacts are reduced by this approach, they are still visible. The overall improvement in the reconstructed image is much less significant than the improvement seen in Figures 4(c) and 5(c).

While the blocking artifact dominated the standard reconstruction of the "Lenna" image, ringing artifacts are dominant in document images such as text or graphics [14]; see Figure 6. While JPEG may not be the most appropriate compression method for document images, it can be seen from the example shown in Figure 6 that ringing artifacts are suppressed by the postprocessing algorithm. The postprocessing algorithm removes ringing artifacts as well as blocking artifacts.

B. Vector quantization

The "Lenna" image was decomposed in the manner of Mallat [15] by a three level separable subband structure using 2-D separable filters based on 1-D Daubechies' 12-th order wavelet filters [16]. Each of the subbands is vector quantized independently, with the transform coefficients grouped into 4 by 4 (or 8 by 8) blocks which are treated as 16 (or 64) dimensional vectors. The bit rate for each subband is determined by the method given in [12] based on the desired overall bit rate and human visual system considerations. The bit rate is higher for lower frequency subbands. The highest frequency subbands are allocated zero bpp and are dropped altogether. For each of the remaining subbands, a locally optimal VQ codebook is designed for that subband signal. See [17] for general information on VQ and [12] for details on the VQ approach used here. The point here is to demonstrate that the postprocessing technique is sufficiently flexible to be applied to any transform coding method.

Figure 7 shows the results for moderate compression (0.444 bpp) and postprocessing for 9 iterations. The results of postprocessing for 20 iterations are shown in Figure 8 for the case of heavy compression (0.155 bpp). The greatest improvement comes in the first few iterations. Further iterations of postprocessing continue to improve the image, but the amount of improvement in each iteration decreases as the image gets better.

The primary artifacts of this coding technique are aliasing at edges and loss of high frequency information. As expected, these artifacts are more severe in the case of heavy compression. Most aliasing artifacts disappear after only a few iterations in the case of moderate compression. Although the aliasing artifacts are more persistent for heavy compression, the first few iterations still reduce the aliasing a great deal. Even for heavy compression, the aliasing artifacts are eventually no longer perceptible.

Information contained in high frequency subbands, such as the details in the hat of the original image, is completely lost when those subbands are dropped. The postprocessing cannot reinvent information which is completely absent from the compressed representation. However, for structures such as edges which have information content spread across low frequencies as well as high frequencies, the postprocessor is able to restore some high frequency information based on the low frequency components in the compressed representation. This allows crisper edges which contain high frequencies to appear in the restored image; see for example the hat brim and along the edge of the shoulder. As in the scalar quantization case, the structures supported by the compressed representation and by the image model are restored while artifacts of the transform coding process are suppressed.
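To summarize Sections II and III in executable form, here is a compact sketch of the whole postprocessing loop for a scalar quantizer, with the unitary transform H taken as the identity for brevity (by (21), any unitary H works the same way with the projection carried out in the transform domain). The uniform cells, periodic neighborhoods, fixed trial step, and iteration count are illustrative assumptions, and the simple step-halving stands in for the quadratic-model step size (18).

```python
import numpy as np

def huber_deriv(u, T):
    """rho'_T(u): 2u on the quadratic part, +/-2T on the linear part."""
    return np.where(np.abs(u) <= T, 2.0 * u, 2.0 * T * np.sign(u))

def prior_energy(z, T):
    """Objective (15): Huber penalty over eight-neighbor differences
    (periodic boundaries via np.roll, for brevity)."""
    e = 0.0
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        d = z - np.roll(np.roll(z, dy, 0), dx, 1)
        e += np.where(np.abs(d) <= T,
                      d ** 2,
                      T ** 2 + 2 * T * (np.abs(d) - T)).sum()
    return 2.0 * e   # each unordered pair appears twice in (15)

def gradient(z, T):
    """Gradient of (15) with respect to each pixel; cf. (17)."""
    g = np.zeros_like(z)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) != (0, 0):
                g += 2.0 * huber_deriv(
                    z - np.roll(np.roll(z, dy, 0), dx, 1), T)
    return g

def postprocess(y_idx, delta, T, iters=10):
    """Start from the standard decompression (16), take a damped
    descent step on the HMRF energy, and project back onto the
    constraint set Z as in (20)/(23)."""
    lo = y_idx * delta                                  # l[i]
    hi = np.nextafter((y_idx + 1.0) * delta, -np.inf)   # just below h[i]
    clamp = lambda w: np.minimum(np.maximum(w, lo), hi)
    z = (y_idx + 0.5) * delta                           # Eq. (16), midpoints
    for _ in range(iters):
        g = -gradient(z, T)     # steepest direction toward the minimum
        alpha = 1e-2            # trial step, halved as described after (18)
        for _ in range(30):
            cand = clamp(z + alpha * g)
            if prior_energy(cand, T) < prior_energy(z, T):
                z = cand        # accept only energy-decreasing updates
                break
            alpha /= 2.0
    return z
```

On a synthetic two-level "blocky" index image, each accepted iteration lowers the HMRF energy while every output pixel stays inside its transmitted quantization cell, which is the behavior the experiments above illustrate on real images.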
V. CONCLUSION

The problem of image decompression has been cast as an ill-posed inverse problem, and a stochastic regularization technique has been used to form a well-posed reconstruction algorithm. A statistical model for the image was produced which incorporated the convex Huber minimax function. The use of the Huber minimax function \rho_T(\cdot) helps to maintain the discontinuities from the original image, which produces high resolution edge boundaries. Since \rho_T(\cdot) is convex, the resulting multidimensional minimization problem is a constrained convex optimization problem, and efficient computational algorithms can be used in the minimization. The proposed image decompression algorithm produces reconstructed images which greatly reduce the noticeable artifacts which exist using standard decompression techniques.

Fig. 4. Scalar quantization DCT compressed images (0.264 bpp): (a) original image, (b) standard decompression, (c) after 9-th iteration of postprocessing algorithm, T = 1.0, (d) reconstructed with AC coefficient prediction.

Fig. 5. Scalar quantization DCT compressed images (0.264 bpp) (enlarged view): (a) original image, (b) standard decompression, (c) after 9-th iteration of postprocessing algorithm, T = 1.0, (d) reconstructed with AC coefficient prediction.

Fig. 6. Scalar quantization DCT compressed images (0.264 bpp): (a) original image, (b) enlarged, (c) standard decompression with ringing artifacts, (d) enlarged, (e) after postprocessing algorithm, T = 1.0, (f) enlarged. Note for reviewers: halftoning artifacts distort the letter edges. This distortion is not present in the original images and will not appear in the glossy images in the final manuscript.

Fig. 7. VQ subband images (0.444 bpp): (a) standard decompression, (b) enlarged, (c) after 9-th iteration of postprocessing algorithm, T = 1.0, (d) enlarged.

REFERENCES

[1] W. B. Pennebaker and J. L. Mitchell, JPEG: Still Image Data Compression Standard. New York: Van Nostrand Reinhold, 1993.
[2] R. L. Stevenson, "Reduction of coding artifacts in transform image coding," in Proceedings of the 1993 International Conference on Acoustics, Speech and Signal Processing, (Minneapolis, MN), pp. V:401-404, Apr. 1993.
[3] T. P. O'Rourke and R. L. Stevenson, "Improved image decompression for reduced transform coding artifacts," in Proc. SPIE Image and Video Processing II, vol. 2182, (San Jose, CA, Feb. 7-9, 1994), pp. 90-101, 1994.
[4] H. C. Reeve and J. S. Lim, "Reduction of blocking effects in image coding," Optical Engineering, vol. 23, pp. 34-37, Jan./Feb. 1984.
[5] K. N. Ngan, D. W. Lin, and J. T. Liou, "Enhancement of image quality for low bit rate video coding," IEEE Trans. on Circuits and Systems, vol. 38, pp. 1221-1225, Oct. 1991.
[6] R. Rosenholtz and A. Zakhor, "Iterative procedures for reduction of blocking effects in transform image coding," IEEE Trans. on Circuits and Systems for Video Technology, vol. 2, pp. 91-95, Mar. 1992.
[7] B. Ramamurthi and A. Gersho, "Nonlinear space-variant postprocessing of block coded images," IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-34, pp. 1258-1268, Oct. 1986.
[8] K. Sauer, "Enhancement of low bit-rate coded images using edge detection and estimation," Computer Vision, Graphics and Image Processing: Graphical Models and Image Processing, vol. 53, pp. 52-62, Jan. 1991.
[9] R. R. Schultz and R. L. Stevenson, "Improved definition image expansion," in Proceedings of the 1992 International Conference on Acoustics, Speech and Signal Processing, (San Francisco, CA), pp. III:173-176, Mar. 1992.
[10] R. L. Stevenson and S. M. Schweizer, "A nonlinear filtering structure for image smoothing in mixed noise environments," Journal of Mathematical Imaging and Vision, vol. 2, pp. 137-154, Nov. 1992.
[11] G. K. Wallace, "The JPEG still picture compression standard," IEEE Trans. on Consumer Electronics, vol. 38, pp. 18-34, Feb. 1992.
[12] T. P. O'Rourke and R. L. Stevenson, "Human visual system based subband image compression," in Proc. 31st Annual Allerton Conference on Communication, Control, and Computing, (Monticello, IL, Sept. 29 - Oct. 1, 1993), pp. 452-461, 1993.
[13] P. J. Huber, Robust Statistics. New York: John Wiley & Sons, 1981.
[14] Z. Fan and R. Eschbach, "JPEG decompression with reduced artifacts," in Proc. of IS&T/SPIE Symposium on Electronic Imaging: Image and Video Compression, (San Jose, CA, Feb. 6-10, 1994), 1994.
[15] S. G. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 11, pp. 674-693, July 1989.
[16] I. Daubechies, "Orthonormal bases of compactly supported wavelets," Communications on Pure and Applied Mathematics, vol. XLI, pp. 909-996, 1988.
[17] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Boston: Kluwer Academic Publishers, 1992.

Thomas P. O'Rourke received the B.S.E.E. degree (with honors) from the University of Illinois at Urbana-Champaign in 1990 and the M.S.E.E. from the University of Notre Dame in 1993. While at the University of Illinois he participated in the cooperative education program with IBM Corp. in Lexington, Kentucky. He is currently pursuing the Ph.D. in Electrical Engineering at the University of Notre Dame. Mr. O'Rourke is a member of the IEEE, the AIAA, Eta Kappa Nu, Tau Beta Pi, and Phi Kappa Phi. His research interests include image compression, multidimensional signal processing, image communication systems, and image processing.

Robert L. Stevenson was born on December 20, 1963, in Ridley Park, Pennsylvania. He received the B.E.E. degree (summa cum laude) from the University of Delaware in 1986, and the Ph.D. in Electrical Engineering from Purdue University in 1990. While at Purdue he was supported by a National Science Foundation Graduate Fellowship and a graduate fellowship from the DuPont Corporation. He joined the faculty of the Department of Electrical Engineering at the University of Notre Dame in 1990, where he is currently an Assistant Professor. Dr. Stevenson is a member of the IEEE, Eta Kappa Nu, Tau Beta Pi, and Phi Kappa Phi. His research interests include multidimensional signal processing, image processing, and computer vision.