Positive constrained approximation via RBF-based partition of unity method

Alessandra De Rossi and Emma Perracchione
Department of Mathematics "G. Peano", University of Torino, via Carlo Alberto 10, I-10123 Torino, Italy

arXiv:1701.07260v1 [math.NA] 25 Jan 2017

Abstract. In this paper, we discuss the problem of constructing Radial Basis Function (RBF)-based Partition of Unity (PU) interpolants that are positive if data values are positive. More specifically, we compute positive local approximants by adding up several constraints to the interpolation conditions. This approach, considering a global approximation problem and Compactly Supported RBFs (CSRBFs), has been previously proposed in [39]. Here, the use of the PU technique enables us to intervene only locally and, as a consequence, to reach a better accuracy. This is also due to the fact that we select the optimal number of positive constraints by means of an a priori error estimate and we do not restrict ourselves to the use of CSRBFs. Numerical experiments and applications to population dynamics are provided to illustrate the effectiveness of the method in applied sciences.

1 Introduction

Given a set of samples, the scattered data interpolation problem consists in finding an approximating function that matches the given measurements at their corresponding locations. Furthermore, dealing with applications, we often have additional properties, such as the non-negativity of the measurements, which we wish to be preserved during the interpolation process. Note that, since such a property is known as the positivity-preserving property, to keep a common notation with the existing literature, we use the term positive (instead of non-negative) function values or interpolants.

To preserve such a property, mostly considering rational spline functions with $C^1$ or $C^2$ continuity, recent research studies techniques which force the approximants to be positive. As an example, the conditions under which the positivity of a cubic piece may be lost are investigated in [30]. Moreover, in order to preserve the positivity, the use of bicubic splines, coupled with a technique based on adding extra knots, has been investigated in [2, 8]. A positive fit is instead obtained by means of rational cubic splines in [21, 23]. The same authors also developed a positive surface construction scheme for positive scattered data arranged over triangular grids [22]. Note that all the above mentioned methods depend on a mesh.

However, the positivity-preserving problem is also well known in the field of meshfree or meshless methods. These include Shepard-type approximants [27, 29] and RBF interpolants [7, 37]. While the positivity-preserving problem has been widely investigated for the Modified Quadratic Shepard (MQS) approximant [1, 6], it remains a challenging computational issue for RBF interpolation. Indeed, even if such a meshfree approach has been extensively studied in recent years, especially focusing on the stability of the interpolant [13, 16], not much effort has been devoted to constructing positivity-preserving approximants. Such a problem has been studied only in particular and well-known cases; for instance, in [33] it is analyzed for the thin plate spline. Other methods, which follow from the quasi-interpolation formula given in [40], have been proposed and effectively used to construct positive approximants of positive samples [36].

Focusing on RBFs, our scope consists in preserving the positivity of the PU interpolant for a wider family of kernels.
In [39] a global positive RBF approximant is constructed by adding up to the interpolation conditions several positive constraints and considering CSRBFs. Even if the optimal number of constraints is not investigated, the results are promising and show that this technique has a better accuracy than the Constrained MQS (CMQS) approximant. However, since a global interpolant is used, adding other constraints to preserve the positivity implies that the shape of the curve/surface is consequently globally modified. As pointed out in [39], this might lead to a considerable decrease of the quality of the approximating function in comparison with the unconstrained CSRBF interpolation. Thus here, in order to avoid such a drawback, focusing on 2D data sets, the PU method is performed by imposing positive constraints on the local RBF interpolants. Such an approach enables us to consider constrained interpolation problems only in those PU subdomains which do not preserve the required property. This leads to an accurate method compared with existing techniques [1, 6, 39]. Moreover, in contrast with [39], we can consider truly large data sets.

Specifically, in order to construct the Positive Constrained PU (PC-PU) approximant, following [39], we locally impose several positive constraints and reduce the problem to an optimization one. The number of constraints is properly selected by means of an a priori error estimate. This is a fundamental step to maintain a good accuracy of the fit. Moreover, differently from [39], a wider family of RBFs (not only compactly supported) is considered. The main disadvantage of using CSRBFs is that they introduce large errors in the area in which the interpolant is negative. This is due to the fact that the shape of the fit is modified only within the support of the CSRBF and thus neighbouring points are not taken into account. Numerical evidence shows that the use of infinitely smooth globally defined RBFs leads to an improvement in this direction. Moreover, in order to stress the effectiveness of the proposed technique in applied sciences, we investigate several applications to biomathematics. Finally, comparisons with the unconstrained PU interpolant, with the Shepard method and with the one outlined in [39] are carried out.

The outline of the paper is as follows. In Section 2 we investigate the positivity of the PU approximant by considering extra positive constraints. The computational aspects of such a method are analyzed in Section 3. Numerical experiments and applications to population dynamics are shown in Sections 4 and 5, respectively. Finally, Section 6 deals with conclusions and future work.

2 Positive approximation of positive data values

In addition to the assigned interpolation conditions, our goal consists in considering several positive constraints. This allows us to preserve the positivity of the measurements. Therefore, our approach turns out to be meaningful especially in applications; indeed, in order to avoid the violation of biological or physical measurements, a positive fit is often necessary. Thus, before discussing the proposed method and without loss of generality, we consider an example of univariate interpolation to illustrate the scope of such a numerical tool and motivate the reader.

Example 2.1 (Motivations and targets). One of the most common tumors, affecting mainly men over sixty years old, is prostate cancer. Luckily, this disease has a very slow growth and a reliable biomarker after prostatectomy, the so-called Prostate Specific Antigen (PSA), suitable for early diagnosis.
In particular, if its value is larger than 0.2 ng/mL, a relapse (a local or distal metastasis) occurs. Clinical data of prostatectomized patients are available in [18]. They are used to investigate the evolution of the relapse via mathematical models [31]. Therefore, in order to validate such models, data fitting is essential [25].

For such a scope, one can reconstruct the PSA curve with the standard PU interpolant. However, the positivity of the PSA values is not always preserved through the interpolation process, violating the biological constraint. In order to avoid this problem, we propose a novel technique, namely the PC-PU method, which allows us to preserve the positivity property, see Figure 1.

Figure 1: Examples of curves fitting PSA values (plotted with blue dots), PSA (ng/mL) versus t (months). The classical PU interpolant is plotted in orange and the PC-PU approximant in blue.

2.1 The partition of unity method

Let us consider a bounded set $\Omega \subseteq \mathbb{R}^2$. Given a set of $N$ distinct data points (or data sites or nodes) $\mathcal{X}_N = \{(x_i, y_i),\, i = 1, \ldots, N\} \subseteq \Omega$ and a set of data values (or measurements or function values) $\mathcal{F}_N = \{f_i = f(x_i, y_i),\, i = 1, \ldots, N\}$, the PU method decomposes the domain $\Omega$ into $d$ subdomains $\Omega_j$, $j = 1, \ldots, d$, such that $\Omega \subseteq \bigcup_{j=1}^{d} \Omega_j$ [4, 15, 24, 38]. In the literature, the subdomains $\Omega_j$ are supposed to be circular patches of the same radius $\delta$ covering the domain $\Omega$.

According to [38], associated with these subdomains, we choose a $p$-stable partition of unity, i.e. a family of nonnegative functions $\{W_j\}_{j=1}^{d}$, with $W_j \in C^p(\mathbb{R}^2)$, such that

i. $\mathrm{supp}(W_j) \subseteq \Omega_j$,

ii. $\sum_{j=1}^{d} W_j(x,y) = 1$ on $\Omega$,

iii. $\|D^{\beta} W_j\|_{L^{\infty}(\Omega_j)} \leq \dfrac{C_{\beta}}{\xi_j^{|\beta|}}$, $\forall \beta \in \mathbb{N}^2 : |\beta| \leq p$, where $C_{\beta} > 0$ is a constant and $\xi_j = \mathrm{diam}(\Omega_j)$.

Then, the global interpolant assumes the form

$$
\mathcal{I}(x,y) = \sum_{j=1}^{d} R_j(x,y)\, W_j(x,y), \quad (x,y) \in \Omega, \qquad (1)
$$

where $R_j$ defines a local interpolant on each PU subdomain $\Omega_j$ and $W_j : \Omega_j \longrightarrow \mathbb{R}$.

Remark 2.1. Since the functions $W_j$, $j = 1, \ldots, d$, form a partition of unity, if the local fits $R_j$, $j = 1, \ldots, d$, satisfy the interpolation conditions, then the global PU approximant inherits the interpolation property, i.e.

$$
\mathcal{I}(x_i,y_i) = \sum_{j=1}^{d} R_j(x_i,y_i)\, W_j(x_i,y_i) = \sum_{j \in I(x_i,y_i)} f_i\, W_j(x_i,y_i) = f_i,
$$

where $I(x_i,y_i) = \{ j \,/\, (x_i,y_i) \in \Omega_j \}$.

In what follows, the local interpolants $R_j$ are defined as linear combinations of RBFs centred at $(x_k,y_k)$, with $(x_k,y_k) \in \mathcal{X}_{N_j} = \mathcal{X}_N \cap \Omega_j$, for $k = 1, \ldots, N_j$, i.e.

$$
R_j(x,y) = \sum_{k=1}^{N_j} c_k\, \phi_{\varepsilon}^{(k)}(x,y), \quad (x,y) \in \Omega_j, \qquad (2)
$$

where $N_j$ is the number of nodes on $\Omega_j$ and $\phi_{\varepsilon}^{(k)}(x,y)$ is an RBF centred at $(x_k,y_k)$. An RBF depends on the so-called shape parameter $\varepsilon$, which governs the flatness of the function. Among a large variety of RBFs, we can differentiate between compactly supported and globally defined RBFs. As examples of these two major classes, we consider the Wendland $C^2$ function and the Inverse MultiQuadric (IMQ). The latter is strictly positive definite and given by

$$
\phi_{\varepsilon}^{(k)}(x,y) = \frac{1}{\sqrt{1 + \varepsilon^2 r_k^2(x,y)}}, \qquad (3)
$$

where $r_k^2(x,y) = (x - x_k)^2 + (y - y_k)^2$ is the square of the Euclidean distance from the centre $(x_k,y_k)$. Moreover, in case of ill-conditioning of the interpolation matrix, it might be advantageous to work with locally supported functions. Indeed, they lead to sparse linear systems. Wendland found a class of RBFs which are smooth, locally supported and strictly positive definite. For example, the Wendland $C^2$ function is defined as

$$
\phi_{\varepsilon}^{(k)}(x,y) = \left(1 - \varepsilon r_k(x,y)\right)_+^4 \left(4\, \varepsilon r_k(x,y) + 1\right), \qquad (4)
$$

where $(\cdot)_+$ denotes the truncated power function. Note that the shape parameter for a CSRBF identifies the support of the function.
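As a concrete reading of (3) and (4), the following minimal Python sketch (not part of the paper; it assumes NumPy and uses arbitrary centre and shape-parameter values) evaluates both kernels as functions of the distance $r_k$ from a centre.

```python
import numpy as np

# Illustrative sketch: the two local basis functions of equations (3) and (4),
# written as functions of the distance r_k from a centre.

def imq(r, eps):
    """Inverse MultiQuadric, equation (3): globally supported, strictly positive definite."""
    return 1.0 / np.sqrt(1.0 + (eps * r) ** 2)

def wendland_c2(r, eps):
    """Wendland C2 function, equation (4): compactly supported on r <= 1/eps."""
    t = np.maximum(1.0 - eps * r, 0.0)  # truncated power (.)_+
    return t ** 4 * (4.0 * eps * r + 1.0)

# Example: distances of a few evaluation points from an arbitrary centre (x_k, y_k).
xk, yk = 0.5, 0.5
pts = np.array([[0.5, 0.5], [0.6, 0.4], [0.9, 0.9]])
r = np.sqrt((pts[:, 0] - xk) ** 2 + (pts[:, 1] - yk) ** 2)
print(imq(r, eps=2.0))          # smooth, never exactly zero
print(wendland_c2(r, eps=2.0))  # exactly zero outside the support of radius 1/eps
```

The compact support of (4) is the property exploited later, when each added constraint is given a support containing a single node.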
The coefficients $\{c_k\}_{k=1}^{N_j}$ in (2) are determined by enforcing the $N_j$ local interpolation conditions

$$
R_j(x_i,y_i) = f_i, \quad i = 1, \ldots, N_j,
$$

where $(x_i,y_i) \in \mathcal{X}_{N_j}$ and $f_i$ are the associated function values. Thus, the problem of finding the PU interpolant (1) requires to solve $d$ linear systems of the form

$$
A^{(j)} \boldsymbol{c} = \boldsymbol{f},
$$

where $\boldsymbol{c} = \left(c_1, \ldots, c_{N_j}\right)^T$, $\boldsymbol{f} = \left(f_1, \ldots, f_{N_j}\right)^T$ and the entries of the matrix $A^{(j)} \in \mathbb{R}^{N_j \times N_j}$ are

$$
A^{(j)}_{ik} = \phi_{\varepsilon}^{(k)}(x_i,y_i), \quad i,k = 1, \ldots, N_j.
$$

If the considered RBF is strictly positive definite, the interpolant (1) is unique [15]. However, even if here we only focus on strictly positive definite RBFs, we remark that the uniqueness of the interpolant can be ensured also for the general case of strictly conditionally positive definite functions by modifying (2), see e.g. [37].

In order to be able to formulate error bounds, we need some further assumptions on the regularity of $\Omega_j$ and thus we give the following definitions.

Definition 2.1. The fill distance, which is a measure of data distribution, is given by

$$
h_{\mathcal{X}_N,\Omega} = \sup_{(x,y) \in \Omega} \left( \min_{(x_i,y_i) \in \mathcal{X}_N} r_i(x,y) \right).
$$

Definition 2.2. An open and bounded covering $\{\Omega_j\}_{j=1}^{d}$ is called regular for $(\Omega, \mathcal{X}_N)$ if the following properties are satisfied [38]:

i. for each $(x,y) \in \Omega$, the number of subdomains $\Omega_j$ with $(x,y) \in \Omega_j$ is bounded by a global constant $C$,

ii. there exist a constant $C_r > 0$ and an angle $\theta \in (0, \pi/2)$ such that every subdomain $\Omega_j$ satisfies an interior cone condition (with angle $\theta$ and radius $C_r h_{\mathcal{X}_N,\Omega}$),

iii. the local fill distances $h_{\mathcal{X}_{N_j},\Omega_j}$ are uniformly bounded by the global fill distance $h_{\mathcal{X}_N,\Omega}$.

Thus, after defining the space $C^p_{\nu}(\mathbb{R}^2)$ of all functions $f \in C^p$ whose derivatives of order $|\beta| = p$ satisfy $D^{\beta} f(x,y) = \mathcal{O}\big( (\sqrt{x^2+y^2})^{\nu} \big)$ for $\sqrt{x^2+y^2} \longrightarrow 0$, we consider the following convergence result [15, 38].

Theorem 2.1. Let $\Omega \subseteq \mathbb{R}^2$ be open and bounded and suppose that $\mathcal{X}_N = \{(x_i,y_i),\, i = 1, \ldots, N\} \subseteq \Omega$. Let us consider a strictly conditionally positive definite RBF which belongs to $C^p_{\nu}(\mathbb{R}^2)$. Let $\{\Omega_j\}_{j=1}^{d}$ be a regular covering for $(\Omega, \mathcal{X}_N)$ and let $\{W_j\}_{j=1}^{d}$ be $p$-stable for $\{\Omega_j\}_{j=1}^{d}$. Then the error between $f \in \mathcal{N}_{\phi}(\Omega)$, where $\mathcal{N}_{\phi}$ is the native space of the basis function, and its PU interpolant (1) can be bounded by

$$
\left| D^{\beta} f(x,y) - D^{\beta} \mathcal{I}(x,y) \right| \leq C'\, h_{\mathcal{X}_N,\Omega}^{\,p + 2\nu - |\beta|}\, |f|_{\mathcal{N}_{\phi}(\Omega)},
$$

for all $(x,y) \in \Omega$ and all $|\beta| \leq p/2$.
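To make the unconstrained local solve explicit, here is a small sketch (again illustrative, assuming NumPy; the nodes and values are synthetic placeholders) that assembles one matrix $A^{(j)}$ with the IMQ kernel (3), solves $A^{(j)}\boldsymbol{c} = \boldsymbol{f}$, and evaluates the local interpolant (2).

```python
import numpy as np

def imq_matrix(centres, points, eps):
    """Entries phi_eps^{(k)}(x_i, y_i) of the IMQ kernel (3), points x centres."""
    diff = points[:, None, :] - centres[None, :, :]      # (n_points, n_centres, 2)
    r2 = np.sum(diff ** 2, axis=-1)
    return 1.0 / np.sqrt(1.0 + eps ** 2 * r2)

# Toy local problem on one subdomain: nodes in X_{N_j} and their values f_i.
rng = np.random.default_rng(0)
nodes = rng.random((10, 2))                 # (x_k, y_k), k = 1, ..., N_j
f = np.exp(-np.sum(nodes ** 2, axis=1))     # placeholder measurements

A = imq_matrix(nodes, nodes, eps=3.0)       # local interpolation matrix A^{(j)}
c = np.linalg.solve(A, f)                   # coefficients of R_j

# Evaluate R_j at a few evaluation points and check the interpolation conditions.
eval_pts = rng.random((5, 2))
R_j = imq_matrix(nodes, eval_pts, eps=3.0) @ c
print(R_j)
print(np.allclose(A @ c, f))                # True: R_j(x_i, y_i) = f_i
```

With a strictly positive definite RBF the matrix $A^{(j)}$ is symmetric positive definite, which is what guarantees the uniqueness of the local fit stated above.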
2.2 The positive constrained partition of unity approximant

If we compare the result reported in Theorem 2.1 with the global error estimate shown in [15, 37], we can see that the PU interpolant preserves the local approximation order for the global fit. In particular, the PU method can be thought of as the Shepard method where the $R_j$ are used instead of the data values $f_j$. Even if the classical Shepard approximant in its original formulation has been superseded [29], it possesses a useful property; specifically, it lies within the range of the data. As a consequence, it is positive if the data values are positive [20]. The positivity-preserving property holds neither in its quadratic formulation nor for the PU approximant presented in the previous section.

In order to avoid such a drawback for the PU method, we can directly act on the local RBF interpolants, following the strategy proposed in [39]. In that paper, a scheme devoted to constructing a positive global fit is obtained by defining several constraints. Extensive results show the good performance of such an approach. In fact, the fit of positive samples is always positive, but, in comparison with the original unconstrained interpolation, a degradation of the quality of the approximation is observed. This is mainly due to the fact that a global method is considered and thus the shape of the surface is consequently globally modified (and not only in the area in which the interpolant is negative). Therefore, here we propose a new formulation for a positive fit considering (1).

A sufficient condition to have positive approximants on each subdomain $\Omega_j$ is that the coefficients $c_k$ of (2) are all positive. To such a scope, following [39], at first we choose $\hat{N}_j$ added data

$$
(\hat{x}_{N_j+1}, \hat{y}_{N_j+1}), \ldots, (\hat{x}_{N_j+\hat{N}_j}, \hat{y}_{N_j+\hat{N}_j}),
$$

on the subdomain $\Omega_j$. Then, the $j$-th approximation problem consists in finding a function $\hat{R}_j$ of the form

$$
\hat{R}_j(x,y) = \sum_{k=1}^{N_j} c_k\, \phi_{\varepsilon}^{(k)}(x,y) + \sum_{\hat{k}=N_j+1}^{N_j+\hat{N}_j} c_{\hat{k}}\, \hat{\phi}_{\varepsilon_{\hat{k}}}^{(\hat{k})}(x,y), \quad (x,y) \in \Omega_j, \qquad (5)
$$

such that

$$
\hat{R}_j(x_i,y_i) = f_i, \quad i = 1, \ldots, N_j, \qquad c_i \geq 0, \quad i = 1, \ldots, N_j+\hat{N}_j, \qquad (6)
$$

where the $\hat{\phi}_{\varepsilon_{\hat{k}}}^{(\hat{k})}$ are CSRBFs.

Note that in (5) we consider different supports for the CSRBFs. In particular, if a constraint $(\hat{x}_i, \hat{y}_i)$ is added on $\Omega_j$ in a neighborhood of a point, namely $(x_i,y_i)$, we select $\varepsilon_{\hat{i}}$ such that only $(x_i,y_i)$ belongs to the support of the CSRBF. This choice is due to the fact that, in this way, at least when $\hat{N}_j = N_j$, the problem (5) subject to (6) admits a solution. This is proved in [39] by using Gordan's Theorem [5]. A brief sketch of the proof is given in what follows because important considerations arise. Let us define

$$
\begin{aligned}
\boldsymbol{a}_k^T &= -\{\phi_{\varepsilon}^{(k)}(x_i,y_i)\}_{i=1}^{N_j}, \quad k = 1, \ldots, N_j, \\
\boldsymbol{a}_{\hat{k}}^T &= -\{\hat{\phi}_{\varepsilon_{\hat{k}}}^{(\hat{k})}(x_i,y_i)\}_{i=1}^{N_j} = (0, \ldots, 0, -b_{\hat{k}}, 0, \ldots, 0), \quad \hat{k} = N_j+1, \ldots, 2N_j, \\
\boldsymbol{a}_{2N_j+1}^T &= (f_1, \ldots, f_{N_j}),
\end{aligned}
$$

where the $b_{\hat{k}}$ are positive real numbers. Then, since a vector $\boldsymbol{v}$ such that $\boldsymbol{a}_i^T \boldsymbol{v} < 0$, $i = 1, \ldots, 2N_j+1$, does not exist, from Gordan's Theorem we know that there exist nonnegative real numbers $c_1, \ldots, c_{2N_j+1}$ such that $\sum_{i=1}^{2N_j+1} c_i \boldsymbol{a}_i = \boldsymbol{0}$ and $c_{2N_j+1} > 0$.

In practical use, the aim is to add as few data as possible; therefore, according to [39], one can find the minimum of $\sum_{i=N_j+1}^{2N_j} \mathrm{sign}(c_i)$, such that (6) is satisfied. However, this approach does not guarantee an optimal solution in terms of accuracy. Here instead, with a technique described in the next section, we select the optimal number of added data $\hat{N}_j$ which yields maximal accuracy. Nevertheless, our aim consists in perturbing (2) with small quantities. A feasible way is to find

$$
\min \left( \sum_{i=N_j+1}^{N_j+\hat{N}_j} c_i^2 \right)^{1/2}, \qquad (7)
$$

such that (6) is satisfied.

The local approach proposed here enables us to modify the shape of the surface only if a negative local approximant is found; in fact, if the $j$-th original local fit of the form (2) is positive, we do not need to add other data and in this case $\hat{R}_j = R_j$ on $\Omega_j$. Therefore, for each subdomain, after selecting a suitable number of constraints $\hat{N}_j$, which can also be 0, the PC-PU approximant assumes the form

$$
\hat{\mathcal{I}}(x,y) = \sum_{j=1}^{d} \left( \sum_{k=1}^{N_j} c_k\, \phi_{\varepsilon}^{(k)}(x,y) + \sum_{\hat{k}=N_j+1}^{N_j+\hat{N}_j} c_{\hat{k}}\, \hat{\phi}_{\varepsilon_{\hat{k}}}^{(\hat{k})}(x,y) \right) W_j(x,y). \qquad (8)
$$

Remark 2.2. In [39] the authors limit their attention to CSRBFs. Here instead we will couple them with globally defined RBFs. In fact, even if specific supports of compactly supported kernels must be associated to the added constraints (see the second term in the right-hand side of (5)), we do not have any restrictions on the first term in the right-hand side of (5). Thus, we can use different types of RBFs. In what follows, we will point out that coupling RBFs and CSRBFs leads to a benefit in terms of accuracy.
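Problem (7) subject to (6) is a small constrained quadratic problem. One possible way to set it up is sketched below; this is not the authors' implementation, it assumes SciPy's SLSQP solver is available, and the matrix `B` (collecting the basis functions of (5) evaluated at the local nodes) is a placeholder built from synthetic data. Minimizing the squared sum instead of its square root leaves the minimizer unchanged.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of the local constrained problem (7) subject to (6): minimize the
# size of the added coefficients under the interpolation conditions and c >= 0.
# B is assumed to be the N_j x (N_j + Nhat_j) matrix whose columns evaluate the
# basis functions of (5) (RBFs plus the added CSRBFs) at the nodes (x_i, y_i).

def solve_constrained_local(B, f, n_nodes):
    n_total = B.shape[1]                      # N_j + Nhat_j coefficients
    def objective(c):                         # squared form of (7)
        return np.sum(c[n_nodes:] ** 2)
    constraints = [{"type": "eq", "fun": lambda c: B @ c - f}]   # (6): Rhat_j(x_i, y_i) = f_i
    bounds = [(0.0, None)] * n_total          # (6): c_i >= 0
    x0 = np.full(n_total, 1e-3)               # simple starting guess
    res = minimize(objective, x0, method="SLSQP", bounds=bounds,
                   constraints=constraints, options={"maxiter": 500})
    return res.x

# Tiny artificial example with a guaranteed nonnegative solution (placeholder data).
rng = np.random.default_rng(1)
B = rng.random((4, 8))                        # 4 nodes, 4 original + 4 added basis functions
c_true = rng.random(8)
f = B @ c_true                                # positive measurements consistent with c >= 0
c = solve_constrained_local(B, f, n_nodes=4)
print(np.linalg.norm(B @ c - f), c.min())     # residual of (6) and smallest coefficient
```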
3 The PC-PU approximant: algorithm and suitable selection of positive constraints

This section is devoted to describing the PC-PU algorithm. In particular, we focus on the suitable choice of the number of positive constraints. Even if the technique discussed here is robust enough to work in any domain $\Omega \subseteq \mathbb{R}^2$, for simplicity we consider $\Omega = [0,1]^2$.

3.1 Selection of positive constraints

Acting as explained in the previous section, we can ensure the positivity of the PU approximant. However, depending on the number of positive constraints, this might lead to a low accuracy. In [39], for a global RBF-based interpolant, $h$ random data are selected as constraints if $f_h$ is the minimum among all values $f_i$, $i = 1, \ldots, N$. This criterion guarantees neither a good accuracy nor the existence of a solution, and thus we design an alternative approach enabling us to select a suitable number of positive constraints.

The proposed method is based on an a priori error estimate. To this aim, several schemes have already been developed. Precisely, we focus on the so-called cross validation algorithm, see [15, 19]. A variant of such a method, known in the literature as Leave One Out Cross Validation (LOOCV), is detailed in [26]. This approach is usually used to find the optimal value of the shape parameter of the basis function. Here instead, for each PU subdomain we are interested in selecting a suitable number of constraints $\hat{N}_j$.

To avoid complexities, let us first consider an interpolation problem on $\Omega_j$ of the form (2). Moreover, let us define the $j$-th interpolant obtained leaving out the $i$-th data on $\Omega_j$ as

$$
R_j^i(x,y) = \sum_{k=1,\, k \neq i}^{N_j} c_k\, \phi_{\varepsilon}^{(k)}(x,y), \quad (x,y) \in \Omega_j,
$$

and let

$$
e_i = f_i - R_j^i(x_i,y_i),
$$

be the error at the $i$-th point. Then, the quality of the local fit is determined by some norm of the vector of errors $(e_1, \ldots, e_{N_j})^T$, obtained by removing in turn one of the data points and comparing the resulting fit with the known value at the removed point.

This implementation is computationally expensive. In fact, the matrix inverse, which requires $\mathcal{O}(N_j^3)$ operations, must be computed for each node. This leads to a total computational cost of $\mathcal{O}(N_j^4)$ operations. Thus, following [15, 26], we simplify the computation to a single formula. Precisely, we calculate

$$
e_i = \frac{c_i}{\left(A^{(j)}\right)^{-1}_{ii}}, \qquad (9)
$$

where $c_i$ is the $i$-th coefficient of the interpolant based on the full data set and $\left(A^{(j)}\right)^{-1}_{ii}$ is the $i$-th diagonal element of the inverse of the corresponding local interpolation matrix.

In our case, in order to guarantee a positive fit, we deal with an augmented local problem and therefore, as error estimate, we compute the following quantity

$$
\left( \hat{e}_1, \ldots, \hat{e}_{N_j+\hat{N}_j} \right) = \left( \frac{c_1}{\left(\hat{A}^{(j)}\right)^{-1}_{11}}, \ldots, \frac{c_{N_j+\hat{N}_j}}{\left(\hat{A}^{(j)}\right)^{-1}_{N_j+\hat{N}_j\, N_j+\hat{N}_j}} \right), \qquad (10)
$$

where the symmetric matrix $\hat{A}^{(j)}$ is defined as

$$
\hat{A}^{(j)} = \begin{pmatrix}
\phi_{\varepsilon}^{(1)}(x_1,y_1) & \cdots & \hat{\phi}_{\varepsilon_{N_j+\hat{N}_j}}^{(N_j+\hat{N}_j)}(x_1,y_1) \\
\vdots & \ddots & \vdots \\
\phi_{\varepsilon}^{(1)}(\hat{x}_{N_j+\hat{N}_j},\hat{y}_{N_j+\hat{N}_j}) & \cdots & \hat{\phi}_{\varepsilon_{N_j+\hat{N}_j}}^{(N_j+\hat{N}_j)}(\hat{x}_{N_j+\hat{N}_j},\hat{y}_{N_j+\hat{N}_j})
\end{pmatrix}. \qquad (11)
$$

Moreover, in order to stress the dependence of the error on $\hat{N}_j$, we use the notation

$$
\hat{\boldsymbol{e}}_j(\hat{N}_j) = \left( \hat{e}_1, \ldots, \hat{e}_{N_j+\hat{N}_j} \right). \qquad (12)
$$

Note that in our case the coefficients are not determined by directly computing the solution of a linear system, but they are found by solving (7), subject to (6). Thus, to be more precise, we should refer to this criterion as a LOOCV-like method, but in order to keep common notations, we will go on calling it simply LOOCV.

In this way, we are able to fix a criterion which enables us to select a suitable number of positive constraints. Specifically, focusing on the maximum norm, we compute (10) for $\hat{N}_j = 1, \ldots, N_j$. Thus, if on $\Omega_j$ a negative fit is observed, we add $\hat{N}_j$ positive constraints if

$$
\| \hat{\boldsymbol{e}}_j(\hat{N}_j) \|_{\infty} = \min_{k=1,\ldots,N_j} \| \hat{\boldsymbol{e}}_j(k) \|_{\infty}, \qquad (13)
$$

and the fit is positive. It is easy to see that we automatically ensure that the conditions (6) are satisfied; indeed, they are fulfilled at least for $\hat{N}_j = N_j$.
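Read operationally, (9)-(13) amount to a selection loop over the candidate numbers of added constraints. The sketch below is one way to organize it (not the paper's code): `build_augmented_problem` is a hypothetical helper assumed to return the augmented matrix (11) together with the coefficients obtained by solving (7) subject to (6); the additional check that the resulting local fit is nonnegative is omitted for brevity.

```python
import numpy as np

def loocv_errors(A, c):
    """Estimate (9)-(10): e_i = c_i / (A^{-1})_{ii}, without refitting N_j times."""
    inv_diag = np.diag(np.linalg.inv(A))
    return c / inv_diag

def select_num_constraints(build_augmented_problem, n_nodes):
    """Pick Nhat_j as in (13): the candidate with the smallest maximum LOOCV
    estimate among Nhat_j = 1, ..., N_j. `build_augmented_problem(nhat)` is a
    placeholder returning (Ahat^{(j)}, constrained coefficients)."""
    best_nhat, best_err = None, np.inf
    for nhat in range(1, n_nodes + 1):
        A_hat, c_hat = build_augmented_problem(nhat)
        err = np.max(np.abs(loocv_errors(A_hat, c_hat)))   # maximum norm of (12)
        if err < best_err:
            best_nhat, best_err = nhat, err
    return best_nhat, best_err
```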
The $\hat{N}_j$ new added data can be placed randomly within the $j$-th patch, but numerically we observe that selecting well distributed points on $\Omega_j$ leads to a better accuracy. As a consequence, on the subdomain $\Omega_j$ of centre $(\bar{x}_j, \bar{y}_j)$ and radius $\delta$, we consider $\hat{N}_j$ positive constraints, distributed as the seeds on a sunflower head, i.e. defined as [32, 35]

$$
(\hat{x}_k, \hat{y}_k) = (\bar{x}_j + u_k \cos\eta_k,\; \bar{y}_j + u_k \sin\eta_k), \qquad (14)
$$

$k = N_j+1, \ldots, N_j+\hat{N}_j$, where

$$
u_k = \delta \sqrt{\frac{k - 1/2}{\hat{N}_j - 1/2}} \qquad \text{and} \qquad \eta_k = \frac{4 k \pi}{1 + \sqrt{5}}.
$$

We conclude this section with an illustrative figure. A 2D view of a partition of unity structure covering a set of scattered data in the unit square is shown in Figure 2 (left); in the right frame we plot a set of points computed with (14).

Figure 2: Left: an illustrative example of PU subdomains covering the domain $\Omega = [0,1]^2$; the red dots represent a set of scattered data and the blue circles identify the PU subdomains. Right: an illustrative example of added points computed with (14).
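A possible implementation of the placement rule (14), using a local counter $k = 1, \ldots, \hat{N}_j$ inside the patch, is sketched below (illustrative values only, not the authors' code).

```python
import numpy as np

# Sketch of the constraint placement (14): Nhat_j points arranged like sunflower
# seeds inside the patch of centre (xc, yc) and radius delta.

def sunflower_points(n_hat, xc, yc, delta):
    k = np.arange(1, n_hat + 1)                        # local counter k = 1, ..., Nhat_j
    u = delta * np.sqrt((k - 0.5) / (n_hat - 0.5))     # radial distances u_k
    eta = 4.0 * k * np.pi / (1.0 + np.sqrt(5.0))       # angles eta_k
    return np.column_stack((xc + u * np.cos(eta), yc + u * np.sin(eta)))

pts = sunflower_points(n_hat=20, xc=0.3, yc=0.3, delta=0.1)
print(pts.shape)   # (20, 2): well distributed points inside the subdomain
```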
3.2 Description of the algorithm

To simplify the presentation, we summarize in the PC-PU Algorithm the steps needed to compute the PC-PU approximant.

At first, given a set of scattered data in $\Omega = [0,1]^2$, we construct the PU subdomains. They are circular patches centred at a grid of points $\mathcal{C}_d = \{(\bar{x}_i, \bar{y}_i),\, i = 1, \ldots, d\} \subseteq \Omega$ of radius

$$
\delta = \sqrt{\frac{1}{d}}. \qquad (15)
$$

According to [15], we choose the number of subdomains such that $N/d \approx 4$. The PC-PU approximant is then computed at a grid of $s$ evaluation points $\mathcal{E}_s = \{(\tilde{x}_i, \tilde{y}_i),\, i = 1, \ldots, s\}$, see Steps 1-3 of the PC-PU Algorithm.

Once the partition of unity structure is built, a local interpolation problem needs to be solved for each PU subdomain. Specifically, in the $j$-th local approximation problem, only those data sites and evaluation points belonging to $\Omega_j$ are involved, see Steps 4-6 of the PC-PU Algorithm. To find such points, the so-called kd-tree partitioning structures are commonly and widely used [3, 15]. However, here we use the efficient partitioning structure proposed in [11]. It is a multidimensional procedure which leads to a saving in terms of computational time with respect to the previous partitioning procedures proposed in [9, 10].

Once both the sets $\mathcal{X}_N$ and $\mathcal{E}_s$ are organized among the different patches, an interpolant of the form (2) is constructed, see Step 7 of the PC-PU Algorithm. Then, only if the local fit is negative, we add $\hat{N}_j$ positive constraints, as explained in the previous subsection, see Steps 8-9 of the PC-PU Algorithm. In this way, for each PU subdomain a positive local RBF approximant is computed for each evaluation point. Finally, the global fit (8) is obtained by accumulating all the $\hat{R}_j$ and $W_j$, see Steps 10-11 of the PC-PU Algorithm.

4 Numerical experiments

This section is devoted to showing, by means of extensive numerical simulations, the performance of the PC-PU approximant.

In [39] the authors point out that the global CSRBF constrained method possesses a better approximation behaviour than the CMQS approximant [1, 6]. Here, we compare our PC-PU fit with the original Shepard method. In fact, for this method the positivity-preserving property holds, while it is lost in the modified quadratic version [20]. Then, comparisons with the global method proposed in [39] and with the classical PU interpolant will be carried out. Obviously, since we perform a PU approximation, large data sets are considered. On the contrary, a global interpolant, such as the one outlined in [39], cannot handle large sets.

Experiments are performed considering several sets of random nodes contained in the unit square $\Omega = [0,1]^2$, a grid of $d = \lfloor \sqrt{N}/2 \rfloor^2$ subdomain centres and a grid of $s = 80 \times 80$ evaluation points.
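For readers who want to reproduce this kind of setup, the sketch below (an assumption-laden illustration, not the authors' code) builds random nodes, the grid of $d = \lfloor\sqrt{N}/2\rfloor^2$ subdomain centres, the patch radius (15) and an $80 \times 80$ evaluation grid, and assigns the nodes to the overlapping circular patches; the exact placement of the centre grid is an assumption.

```python
import numpy as np

# Illustrative experimental setup: random nodes in the unit square, a grid of
# d = floor(sqrt(N)/2)^2 subdomain centres, patch radius (15), and an 80 x 80
# evaluation grid; nodes are assigned to every patch whose centre is within delta.

rng = np.random.default_rng(42)
N = 1000
nodes = rng.random((N, 2))

m = int(np.floor(np.sqrt(N) / 2))            # centres per direction (assumed layout)
d = m ** 2                                   # number of subdomains, so that N/d ~ 4
g = (np.arange(m) + 0.5) / m                 # assumed equally spaced centre grid in [0, 1]
centres = np.array([(x, y) for x in g for y in g])
delta = np.sqrt(1.0 / d)                     # patch radius, equation (15)

t = np.linspace(0.0, 1.0, 80)                # evaluation grid of s = 80 x 80 points
eval_pts = np.array([(x, y) for x in t for y in t])

# Indices of the nodes falling in each circular patch Omega_j.
dist = np.linalg.norm(nodes[:, None, :] - centres[None, :, :], axis=-1)
patch_nodes = [np.where(dist[:, j] <= delta)[0] for j in range(d)]
print(d, round(delta, 3), eval_pts.shape,
      np.mean([len(p) for p in patch_nodes]))   # patches, radius, grid size, mean nodes per (overlapping) patch
```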
