Finite Difference Approximations and Numerical Green's Functions

Copyright 2012 Alfred Paul Steffens Jr
Preprint Draft, February 2012

Table of Contents

Chapter 1: Finite Differences and Differential Equations
  1. Finite Difference Approximation
  2. The Green's Function Comparison
  3. The Role of Interpolation
  4. Integration by Terms

Chapter 2: Interpolating Polynomials and Influence Functions
  1. Polynomials in 1 Dimension
  2. Polynomials in 2 Dimensions
  3. Visual Interpretation in 2 D
  4. 2 D Interpolation by Opposing Areas

Chapter 3: Introduction to Green's Functions
  1. Definition of Green's Functions
  2. Electrostatic Green's Function
  3. The Magic Rule

Chapter 4: A Simple Explanation of Green's Functions
  1. A Three Point Diagram
  2. The One-Dimensional Solution
  3. Visual Integration of the Boundary
  4. A 2D Equivalent of Visual Integration
  5. Inserting the Propagation Component
  6. Relationship to General Green's Functions

Chapter 5: Numerical Green's Functions
  1. The Pseudo Green's Function Approximation
  2. The Induced Boundary Charge Method

Chapter 1
Finite Differences and Differential Equations

§1.1. Finite Difference Approximation.

The finite difference approximation of a differential equation is based on the idea that the derivative of a continuous-valued function, being itself a continuous-valued function, can be approximated near some specific point by the ratio of differences of the function values near that point and the independent variable values near that point:

\[ \frac{dy}{dx} \approx \frac{y_1 - y_0}{x_1 - x_0} \tag{1.1.1} \]

By replacing the derivatives in the expression for the differential equation with a ratio of differences we obtain an algebra problem to be solved for some discrete (or sampled) function value, $y_j$, in terms of neighboring values $y_{j+1}, y_{j+2}, \ldots, y_{j-1}, y_{j-2}, \ldots$, such that

\[ y_j = a_{j-1} y_{j-1} + a_{j-2} y_{j-2} + \cdots + b_{j+1} y_{j+1} + b_{j+2} y_{j+2} + \cdots \tag{1.1.2} \]

where the expression in (1.1.2) is solved simultaneously at each point in the numerical model, as illustrated in the sketch at the end of this section. This can be seen in detail in the well-known works on numerical analysis. The continued reiteration of the resulting expressions may not converge (to a unique solution) or may not have enough accuracy to be useful. Hence, the major work in finite differencing centers on which approaches converge and on predicting the accuracy of the outcome.

The subject of numerical integration appears to be an art rather than a rigorous branch of mathematics. The literature traditionally contains various separate but related topics: interpolation and extrapolation, zeros of polynomials, matrices, numerical quadrature, etc. The numerical solution of differential equations has evolved into two separate areas of study. Ordinary differential equations are widely solved numerically in control systems engineering. The numerical solution of partial differential equations has remained the arcane, albeit moribund, foundation of the generally accepted finite element algorithm used in aircraft engineering. A unifying concept for numerical integration that can include both ordinary differential equations (ODE) and partial differential equations (PDE), if it exists, has not been generally accepted.
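As a minimal sketch of the reiterated weighted sum implied by (1.1.2), consider Jacobi-style sweeps for the model problem $u'' = f$ on a uniform grid with fixed end values. The grid size, source term, and stopping tolerance below are illustrative assumptions, not choices made in this text.

```python
import numpy as np

# Differencing u'' = f on a uniform grid gives, at each interior point,
#   u_j = (u_{j-1} + u_{j+1})/2 - h^2 f_j / 2,
# an instance of (1.1.2): a weighted sum of the neighbors plus a source
# contribution, reiterated at every point until the values settle.
n = 21                                 # grid size: an assumed choice
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(np.pi * x)                  # an assumed inhomogeneous term

u = np.zeros(n)                        # initial guess; u(0) = u(1) = 0 held fixed
for sweep in range(20000):
    u_new = u.copy()
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:]) - 0.5 * h**2 * f[1:-1]
    if np.max(np.abs(u_new - u)) < 1e-12:
        break                          # successive sweeps have settled
    u = u_new

# The exact solution is u = -sin(pi x)/pi^2; the settled iterate agrees
# with it to within the truncation error of the difference approximation.
print(np.max(np.abs(u + np.sin(np.pi * x) / np.pi**2)))
```

Note that the neighbor coefficients in the update (1/2 and 1/2) sum to unity, which anticipates the interpolation view developed in the following sections.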
§1.2. The Green's Function Comparison.

The subject of Green's functions is almost never discussed as an approach to the numerical solution of differential equations. More of an academic subject, Green's functions are typically studied in the mathematical physics works on electromagnetism or heat diffusion. An interesting aspect of this subject that goes unmentioned is that Green's functions have been found for both ordinary differential equations and partial differential equations. Perhaps due to the segregation between the academic cultures of physics, applied mathematics, and engineering, the subject of Green's functions has not attracted the amount of attention given to other techniques.

In the area of vector calculus the impulse response solution is synthesized from Green's Theorem as a Green's function solution. The solution of a differential equation in $U(x)$ is written in terms of the Green's function and its gradient,

\[ G(x,x'), \qquad \frac{dG(x,x')}{dx'} \tag{1.2.3} \]

as a surface integral over the boundary weighted by $dG/dx'$ and a volume integral of the inhomogeneous term weighted by $G(x,x')$:

\[ U(x) = -\oint_S U(x')\,\frac{dG(x,x')}{dx'}\,dx' + \int_V G(x,x')\,f(x')\,dx' \tag{1.2.4} \]

One immediate observation that can be made about the above expression is that the integrals can be interpreted as the continuous, infinitesimal equivalents to sums in the finite domain.

Proposition 1.2.1. The numerical solution to the simplest homogeneous differential equation, Laplace's equation, in a small region is the weighted sum of the neighboring function samples.

That is to say, the algebraic expression we obtain from "replacing" the continuous-valued derivatives with finite differences is actually an interpolation of the neighbor values, possibly perturbed by a source term. The coefficients of the neighbor values are interpolation functions (i.e., influence functions) in the same sense as the Lagrange interpolation formula:

\[ U(x') = \sum_{j=1}^{n} L_j(x)\,U_j + P(x')\,f(x') \tag{1.2.5} \]

The implication here is that the two terms in equation (1.2.5) correspond to the two terms in the Green's function magic rule, (1.2.4). Furthermore, the numerical equivalent of the Green's function gradient, $dG(x,x')/dx'$, is the interpolation function at the neighbor point $j$. The numerical equivalent of the Green's function itself is the factor $P(x')$, which is a synthesis of the interpolation functions. More precisely, the factor $P(x')$ would be the numerical Green's function, but here the source and observation locations coincide. In order for the factor $P$ to be named a Green's function it must be capable of propagating the source effect at $x'$ to the function value at $x$; that is, $P$ must be made a function of both $x'$ and $x$:

\[ P(x') \longrightarrow P(x,x') \]

For the purposes of numerical integration that consideration may or may not be needed. The factor $P(x')$ is used to invert the effect of the source term at $x'$ to find the function value, $U$, at $x'$.

§1.3. The Role of Interpolation.

The importance of interpolation in numerical integration is well known. However, it may be useful to view interpolation as the central concept of numerical integration. Consider the following observation taken from the calculus of finite differences (Boole):

Proposition 1.3.2. The Nth-order finite difference, equivalent to differencing the differential equation $d^n y/dx^n$, is a generating function for producing interpolation functions.

In other words, the algebraic expression obtained from replacing the Nth-order derivative with differences as in equation (1.1.1) is an interpolation. The coefficients of the neighboring function values are interpolation functions. It may be worthwhile, in finding support for this idea, to review the D operator in George Boole's 1860 book, A Treatise on the Calculus of Finite Differences.
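Proposition 1.2.1 can be checked directly in one dimension. The sketch below, under assumed (non-uniform) sample points, differences Laplace's equation $u'' = 0$ on a three-point stencil and confirms that the middle sample is a weighted sum of its neighbors whose weights are exactly the linear Lagrange interpolation functions, summing to unity.

```python
import numpy as np

# Three assumed, non-uniformly spaced abscissae.
x_left, x_mid, x_right = 0.0, 0.7, 1.8
h_l = x_mid - x_left
h_r = x_right - x_mid

# The non-uniform second difference of u, set to zero (Laplace's equation)
# and solved for the middle sample, gives
#   u_mid = (h_r * u_left + h_l * u_right) / (h_l + h_r),
# i.e., a weighted sum of the two neighbors.
w_left = h_r / (h_l + h_r)
w_right = h_l / (h_l + h_r)

# The linear Lagrange functions anchored at x_left and x_right,
# evaluated at the middle point x_mid.
L_left = (x_right - x_mid) / (x_right - x_left)
L_right = (x_mid - x_left) / (x_right - x_left)

print(np.isclose(w_left, L_left), np.isclose(w_right, L_right))  # True True
print(w_left + w_right)   # 1.0 -- the weights sum to unity
```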
If we entertain the ideas in Proposition 1.2.1 and Proposition 1.3.2 we must include the general problem of an arbitrary differential equation, which is the sum of derivative terms in decreasing order. Consider first the highest-order derivative, and let its solution be a weighted sum of neighbor values and interpolation functions. What will be the effect of adding the lower-order derivative terms? The following proposition is an attempt to provide one answer to that question:

Proposition 1.3.3. The sum of decreasing-order derivatives in a finite difference equation contributes adjustments to the interpolating functions obtained from the highest-order derivative.

In other words, given an arbitrary differential equation approximated by a finite difference solution, the solution is still a weighted sum of the boundary neighbor points. However, the interpolation functions that would have resulted from only the highest derivative must be modified to satisfy the additional requirement of the lower derivatives.

§1.4. Integration by Terms.

We discuss the finite difference approach as an approximation, and we say that the derivatives are "replaced" by their finite difference equivalents. But this is merely common terminology, used for convenience. In the actual calculation of a solution it would be dangerous to overlook the fact that a derivative is not "replaced" by a finite difference but rather is successively integrated. The first-order difference is obtained from the first-order derivative by the Fundamental Theorem of Calculus:

\[ y_1 - y_0 = \int_0^1 \frac{dy}{dx'}\,dx' \tag{1.4.6} \]

We must assume that y(x) is an analytic function over the region, so that it has a convergent Taylor series at x and, over a smaller and smaller region, its geometric curve approaches a straight line.

The reason for emphasizing this obvious fact is that the reduction of a differential equation to a finite difference should proceed according to a linearized integration over a differential region and not by trivial substitution by differences. For example, one of the terms of the differential equation could be a parameter that is a function of x, say b(x). Its contribution to the difference equation would be a quadrature:

\[ \int_0^1 b(x')\,dx' \approx \frac{1}{2}(b_1 + b_0) \tag{1.4.7} \]

The quadrature might be the average shown above, or it might be more useful to obtain a higher-order quadrature (a numerical sketch follows at the end of this section). The proper decision may be guided in light of Proposition 1.3.3.

If we design a finite difference solution to be a weighted sum then the result of equation (1.2.5) could be adjusted to that end. Essentially, a weighted sum is usually defined such that the sum of the weights is unity. During the reduction to finite differences the resulting interpolation functions should be checked that they sum to unity. Additionally, the resulting interpolation functions should be considered influence functions. That is, the weight function of each neighbor value should be unity at that point and zero at all other neighbor points.
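The following sketch illustrates the quadrature reduction of (1.4.7) for an assumed coefficient $b(x)$ over an assumed cell (kept at explicit width here, rather than the unit cell of (1.4.7)), and compares the two-point average against a higher-order (Simpson) quadrature.

```python
# Reduction of a variable coefficient b(x) over a cell by quadrature,
# per (1.4.7), rather than by pointwise substitution.
b = lambda t: 1.0 + t**2          # an assumed coefficient
x0, x1 = 0.3, 0.5                 # assumed cell endpoints

exact = (x1 + x1**3 / 3.0) - (x0 + x0**3 / 3.0)   # integral of b over the cell
trapezoid = 0.5 * (x1 - x0) * (b(x0) + b(x1))     # the average of (1.4.7)

# A higher-order quadrature (Simpson's rule), exact for this quadratic b(x).
xm = 0.5 * (x0 + x1)
simpson = (x1 - x0) / 6.0 * (b(x0) + 4.0 * b(xm) + b(x1))

print(exact, trapezoid, simpson)  # trapezoid is close; Simpson matches exactly
```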
Chapter 2
Interpolating Polynomials and Influence Functions

§2.1. Polynomials in 1 Dimension.

For a more detailed, introductory account of polynomial interpolation see Elementary Theory and Application of Numerical Analysis by David G. Moursund and Charles S. Duris, and Numerical Methods for Scientists and Engineers, second edition, by R. W. Hamming.

Polynomial interpolation is the operation of finding an nth-order polynomial, $P_n(x)$, that passes through $n+1$ known function points, $f(x_1), f(x_2), \ldots, f(x_n), f(x_{n+1})$. The interpolating polynomial is known to be solved by two different but equivalent approaches: the Lagrange interpolation formula, and the Vandermonde matrix.

The first of these is the Lagrange interpolation formula, written

\[ P_n(x) = \sum_{j=0}^{n} L_j(x)\,f(x_j) \tag{2.1.1} \]

where the coefficients, $L_j(x)$, are the Lagrange interpolation functions. The interpolation functions are themselves polynomials, each multiplying a corresponding data point, $f(x_j)$, of the data set to be "fitted." The property of each Lagrange function, $L_j(x)$, is that it is unity at the location $x_j$ corresponding to the data point $f(x_j)$ which it multiplies, and is zero at all other data points, $f(x_k)$, $k \neq j$. In other words, this polynomial should be trivial to derive, because the zeros of the polynomial are known. As an example, consider an interpolating polynomial between three points, $f(x_1)$, $f(x_2)$, and $f(x_3)$. To find the Lagrange function $L_3(x)$ that multiplies the point at $x_3$ we need only construct a polynomial whose zeros are at $x_1$ and $x_2$,

\[ L_3(x) = (x - x_1)(x - x_2) \]

and to make it unity at $x_3$ simply calculate the value at $x = x_3$ and divide by the result:

\[ L_3(x) = \frac{(x - x_1)(x - x_2)}{(x_3 - x_1)(x_3 - x_2)} \]

The second approach is to obtain the interpolating polynomial from the solution of the Vandermonde matrix. For a set of $n+1$ data points construct an nth-degree polynomial

\[ P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \tag{2.1.2} \]

whose coefficients are to be solved by matching the polynomial at the $n+1$ data points. The coefficients $a_j$ are the solution to a set of simultaneous equations, equating the polynomial at each of the $n+1$ known function values:

\[
\begin{aligned}
P_n(x_1) &= a_0 + a_1 x_1 + a_2 x_1^2 + \cdots + a_n x_1^n \\
P_n(x_2) &= a_0 + a_1 x_2 + a_2 x_2^2 + \cdots + a_n x_2^n \\
&\;\;\vdots \\
P_n(x_n) &= a_0 + a_1 x_n + a_2 x_n^2 + \cdots + a_n x_n^n
\end{aligned}
\]

which is equivalent to the matrix equation

\[
\begin{pmatrix}
1 & x_1 & x_1^2 & x_1^3 & \cdots & x_1^n \\
1 & x_2 & x_2^2 & x_2^3 & \cdots & x_2^n \\
\vdots & & & & & \vdots \\
1 & x_n & x_n^2 & x_n^3 & \cdots & x_n^n
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix}
=
\begin{pmatrix} P_n(x_1) \\ P_n(x_2) \\ \vdots \\ P_n(x_n) \end{pmatrix}
\tag{2.1.3}
\]

The matrix on the left-hand side of (2.1.3) is the Vandermonde matrix.

Another way to view these two different approaches is that the interpolating polynomial can be constructed from a set of influence functions (i.e., the Lagrange functions) that are like propagators, each contributing the influence of the data point it multiplies. The polynomial found by solving the Vandermonde matrix equation (2.1.3) can likewise be constructed by using the Vandermonde matrix equation at each of the data points individually. Say, for the influence function at $x_1$ the equation would be

\[
\begin{pmatrix}
1 & x_1 & x_1^2 & x_1^3 & \cdots & x_1^n \\
1 & x_2 & x_2^2 & x_2^3 & \cdots & x_2^n \\
\vdots & & & & & \vdots \\
1 & x_n & x_n^2 & x_n^3 & \cdots & x_n^n
\end{pmatrix}
\begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_n \end{pmatrix}
=
\begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\tag{2.1.4}
\]

to obtain the influence function, $L_1(x)$,

\[ L_1(x) = b_0 + b_1 x + b_2 x^2 + \cdots + b_n x^n \tag{2.1.5} \]

§2.2. Polynomials in 2 Dimensions.

Suppose we want an interpolating polynomial between three points in the plane, $(x_1,y_1)$, $(x_2,y_2)$, and $(x_3,y_3)$. We can follow the same recipe of matching polynomial coefficients by setting values at the three points. The polynomial

\[ P(x,y) = ax + by + c \tag{2.2.6} \]

is found using the set of simultaneous equations

\[
\begin{pmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{pmatrix}
\begin{pmatrix} a \\ b \\ c \end{pmatrix}
=
\begin{pmatrix} P_1 \\ P_2 \\ P_3 \end{pmatrix}
\tag{2.2.7}
\]

The solution is

\[
\begin{aligned}
a &= \frac{P_1(y_3 - y_2) + P_2(y_1 - y_3) + P_3(y_2 - y_1)}{x_1(y_3 - y_2) + x_2(y_1 - y_3) + x_3(y_2 - y_1)} \\[4pt]
b &= -\,\frac{P_1(x_3 - x_2) + P_2(x_1 - x_3) + P_3(x_2 - x_1)}{x_1(y_3 - y_2) + x_2(y_1 - y_3) + x_3(y_2 - y_1)} \\[4pt]
c &= \frac{P_1(x_3 y_2 - x_2 y_3) + P_2(x_1 y_3 - x_3 y_1) + P_3(x_2 y_1 - x_1 y_2)}{x_1(y_3 - y_2) + x_2(y_1 - y_3) + x_3(y_2 - y_1)}
\end{aligned}
\tag{2.2.8}
\]

This is a linear, two-dimensional interpolation in x and y.
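The equivalence of the two one-dimensional approaches is easy to confirm numerically. The sketch below, on assumed data, evaluates the Lagrange form (2.1.1) directly and compares it with the polynomial whose coefficients solve the Vandermonde system (2.1.3).

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.5, 4.0])   # assumed abscissae
fs = np.array([1.0, 2.0, 0.5, 3.0])   # assumed function samples

def lagrange_eval(x, xs, fs):
    """Evaluate the Lagrange form (2.1.1) at x."""
    total = 0.0
    for j in range(len(xs)):
        L_j = 1.0                      # build L_j(x): unity at x_j, zero at x_k
        for k in range(len(xs)):
            if k != j:
                L_j *= (x - xs[k]) / (xs[j] - xs[k])
        total += L_j * fs[j]
    return total

# Vandermonde route: solve (2.1.3) for the coefficients a_0 ... a_n.
V = np.vander(xs, increasing=True)     # rows are 1, x, x^2, x^3
a = np.linalg.solve(V, fs)

x_test = 1.7                           # an arbitrary evaluation point
p_vandermonde = sum(a[k] * x_test**k for k in range(len(a)))
print(lagrange_eval(x_test, xs, fs), p_vandermonde)   # the two agree
```

Replacing fs by each unit vector in turn, as in (2.1.4), recovers the influence functions $L_j(x)$ of (2.1.5) one at a time.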
The first observation we make is how easy this formula was to generalize from the one-dimensional case of polynomial interpolation. In the Vandermonde matrix, the y coordinates were simply added to each simultaneous equation. It would seem that the above result provides a more direct alternative to the more common bilinear interpolation method. The points did not need to be arranged with rectangular grid spacing, which leads to the following observation:

Proposition 2.2.1. Interpolation between points of a scalar field depends on distance, not direction.

Clearly, the above two-dimensional interpolation can also be decomposed into a set of influence functions that weight the data values in the calculation. All that is needed is to recalculate the Vandermonde matrix to be unity at the coordinate $(x_j, y_j)$ and zero at all others. Although it seems obvious from the examples given, it would be worthwhile to remember the following common property of these interpolations:

Proposition 2.2.2. Interpolation can be written as a weighted sum.

§2.3. Visual Interpretation in 2 D.

The calculation in equation (2.2.8) becomes impractical, even on a computer, for N greater than 3. The need for more efficient methods for calculating interpolation coefficients leads to a re-examination of polynomial interpolation from a geometric perspective. Your mathematical intuition will tell you that the above formula appears to contain cross products. In fact, the denominator is the cross product of the vector $\vec{R}_{21}$ connecting point 2 to point 1 and the vector $\vec{R}_{32}$ connecting point 3 to point 2. In other words, the denominator is equal to twice the area of the triangle. But the interpolation is a weighted sum, and the normalization of a weighted sum is usually the sum of the weights. So, immediately we should expect that the weights are themselves areas.

To see that the influence functions are areas, consider the simple example when the triangle encloses the origin. Now each corner point of the triangle can be visualized as a vector from the origin to the point. For example, point $(x_1, y_1)$ is a vector $\vec{R}_1$, and so on. Now suppose that we choose to calculate the value at the origin itself. Then $x = 0$, $y = 0$, and the only term remaining to calculate is the coefficient c. Rewriting c as

\[
c = P_1\,\frac{x_3 y_2 - x_2 y_3}{2A} + P_2\,\frac{x_1 y_3 - x_3 y_1}{2A} + P_3\,\frac{x_2 y_1 - x_1 y_2}{2A}
\]

and noting that the numerators of the terms are $\vec{R}_2 \times \vec{R}_3$, $\vec{R}_3 \times \vec{R}_1$, and $\vec{R}_1 \times \vec{R}_2$, it becomes evident that the coefficient multiplying each point is twice the area of the triangular sector opposing it. Then the coefficient c can be rewritten as

\[
c = P_1\,\frac{A_{23}}{A} + P_2\,\frac{A_{13}}{A} + P_3\,\frac{A_{12}}{A} \tag{2.3.9}
\]

Figure 2.1.

Although we set the coordinates x and y to zero, the result still holds when they are nonzero. The coefficients a and b are offsets to the calculation when x and y are not at the origin. There is a parallel here to the Lagrange interpolation formulas in one dimension. The analogous linear case in one dimension is just a line segment divided into two segments; the influence functions for the line segments are the lengths of the opposing line segments.
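The opposing-area claim can be checked at a general interior point, not only at the origin. The sketch below, on an assumed triangle with assumed vertex values, computes the plane of (2.2.6)-(2.2.7) and then recovers the same interpolated value as an area-weighted sum in the spirit of (2.3.9), each weight being the sub-triangle area opposite a vertex divided by the total area.

```python
import numpy as np

def tri_area(p, q, r):
    """Signed triangle area: half the cross product (q - p) x (r - p)."""
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

verts = np.array([[0.0, 0.0], [3.0, 0.5], [1.0, 2.5]])   # assumed vertices
P = np.array([1.0, 4.0, -2.0])                           # assumed vertex values
obs = np.array([1.3, 0.9])                               # assumed interior point

# Route 1: solve the simultaneous equations (2.2.7) for the plane ax + by + c.
M = np.column_stack([verts[:, 0], verts[:, 1], np.ones(3)])
a, b, c = np.linalg.solve(M, P)
plane_value = a * obs[0] + b * obs[1] + c

# Route 2: opposing areas.  The weight of each vertex is the area of the
# sub-triangle opposite it (spanned by the observation point and the other
# two vertices) divided by the total area.
A = tri_area(verts[0], verts[1], verts[2])
w = np.array([tri_area(obs, verts[1], verts[2]),
              tri_area(verts[0], obs, verts[2]),
              tri_area(verts[0], verts[1], obs)]) / A

print(plane_value, w @ P)   # the two routes agree
print(w.sum())              # 1.0 -- the weights sum to unity
```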
§2.4. 2 D Interpolation by Opposing Areas.

The area opposite to the anchoring boundary point is the minimum dimensional quantity which can make the influence function vary between 1 at the anchor point and 0 at all other points, and go to zero at the boundary lines. For polygons with the number of vertices, N, greater than three an opposing area can no longer be uniquely identified. Consider a four-sided polygon with an observation point $(x, y)$ in its interior. The line segments between the observation point and the vertices subdivide the polygon into four triangles, $S_1$, $S_2$, $S_3$, and $S_4$.

Figure 2.1.

For the influence function corresponding to vertex 1 a function is needed which becomes unity at point 1 and goes to zero at the other three points. For the simpler case of the triangle we could see that each point could be matched to a corresponding sector opposite to the interior point, P. But with a four-sided figure there is
