ML. Numerical Methods
142 Pages · 2013 · 0.96 MB · English

Contract POSDRU/86/1.2/S/62485
Ioan Gavrea · Mircea Ivan
ML. Numerical Methods
Universitatea Tehnică din Cluj-Napoca · Universitatea din Craiova · Universitatea Tehnică ”Gheorghe Asachi” din Iași

Contents

ML.1 Numerical Methods in Linear Algebra
    ML.1.1 Special Types of Matrices
    ML.1.2 Norms of Vectors and Matrices
    ML.1.3 Error Estimation
    ML.1.4 Matrix Equations. Pivoting Elimination
    ML.1.5 Improved Solutions of Matrix Equations
    ML.1.6 Partitioning Methods for Matrix Inversion
    ML.1.7 LU Factorization
    ML.1.8 Doolittle's Factorization
    ML.1.9 Choleski's Factorization Method
    ML.1.10 Iterative Techniques for Solving Linear Systems
    ML.1.11 Eigenvalues and Eigenvectors
    ML.1.12 Characteristic Polynomial: Le Verrier Method
    ML.1.13 Characteristic Polynomial: Fadeev-Frame Method
ML.2 Solutions of Nonlinear Equations
    ML.2.1 Introduction
    ML.2.2 Method of Successive Approximation
    ML.2.3 The Bisection Method
    ML.2.4 The Newton-Raphson Method
    ML.2.5 The Secant Method
    ML.2.6 False Position Method
    ML.2.7 The Chebyshev Method
    ML.2.8 Numerical Solutions of Nonlinear Systems of Equations
    ML.2.9 Newton's Method for Systems of Nonlinear Equations
    ML.2.10 Steepest Descent Method
ML.3 Elements of Interpolation Theory
    ML.3.1 Lagrange Interpolation
    ML.3.2 Some Forms of the Lagrange Polynomial
    ML.3.3 Some Properties of the Divided Difference
    ML.3.4 Mean Value Properties in Lagrange Interpolation
    ML.3.5 Approximation by Interpolation
    ML.3.6 Hermite-Lagrange Interpolating Polynomial
    ML.3.7 Finite Differences
    ML.3.8 Interpolation of Functions of Several Variables
    ML.3.9 Scattered Data Interpolation. Shepard's Method
    ML.3.10 Splines
    ML.3.11 B-splines
    ML.3.12 Problems
ML.4 Elements of Numerical Integration
    ML.4.1 Richardson's Extrapolation
    ML.4.2 Numerical Quadrature
    ML.4.3 Error Bounds in the Quadrature Methods
    ML.4.4 Trapezoidal Rule
    ML.4.5 Richardson's Deferred Approach to the Limit
    ML.4.6 Romberg Integration
    ML.4.7 Newton-Cotes Formulas
    ML.4.8 Simpson's Rule
    ML.4.9 Gaussian Quadrature
ML.5 Elements of Approximation Theory
    ML.5.1 Discrete Least Squares Approximation
    ML.5.2 Orthogonal Polynomials and Least Squares Approximation
    ML.5.3 Rational Function Approximation
    ML.5.4 Padé Approximation
    ML.5.5 Trigonometric Polynomial Approximation
    ML.5.6 Fast Fourier Transform
    ML.5.7 The Bernstein Polynomial
    ML.5.8 Bézier Curves
    ML.5.9 The METAFONT Computer System
ML.6 Integration of Ordinary Differential Equations
    ML.6.1 Introduction
    ML.6.2 The Euler Method
    ML.6.3 The Taylor Series Method
    ML.6.4 The Runge-Kutta Method
    ML.6.5 The Runge-Kutta Method for Systems of Equations
ML.7 Integration of Partial Differential Equations
    ML.7.1 Introduction
    ML.7.2 Parabolic Partial-Differential Equations
    ML.7.3 Hyperbolic Partial Differential Equations
    ML.7.4 Elliptic Partial Differential Equations
ML.7 Self Evaluation Tests
    ML.7.1 Tests
    ML.7.2 Answers to Tests
Index
Bibliography

ML.1 Numerical Methods in Linear Algebra

ML.1.1 Special Types of Matrices

Let $\mathcal{M}_{m,n}(\mathbb{R})$ be the set of all $m \times n$ matrices with real entries, where $m, n$ are positive integers (we write $\mathcal{M}_n(\mathbb{R}) := \mathcal{M}_{n,n}(\mathbb{R})$).

DEFINITION ML.1.1.1. A matrix $A \in \mathcal{M}_n(\mathbb{R})$ is said to be strictly diagonally dominant when its entries satisfy the condition

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}| \quad \text{for each } i = 1, \dots, n.$$

THEOREM ML.1.1.2. A strictly diagonally dominant matrix is nonsingular.

Proof. Consider the linear system $AX = 0$, $A \in \mathcal{M}_n(\mathbb{R})$, and suppose, to the contrary, that it has a nonzero solution $X = [x_1, \dots, x_n]^t \in \mathcal{M}_{n,1}(\mathbb{R})$. Let $k$ be an index such that

$$|x_k| = \max_{1 \le j \le n} |x_j|.$$

Since $\sum_{j=1}^{n} a_{kj} x_j = 0$, we have

$$a_{kk} x_k = -\sum_{\substack{j=1 \\ j \ne k}}^{n} a_{kj} x_j.$$

This implies

$$|a_{kk}| \le \sum_{\substack{j=1 \\ j \ne k}}^{n} |a_{kj}| \frac{|x_j|}{|x_k|} \le \sum_{\substack{j=1 \\ j \ne k}}^{n} |a_{kj}|.$$

This contradicts the strict diagonal dominance of $A$. Consequently, the only solution of $AX = 0$ is $X = 0$, a condition equivalent to the nonsingularity of $A$.

Another special class of matrices is called positive definite.

DEFINITION ML.1.1.3. A matrix $A \in \mathcal{M}_n(\mathbb{R})$ is said to be positive definite if

$$\det(X^t A X) > 0 \quad \text{for every } X \in \mathcal{M}_{n,1}(\mathbb{R}),\ X \ne 0.$$

Note that, for $X = [x_1, \dots, x_n]^t$, the product $X^t A X$ is a $1 \times 1$ matrix, so its determinant is simply its single entry:

$$\det(X^t A X) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_j x_i.$$

Using Definition ML.1.1.3 directly to determine whether a matrix is positive definite can be difficult. The next result provides some conditions that can be used to eliminate certain matrices from consideration.
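As a small illustration of Definition ML.1.1.1, the row-by-row test for strict diagonal dominance can be sketched in Python. The function name and the sample matrices below are our own, not from the text:

```python
def is_strictly_diagonally_dominant(a):
    """Return True if |a[i][i]| > sum of |a[i][j]| over j != i, for every row i."""
    n = len(a)
    for i in range(n):
        off_diag = sum(abs(a[i][j]) for j in range(n) if j != i)
        if abs(a[i][i]) <= off_diag:
            return False
    return True

# This matrix passes the test, so by Theorem ML.1.1.2 it is nonsingular.
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
print(is_strictly_diagonally_dominant(A))  # True

# Here row 0 fails (|1| <= |2|), so the test gives no nonsingularity guarantee.
B = [[1.0, 2.0],
     [3.0, 4.0]]
print(is_strictly_diagonally_dominant(B))  # False
```

Note that the converse of Theorem ML.1.1.2 does not hold: a matrix that fails the test, like B above, may still be nonsingular.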
THEOREM ML.1.1.4. If the matrix $A \in \mathcal{M}_n(\mathbb{R})$ is symmetric and positive definite, then:
(1) $A$ is nonsingular;
(2) $a_{kk} > 0$ for each $k = 1, \dots, n$;
(3) $\max_{1 \le k \ne j \le n} |a_{kj}| < \max_{1 \le i \le n} |a_{ii}|$;
(4) $(a_{ij})^2 < a_{ii} a_{jj}$, for each $i, j = 1, \dots, n$, $i \ne j$.

Proof. (1) If $X \ne 0$ is a vector which satisfies $AX = 0$, then $\det(X^t A X) = 0$. This contradicts the assumption that $A$ is positive definite. Consequently, $AX = 0$ has only the zero solution and $A$ is nonsingular.

(2) For an arbitrary $k$, let $X = [x_1, \dots, x_n]^t$ be defined by

$$x_j = \begin{cases} 1, & \text{when } j = k, \\ 0, & \text{when } j \ne k, \end{cases} \qquad j = 1, 2, \dots, n.$$

Since $X \ne 0$, $a_{kk} = \det(X^t A X) > 0$.

(3) For $k \ne j$ define $X = [x_1, \dots, x_n]^t$ by

$$x_i = \begin{cases} 0, & \text{when } i \ne j \text{ and } i \ne k, \\ -1, & \text{when } i = j, \\ 1, & \text{when } i = k. \end{cases}$$

Since $X \ne 0$,

$$a_{jj} + a_{kk} - a_{jk} - a_{kj} = \det(X^t A X) > 0.$$

But $A^t = A$, so $a_{jk} = a_{kj}$ and

$$2a_{kj} < a_{jj} + a_{kk}.$$

Define $Z = [z_1, \dots, z_n]^t$ by

$$z_i = \begin{cases} 0, & \text{when } i \ne j \text{ and } i \ne k, \\ 1, & \text{when } i = j \text{ or } i = k. \end{cases}$$

Then $\det(Z^t A Z) > 0$, so $-2a_{kj} < a_{kk} + a_{jj}$. We obtain

$$|a_{kj}| < \frac{a_{kk} + a_{jj}}{2} \le \max_{1 \le i \le n} |a_{ii}|.$$

Hence $\max_{1 \le k \ne j \le n} |a_{kj}| < \max_{1 \le i \le n} |a_{ii}|$.

(4) For $i \ne j$, define $X = [x_1, \dots, x_n]^t$ by

$$x_k = \begin{cases} 0, & \text{when } k \ne j \text{ and } k \ne i, \\ \alpha, & \text{when } k = i, \\ 1, & \text{when } k = j, \end{cases}$$

where $\alpha$ represents an arbitrary real number. Since $X \ne 0$,

$$0 < \det(X^t A X) = a_{ii}\alpha^2 + 2a_{ij}\alpha + a_{jj}.$$

As a quadratic polynomial in $\alpha$ with no real roots, its discriminant must be negative. Thus

$$4(a_{ij}^2 - a_{ii} a_{jj}) < 0,$$

and $a_{ij}^2 < a_{ii} a_{jj}$.

ML.1.2 Norms of Vectors and Matrices

DEFINITION ML.1.2.1. A vector norm is a function $\|\cdot\| : \mathcal{M}_{n,1}(\mathbb{R}) \to \mathbb{R}$ satisfying the following conditions:
(1) $\|X\| \ge 0$ for all $X \in \mathcal{M}_{n,1}(\mathbb{R})$;
(2) $\|X\| = 0$ if and only if $X = 0$;
(3) $\|\alpha X\| = |\alpha| \|X\|$ for all $\alpha \in \mathbb{R}$ and $X \in \mathcal{M}_{n,1}(\mathbb{R})$;
(4) $\|X + Y\| \le \|X\| + \|Y\|$ for all $X, Y \in \mathcal{M}_{n,1}(\mathbb{R})$.
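Conditions (2)-(4) of Theorem ML.1.1.4 are necessary but not sufficient, so they can only rule matrices out. A minimal Python sketch of such a screen follows; the function name and sample matrices are our own, and a return value of True means only that the matrix passed the screen, not that it is positive definite:

```python
def may_be_positive_definite(a):
    """Screen a symmetric matrix against the necessary conditions of
    Theorem ML.1.1.4; return False as soon as one of them fails."""
    n = len(a)
    # Condition (2): every diagonal entry must be positive.
    if any(a[k][k] <= 0 for k in range(n)):
        return False
    max_diag = max(abs(a[i][i]) for i in range(n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # Condition (3): every off-diagonal entry is smaller in
                # magnitude than the largest diagonal entry.
                if abs(a[i][j]) >= max_diag:
                    return False
                # Condition (4): a_ij^2 < a_ii * a_jj.
                if a[i][j] ** 2 >= a[i][i] * a[j][j]:
                    return False
    return True

print(may_be_positive_definite([[2.0, 1.0], [1.0, 2.0]]))  # True  (passes the screen)
print(may_be_positive_definite([[1.0, 3.0], [3.0, 1.0]]))  # False (fails conditions (3) and (4))
```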
The most common vector norms for $n$-dimensional column vectors with real coefficients, $X = [x_1, \dots, x_n]^t \in \mathcal{M}_{n,1}(\mathbb{R})$, are defined by:

$$\|X\|_1 = \sum_{i=1}^{n} |x_i|, \qquad \|X\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}, \qquad \|X\|_\infty = \max_{1 \le i \le n} |x_i|.$$

The norm of a vector gives a measure for the distance between an arbitrary vector and the zero vector; the distance between two vectors can be defined as the norm of the difference of the vectors. The concept of distance in $\mathcal{M}_{n,1}(\mathbb{R})$ is also used to define the limit of a sequence of vectors.

DEFINITION ML.1.2.2. A sequence $(X^{(k)})_{k=1}^{\infty}$ of vectors in $\mathcal{M}_{n,1}(\mathbb{R})$ is said to be convergent to a vector $X \in \mathcal{M}_{n,1}(\mathbb{R})$, with respect to the norm $\|\cdot\|$, if

$$\lim_{k \to \infty} \|X^{(k)} - X\| = 0.$$

The notion of vector norm will be extended to matrices.

DEFINITION ML.1.2.3. A matrix norm on the set $\mathcal{M}_n(\mathbb{R})$ is a function $\|\cdot\| : \mathcal{M}_n(\mathbb{R}) \to \mathbb{R}$ satisfying the conditions:
(1) $\|A\| \ge 0$;
(2) $\|A\| = 0$ if and only if $A = 0$;
(3) $\|\alpha A\| = |\alpha| \|A\|$;
(4) $\|A + B\| \le \|A\| + \|B\|$;
(5) $\|AB\| \le \|A\| \cdot \|B\|$,
for all matrices $A, B \in \mathcal{M}_n(\mathbb{R})$ and any real number $\alpha$.

It is not difficult to show that if $\|\cdot\|$ is a vector norm on $\mathcal{M}_{n,1}(\mathbb{R})$, then

$$\|A\| := \max_{\|X\| = 1} \|AX\|$$

is a matrix norm, called the natural norm or the induced matrix norm associated with the vector norm. In this text, all matrix norms will be assumed to be natural matrix norms unless specified otherwise.
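The three vector norms above, and the induced matrix norms they generate in the 1- and infinity-cases, can be sketched in Python. The closed forms used for the matrix norms (maximum absolute column sum for the induced 1-norm, maximum absolute row sum for the induced infinity-norm) are standard facts that this excerpt does not derive; the function names are our own:

```python
import math

def norm1(x):
    return sum(abs(v) for v in x)

def norm2(x):
    return math.sqrt(sum(v * v for v in x))

def norm_inf(x):
    return max(abs(v) for v in x)

def matrix_norm_1(a):
    """Induced 1-norm: maximum absolute column sum."""
    n_cols = len(a[0])
    return max(sum(abs(row[j]) for row in a) for j in range(n_cols))

def matrix_norm_inf(a):
    """Induced infinity-norm: maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in a)

x = [1.0, -2.0, 2.0]
print(norm1(x), norm2(x), norm_inf(x))  # 5.0 3.0 2.0

A = [[1.0, -2.0],
     [3.0,  4.0]]
print(matrix_norm_1(A))    # 6.0  (columns sum to 4 and 6)
print(matrix_norm_inf(A))  # 7.0  (rows sum to 3 and 7)
```

The induced 2-norm has no such elementary row/column formula; it equals the largest singular value of A and is usually computed with eigenvalue routines.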

