
Notes on Mathematics - 102 (linear algebra) PDF

255 Pages · 2012 · 1.697 MB · English

Preview: Notes on Mathematics - 102 (linear algebra)

Notes on Mathematics - 102
Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam
Supported by a grant from MHRD

Contents

Part I: Linear Algebra

1 Matrices
  1.1 Definition of a Matrix
    1.1.1 Special Matrices
  1.2 Operations on Matrices
    1.2.1 Multiplication of Matrices
  1.3 Some More Special Matrices
    1.3.1 Submatrix of a Matrix
    1.3.2 Block Matrices
  1.4 Matrices over Complex Numbers

2 Linear System of Equations
  2.1 Introduction
  2.2 Definition and a Solution Method
    2.2.1 A Solution Method
  2.3 Row Operations and Equivalent Systems
    2.3.1 Gauss Elimination Method
  2.4 Row Reduced Echelon Form of a Matrix
    2.4.1 Gauss-Jordan Elimination
    2.4.2 Elementary Matrices
  2.5 Rank of a Matrix
  2.6 Existence of Solution of Ax = b
    2.6.1 Example
    2.6.2 Main Theorem
    2.6.3 Exercises
  2.7 Invertible Matrices
    2.7.1 Inverse of a Matrix
    2.7.2 Equivalent Conditions for Invertibility
    2.7.3 Inverse and Gauss-Jordan Method
  2.8 Determinant
    2.8.1 Adjoint of a Matrix
    2.8.2 Cramer's Rule
  2.9 Miscellaneous Exercises

3 Finite Dimensional Vector Spaces
  3.1 Vector Spaces
    3.1.1 Definition
    3.1.2 Examples
    3.1.3 Subspaces
    3.1.4 Linear Combinations
  3.2 Linear Independence
  3.3 Bases
    3.3.1 Important Results
  3.4 Ordered Bases

4 Linear Transformations
  4.1 Definitions and Basic Properties
  4.2 Matrix of a Linear Transformation
  4.3 Rank-Nullity Theorem
  4.4 Similarity of Matrices

5 Inner Product Spaces
  5.1 Definition and Basic Properties
  5.2 Gram-Schmidt Orthogonalisation Process
  5.3 Orthogonal Projections and Applications
    5.3.1 Matrix of the Orthogonal Projection

6 Eigenvalues, Eigenvectors and Diagonalization
  6.1 Introduction and Definitions
  6.2 Diagonalization
  6.3 Diagonalizable Matrices
  6.4 Sylvester's Law of Inertia and Applications

Part II: Ordinary Differential Equations

7 Differential Equations
  7.1 Introduction and Preliminaries
  7.2 Separable Equations
    7.2.1 Equations Reducible to Separable Form
  7.3 Exact Equations
    7.3.1 Integrating Factors
  7.4 Linear Equations
  7.5 Miscellaneous Remarks
  7.6 Initial Value Problems
    7.6.1 Orthogonal Trajectories
  7.7 Numerical Methods

8 Second Order and Higher Order Equations
  8.1 Introduction
  8.2 More on Second Order Equations
    8.2.1 Wronskian
    8.2.2 Method of Reduction of Order
  8.3 Second Order Equations with Constant Coefficients
  8.4 Non-Homogeneous Equations
  8.5 Variation of Parameters
  8.6 Higher Order Equations with Constant Coefficients
  8.7 Method of Undetermined Coefficients

9 Solutions Based on Power Series
  9.1 Introduction
    9.1.1 Properties of Power Series
  9.2 Solutions in Terms of Power Series
  9.3 Statement of Frobenius Theorem for Regular (Ordinary) Point
  9.4 Legendre Equations and Legendre Polynomials
    9.4.1 Introduction
    9.4.2 Legendre Polynomials

Part III: Laplace Transform

10 Laplace Transform
  10.1 Introduction
  10.2 Definitions and Examples
    10.2.1 Examples
  10.3 Properties of Laplace Transform
    10.3.1 Inverse Transforms of Rational Functions
    10.3.2 Transform of Unit Step Function
  10.4 Some Useful Results
    10.4.1 Limiting Theorems
  10.5 Application to Differential Equations
  10.6 Transform of the Unit-Impulse Function

Part IV: Numerical Applications

11 Newton's Interpolation Formulae
  11.1 Introduction
  11.2 Difference Operator
    11.2.1 Forward Difference Operator
    11.2.2 Backward Difference Operator
    11.2.3 Central Difference Operator
    11.2.4 Shift Operator
    11.2.5 Averaging Operator
  11.3 Relations between Difference Operators
  11.4 Newton's Interpolation Formulae

12 Lagrange's Interpolation Formula
  12.1 Introduction
  12.2 Divided Differences
  12.3 Lagrange's Interpolation Formula
  12.4 Gauss's and Stirling's Formulas

13 Numerical Differentiation and Integration
  13.1 Introduction
  13.2 Numerical Differentiation
  13.3 Numerical Integration
    13.3.1 A General Quadrature Formula
    13.3.2 Trapezoidal Rule
    13.3.3 Simpson's Rule

14 Appendix
  14.1 System of Linear Equations
  14.2 Determinant
  14.3 Properties of Determinant
  14.4 Dimension of M + N
  14.5 Proof of Rank-Nullity Theorem
  14.6 Condition for Exactness
Part I: Linear Algebra

Chapter 1: Matrices

1.1 Definition of a Matrix

Definition 1.1.1 (Matrix) A rectangular array of numbers is called a matrix. We shall mostly be concerned with matrices having real numbers as entries.

The horizontal arrays of a matrix are called its rows and the vertical arrays are called its columns. A matrix having m rows and n columns is said to have the order m × n.

A matrix A of order m × n can be represented in the following form:

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},

where a_{ij} is the entry at the intersection of the i-th row and j-th column. In a more concise manner, we also denote the matrix A by [a_{ij}], suppressing its order.

Remark 1.1.2 Some books also use round brackets,

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix},

to represent a matrix.

Let A = \begin{bmatrix} 1 & 3 & 7 \\ 4 & 5 & 6 \end{bmatrix}. Then a_{11} = 1, a_{12} = 3, a_{13} = 7, a_{21} = 4, a_{22} = 5, and a_{23} = 6.

A matrix having only one column is called a column vector, and a matrix with only one row is called a row vector. Whenever a vector is used, it should be understood from the context whether it is a row vector or a column vector.

Definition 1.1.3 (Equality of two Matrices) Two matrices A = [a_{ij}] and B = [b_{ij}] having the same order m × n are equal if a_{ij} = b_{ij} for each i = 1, 2, ..., m and j = 1, 2, ..., n. In other words, two matrices are said to be equal if they have the same order and their corresponding entries are equal.

Example 1.1.4 The linear system of equations 2x + 3y = 5 and 3x + 2y = 5 can be identified with the matrix \begin{bmatrix} 2 & 3 & : & 5 \\ 3 & 2 & : & 5 \end{bmatrix}.
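The identification of a linear system with its augmented matrix is easy to experiment with numerically. The following is a minimal NumPy sketch (not part of the notes; the variable names are illustrative only) that builds the augmented matrix of Example 1.1.4 and checks the indexing convention, keeping in mind that the notes index entries from 1 while NumPy indexes from 0.

    import numpy as np

    # Coefficient matrix and right-hand side of the system
    #   2x + 3y = 5
    #   3x + 2y = 5
    A = np.array([[2, 3],
                  [3, 2]])
    b = np.array([5, 5])

    # The augmented matrix [A : b] from Example 1.1.4
    augmented = np.column_stack([A, b])
    print(augmented)              # [[2 3 5]
                                  #  [3 2 5]]

    # a_11 and a_12 in the notes correspond to A[0, 0] and A[0, 1] here
    print(A[0, 0], A[0, 1])       # 2 3

    # Solving the system recovers x = y = 1
    print(np.linalg.solve(A, b))  # [1. 1.]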
1.1.1 Special Matrices

Definition 1.1.5

1. A matrix in which each entry is zero is called a zero-matrix, denoted by 0. For example,
   0_{2×2} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} and 0_{2×3} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

2. A matrix having the number of rows equal to the number of columns is called a square matrix. Thus, its order is m × m (for some m) and is represented by m only.

3. In a square matrix A = [a_{ij}] of order n, the entries a_{11}, a_{22}, ..., a_{nn} are called the diagonal entries and form the principal diagonal of A.

4. A square matrix A = [a_{ij}] is said to be a diagonal matrix if a_{ij} = 0 for i ≠ j. In other words, the non-zero entries appear only on the principal diagonal. For example, the zero matrix 0_n and \begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix} are a few diagonal matrices.
   A diagonal matrix D of order n with the diagonal entries d_1, d_2, ..., d_n is denoted by D = diag(d_1, ..., d_n). If d_i = d_1 for all i = 1, 2, ..., n, then the diagonal matrix D is called a scalar matrix.

5. A square matrix A = [a_{ij}] with
   a_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}
   is called the identity matrix, denoted by I_n. For example,
   I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} and I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
   The subscript n is suppressed when the order is clear from the context or no confusion arises.

6. A square matrix A = [a_{ij}] is said to be an upper triangular matrix if a_{ij} = 0 for i > j, and a lower triangular matrix if a_{ij} = 0 for i < j. A square matrix A is said to be triangular if it is either an upper or a lower triangular matrix. For example,
   \begin{bmatrix} 2 & 1 & 4 \\ 0 & 3 & -1 \\ 0 & 0 & -2 \end{bmatrix}
   is an upper triangular matrix. An upper triangular matrix will be represented by
   \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}.

1.2 Operations on Matrices

Definition 1.2.1 (Transpose of a Matrix) The transpose of an m × n matrix A = [a_{ij}] is defined as the n × m matrix B = [b_{ij}], with b_{ij} = a_{ji} for 1 ≤ i ≤ m and 1 ≤ j ≤ n. The transpose of A is denoted by A^t.
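As a quick check on these definitions, here is a small NumPy sketch (again illustrative, not taken from the notes) that constructs the special matrices above and verifies the transpose rule b_{ij} = a_{ji}.

    import numpy as np

    # Zero matrix, identity and diagonal matrix from Definition 1.1.5
    Z = np.zeros((2, 3))          # the 2x3 zero-matrix
    I3 = np.eye(3)                # the identity matrix I_3
    D = np.diag([4, 1])           # diag(4, 1), a diagonal matrix

    # An upper triangular matrix: all entries below the principal diagonal are 0
    U = np.array([[2, 1,  4],
                  [0, 3, -1],
                  [0, 0, -2]])
    print(np.allclose(U, np.triu(U)))   # True

    # Transpose (Definition 1.2.1): B = A^t has b_ij = a_ji,
    # so a 2x3 matrix becomes a 3x2 matrix.
    A = np.array([[1, 3, 7],
                  [4, 5, 6]])
    B = A.T
    print(B.shape)                      # (3, 2)
    print(B[2, 0] == A[0, 2])           # True, both entries equal 7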
