MODERN COMPUTING METHODS

SECOND EDITION

PHILOSOPHICAL LIBRARY
NEW YORK

Published 1961 by Philosophical Library, Inc., 15 East 40th Street, New York 16, N.Y. All rights reserved. Printed in England for Philosophical Library by John Wright & Sons Ltd., at the Stonebridge Press, Bristol.

CONTENTS

Introduction
1. Linear Equations and Matrices: Direct Methods
2. Linear Equations and Matrices: Direct Methods on Automatic Computers
3. Latent Roots and Vectors of Matrices
4. Linear Equations and Matrices: Iterative Methods
5. Linear Equations and Matrices: Error Analysis
6. Zeros of Polynomials
7. Finite-difference Methods
8. Chebyshev Series
9. Ordinary Differential Equations: Initial-value Problems
10. Ordinary Differential Equations: Boundary-value Problems
11. Hyperbolic Partial Differential Equations
12. Parabolic and Elliptic Partial Differential Equations
13. Evaluation of Limits; Use of Recurrence Relations
14. Evaluation of Integrals
15. Tabulation of Mathematical Functions
Bibliography
Index

INTRODUCTION

The first edition of Modern Computing Methods was based on lectures delivered by various members of the staff of Mathematics Division, N.P.L., as part of a vacation course on 'Computers for Electrical Engineering Problems', organized by the Electrical Engineering Department of the Imperial College of Science and Technology, and attended by representatives of industrial firms. The course was designed to teach the basic principles of the use of analogue machines, high-speed digital computers, and the techniques of numerical mathematics involved in the solution of problems in electrical engineering.

Numerical methods are required in all branches of science, and the techniques are generally independent of the source of the problem. For example, the same type of differential equation may represent a problem in physiology as well as a problem in electrical engineering. The opportunity was therefore taken to present, as one of the N.P.L. series of Notes on Applied Science, suitably edited versions of those lectures contributed to the course by members of Mathematics Division.

The success of this first edition has encouraged the authors to undertake a complete revision, in the course of which the booklet has been very largely rewritten. The object has been to bring the material up to date, particularly with regard to methods suitable for automatic computation, and the principal changes are as follows. The chapter on Relaxation Methods has been replaced by one on Linear Equations and Matrices: Iterative Methods, while the chapter in the original edition headed Computation of Mathematical Functions has been expanded into two chapters entitled Evaluation of Limits; Use of Recurrence Relations and Evaluation of Integrals. Most of the other chapters have had new material added, some being completely rewritten, and the order of the chapters has been changed. In addition, the chapters on Linear Equations and Matrices: Error Analysis and on Chebyshev Series are new.

The authors make no apology for the varying level of treatment of the different topics; this is inevitable if the account is to be kept within a comparatively small compass. Some of the new material is given in greater detail than classical material already well catered for in available text-books.
It is hoped that the resulting booklet will prove useful both as a working manual for those engaged in computational work and as a basis for courses in numerical analysis in universities and technical colleges.

The first edition was written by L. Fox, E. T. Goodwin, J. G. L. Michel, F. W. J. Olver and J. H. Wilkinson. The present edition has been prepared by C. W. Clenshaw, E. T. Goodwin, D. W. Martin, G. F. Miller, F. W. J. Olver and J. H. Wilkinson, all members of the staff of Mathematics Division, N.P.L. Valuable criticisms and suggestions have also been made by L. Fox, now Director of the Oxford University Computing Laboratory, and many other members of Mathematics Division, particularly E. L. Albasiny, J. G. Hayes and T. Vickers. Mrs. I. Goode has collated the material, prepared the printer's copy and helped to see the work through the press.

E. T. Goodwin
Superintendent, Mathematics Division
National Physical Laboratory
May 1960

1. LINEAR EQUATIONS AND MATRICES: DIRECT METHODS

DEFINITIONS AND PROPERTIES

1. A general set of n linear simultaneous algebraic equations in n unknowns x_1, x_2, ..., x_n can be written in the form

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1,\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2,\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n.
\end{aligned} \tag{1}$$

The coefficients a_rs form a square matrix of order n,

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \tag{2}$$

which is to be considered simply as an array of numbers, and the column of constants b_r similarly forms a column matrix or vector b. The unknowns x_r form a vector x. Equations (1) can then be written in the shortened form

$$Ax = b. \tag{3}$$

The equality sign means that each element of the product vector Ax is equal to the corresponding element of the vector b, and the left-hand side of (1) gives the rule for premultiplication of a vector by a matrix.

2. The solution of (1) can be written in general terms as

$$\begin{aligned}
x_1 &= \alpha_{11}b_1 + \alpha_{12}b_2 + \cdots + \alpha_{1n}b_n,\\
x_2 &= \alpha_{21}b_1 + \alpha_{22}b_2 + \cdots + \alpha_{2n}b_n,\\
&\;\;\vdots\\
x_n &= \alpha_{n1}b_1 + \alpha_{n2}b_2 + \cdots + \alpha_{nn}b_n,
\end{aligned} \tag{4}$$

or in matrix notation as

$$x = A^{-1}b, \tag{5}$$

where A^{-1} has the same form as (2) with a_rs replaced by α_rs. The matrix A^{-1} is called the inverse or reciprocal of the matrix A.

The elements α_rs of A^{-1} depend only on the elements of A. It is clear from (4) that a knowledge of the α_rs would enable the solutions of (1) to be obtained with relative ease for any set of constants b_r, but the determination of the α_rs is not trivial. One method of obtaining them is suggested by equations (4): if in these equations we write b_1 = 1, b_2 = b_3 = ... = b_n = 0, we obtain the elements of the first column of the inverse. It follows that the various columns of A^{-1} can be found in succession by solving equations (1) with the right-hand sides replaced by successive columns of the matrix

$$I_n = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & & & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}. \tag{6}$$

This is the unit matrix of order n, or identity matrix, so called in virtue of the relation

$$I_n x = x \tag{7}$$

for any vector x. The suffix n is usually omitted when there is no possible ambiguity.
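The column-by-column construction of the inverse just described translates directly into a short program. The sketch below is an illustration, not taken from the book: the function name inverse_by_columns is mine, Python with NumPy is assumed, and NumPy's general-purpose solver stands in for the direct elimination methods developed in this chapter.

```python
import numpy as np

def inverse_by_columns(A):
    """Build A^{-1} one column at a time, as in Section 2: setting
    b = e_r (the r-th column of the unit matrix (6)) in Ax = b
    yields the r-th column of the inverse."""
    n = A.shape[0]
    I = np.eye(n)
    columns = []
    for r in range(n):
        # Solve Ax = e_r by a direct method (here NumPy's LU-based solver).
        columns.append(np.linalg.solve(A, I[:, r]))
    return np.column_stack(columns)

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
Ainv = inverse_by_columns(A)
print(np.allclose(A @ Ainv, np.eye(2)))  # checks AA^{-1} = I, cf. (13)
```

In practice one would factorize A once and re-use the factorization for all n right-hand sides; the loop above repeats the elimination purely for clarity.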
3. The main properties of matrices required in practice are those of addition, multiplication, and transposition.

Matrices can be added only when of the same order, and if B is the matrix (2) in which a is replaced by b, then

$$A + B = \begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1n}+b_{1n}\\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2n}+b_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1}+b_{n1} & a_{n2}+b_{n2} & \cdots & a_{nn}+b_{nn} \end{pmatrix}. \tag{8}$$

If a matrix is multiplied by a number k, the resulting matrix has elements ka_rs; that is, every element is multiplied by k.

Square matrices of the same order can be multiplied, to give

$$AB = \begin{pmatrix} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31}+\cdots & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32}+\cdots & \cdots\\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31}+\cdots & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32}+\cdots & \cdots\\ \vdots & \vdots & \end{pmatrix}. \tag{9}$$

If we use the notation r_r(A) and c_s(A) to denote respectively the r-th row and s-th column of the matrix A, and r_r(A)c_s(B) to denote the result of multiplying corresponding elements of r_r and c_s and adding the results (scalar product), we can write (9) in the simpler form

$$AB = \begin{pmatrix} r_1(A)c_1(B) & r_1(A)c_2(B) & \cdots & r_1(A)c_n(B)\\ r_2(A)c_1(B) & r_2(A)c_2(B) & \cdots & r_2(A)c_n(B)\\ \vdots & \vdots & & \vdots\\ r_n(A)c_1(B) & r_n(A)c_2(B) & \cdots & r_n(A)c_n(B) \end{pmatrix}. \tag{10}$$

From (10) it is obvious that in general AB ≠ BA, so that the order of multiplication is important. In (10) we refer to AB as B premultiplied by A, or multiplied on the left by A, or as A postmultiplied by B, or multiplied on the right by B.

The transposed matrix of A, called A' or A^T, is derived from A by interchanging rows and columns. If a matrix is symmetric, so that a_rs = a_sr, then A' = A, and

$$A'A = AA'. \tag{11}$$

Other important cases in which the order of multiplication is immaterial are contained in the equations

$$AI = IA, \tag{12}$$

$$AA^{-1} = A^{-1}A = I. \tag{13}$$

The transpose of a product is given by

$$(AB)' = B'A', \tag{14}$$

and its inverse by

$$(AB)^{-1} = B^{-1}A^{-1}; \tag{15}$$

note that in both operations the order of multiplication is reversed.

4. Associated with a matrix A is its determinant, denoted by det A or |A|. Whereas the matrix is an array of numbers and can be regarded in many ways as an operator, the determinant is a pure number. For example,

$$\begin{vmatrix} a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3 \end{vmatrix} = a_1\begin{vmatrix} b_2 & b_3\\ c_2 & c_3 \end{vmatrix} - a_2\begin{vmatrix} b_1 & b_3\\ c_1 & c_3 \end{vmatrix} + a_3\begin{vmatrix} b_1 & b_2\\ c_1 & c_2 \end{vmatrix} = a_1(b_2c_3 - b_3c_2) - a_2(b_1c_3 - b_3c_1) + a_3(b_1c_2 - b_2c_1). \tag{16}$$

The sign associated with a_r is (−1)^{r+1}, and the general rule for evaluation is obvious. The determinant obtained by omitting the row and column containing a_r is called a minor of order 2 of the original determinant. It can be shown that the inverse A^{-1} of A is given by

$$A^{-1} = \frac{1}{|A|}\begin{pmatrix} A_{11} & -A_{21} & A_{31} & \cdots\\ -A_{12} & A_{22} & -A_{32} & \cdots\\ A_{13} & -A_{23} & A_{33} & \cdots\\ \vdots & \vdots & \vdots & \end{pmatrix}, \tag{17}$$

where the minors A_rs, which are obtained by omitting from the original determinant the row and column containing a_rs, occur with alternate signs and are transposed in comparison with the corresponding elements a_rs of A.
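Equation (17) also translates directly into code. The following sketch is mine, not the book's: the helper name cofactor_inverse is invented, NumPy is assumed for the determinants, and the method is practical only for matrices of very small order, since the work grows rapidly with n.

```python
import numpy as np

def cofactor_inverse(A):
    """Invert A by equation (17): the (r, s) element of A^{-1} is the
    signed minor of a_{sr} (note the transposition), divided by |A|."""
    n = A.shape[0]
    det = np.linalg.det(A)
    if det == 0.0:
        raise ValueError("matrix is singular (Section 5): no inverse exists")
    inv = np.empty((n, n))
    for r in range(n):
        for s in range(n):
            # Minor A_{sr}: delete row s and column r, take the determinant;
            # the factor (-1)^{r+s} supplies the alternating signs of (17).
            minor = np.delete(np.delete(A, s, axis=0), r, axis=1)
            inv[r, s] = (-1) ** (r + s) * np.linalg.det(minor)
    return inv / det

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.allclose(cofactor_inverse(A), np.linalg.inv(A)))  # True
```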
5. When |A| = 0, it is clear from (17) that the matrix A has no inverse. Such a matrix is called singular, and the corresponding linear equations have in general no solution. If |A| = 0, the rows of A are not linearly independent; at least one can be obtained by linear combination of the others. For example, in the equations

$$\begin{aligned}
x_1 + x_2 + x_3 &= b_1,\\
x_1 - x_2 + 2x_3 &= b_2,\\
3x_1 + x_2 + 4x_3 &= b_3,
\end{aligned} \tag{18}$$

it can be verified that the determinant

$$\begin{vmatrix} 1 & 1 & 1\\ 1 & -1 & 2\\ 3 & 1 & 4 \end{vmatrix} = 0,$$

and the third of (18) is obtained by adding twice the first to the second. In this case the equations are incompatible unless 2b_1 + b_2 = b_3, and if this holds we have effectively only two equations in three unknowns and there is an infinity of solutions.

If the equations are homogeneous, that is, the constants b_r are all zero, the equations have no solution other than x_1 = x_2 = ... = x_n = 0 unless the determinant vanishes, in which case we can omit one equation and solve the rest to find the ratios of the x_r, provided that the n − 1 equations are not themselves linearly dependent.

For example, in the homogeneous set of equations corresponding to (18), we can omit the last equation and solve

$$\frac{x_1}{x_3} + \frac{x_2}{x_3} + 1 = 0, \qquad \frac{x_1}{x_3} - \frac{x_2}{x_3} + 2 = 0, \tag{19}$$

finding x_1/x_3 = −1.5, x_2/x_3 = 0.5, which also satisfy the remaining equation 3x_1/x_3 + x_2/x_3 + 4 = 0.

6. In solving a set of equations, it is a great advantage to be able to use the same number of decimal places throughout any one stage of the computation. It is a further advantage if the same number can be used at every stage. These advantages can be gained very simply by multiplying the various rows and columns of coefficients by powers of ten so as to make the largest coefficient in each row and column, including the column of constants, lie between 0.1 and 1.0. Multiplying the rows does not affect the solution, and multiplying the columns involves only a trivial change in the unknowns.

The same procedure should be carried out on a matrix of which the inverse is required. In this case, however, the multiplier of each row must subsequently be multiplied into the corresponding column of the inverse obtained, and the multiplier of each column into the corresponding row of the inverse, in order to recover the inverse of the original matrix.
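The scaling procedure of this section, together with the rule for recovering the inverse of the original matrix, can be sketched as follows. This is my illustration, not the book's: the names scaled_inverse and power_of_ten are invented, rows are assumed to be scaled first and columns second, and NumPy's inversion stands in for the direct methods of the chapter.

```python
import math
import numpy as np

def power_of_ten(m):
    """Power of ten bringing a positive maximum m into the range (0.1, 1.0]."""
    return 10.0 ** (-math.ceil(math.log10(m)))

def scaled_inverse(A):
    """Scale rows, then columns, by powers of ten so that the largest
    element of each row and column lies between 0.1 and 1.0; invert the
    scaled matrix; then recover the inverse of the original matrix.
    If A_s = R A C with diagonal R and C, then A^{-1} = C A_s^{-1} R:
    each row multiplier ends up in the corresponding COLUMN of the
    computed inverse, and each column multiplier in the corresponding ROW."""
    n = A.shape[0]
    r = np.array([power_of_ten(np.abs(A[i, :]).max()) for i in range(n)])
    As = A * r[:, None]                       # row scaling:    A_s = R A
    c = np.array([power_of_ten(np.abs(As[:, j]).max()) for j in range(n)])
    As = As * c[None, :]                      # column scaling: A_s = R A C
    inv_s = np.linalg.inv(As)
    return inv_s * r[None, :] * c[:, None]    # A^{-1} = C A_s^{-1} R

A = np.array([[400.0, 0.02],
              [ 30.0, 0.005]])
print(np.allclose(scaled_inverse(A), np.linalg.inv(A)))  # True
```

After the row pass every element has magnitude at most 1, so each column multiplier is a non-negative power of ten and the column pass cannot push any row maximum out of range; a single row-then-column sweep therefore suffices.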
