Methods of Theoretical Physics: I

ABSTRACT

First-order and second-order differential equations; Wronskian; series solutions; ordinary and singular points. Orthogonal eigenfunctions and Sturm-Liouville theory. Complex analysis, contour integration. Integral representations for solutions of ODE's. Asymptotic expansions. Methods of stationary phase and steepest descent. Generalised functions.

Books

E.T. Whittaker and G.N. Watson, A Course of Modern Analysis.
G. Arfken and H. Weber, Mathematical Methods for Physicists.
P.M. Morse and H. Feshbach, Methods of Theoretical Physics.

Contents

1 First and Second-order Differential Equations
  1.1 The Differential Equations of Physics
  1.2 First-order Equations

2 Separation of Variables in Second-order Linear PDE's
  2.1 Separation of variables in Cartesian coordinates
  2.2 Separation of variables in spherical polar coordinates
  2.3 Separation of variables in cylindrical polar coordinates

3 Solutions of the Associated Legendre Equation
  3.1 Series solution of the Legendre equation
  3.2 Properties of the Legendre polynomials
  3.3 Azimuthally-symmetric solutions of Laplace's equation
  3.4 The generating function for the Legendre polynomials
  3.5 The associated Legendre functions
  3.6 The spherical harmonics and Laplace's equation
  3.7 Another look at the generating function

4 General Properties of Second-order ODE's
  4.1 Singular points of the equation
  4.2 The Wronskian
  4.3 Solution of the inhomogeneous equation
  4.4 Series solutions of the homogeneous equation
  4.5 Sturm-Liouville Theory

5 Functions of a Complex Variable
  5.1 Complex Numbers, Quaternions and Octonions
  5.2 Analytic or Holomorphic Functions
  5.3 Contour Integration
  5.4 Classification of Singularities
  5.5 The Oppenheim Formula
  5.6 Calculus of Residues
  5.7 Evaluation of real integrals
  5.8 Summation of Series
  5.9 Analytic Continuation
  5.10 The Gamma Function
  5.11 The Riemann Zeta Function
  5.12 Asymptotic Expansions
  5.13 Method of Steepest Descent

6 Non-linear Differential Equations
  6.1 Method of Isoclinals
  6.2 Phase-plane Diagrams

7 Cartesian Vectors and Tensors
  7.1 Rotations and reflections of Cartesian coordinates
  7.2 The orthogonal group O(n), and vectors in n dimensions
  7.3 Cartesian vectors and tensors
  7.4 Invariant tensors, and the cross product
  7.5 Cartesian Tensor Calculus

1 First and Second-order Differential Equations

1.1 The Differential Equations of Physics

It is a phenomenological fact that most of the fundamental equations that arise in physics are of second order in derivatives. These may be spatial derivatives, or time derivatives in various circumstances. We call the spatial coordinates and time the independent variables of the differential equation, while the fields whose behaviour is governed by the equation are called the dependent variables. Examples of dependent variables are the electromagnetic potentials in Maxwell's equations, or the wave function in quantum mechanics. It is frequently the case that the equations are linear in the dependent variables. Consider, for example, the scalar potential \phi in electrostatics, which satisfies

    \nabla^2 \phi = -4\pi \rho                                          (1.1)

where \rho is the charge density. The potential \phi appears only linearly in this equation, which is known as Poisson's equation. In the case where there are no charges present, so that the right-hand side vanishes, we have the special case of Laplace's equation.

Other linear equations are the Helmholtz equation \nabla^2 \psi + k^2 \psi = 0, the diffusion equation \nabla^2 \psi - \partial\psi/\partial t = 0, the wave equation \nabla^2 \psi - c^{-2}\, \partial^2\psi/\partial t^2 = 0, and the Schrödinger equation -\hbar^2/(2m)\, \nabla^2 \psi + V\psi - i\hbar\, \partial\psi/\partial t = 0.

The reason for the linearity of most of the fundamental equations in physics can be traced back to the fact that the fields in the equations do not usually act as sources for themselves.
Thus, for example, in electromagnetism the electric and magnetic fields respond to the sources that create them, but they do not themselves act as sources; the electromagnetic fields themselves are uncharged; it is the electrons and other particles that carry charges that act as the sources, while the photon itself is neutral. There are in fact generalisations of Maxwell's theory, known as Yang-Mills theories, which play a fundamental rôle in the description of the strong and weak nuclear forces, which are non-linear. This is precisely because the Yang-Mills fields themselves carry the generalised type of electric charge.

Another fundamental theory that has non-linear equations of motion is gravity, described by Einstein's general theory of relativity. The reason here is very similar; all forms of energy (mass) act as sources for the gravitational field. In particular, the energy in the gravitational field itself acts as a source for gravity, hence the non-linearity. Of course in the Newtonian limit the gravitational field is assumed to be very weak, and all the non-linearities disappear.

In fact there is every reason to believe that if one looks in sufficient detail then even the linear Maxwell equations will receive higher-order non-linear modifications. Our best candidate for a unified theory of all the fundamental interactions is string theory, and the way in which Maxwell's equations emerge there is as a sort of "low-energy" effective theory, which will receive higher-order non-linear corrections. However, at low energy scales, these terms will be insignificantly small, and so we won't usually go wrong by assuming that Maxwell's equations are good enough.

The story with the order of the fundamental differential equations of physics is rather similar too. Maxwell's equations, the Schrödinger equation, and Einstein's equations are all of second order in derivatives with respect to (at least some of) the independent variables.
If you probe more closely in string theory, you find that Maxwell's equations and the Einstein equations will also receive higher-order corrections that involve larger numbers of time and space derivatives, but again, these are insignificant at low energies. So in some sense one should probably ultimately take the view that the fundamental equations of physics tend to be of second order in derivatives because those are the only important terms at the energy scales that we normally probe. We should certainly expect that at least second derivatives will be observable, since these are needed in order to describe wave-like motion. For Maxwell's theory the existence of wave-like solutions (radio waves, light, etc.) is a commonplace observation, and probably in the not too distant future gravitational waves will be observed too.

1.2 First-order Equations

Differential equations involving only one independent variable are called ordinary differential equations, or ODE's, by contrast with partial differential equations, or PDE's, which have more than one independent variable. Even first-order ODE's can be complicated.

One situation that is easily solvable is the following. Suppose we have the single first-order ODE

    dy/dx = F(x).                                                       (1.2)

The solution is, of course, simply given by y(x) = \int^x dx' F(x') (note that x' here is just a name for the "dummy" integration variable). This is known as "reducing the problem to quadratures," meaning that it now comes down to just performing an indefinite integral. Of course it may or may not be the case that the integral can be evaluated explicitly, but that is a different issue; the equation can be regarded as having been solved.
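As a quick numerical sketch of "reducing to quadratures" (not part of the notes; the right-hand side F(x) = cos x is a made-up example chosen because its antiderivative, sin x, is known exactly), one can evaluate the integral for y(x) directly:

```python
import math

# Made-up example: dy/dx = F(x) with F(x) = cos(x), so the exact
# solution with lower limit 0 is y(x) = sin(x).
def F(x):
    return math.cos(x)

def solve_by_quadrature(F, a, x, n=1000):
    """y(x) = integral of F from a to x, via the composite Simpson rule (n even)."""
    h = (x - a) / n
    s = F(a) + F(x)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * F(a + k * h)
    return s * h / 3.0

y1 = solve_by_quadrature(F, 0.0, 1.0)
print(abs(y1 - math.sin(1.0)))  # prints a tiny number: the quadrature error
```

The choice of lower limit a plays the role of the arbitrary integration constant.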
More generally, we could consider a first-order ODE of the form

    dy/dx = F(x,y).                                                     (1.3)

A special class of function F(x,y) for which one can again easily solve the equation explicitly is when

    F(x,y) = -P(x)/Q(y),                                                (1.4)

implying that (1.3) becomes P(x) dx + Q(y) dy = 0, since then we can reduce the solution to quadratures, with

    \int^x dx' P(x') + \int^y dy' Q(y') = 0.                            (1.5)

Note that no assumption of linearity is needed here.

A rather more general situation is when

    F(x,y) = -P(x,y)/Q(x,y),                                            (1.6)

and the differential P(x,y) dx + Q(x,y) dy is exact, which means that we can find a function \varphi(x,y) such that

    d\varphi(x,y) = P(x,y) dx + Q(x,y) dy.                              (1.7)

Of course there is no guarantee that such a \varphi will exist. Clearly a necessary condition is that

    \partial P(x,y)/\partial y = \partial Q(x,y)/\partial x,            (1.8)

since d\varphi = (\partial\varphi/\partial x) dx + (\partial\varphi/\partial y) dy, which implies we must have

    \partial\varphi/\partial x = P(x,y),   \partial\varphi/\partial y = Q(x,y),   (1.9)

since second partial derivatives of \varphi commute:

    \partial^2\varphi/\partial x\,\partial y = \partial^2\varphi/\partial y\,\partial x.   (1.10)

In fact, one can also see that (1.8) is sufficient for the existence of the function \varphi; the condition (1.8) is known as an integrability condition for \varphi to exist. If \varphi exists, then solving the differential equation (1.3) reduces to solving d\varphi = 0, implying \varphi(x,y) = c = constant. Once \varphi(x,y) is known, this implicitly gives y as a function of x.

If P(x,y) and Q(x,y) do not satisfy (1.8) then all is not lost, because we can recall that solving the differential equation (1.3), where F(x,y) = -P(x,y)/Q(x,y), means solving P(x,y) dx + Q(x,y) dy = 0, which is equivalent to solving

    \alpha(x,y) P(x,y) dx + \alpha(x,y) Q(x,y) dy = 0,                  (1.11)

where \alpha(x,y) is some generically non-vanishing but as yet otherwise arbitrary function.
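A small numerical sketch (not from the notes) makes the integrability condition concrete. The example P = y, Q = -x is made up: it fails the test (1.8), yet multiplying by \alpha(x,y) = 1/y^2 renders the differential exact, since (1/y^2)(y\,dx - x\,dy) = d(x/y). Central finite differences check both statements at a sample point:

```python
# Made-up example: y dx - x dy is not exact, but alpha = 1/y**2 fixes it,
# because (1/y**2)(y dx - x dy) = d(x/y).
def P(x, y):
    return y

def Q(x, y):
    return -x

def alpha(x, y):
    return 1.0 / y**2

def d_dx(f, x, y, h=1e-6):
    # central finite-difference partial derivative in x
    return (f(x + h, y) - f(x - h, y)) / (2.0 * h)

def d_dy(f, x, y, h=1e-6):
    # central finite-difference partial derivative in y
    return (f(x, y + h) - f(x, y - h)) / (2.0 * h)

x0, y0 = 1.5, 0.8

# Condition (1.8) fails: dP/dy = 1 while dQ/dx = -1.
not_exact = abs(d_dy(P, x0, y0) - d_dx(Q, x0, y0)) > 1.0

# With the integrating factor included, the condition is satisfied.
exact = abs(d_dy(lambda x, y: alpha(x, y) * P(x, y), x0, y0)
            - d_dx(lambda x, y: alpha(x, y) * Q(x, y), x0, y0)) < 1e-6

print(not_exact, exact)
```

Here the solutions x/y = constant are just the straight lines through the origin, as expected for dy/dx = y/x.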
If we want the left-hand side of this equation to be an exact differential,

    d\varphi = \alpha(x,y) P(x,y) dx + \alpha(x,y) Q(x,y) dy,           (1.12)

then we have the less restrictive integrability condition

    \partial(\alpha(x,y) P(x,y))/\partial y = \partial(\alpha(x,y) Q(x,y))/\partial x,   (1.13)

where we can choose \alpha(x,y) to be more or less anything we like in order to try to ensure that this equation is satisfied. It turns out that some such \alpha(x,y), known as an integrating factor, always exists in this case, and so in principle the differential equation is solved. The only snag is that there is no completely systematic way of finding \alpha(x,y), and so one is not necessarily guaranteed actually to be able to determine \alpha(x,y).

1.2.1 Linear first-order ODE

Consider the case where the function F(x,y) appearing in (1.3) is linear in y, of the form F(x,y) = -p(x)\, y + q(x). Then the differential equation becomes

    dy/dx + p(x)\, y = q(x),                                            (1.14)

which is in fact the most general possible form for a first-order linear equation. The equation can straightforwardly be solved explicitly, since now it is rather easy to find the required integrating factor \alpha that renders the left-hand side an exact differential. In particular, \alpha is just a function of x here.
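Before carrying out the derivation, it may help to see where it leads in a numerical sketch (not from the notes; p(x) = 1 and q(x) = x are made-up choices, for which \alpha(x) = e^x and, with lower limit 0, the exact solution is y(x) = x - 1 + e^{-x}). The closed forms used below anticipate (1.19) and (1.20):

```python
import math

# Made-up example: y' + y = x, i.e. p(x) = 1, q(x) = x.
def p(x):
    return 1.0

def q(x):
    return x

def alpha(x):
    # integrating factor alpha(x) = exp(integral of p), known in closed form here
    return math.exp(x)

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

def y_solution(x):
    # y(x) = (1/alpha(x)) * integral_0^x alpha(x') q(x') dx'
    # (the choice of lower limit 0 fixes the additive homogeneous piece)
    return simpson(lambda t: alpha(t) * q(t), 0.0, x) / alpha(x)

x = 2.0
exact = x - 1.0 + math.exp(-x)
print(abs(y_solution(x) - exact))  # prints a tiny number: the quadrature error
```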
Thus we multiply (1.14) by \alpha(x),

    \alpha(x)\, dy/dx + \alpha(x) p(x)\, y = \alpha(x) q(x),            (1.15)

and require \alpha(x) to be such that the left-hand side can be rewritten as

    \alpha(x)\, dy/dx + \alpha(x) p(x)\, y = d(\alpha(x)\, y)/dx,       (1.16)

so that (1.15) becomes

    \alpha(x)\, dy/dx + (d\alpha(x)/dx)\, y = \alpha(x) q(x).           (1.17)

Differentiating the right-hand side of (1.16), we see that \alpha(x) must be chosen so that

    d\alpha(x)/dx = \alpha(x) p(x),                                     (1.18)

implying that we shall have

    \alpha(x) = \exp\Big( \int^x dx'\, p(x') \Big).                     (1.19)

(The arbitrary integration constant just amounts to a constant additive shift of the integral, and hence a constant rescaling of \alpha(x), which obviously is an arbitrariness in our freedom to choose an integrating factor.)

With \alpha(x) in principle determined by the integral (1.19), it is now straightforward to integrate the differential equation written in the form (1.16), giving

    y(x) = (1/\alpha(x)) \int^x dx'\, \alpha(x') q(x').                 (1.20)

Note that the arbitrariness in the choice of the lower limit of the integral implies that y(x) has an additive part y_0(x) amounting to an arbitrary constant multiple of 1/\alpha(x),

    y_0(x) = C \exp\Big( -\int^x dx'\, p(x') \Big).                     (1.21)

This is the general solution of the homogeneous differential equation where the "source term" q(x) is taken to be zero. The other part, y(x) - y_0(x) in (1.20), is the particular integral, which is a specific solution of the inhomogeneous equation with the source term q(x) included.

2 Separation of Variables in Second-order Linear PDE's

2.1 Separation of variables in Cartesian coordinates

If the equation of motion in a particular problem has sufficient symmetries of the appropriate type, we can sometimes reduce the problem to one involving only ordinary differential equations.
A simple example of the type of symmetry that can allow this is the spatial translation symmetry of the Laplace equation \nabla^2 \psi = 0 or Helmholtz equation \nabla^2 \psi + k^2 \psi = 0 written in Cartesian coordinates:

    \partial^2\psi/\partial x^2 + \partial^2\psi/\partial y^2 + \partial^2\psi/\partial z^2 + k^2 \psi = 0.   (2.1)

Clearly, this equation retains the same form if we shift x, y and z by constants,

    x \to x + c_1,   y \to y + c_2,   z \to z + c_3.                    (2.2)

This is not to say that any specific solution of the equation will be invariant under (2.2), but it does mean that the solutions must transform in a rather particular way. To be precise, if \psi(x,y,z) is one solution of the differential equation, then \psi(x + c_1, y + c_2, z + c_3) must be another.

As is well known, we can solve (2.1) by looking for solutions of the form \psi(x,y,z) = X(x) Y(y) Z(z). Substituting into (2.1), and dividing by \psi, gives

    (1/X)\, d^2X/dx^2 + (1/Y)\, d^2Y/dy^2 + (1/Z)\, d^2Z/dz^2 + k^2 = 0.   (2.3)

The first three terms on the left-hand side could depend only on x, y and z respectively, and so the equation can only be consistent for all (x,y,z) if each term is separately constant,

    d^2X/dx^2 + a_1^2 X = 0,   d^2Y/dy^2 + a_2^2 Y = 0,   d^2Z/dz^2 + a_3^2 Z = 0,   (2.4)

where the constants satisfy

    a_1^2 + a_2^2 + a_3^2 = k^2,                                        (2.5)

and the solutions are of the form

    X \sim e^{i a_1 x},   Y \sim e^{i a_2 y},   Z \sim e^{i a_3 z}.     (2.6)

The separation constants a_i can be either real, giving oscillatory solutions in that coordinate direction, or imaginary, giving exponentially growing and decaying solutions, provided that the sum (2.5) is satisfied. It will be the boundary conditions in the specific problem being solved that determine whether a given separation constant a_i should be real or imaginary.

The general solution will be an infinite sum over all the basic exponential solutions,

    \psi(x,y,z) = \sum_{a_1,a_2,a_3} c(a_1,a_2,a_3)\, e^{i a_1 x} e^{i a_2 y} e^{i a_3 z},   (2.7)

where the separation constants (a_1,a_2,a_3) can be arbitrary, save only that they must satisfy the constraint (2.5).
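The claim that each factorised solution obeys the Helmholtz equation whenever (2.5) holds is easy to spot-check numerically (a sketch, not from the notes; the values a_1 = 1, a_2 = 2, a_3 = 2, giving k^2 = 9, and the sample point are arbitrary choices):

```python
import cmath

# Check that psi = exp(i(a1 x + a2 y + a3 z)) satisfies (2.1)
# when the constraint (2.5) a1^2 + a2^2 + a3^2 = k^2 holds.
a1, a2, a3 = 1.0, 2.0, 2.0        # arbitrary real separation constants
k2 = a1**2 + a2**2 + a3**2        # k^2 fixed by the constraint (2.5)

def psi(x, y, z):
    return cmath.exp(1j * (a1 * x + a2 * y + a3 * z))

def laplacian(f, x, y, z, h=1e-4):
    # second-order central differences in each coordinate direction
    return ((f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z))
            + (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z))
            + (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h))) / h**2

x, y, z = 0.3, -0.2, 0.7          # arbitrary sample point
residual = laplacian(psi, x, y, z) + k2 * psi(x, y, z)
print(abs(residual) < 1e-4)       # residual is finite-difference noise only
```

Taking a_3 imaginary instead (with k^2 adjusted via (2.5)) turns the z-factor into growing and decaying exponentials, exactly as described above.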
At this stage the sums in (2.7) are really integrals over the continuous ranges of (a_1,a_2,a_3) that satisfy (2.5). Typically, the boundary conditions will ensure that there is only a discrete infinity of allowed triplets of separation constants, and so the integrals become sums. In a well-posed problem, the boundary conditions will also fully determine the values of the constant coefficients c(a_1,a_2,a_3).

Consider, for example, a potential-theory problem in which a hollow cube of side 1 is composed of conducting metal plates, where five of them are held at potential zero, while the sixth is held at a constant potential V. The task is to calculate the electrostatic potential \phi(x,y,z) everywhere inside the cube. Thus we must solve Laplace's equation

    \nabla^2 \phi = 0,                                                  (2.8)

subject to the boundary conditions that

    \phi(0,y,z) = \phi(1,y,z) = \phi(x,0,z) = \phi(x,1,z) = \phi(x,y,0) = 0,   \phi(x,y,1) = V.   (2.9)

(We take the face at z = 1 to be at potential V, with the other five faces at zero potential.)

Since we are solving Laplace's equation, \nabla^2 \phi = 0, the constant k appearing in the Helmholtz example above is zero, and so the constraint (2.5) on the separation constants is just

    a_1^2 + a_2^2 + a_3^2 = 0                                           (2.10)

here. Clearly to match the boundary condition \phi(0,y,z) = 0 in (2.9) at x = 0 we must have X(0) = 0, which means that in the solution

    X(x) = A e^{i a_1 x} + B e^{-i a_1 x}                               (2.11)

for X(x), one must choose the constants so that B = -A, and hence

    X(x) = A (e^{i a_1 x} - e^{-i a_1 x}) = 2i A \sin a_1 x.            (2.12)

Thus we have either the sine function, if a_1 is real, or the hyperbolic sinh function, if a_1 is imaginary. But we also have the boundary condition that \phi(1,y,z) = 0, which means that X(1) = 0. This determines that a_1 must be real, so that we get oscillatory functions for X(x) that can vanish at x = 1 as well as at x = 0. Thus we must have

    X(1) = 2i A \sin a_1 = 0,                                           (2.13)

implying a_1 = m\pi, where m is an integer, which without loss of generality can be assumed to be greater than zero.
Similar arguments apply in the y direction. With a_1 and a_2 determined to be real, (2.5) shows that a_3 must be imaginary. The vanishing of \phi(x,y,0) implies that our general solution is now established to be

    \phi(x,y,z) = \sum_{m>0} \sum_{n>0} b_{mn} \sin(m\pi x) \sin(n\pi y) \sinh(\pi z \sqrt{m^2 + n^2}).   (2.14)

Note that we now indeed have a sum over a discrete infinity of separation constants.

Finally, the boundary condition \phi(x,y,1) = V on the remaining face at z = 1 tells us that

    V = \sum_{m>0} \sum_{n>0} b_{mn} \sin(m\pi x) \sin(n\pi y) \sinh(\pi \sqrt{m^2 + n^2}).   (2.15)

This allows us to determine the constants b_{mn}. We use the orthogonality of the sine functions, which in this case is the statement that if m and p are integers we must have

    \int_0^1 dx\, \sin(m\pi x) \sin(p\pi x) = 0                         (2.16)

if p and m are unequal, and

    \int_0^1 dx\, \sin(m\pi x) \sin(p\pi x) = 1/2                       (2.17)

if p = m.
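The orthogonality relations (2.16) and (2.17) can be confirmed numerically (a sketch, not from the notes; the mode numbers m = 3, p = 5 and the grid size are arbitrary choices):

```python
import math

# Trapezoid-rule check of (2.16)-(2.17):
#   integral_0^1 sin(m pi x) sin(p pi x) dx = 0 for m != p, and 1/2 for m = p.
def overlap(m, p, n=4000):
    h = 1.0 / n
    s = 0.0  # endpoint contributions vanish, since sin(0) = sin(m pi) = 0
    for k in range(1, n):
        x = k * h
        s += math.sin(m * math.pi * x) * math.sin(p * math.pi * x)
    return s * h

print(abs(overlap(3, 5)) < 1e-9)        # m != p: the integral vanishes
print(abs(overlap(4, 4) - 0.5) < 1e-9)  # m == p: the integral equals 1/2
```

(For trigonometric integrands over a whole number of periods, the uniform trapezoid rule is exact up to rounding, which is why such tight tolerances hold.) Multiplying (2.15) by \sin(p\pi x)\sin(q\pi y) and integrating over the face then isolates a single coefficient b_{pq}, which is how the expansion is pinned down.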
