Elementary Probability Theory with Stochastic Processes


Undergraduate Texts in Mathematics
Editors: F. W. Gehring, P. R. Halmos
Advisory Board: C. DePrima, I. Herstein, J. Kiefer, W. LeVeque

Kai Lai Chung
Elementary Probability Theory with Stochastic Processes
Springer Science+Business Media, LLC

Kai Lai Chung
Professor of Mathematics
Department of Mathematics, Stanford University
Stanford, California 94305

AMS Subject Classification (1974)
Primary: 60-01
Secondary: 60C05, 60J10, 60J15, 60J20, 60J75

Library of Congress Cataloging in Publication Data
Chung, Kai Lai, 1917-
Elementary probability theory with stochastic processes.
(Undergraduate texts in mathematics)
Bibliography: p. Includes index.
1. Probabilities. 2. Stochastic processes. I. Title.
QA273.C5774 1975 519.2 75-30544

All rights reserved. No part of this book may be translated or reproduced in any form without written permission from Springer Science+Business Media, LLC.

© 1974, 1975 by Springer Science+Business Media New York
Originally published by Springer-Verlag New York in 1975
Softcover reprint of the hardcover 2nd edition 1975
Second Printing

ISBN 978-1-4757-5116-1
ISBN 978-1-4757-5114-7 (eBook)
DOI 10.1007/978-1-4757-5114-7

Preface to the First Edition

In the past half-century the theory of probability has grown from a minor isolated theme into a broad and intensive discipline interacting with many other branches of mathematics. At the same time it is playing a central role in the mathematization of various applied sciences such as statistics, operations research, biology, economics and psychology, to name a few to which the prefix "mathematical" has so far been firmly attached. The coming-of-age of probability has been reflected in the change of contents of textbooks on the subject. In the old days most of these books showed a visible split personality torn between the combinatorial games of chance and the so-called "theory of errors" centering in the normal distribution. This period ended with the appearance of Feller's classic treatise (see [Feller 1]†) in 1950, from the manuscript of which I gave my first substantial course in probability.

† Names in square brackets refer to the list of General References on p. 307. William Feller (1906-1970).

With the passage of time probability theory and its applications have won a place in the college curriculum as a mathematical discipline essential to many fields of study. The elements of the theory are now given at different levels, sometimes even before calculus. The present textbook is intended for a course at about the sophomore level. It presupposes no prior acquaintance with the subject and the first three chapters can be read largely without the benefit of calculus. The next three chapters require a working knowledge of infinite series and related topics, and for the discussion involving random variables with densities some calculus is of course assumed. These parts dealing with the "continuous case" as distinguished from the "discrete case" are easily separated and may be postponed. The contents of the first six chapters should form the backbone of any meaningful first introduction to probability theory.

Thereafter a reasonable selection includes: §7.1 (Poisson distribution, which may be inserted earlier in the course), some kind of going over of §7.3, 7.4, 7.6 (normal distribution and the law of large numbers), and §8.1 (simple random walks which are both stimulating and useful). All this can be covered in a semester, but for a quarter system some abridgment will be necessary. Specifically, for such a short course Chapters 1 and 3 may be skimmed through and the asterisked material omitted.
In any case a solid treatment of the normal approximation theorem in Chapter 7 should be attempted only if time is available, as in a semester or two-quarter course. The final Chapter 8 gives a self-contained elementary account of Markov chains and is an extension of the main course at a somewhat more mature level. Together with the asterisked sections 5.3, 5.4 (sequential sampling and Polya urn scheme) and 7.2 (Poisson process), and perhaps some filling in from the Appendices, the material provides a gradual and concrete passage into the domain of stochastic processes. With these topics included the book will be suitable for a two-quarter course, of the kind that I have repeatedly given to students of mathematical sciences and engineering. However, after the preparation of the first six chapters the reader may proceed to more specialized topics treated e.g. in the above-mentioned treatise by Feller. If the reader has the adequate mathematical background, he will also be prepared to take a formal rigorous course such as presented in my own more advanced book [Chung 1].

Much thought has gone into the selection, organization and presentation of the material to adapt it to classroom uses, but I have not tried to offer a slick package to fit in with an exact schedule or program such as popularly demanded at the quick-service counters. A certain amount of flexibility and choice is left to the instructor who can best judge what is right for his class. Each chapter contains some easy reading at the beginning, for motivation and illustration, so that the instructor may concentrate on the more formal aspects of the text. Each chapter also contains some slightly more challenging topics (e.g., §1.4, 2.5) for optional sampling. They are not meant to deter the beginner but to serve as an invitation to further study. The prevailing emphasis is on the thorough and deliberate discussion of the basic concepts and techniques of elementary probability theory with few frills and minimal technical complications. Many examples are chosen to anticipate the beginners' difficulties and to provoke better thinking. Often this is done by posing and answering some leading questions. Historical, philosophical and personal comments are inserted to add flavor to this lively subject. It is my hope that the reader will not only learn something from the book but may also derive a measure of enjoyment in so doing.

There are over two hundred exercises for the first six chapters and some eighty more for the last two. Many are easy, the harder ones indicated by asterisks, and all answers gathered at the end of the book. Asterisked sections and paragraphs deal with more special or elaborate material and may be skipped, but a little browsing in them is recommended.

The author of any elementary textbook owes of course a large debt to innumerable predecessors. More personal indebtedness is acknowledged below. Michel Nadzela wrote up a set of notes for a course I gave at Stanford in 1970. Gian-Carlo Rota, upon seeing these notes, gave me an early impetus toward transforming them into a book. D. G. Kendall commented on the first draft of several chapters and lent further moral support. J. L. Doob volunteered to read through most of the manuscript and offered many helpful suggestions. K. B. Erickson used some of the material in a course he taught.
A. A. Balkema checked the almost final version and made numerous improvements. Dan Rudolph read the proofs together with me. Perfecto Mary drew those delightful pictures. Gail Lemmond did the typing with her usual efficiency and dependability.

Finally, it is a pleasure to thank my old publisher Springer-Verlag for taking my new book to begin a new series of undergraduate texts.

K. L. C.
March 1974

Preface to the Second Edition

A determined effort was made to correct the errors in the first edition. This task was assisted by: Chao Hung-po, J. L. Doob, R. M. Exner, W. H. Fleming, A. M. Gleason, Karen Kafador, S. H. Polit, and P. van Moerbeke. Miss Kafador and Dr. Polit compiled particularly careful lists of suggestions. The most distressing errors were in the Solutions to Problems. All of them have now been checked by myself from Chapter 1 to 5, and by Mr. Chao from Chapter 6 to 8. It is my fervent hope that few remnant mistakes remain in that sector. A few small improvements and additions were also made, but not all advice can be heeded at this juncture. Users of the book are implored to send in any criticism and commentary, to be taken into consideration in a future edition. Thanks are due to the staff of Springer-Verlag for making this revision possible so soon after the publication of the book.

K. L. C.

CONTENTS

Preface
Chapter 1: Set  1
  1.1 Sample sets  1
  1.2 Operations with sets  3
  1.3 Various relations  6
  1.4 Indicator  12
  Exercises  16
Chapter 2: Probability  18
  2.1 Examples of probability  18
  2.2 Definition and illustrations  21
  2.3 Deductions from the axioms  28
  2.4 Independent events  31
  2.5 Arithmetical density  36
  Exercises  39
Chapter 3: Counting  43
  3.1 Fundamental rule  43
  3.2 Diverse ways of sampling  46
  3.3 Allocation models; binomial coefficients  52
  3.4 How to solve it  59
  Exercises  66
Chapter 4: Random Variables  71
  4.1 What is a random variable?  71
  4.2 How do random variables come about?  75
  4.3 Distribution and expectation  80
  4.4 Integer-valued random variables  86
  4.5 Random variables with densities  90
  4.6 General case  100
  Exercises  104
Appendix 1: Borel Fields and General Random Variables  109
Chapter 5: Conditioning and Independence  111
  5.1 Examples of conditioning  111
  5.2 Basic formulas  116
  5.3 Sequential sampling  125
  5.4 Polya's urn scheme  129
  5.5 Independence and relevance  134
  5.6 Genetical models  144
  Exercises  150
Chapter 6: Mean, Variance and Transforms  156
  6.1 Basic properties of expectation  156
  6.2 The density case  160
  6.3 Multiplication theorem; variance and covariance  164
  6.4 Multinomial distribution  171
  6.5 Generating function and the like  177
  Exercises  185
Chapter 7: Poisson and Normal Distributions  192
  7.1 Models for Poisson distribution  192
  7.2 Poisson process  199
  7.3 From binomial to normal  210
  7.4 Normal distribution  217
  7.5 Central limit theorem  220
  7.6 Law of large numbers  227
  Exercises
Appendix 2: Stirling's Formula and DeMoivre-Laplace's Theorem  237
Chapter 8: From Random Walks to Markov Chains  240
  8.1 Problems of the wanderer or gambler  240
  8.2 Limiting schemes  246
  8.3 Transition probabilities  252
  8.4 Basic structure of Markov chains  260
  8.5 Further developments  267
  8.6 Steady state  274
  8.7 Winding up (or down?)  286
  Exercises  296
Appendix 3: Martingale  305
General References  306
Answers to Problems  309
Index  323

Chapter 1: Set

1.1. Sample sets

These days school children are taught about sets. A second grader* was asked to name "the set of girls in his class."
This can be done by a complete list such as: "Nancy, Florence, Sally, Judy, Ann, Barbara, ..." A problem arises when there are duplicates. To distinguish between two Barbaras one must indicate their family names or call them B1 and B2. The same member cannot be counted twice in a set.

The notion of a set is common in all mathematics. For instance in geometry one talks about "the set of points which are equidistant from a given point." This is called a circle. In algebra one talks about "the set of integers which have no other divisors except 1 and itself." This is called the set of prime numbers. In calculus the domain of definition of a function is a set of numbers, e.g., the interval (a, b); so is the range of a function if you remember what it means.

In probability theory the notion of a set plays a more fundamental role. Furthermore we are interested in very general kinds of sets as well as specific concrete ones. To begin with the latter kind, consider the following examples:

(a) a bushel of apples;
(b) fifty-five cancer patients under a certain medical treatment;
(c) all the students in a college;
(d) all the oxygen molecules in a given container;
(e) all possible outcomes when six dice are rolled;
(f) all points on a target board.

Let us consider at the same time the following "smaller" sets:

(a') the rotten apples in that bushel;
(b') those patients who respond positively to the treatment;
(c') the mathematics majors of that college;
(d') those molecules which are traveling upwards;
(e') those cases when the six dice show different faces;
(f') the points in a little area called the "bull's-eye" on the board.

* My son Daniel.

We shall set up a mathematical model for these and many more such examples that may come to mind, namely we shall abstract and generalize our intuitive notion of "a bunch of things." First we call the things points, then we call the bunch a space; we prefix them by the word "sample" to distinguish these terms from other usages, and also to allude to their statistical origin. Thus a sample point is the abstraction of an apple, a cancer patient, a student, a molecule, a possible chance outcome, or an ordinary geometrical point. The sample space consists of a number of sample points, and is just a name for the totality or aggregate of them all. Any one of the examples (a)-(f) above can be taken to be a sample space, but so also may any one of the smaller sets in (a')-(f'). What we choose to call a space [a universe] is a relative matter.

Let us then fix a sample space to be denoted by Ω, the capital Greek letter omega. It may contain any number of points, possibly infinite but at least one. (As you have probably found out before, mathematics can be very pedantic!) Any of these points may be denoted by ω, the small Greek letter omega, to be distinguished from one another by various devices such as adding subscripts or dashes (as in the case of the two Barbaras if we do not know their family names), thus ω1, ω2, ω', .... Any partial collection of the points is a subset of Ω, and since we have fixed Ω we will just call it a set. In extreme cases a set may be Ω itself or the empty set which has no point in it. You may be surprised to hear that the empty set is an important entity and is given a special symbol ∅. The number of points in a set S will be called its size and denoted by |S|; thus it is a nonnegative integer or ∞. In particular |∅| = 0.
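To make these notions concrete, here is a small Python sketch, an illustration added alongside the text rather than taken from it, that represents the second grader's class as a finite sample space; the names are only those suggested above, with the two Barbaras kept distinct as B1 and B2.

```python
# Illustrative sketch (not from the book): finite sample spaces as Python sets.

# A sample space: the set of girls in the second grader's class.
omega = {"Nancy", "Florence", "Sally", "Judy", "Ann", "B1", "B2"}

# Any partial collection of the points is a subset of omega.
barbaras = {"B1", "B2"}
empty = set()                      # the empty set, written ∅ in the text

print(barbaras <= omega)           # True: barbaras is a subset of omega
print(len(omega), len(barbaras))   # sizes |Ω| = 7 and |S| = 2
print(len(empty))                  # |∅| = 0

# The same member cannot be counted twice in a set:
omega.add("B1")
print(len(omega))                  # still 7
```

Python's built-in set behaves just as the sets described here: only membership matters, duplicates are ignored, and len plays the role of the size |S|.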
A particular set S is well defined if it is possible to tell whether any given point belongs to it or not. These two cases are denoted respectively by ω ∈ S and ω ∉ S. Thus a set is determined by a specified rule of membership. For instance, the sets in (a')-(f') are well defined up to the limitations of verbal descriptions. One can always quibble about the meaning of words such as "a rotten apple," or attempt to be funny by observing, for instance, that when dice are rolled on a pavement some of them may disappear into the sewer. Some people of a pseudo-philosophical turn of mind get a lot of mileage out of such caveats, but we will not indulge in them here.

Now, one sure way of specifying a rule to determine a set is to enumerate all its members, namely to make a complete list as the second grader did. But this may be tedious if not impossible. For example, it will be shown in §3.1 that the size of the set in (e) is equal to 6^6 = 46656. Can you give a quick guess how many pages of a book like this will be needed just to record all these possibilities of a mere throw of six dice? On the other hand it can be described in a systematic and unmistakable way as the set of all ordered 6-tuples of the form (s1, s2, s3, s4, s5, s6), where each entry is one of the numbers 1, 2, 3, 4, 5, 6.
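As a quick check of that count, the following Python sketch (again an illustration, not part of the original text) enumerates the whole sample space of example (e) and also counts the smaller set (e') of throws in which all six faces differ; the figure 6! = 720 for (e') is not stated above but falls out of the same enumeration.

```python
# Illustrative sketch (not from the book): enumerating the sample space of example (e).
from itertools import product

# All ordered 6-tuples (s1, ..., s6) with each entry a face from 1 to 6.
outcomes = list(product(range(1, 7), repeat=6))
print(len(outcomes))          # 46656, i.e. 6^6, as stated in the text
print(len(outcomes) == 6**6)  # True

# The smaller set (e'): those outcomes in which the six dice show different faces.
all_different = [t for t in outcomes if len(set(t)) == 6]
print(len(all_different))     # 720 = 6!, the number of orderings of six distinct faces
```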
