
Probability Theory: A Concise Course

157 Pages·1977·4.981 MB·English

Preview Probability Theory: A Concise Course

PROBABILITY THEORY: A CONCISE COURSE

Y. A. ROZANOV

Revised English Edition
Translated and Edited by Richard A. Silverman

DOVER PUBLICATIONS, INC., NEW YORK

Copyright © 1969 by Richard A. Silverman. All rights reserved under Pan American and International Copyright Conventions.

Published in Canada by General Publishing Company, Ltd., 30 Lesmill Road, Don Mills, Toronto, Ontario. Published in the United Kingdom by Constable and Company, Ltd., 10 Orange Street, London WC2H 7EG.

This Dover edition, first published in 1977, is an unabridged and slightly corrected republication of the revised English edition published by Prentice-Hall, Inc., Englewood Cliffs, N.J., in 1969 under the title Introductory Probability Theory.

International Standard Book Number: 0-486-63544-9
Library of Congress Catalog Card Number: 77-78592

Manufactured in the United States of America
Dover Publications, Inc., 180 Varick Street, New York, N.Y. 10014

EDITOR'S PREFACE

This book is a concise introduction to modern probability theory and certain of its ramifications. By deliberate succinctness of style and judicious selection of topics, it manages to be both fast-moving and self-contained.

The present edition differs from the Russian original (Moscow, 1968) in several respects:

1. It has been heavily restyled with the addition of some new material. Here I have drawn from my own background in probability theory, information theory, etc.

2. Each of the eight chapters and four appendices has been equipped with relevant problems, many accompanied by hints and answers. There are 150 of these problems, in large measure drawn from the excellent collection edited by A. A. Sveshnikov (Moscow, 1965).

3. At the end of the book I have added a brief Bibliography, containing suggestions for collateral and supplementary reading.

R. A. S.

CONTENTS

1  BASIC CONCEPTS, Page 1.
1. Probability and Relative Frequency, 1.
2. Rudiments of Combinatorial Analysis, 4.
Problems, 10.

2  COMBINATION OF EVENTS, Page 13.
3. Elementary Events. The Sample Space, 13.
4. The Addition Law for Probabilities, 16.
Problems, 22.

3  DEPENDENT EVENTS, Page 25.
5. Conditional Probability, 25.
6. Statistical Independence, 30.
Problems, 34.

4  RANDOM VARIABLES, Page 37.
7. Discrete and Continuous Random Variables. Distribution Functions, 37.
8. Mathematical Expectation, 44.
9. Chebyshev's Inequality. The Variance and Correlation Coefficient, 48.
Problems, 50.

5  THREE IMPORTANT PROBABILITY DISTRIBUTIONS, Page 54.
10. Bernoulli Trials. The Binomial and Poisson Distributions, 54.
11. The De Moivre-Laplace Theorem. The Normal Distribution, 59.
Problems, 65.

6  SOME LIMIT THEOREMS, Page 68.
12. The Law of Large Numbers, 68.
13. Generating Functions. Weak Convergence of Probability Distributions, 70.
14. Characteristic Functions. The Central Limit Theorem, 75.
Problems, 80.

7  MARKOV CHAINS, Page 83.
15. Transition Probabilities, 83.
16. Persistent and Transient States, 87.
17. Limiting Probabilities. Stationary Distributions, 93.
Problems, 98.

8  CONTINUOUS MARKOV PROCESSES, Page 102.
18. Definitions. The Sojourn Time, 102.
19. The Kolmogorov Equations, 105.
20. More on Limiting Probabilities. Erlang's Formula, 109.
Problems, 112.

APPENDIX 1  INFORMATION THEORY, Page 115.
APPENDIX 2  GAME THEORY, Page 121.
APPENDIX 3  BRANCHING PROCESSES, Page 127.
APPENDIX 4  PROBLEMS OF OPTIMAL CONTROL, Page 136.

BIBLIOGRAPHY, Page 143.
INDEX, Page 145.

BASIC CONCEPTS

1. Probability and Relative Frequency

Consider the simple experiment of tossing an unbiased coin. This experiment has two mutually exclusive outcomes, namely "heads" and "tails." The various factors influencing the outcome of the experiment are too numerous to take into account, at least if the coin tossing is "fair." Therefore the outcome of the experiment is said to be "random."
Everyone would certainly agree that the "probability of getting heads" and the "probability of getting tails" both equal 1/2. Intuitively, this answer is based on the idea that the two outcomes are "equally likely" or "equiprobable," because of the very nature of the experiment. But hardly anyone will bother at this point to clarify just what he means by "probability."

Continuing in this vein and taking these ideas at face value, consider an experiment with a finite number of mutually exclusive outcomes which are equiprobable, i.e., "equally likely because of the nature of the experiment." Let A denote some event associated with the possible outcomes of the experiment. Then the probability P(A) of the event A is defined as the fraction of the outcomes in which A occurs. More exactly,

    P(A) = N(A)/N,    (1.1)

where N is the total number of outcomes of the experiment and N(A) is the number of outcomes leading to the occurrence of the event A.

Example 1. In tossing a well-balanced coin, there are N = 2 mutually exclusive equiprobable outcomes ("heads" and "tails"). Let A be either of these two outcomes. Then N(A) = 1, and hence

    P(A) = 1/2.

Example 2. In throwing a single unbiased die, there are N = 6 mutually exclusive equiprobable outcomes, namely getting a number of spots equal to each of the numbers 1 through 6. Let A be the event consisting of getting an even number of spots. Then there are N(A) = 3 outcomes leading to the occurrence of A (which ones?), and hence

    P(A) = 3/6 = 1/2.

Example 3. In throwing a pair of dice, there are N = 36 mutually exclusive equiprobable outcomes, each represented by an ordered pair (a, b), where a is the number of spots showing on the first die and b the number showing on the second die. Let A be the event that both dice show the same number of spots. Then A occurs whenever a = b, i.e., N(A) = 6. Therefore

    P(A) = 6/36 = 1/6.

Remark. Despite its seeming simplicity, formula (1.1) can lead to nontrivial calculations.
In fact, before using (1.1) in a given problem, we must find all the equiprobable outcomes, and then identify all those leading to the occurrence of the event A in question.

The accumulated experience of innumerable observations reveals a remarkable regularity of behavior, allowing us to assign a precise meaning to the concept of probability not only in the case of experiments with equiprobable outcomes, but also in the most general case. Suppose the experiment under consideration can be repeated any number of times, so that, in principle at least, we can produce a whole series of "independent trials under identical conditions,"¹ in each of which, depending on chance, a particular event A of interest either occurs or does not occur. Let n be the total number of experiments in the whole series of trials, and let n(A) be the number of experiments in which A occurs. Then the ratio

    n(A)/n

is called the relative frequency of the event A (in the given series of trials). It turns out that the relative frequencies n(A)/n observed in different series of

¹ Concerning the notion of independence, see Sec. 6, in particular footnote 2, p. 31.
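As an illustrative sketch (not part of the book's text), formula (1.1) can be checked by brute-force enumeration of the equiprobable outcomes, here applied to Examples 2 and 3; the helper name `classical_probability` is my own:

```python
from fractions import Fraction
from itertools import product

def classical_probability(outcomes, event):
    """P(A) = N(A)/N for a finite list of equiprobable outcomes,
    where `event` is a predicate selecting the outcomes in A."""
    n = len(outcomes)                              # N
    n_a = sum(1 for o in outcomes if event(o))     # N(A)
    return Fraction(n_a, n)

die = list(range(1, 7))

# Example 2: an even number of spots on a single die.
print(classical_probability(die, lambda s: s % 2 == 0))  # 1/2

# Example 3: both dice show the same number of spots;
# product() generates all 36 ordered pairs (a, b).
pairs = list(product(die, die))
print(classical_probability(pairs, lambda p: p[0] == p[1]))  # 1/6
```

Using exact `Fraction` arithmetic avoids floating-point noise and reproduces the book's answers 1/2 and 1/6 exactly.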
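The relative frequency n(A)/n can likewise be illustrated by simulation; the following is a hypothetical sketch (function names, seed, and trial count are my own, not the book's), showing that for the event of Example 3 the relative frequency settles near the probability 1/6:

```python
import random

def relative_frequency(trial, event, n):
    """Run n independent trials and return n(A)/n, the
    relative frequency with which the event A occurred."""
    n_a = sum(1 for _ in range(n) if event(trial()))
    return n_a / n

def roll_pair():
    """One trial: throw a pair of fair dice."""
    return random.randint(1, 6), random.randint(1, 6)

random.seed(1)  # fixed seed so the run is reproducible
# Event A: both dice show the same number of spots (P(A) = 1/6).
freq = relative_frequency(roll_pair, lambda p: p[0] == p[1], 100_000)
print(round(freq, 3))  # typically close to 1/6 ≈ 0.167
```

With 100,000 trials the observed n(A)/n deviates from 1/6 by only a few thousandths, anticipating the regularity of behavior described above.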

