Intermediate Mathematical Statistics

G. P. BEAUMONT
Senior Lecturer, Department of Statistics and Computer Science, Royal Holloway College, London

CHAPMAN AND HALL
LONDON AND NEW YORK

First published 1980 by Chapman and Hall Ltd, 11 New Fetter Lane, London EC4P 4EE

© G. P. Beaumont 1980

ISBN-13: 978-0-412-15480-5
e-ISBN-13: 978-94-009-5794-7
DOI: 10.1007/978-94-009-5794-7

Published in the USA by Chapman and Hall in association with Methuen, Inc., 733 Third Avenue, New York, NY 10017

This paperback edition is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, re-sold, hired out, or otherwise circulated without the publisher's prior consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser.

All rights reserved. No part of this book may be reprinted, or reproduced or utilized in any form or by any electronic, mechanical or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publisher.

British Library Cataloguing in Publication Data
Beaumont, G. P.
Intermediate mathematical statistics.
1. Mathematical statistics
I.
Title
519.5 QA276
79-4061

Contents

Preface IX
Acknowledgements XI
Notation XII
Standard Distributions XIV
Introduction 1

1 Sufficiency
1.1 Introduction 9
1.2 Factorization criterion 13
1.3 Distribution of statistics conditional on a sufficient statistic 16
1.4 Joint sufficiency 16
1.5 Minimal sufficiency 19

2 Unbiased point estimators
2.1 Introduction 23
2.2 Rao-Blackwell theorem 27
2.3 The role of sufficient statistics 29
2.4 Completeness 31
2.5 Joint completeness 33
2.6 Sufficiency, completeness and independence 36
2.7 Minimum-variance bounds 40
2.8 Computation of a minimum-variance bound 44
2.9 Minimum attainable variance 45
2.10 Mean square error 47
2.11 Two parameters 48

3 Elementary decision theory and Bayesian methods
3.1 Comments on classical techniques 52
3.2 Loss functions 53
3.3 Decision theory 55
3.4 Bayes decisions 59
3.5 Using data 63
3.6 Computing posterior distributions 67
3.7 Conjugate distributions 72
3.8 Distribution of the next observation 74
3.9 More than one parameter 75
3.10 Decision functions 77
3.11 Bayes estimators 80
3.12 Admissibility 82

4 Methods of estimation
4.1 Introduction 85
4.2 Maximum likelihood estimation 86
4.3 Locating the maximum likelihood estimator 87
4.4 Estimation of a function of a parameter 88
4.5 Truncation and censoring 90
4.6 Estimation of several parameters 92
4.7 Approximation techniques 94
4.8 Large-sample properties 97
4.9 Method of least squares 100
4.10 Normal equations 104
4.11 Solution of the normal equations (non-singular case) 106
4.12 Use of matrices 107
4.13 Best unbiased linear estimation 108
4.14 Covariance matrix 111
4.15 Relaxation of assumptions 113

5 Hypothesis testing I
5.1 Introduction 116
5.2 Statistical hypothesis 118
5.3 Simple null hypothesis against simple alternative 120
5.4 Applications of the Neyman-Pearson theorem 125
5.5 Uniformly most powerful tests for a single parameter 130
5.6 Most powerful randomized tests 132
5.7 Hypothesis testing as a decision process 133
5.8 Minimax and Bayes tests 138

6 Hypothesis testing II
6.1 Two-sided tests for a single parameter 140
6.2 Neyman-Pearson theorem extension (non-randomized version) 141
6.3 Regular exponential family of distributions 145
6.4 Uniformly most powerful unbiased test of θ = θ₀ against θ ≠ θ₀ 146
6.5 Nuisance parameters 149
6.6 Similar tests 150
6.7 Composite hypotheses-several parameters 155
6.8 Likelihood ratio tests 159
6.9 Bayes methods 165
6.10 Loss function for one-sided hypotheses 165
6.11 Testing θ = θ₀ against θ ≠ θ₀ 169

7 Interval estimation
7.1 One parameter, Bayesian confidence intervals 173
7.2 Two parameters, Bayesian confidence regions 175
7.3 Confidence intervals (classical) 177
7.4 Most selective limits 180
7.5 Relationship to best tests 181
7.6 Unbiased confidence intervals 182
7.7 Nuisance parameters 184
7.8 Discrete distributions 185
7.9 Relationship between classical and Bayesian intervals 187
7.10 Large-sample confidence intervals 187

Appendix 1 Functions of random variables
A1.1 Introduction 193
A1.2 Transformations: discrete distributions 194
A1.3 Continuous distributions 196
A1.4 The order statistics 202

Appendix 2 The regular exponential family of distributions
A2.1 Single parameter 207
A2.2 Several parameters 210
A2.3 The regular exponential family of bivariate distributions 211

Further exercises 214
Brief solutions to further exercises 229
Further reading 242
Author index 243
Subject index 244

Preface

This book covers those basic topics which usually form the core of intermediate courses in statistical theory; it is largely about estimation and hypothesis testing. It is intended for undergraduates following courses in statistics but is also suitable preparatory reading for some postgraduate courses. It is assumed that the reader has completed an introductory course which covered probability, random variables, moments and the sampling distributions.
The level of mathematics required does not go beyond first-year calculus. In case the reader has not acquired much facility in handling matrices, the results in least squares estimation are first obtained directly and then given an (optional) matrix formulation. If techniques for changing from one set of variables to another have not been met, then the appendix on these topics should be studied first. The same appendix contains essential discussion of the order statistics, which are frequently used for illustrative purposes.

Introductory courses usually include the elements of hypothesis testing and of point and interval estimation, though the treatment must perforce become rather thin, since at that stage it is difficult to provide adequate justifications for some procedures, plausible though they may seem. This text discusses these important topics in considerable detail, starting from scratch. The level is nowhere advanced and proofs of asymptotic results are omitted. Methods deriving from the Bayesian point of view are gradually introduced and alternate with the more usual techniques.

Many illustrative examples have been included, since the average student typically grasps the import of a theorem by seeing how it is applied. Each chapter contains exercises which either give practice in techniques or take a previous example a little further. At the end of the book will be found a selection of typical questions for which brief solutions are provided.

London, June 1979
G. P. B.