Title Pages

University Press Scholarship Online · Oxford Scholarship Online

Bayesian Statistics 9
José M. Bernardo, M. J. Bayarri, James O. Berger, A. P. Dawid, David Heckerman, Adrian F. M. Smith, and Mike West
Print publication date: 2011
Print ISBN-13: 9780199694587
Published to Oxford Scholarship Online: January 2012
DOI: 10.1093/acprof:oso/9780199694587.001.0001

(p.iv) Great Clarendon Street, Oxford OX2 6DP. Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide in Oxford and New York: Auckland, Cape Town, Dar es Salaam, Hong Kong, Karachi, Kuala Lumpur, Madrid, Melbourne, Mexico City, Nairobi, New Delhi, Shanghai, Taipei, Toronto. With offices in Argentina, Austria, Brazil, Chile, Czech Republic, France, Greece, Guatemala, Hungary, Italy, Japan, Poland, Portugal, Singapore, South Korea, Switzerland, Thailand, Turkey, Ukraine, Vietnam. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

Published in the United States by Oxford University Press Inc., New York. © Oxford University Press 2011. The moral rights of the authors have been asserted. Database right Oxford University Press (maker). First published 2011.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this book in any other binding or cover and you must impose the same condition on any acquirer.

British Library Cataloguing in Publication Data: data available. Library of Congress Cataloging in Publication Data: data available. Typeset by the editors using TeX. Printed in Great Britain on acid-free paper by CPI Antony Rowe, Chippenham, Wiltshire. ISBN 978-0-19-969458-7.

Preface

(p.v) The Ninth Valencia International Meeting on Bayesian Statistics was held in Benidorm (Alicante, Spain), 150 kilometres south of Valencia, from June 3rd to June 8th 2010, in conjunction with the Tenth World Meeting of the International Society for Bayesian Analysis (ISBA). Valencia 9/ISBA 10 continued the tradition of this premier conference series—established in 1979 with the First Valencia International Meeting—as the forum for a definitive overview of current concerns and activities in Bayesian statistics. In this tradition, Valencia 9/ISBA 10 encompassed an enormous range of theoretical and applied research, highlighting the breadth, vitality and impact of Bayesian thinking in interdisciplinary research across many fields, as well as the corresponding growth and vitality of core theory and methodology. The Valencia organizing committee invited experts in Bayesian statistics to present papers, each of which was followed by discussion led by an invited discussant.
These Proceedings* contain the 23 written versions of the invited papers together with their discussions. A further 40 talks, 3 tutorials and over 300 posters were presented in additional sessions organized by ISBA; a number of the resulting papers will be published, following a rigorous refereeing process, in the flagship journal of ISBA, Bayesian Analysis.

The Valencia 9 invited papers cover a broad range of topics. Foundational and core theoretical issues in statistics are addressed by several authors. Bernardo describes and overviews the use of reference priors and information‐based loss functions in a general and comprehensive approach to objective Bayesian estimation and testing, representing the major growth in the O‐Bayes literature in the last several years. Goldstein addresses fundamental conceptual and theoretical issues surrounding the interpretation of multiple sources and forms of uncertainty in the analysis of computer simulation, a critical and fast‐growing area of applied Bayesian statistics. Meng explores Bayesian‐frequentist interfaces, identifying Bayesian themes in new methods of adjusted profile likelihood while concluding that such approaches are generally invalid and incoherent, while Richardson, Evans and Robins discuss prior specification and reparametrization issues in causal inference.

The continued development of new and refined computational methods for complex Bayesian modelling is reflected in several papers. Chopin and (p.vi) Jacob introduce new methods of sequential Monte Carlo simulation based on free energy methods in physics, Huber and Schott describe novel adaptive Monte Carlo methods for marginal likelihood computations, while Lopes, Carvalho, Johannes and Polson describe and exemplify refined sequential simulation methods based on particle learning concepts.
Linking computational innovation with novel applied Bayesian decision theory, Gramacy and Lee discuss advances in optimization of broad interest in statistics and allied fields.

Methodology and substantive applications of flexible Bayesian modelling approaches are represented in several papers. Dunson and Bhattacharya discuss advances in non‐parametric Bayesian modelling for regression and classification, while Schmidt and Rodríguez develop non‐stationary spatial models for multivariate count data. The concept of sparsity modelling using structured priors in increasingly large and complex models is a pervasive theme in modern multivariate analysis. Frühwirth‐Schnatter and Wagner discuss shrinkage and variable selection in random effects models, Polson and Scott present detailed theoretical development of Bayesian regularization and shrinkage under new classes of priors, while both Richardson, Bottolo and Rosenthal and Vannucci and Stingo study sparsity modelling in multivariate regression and related models with substantive applications in genomics.

The theory and methodology of graphical modelling has represented a substantial growth area in Bayesian statistics and allied fields, and is represented in several papers. Consonni and La Rocca discuss the development and specification of prior distributions and model assessment in directed graphical models, Ickstadt, Bornkamp, Grzegorczyk, Wieczorek, Sheriff, Grecco and Zamir develop approaches to non‐parametric Bayesian network modelling, and Meek and Wexler develop new computational methods for approximate Bayesian inference in a wide class of graphical models.

While interdisciplinary applications are evident in many of the papers, several focus on advances in methodology for a specific applied field.
Among these, Carvalho, Lopes and Aguilar describe structured and dynamic Bayesian factor models and their rôles and uses in financial econometrics and portfolio decision making, while public policy related applications for drug surveillance are discussed by Madigan, Ryan, Simpson and Zorych in the context of pharmacovigilance. Studies in the physical and environmental sciences are represented by Loredo, who discusses advances in Bayesian analysis in astronomy and astrophysics, and by Tebaldi, Sansó and Smith, who discuss the rôles and relevance of hierarchical Bayesian modelling in climate change studies. Detailed applications in the molecular biosciences include the papers by Louis, Carvalho, Fallin, Irizarry, Li and Ruczinski, concerning (p.vii) Bayesian methods for statistical genetics using high‐throughput sequence data, and by Wilkinson, who develops modelling and parameter estimation for stochastic dynamic networks in systems biology.

Valencia 9 represents the final meeting in the series. From 2012 on, the biennial ISBA World Meetings will carry the flag forward. For over 30 years, the Valencia meetings have marked the tremendous growth of Bayesian statistics, and the corresponding broad adoption of Bayesian methods in applications in many fields. The meetings have also, from the first in 1979, helped to define and engender a professional collegiality that permeates the currently vibrant international intellectual community. Over these three decades, Bayesian methods have moved centrally into statistical work in many applied fields. Promoted and enabled by computational advances, the increasing adoption of Bayesian models and methods by non‐statisticians and applied statistical researchers from many fields has now moved to a level where the relevance and applicability of structured, model‐based probabilistic reasoning is widely understood and accepted.
As this continues, we are also experiencing a progressive breakdown of the historical prejudice against Bayesian thinking that was—in the late 1970s—a key reason for the establishment of the Valencia meetings. This change in statistical science at a fundamental level is a reason to celebrate the increasing success of Bayesian thinking, and to recognize the rôle played by the Valencia meetings over these three decades.

Valencia 9 would not have been successful without the collaboration with ISBA and the much appreciated financial support from the Universitat de València, the Section on Bayesian Statistical Science (SBSS) of the American Statistical Association, and the US National Science Foundation (NSF), National Institutes of Health (NIH) and Office of Naval Research Global (ONRG). We are also most grateful to Maylo Albiach, Lizbeth Román, Vera Tomazella and Dolores Tortajada for their invaluable assistance on matters administrative, technical and social, and in particular to Dolores Tortajada for preparing the final LaTeX version of these Proceedings.

J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith, M. West

(p.viii) Notes: (*) The Proceedings of previous meetings have been published: the first by the University Press, Valencia (1980), the second by North Holland, Amsterdam (1985), and the third, fourth, fifth, sixth, seventh and eighth by The Clarendon Press, Oxford (1988, 1992, 1996, 1999, 2003, 2007). The editors in each case were the members of the organizing committee.

Integrated Objective Bayesian Estimation and Hypothesis Testing
José M. Bernardo
DOI: 10.1093/acprof:oso/9780199694587.003.0001

Summary

The complete final product of Bayesian inference is the posterior distribution of the quantity of interest. Important inference summaries include point estimation, region estimation and precise hypothesis testing. Those summaries may appropriately be described as the solutions to specific decision problems which depend on the particular loss function chosen. The use of a continuous loss function leads to an integrated set of solutions where the same prior distribution may be used throughout. Objective Bayesian methods are those which use a prior distribution which only depends on the assumed model and the quantity of interest; as a consequence, objective Bayesian methods produce results which only depend on the assumed model and the data obtained. The combined use of the intrinsic discrepancy, an invariant information‐based loss function, and appropriately defined reference priors provides an integrated objective Bayesian solution to both estimation and hypothesis testing problems. The ideas are illustrated with a large collection of non‐trivial examples.

Keywords and Phrases: Foundations; Decision Theory; Kullback–Leibler Divergence; Intrinsic Discrepancy; Reference Analysis; Reference Priors; Point Estimation; Interval Estimation; Region Estimation; Precise Hypothesis Testing; Hardy–Weinberg Equilibrium; Contingency Tables.

1. Introduction

From a Bayesian viewpoint, the final outcome of any problem of inference is the posterior distribution of the vector of interest. Thus, given a probability model ℳ_z = {p(z | ω), z ∈ Z, ω ∈ Ω} which is assumed to describe the mechanism which has generated the available data z, all that can be said about any function θ(ω) ∈ Θ of the parameter vector ω is contained in its posterior distribution p(θ | z). This is deduced from standard probability theory arguments via the posterior distribution p(ω | z) ∝ p(z | ω) p(ω), which is based on the assumed prior p(ω).
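As a concrete numerical sketch of the update p(ω | z) ∝ p(z | ω) p(ω), consider a binomial model with a conjugate Beta prior. The prior parameters and data below are hypothetical, chosen only to make the arithmetic visible; they are not taken from the paper:

```python
# Hypothetical conjugate illustration of p(omega | z) ∝ p(z | omega) p(omega):
# omega is a binomial success probability with a Beta(a, b) prior, and the data
# z are s successes in n trials, so the posterior is Beta(a + s, b + n - s).
def beta_binomial_posterior(a, b, s, n):
    """Return the parameters of the Beta posterior after observing z = (s, n)."""
    return a + s, b + (n - s)

a_post, b_post = beta_binomial_posterior(a=1.0, b=1.0, s=7, n=10)
posterior_mean = a_post / (a_post + b_post)  # (1 + 7) / (1 + 7 + 1 + 3) = 2/3
```

Conjugacy is used here purely for convenience; the proportionality statement itself holds for any prior and likelihood.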
To facilitate the assimilation of the inferential contents of p(θ | z), one often tries to summarize the information contained in this posterior by (i) providing θ values which, in the light of the data, are likely to be close to its true value (estimation), and by (ii) (p.2) measuring the compatibility of the data with hypothetical values θ_0 ∈ Θ_0 ⊂ Θ of the vector of interest which might have been suggested by the research context (hypothesis testing). One would expect that the same prior p(ω), whatever its basis, could be used to provide both types of summaries. However, since the pioneering book by Jeffreys (1961), Bayesian methods have often made use of two radically different types of priors, some for estimation and some for hypothesis testing. We argue that this is certainly not necessary, and probably not convenient, and we describe a particular way of using a single prior for both purposes within the framework of Bayesian decision theory.

Many of the ideas described below have already appeared in the literature over the past few years. Thus, this is mainly an up‐to‐date review paper, which unifies notation, definitions and available results. However, it also contains some previously unpublished material. Section 2 formalizes the decision‐theoretic formulation of point estimation, region estimation and precise hypothesis testing, and emphasizes that the results are highly dependent on the choices of both the loss function and the prior distribution. Section 3 reviews a set of desiderata for loss functions to be used in stylized non‐problem‐specific theoretical inference, and defines the intrinsic discrepancy, an invariant information‐based loss function, which is suggested for general use in those circumstances.
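Since the intrinsic discrepancy recurs throughout the paper, a minimal numerical sketch may help fix ideas: it is the smaller of the two directed Kullback–Leibler divergences between a pair of distributions. The two discrete distributions below are hypothetical stand-ins, assumed to share a common finite support:

```python
import math

# Sketch of the intrinsic discrepancy between two distributions: the minimum
# of the two directed Kullback-Leibler divergences. The discrete distributions
# p and q below are hypothetical and share a common support.
def kl_divergence(p, q):
    """Directed KL divergence sum_i p_i log(p_i / q_i), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def intrinsic_discrepancy(p, q):
    """min{KL(p || q), KL(q || p)}: symmetric, non-negative, zero iff p == q."""
    return min(kl_divergence(p, q), kl_divergence(q, p))

p = [0.50, 0.30, 0.20]
q = [0.25, 0.25, 0.50]
delta = intrinsic_discrepancy(p, q)
```

Taking the minimum of the two directions yields a symmetric measure while retaining the invariance properties of the directed divergences.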
Section 4 describes objective Bayesian methods as those using a prior distribution which only depends on the assumed model, and reviews some basic concepts behind reference priors, a particular form of objective prior functions which is proposed for general use. In multiparameter problems, reference priors are known to depend on the quantity of interest; a criterion is proposed to select joint priors which could safely be used for a set of different quantities of interest. In Section 5, the combined use of the intrinsic discrepancy and appropriately chosen reference priors is proposed as an integrated objective Bayesian solution to both estimation and hypothesis testing problems. The theory is illustrated via many examples.

2. Bayesian Inference Summaries

Let z be the available data, which are assumed to have been generated as one random observation from the model ℳ_z = {p(z | ω), z ∈ Z, ω ∈ Ω}. Often, but not always, the data will consist of a random sample z = {x_1, …, x_n} from some distribution q(x | ω), with x ∈ 𝒳; in this case p(z | ω) = ∏_{i=1}^{n} q(x_i | ω) and Z = 𝒳^n. Let θ(ω) be the vector of interest. Without loss of generality, the model may explicitly be expressed in terms of θ, so that ℳ_z = {p(z | θ, λ), z ∈ Z, θ ∈ Θ, λ ∈ Λ}, where λ is some appropriately chosen nuisance parameter vector. Let p(θ, λ) = p(λ | θ) p(θ) be the assumed prior, and let p(θ | z) be the corresponding marginal posterior distribution of θ. Appreciation of the inferential contents of p(θ | z) may be enhanced by providing both point and region estimates of the vector of interest θ, and by declaring whether or not some context‐suggested specific value θ_0 (or maybe a set of values Θ_0) is compatible with the observed data z. A large number of Bayesian estimation and hypothesis testing procedures have been proposed in the literature. We argue that their choice is better made in decision‐theoretical terms.
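The point and region summaries just described can be read off a sample from the marginal posterior p(θ | z). In the sketch below, the Gaussian used to generate the draws is only a hypothetical stand-in for whatever posterior a real model would yield:

```python
import random

# Point and central credible-region summaries computed from posterior draws.
# The Gaussian generating the draws is a hypothetical stand-in for the
# marginal posterior p(theta | z) of an actual model.
random.seed(1)
draws = sorted(random.gauss(2.0, 0.5) for _ in range(20000))

n = len(draws)
point_estimate = sum(draws) / n  # posterior mean as a point estimate
# Central 95% credible region from the empirical 2.5% and 97.5% quantiles.
lower, upper = draws[int(0.025 * n)], draws[int(0.975 * n)]
```

As the paper goes on to argue, which point estimate and which region are appropriate is itself a decision problem, governed by the loss function chosen.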
Although it has been argued that the use of loss functions may not be directly relevant for inference problems, it is generally accepted that better inference procedures may often be obtained with the aid of decision‐theoretic machinery; this is certainly our point of view.

Let ℓ{θ_0, (θ, λ)} describe, as a function of the (unknown) parameter values (θ, λ) which have generated the available data, the loss to be suffered if, working with (p.3) model ℳ_z, the value θ_0 were used as a proxy for the unknown value of θ. As summarized below, point estimation, region estimation and hypothesis testing may all be appropriately described as specific decision problems using a common prior distribution and a common loss function. The results, which are obviously all conditional on the assumed model ℳ_z,
