
Maximum Entropy and Bayesian Methods: Boise, Idaho, USA, 1997 Proceedings of the 17th International Workshop on Maximum Entropy and Bayesian Methods of Statistical Analysis

299 Pages·1998·15.709 MB·English

Preview

Maximum Entropy and Bayesian Methods

Fundamental Theories of Physics
An International Book Series on The Fundamental Theories of Physics: Their Clarification, Development and Application

Editor: ALWYN VAN DER MERWE, University of Denver, U.S.A.

Editorial Advisory Board:
LAWRENCE P. HORWITZ, Tel-Aviv University, Israel
BRIAN D. JOSEPHSON, University of Cambridge, U.K.
CLIVE KILMISTER, University of London, U.K.
PEKKA J. LAHTI, University of Turku, Finland
GÜNTER LUDWIG, Philipps-Universität, Marburg, Germany
NATHAN ROSEN, Israel Institute of Technology, Israel
ASHER PERES, Israel Institute of Technology, Israel
EDUARD PRUGOVECKI, University of Toronto, Canada
MENDEL SACHS, State University of New York at Buffalo, U.S.A.
ABDUS SALAM, International Centre for Theoretical Physics, Trieste, Italy
HANS-JÜRGEN TREDER, Zentralinstitut für Astrophysik der Akademie der Wissenschaften, Germany

Volume 98

Maximum Entropy and Bayesian Methods
Boise, Idaho, U.S.A., 1997
Proceedings of the 17th International Workshop on Maximum Entropy and Bayesian Methods of Statistical Analysis

edited by
Gary J. Erickson
Joshua T. Rychert
Department of Electrical Engineering, Boise State University, Boise, Idaho, U.S.A.
and
C. Ray Smith
Fayetteville, Tennessee, U.S.A.

SPRINGER-SCIENCE+BUSINESS MEDIA, B.V.

A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN 978-94-010-6111-7
ISBN 978-94-011-5028-6 (eBook)
DOI 10.1007/978-94-011-5028-6

Printed on acid-free paper

All Rights Reserved
© 1998 Springer Science+Business Media Dordrecht
Originally published by Kluwer Academic Publishers in 1998
Softcover reprint of the hardcover 1st edition 1998
No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.

CONTENTS

IN MEMORY OF EDWIN T. JAYNES ... vii
PREFACE ... ix
MASSIVE INFERENCE AND MAXIMUM ENTROPY
    John Skilling ... 1
CV-NP BAYESIANISM BY MCMC
    Carlos Rodriguez ... 15
WHICH ALGORITHMS ARE FEASIBLE? A MAXENT APPROACH
    D. E. Cooke, V. Kreinovich, and L. Longpre ... 25
MAXIMUM ENTROPY, LIKELIHOOD AND UNCERTAINTY: A COMPARISON
    Amos Golan ... 35
PROBABILISTIC METHODS FOR DATA FUSION
    Ali Mohammad-Djafari ... 57
WHENCE THE LAWS OF PROBABILITY?
    Anthony J. M. Garrett ... 71
BAYESIAN GROUP ANALYSIS
    W. von der Linden, V. Dose, and A. Ramaswami ... 87
SYMMETRY-GROUP JUSTIFICATION OF MAXIMUM ENTROPY METHOD AND GENERALIZED MAXIMUM ENTROPY METHODS IN IMAGE PROCESSING
    Olga Kosheleva ... 101
PROBABILITY SYNTHESIS, HOW TO EXPRESS PROBABILITIES IN TERMS OF EACH OTHER
    Anthony J. M. Garrett ... 115
INVERSION BASED ON COMPUTATIONAL SIMULATIONS
    Kenneth Hanson, G. S. Cunningham, and S. S. Saquib ... 121
MODEL COMPARISON WITH ENERGY CONFINEMENT DATA FROM LARGE FUSION EXPERIMENTS
    R. Preuss, V. Dose, and W. von der Linden ... 137
DECONVOLUTION BASED ON EXPERIMENTALLY DETERMINED APPARATUS FUNCTIONS
    V. Dose, R. Fischer, and W. von der Linden ... 147
A BAYESIAN APPROACH FOR THE DETERMINATION OF THE CHARGE DENSITY FROM ELASTIC ELECTRON SCATTERING DATA
    A. Mohammad-Djafari and H. G. Miller ... 153
INTEGRATED DEFORMABLE BOUNDARY FINDING USING BAYESIAN STRATEGIES
    Amit Chakraborty and James Duncan ... 171
SHAPE RECONSTRUCTION IN X-RAY TOMOGRAPHY FROM A SMALL NUMBER OF PROJECTIONS USING DEFORMABLE MODELS
    Ali Mohammad-Djafari and Ken Sauer ... 183
AN EMPIRICAL MODEL OF BRAIN SHAPE
    James Gee and L. Le Briquer ... 199
DIFFICULTIES APPLYING BLIND SOURCE SEPARATION TECHNIQUES TO EEG AND MEG
    Kevin H. Knuth ... 209
THE HISTORY OF PROBABILITY THEORY
    Anthony J. M. Garrett ... 223
WE MUST CHOOSE THE SIMPLEST PHYSICAL THEORY: LEVIN-LI-VITANYI THEOREM AND ITS POTENTIAL PHYSICAL APPLICATIONS
    D. Fox, M. Schmidt, M. Koshelev, V. Kreinovich, L. Longpre, and J. Kuhn ... 239
MAXIMUM ENTROPY AND ACAUSAL PROCESSES: ASTROPHYSICAL APPLICATIONS AND CHALLENGES
    M. Koshelev ... 253
COMPUTATIONAL EXPLORATION OF THE ENTROPIC PRIOR OVER SPACES OF LOW DIMENSIONALITY
    Holly E. Fitzgerald and Everett G. Larson ... 263
ENVIRONMENTALLY-ORIENTED PROCESSING OF MULTI-SPECTRAL SATELLITE IMAGES: NEW CHALLENGES FOR BAYESIAN METHODS
    S. A. Starks and V. Kreinovich ... 271
MAXIMUM ENTROPY APPROACH TO OPTIMAL SENSOR PLACEMENT FOR AEROSPACE NON-DESTRUCTIVE TESTING
    R. Osegueda, C. Ferregut, M. J. George, J. M. Gutierrez, and V. Kreinovich ... 277
MAXIMUM ENTROPY UNDER UNCERTAINTY
    Henryk Gzyl ... 291
SUBJECT INDEX ... 296

IN MEMORY OF EDWIN T. JAYNES

With the passing of Edwin Thompson Jaynes on April 30, 1998, his many friends in the MAXENT community and beyond must say good-bye to a very special person. His openness and unselfishness, his independent and original thought, and his uncompromising high standards have made an indelible impact. His written work was so lucid that it was in and of itself a pleasure to read; his speaking style was every bit as penetrating and intelligible as his writing.
Beyond his prodigious scientific contributions and wisdom, much more could be said about those personal qualities which made Ed's friendship over the years a rare privilege. But as anyone who knew him understands, Ed believed that such matters are by their nature private, and would have been uncomfortable with public profession of the grief which naturally accompanies this loss. He will be keenly missed.

PREFACE

This volume has its origin in the Seventeenth International Workshop on Maximum Entropy and Bayesian Methods, MAXENT 97. The workshop was held at Boise State University in Boise, Idaho, on August 4-8, 1997. As in the past, the purpose of the workshop was to bring together researchers in different fields to present papers on applications of Bayesian methods (these include maximum entropy) in science, engineering, medicine, economics, and many other disciplines. Thanks to significant theoretical advances and the personal computer, much progress has been made since our first Workshop in 1981. As indicated by several papers in these proceedings, the subject has matured to a stage in which computational algorithms are the objects of interest, the thrust being on feasibility, efficiency and innovation. Though applications are proliferating at a staggering rate, some in areas that hardly existed a decade ago, it is pleasing that due attention is still being paid to the foundations of the subject.

The following list of descriptors, applicable to papers in this volume, gives a sense of its contents: deconvolution, inverse problems, instrument (point-spread) function, model comparison, multisensor data fusion, image processing, tomography, reconstruction, deformable models, pattern recognition, classification and group analysis, segmentation/edge detection, brain shape, marginalization, algorithms, complexity, Ockham's razor as an inference tool, foundations of probability theory, symmetry, history of probability theory and computability.

MAXENT 97 and these proceedings could not have been brought to final form without the support and help of a number of people. In particular, SCP Global Technologies helped with the realization of MAXENT 97. The editors, Gary Erickson, Josh Rychert, and Ray Smith, express their gratitude to all the speakers and are appreciative of the additional time and effort authors expended in producing a finished manuscript.

This preface must end on a sad note: Professor E. T. Jaynes died on April 30, 1998, in St. Louis, Missouri.

MASSIVE INFERENCE AND MAXIMUM ENTROPY

JOHN SKILLING
Department of Applied Mathematics and Theoretical Physics
University of Cambridge
England CB3 9EW

Abstract. In data analysis, maximum entropy (MaxEnt) has been used to reconstruct measures (i.e. positive, additive distributions) from limited data. The MaxEnt prior was originally derived from the "monkey model" in which quanta of uniform intensity could appear randomly in the field of view. To avoid undue digitisation, the quanta had to be small, and this led to difficulties with the Law of Large Numbers, and to unavoidable approximations in computing the posterior. A better way of avoiding digitisation is to give the quanta variable intensity with an exponential prior, that being the natural MaxEnt assignment. We call this technique "Massive Inference" (MassInf). Although the entropy formula no longer appears in the prior, MassInf results show improved quality. MassInf is also capable of assigning a simple prior for polarized images.
Key words: Maximum entropy, infinitely divisible, polarization, regularization

1. History of maximum entropy

As presented by Jaynes (1957), the Principle of Maximum Entropy (PME) is a rule for assigning probability distributions: in making inferences on the basis of partial information, we are to use that probability distribution which has maximum entropy subject to whatever ensemble-average constraints are known. Given some mean values

    $D_k = \sum_i R_{ki}\, p_i$   (1)

that include the normalisation $\sum_i p_i = 1$, the probability distribution p is to be assigned by maximising its entropy

    $S = -\sum_i p_i \log p_i$   (2)

(Shannon, 1948) subject to the given constraints. In statistical mechanics, the entropy S is derivable from the combinatoric number of ways

    $W = N! \big/ \prod_i n_i! \approx \exp(NS)$   (3)

of dividing an ensemble of N systems into cells with occupations $n_i$ having mean $N p_i$. Alternatively, if individual mean values $m_i$ are assigned, each occupation number can be given a Poisson distribution with the correct mean, leading to

    $\Pr(n) = \prod_i m_i^{n_i} e^{-m_i} / n_i!$   (4)

The PME amounts to the entirely reasonable prescription of giving equal intrinsic weight to each individual state. When the constraints D include a defining set of physical variables such as energy and volume, the maximum entropy distribution gives accurate predictions of other average quantities with different $R_k$. Fluctuations are generally $O(N^{-1/2})$ small, and deviations larger than this indicate an unacknowledged extra constraint.

It is tempting and productive to apply this successful formalism to data analysis, and this has been done both directly and indirectly. Indirect applications use the PME to assign the posterior probability distribution Pr(f|D) of the quantity f being sought. It is supposed that the data are ensemble-average constraints $D_k = \int R_k(f)\,\Pr(f)\,df$, so that the PME becomes applicable. The commonest such application is the derivation of a power spectrum from autocorrelation coefficients D of a time-series f. Actually, data tend to be observations of the particular object being investigated at the time, and the proper analysis is Bayesian. A prior probability Pr(f) needs to be assigned (by PME or other insight), but it is then not determined by the data, whose role is to modulate the prior through the usual likelihood function Pr(D|f).

For a direct application of MaxEnt, we suppose that we seek the distribution f of some positive, additive quantity such as the intensity of light in an image, or the flux of energy along a spectrum. Mathematicians call such an object a "measure". We then proceed with the "monkey model" (Gull and Daniell, 1978) in which f is identified with a number n of quanta of some unknown but presumed small strength q. With m similarly rescaled, Stirling's approximation for large n yields the Quantified Maximum Entropy (QME) prior

    $\Pr(f) \propto \exp(\alpha S) \big/ \prod_i f_i^{1/2}$   (5)

where m is a set of weights that models any a priori non-equivalence of the cells i, and $\alpha = 1/q$ is an unknown but apparently large hyper-parameter. When applying this, $\alpha$ cannot in fact be particularly large, lest the prior dominate the likelihood. Fortunately, the same entropy form can be derived from symmetry arguments which avoid combinatoric modelling (Shore and Johnson, 1980). Related arguments (Skilling, 1989) suggest the QME prior, implicitly assuming that the desirable entropy maximum should yield a useful selection from the posterior distribution.
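The PME prescription of equations (1) and (2) is straightforward to compute for small discrete problems. The following Python sketch, added here for illustration and not part of Skilling's paper, assigns the MaxEnt distribution over the faces of a die given only a mean face value; the target mean of 4.5 and the bisection solver are choices made for the example. The constrained maximum has the Gibbs form $p_i \propto e^{-\lambda i}$, with the single Lagrange multiplier $\lambda$ fixed by the mean-value constraint.

    import numpy as np

    # MaxEnt assignment for a die, given only the ensemble average <i> = 4.5.
    # (The target mean is a hypothetical value chosen for illustration.)
    faces = np.arange(1, 7)
    target_mean = 4.5

    def mean_of(lam):
        # Mean face value under the Gibbs-form distribution p_i ~ exp(-lam * i).
        w = np.exp(-lam * faces)
        p = w / w.sum()
        return p @ faces

    # mean_of is monotone decreasing in lam, so bisection locates the
    # multiplier that satisfies the mean-value constraint (1).
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mean_of(mid) > target_mean:
            lo = mid
        else:
            hi = mid

    lam = 0.5 * (lo + hi)
    p = np.exp(-lam * faces)
    p /= p.sum()
    S = -(p * np.log(p)).sum()    # Shannon entropy (2) of the assignment
    print("p =", p.round(4), " mean =", round(p @ faces, 4), " S =", round(S, 4))

Any other distribution satisfying the same mean has strictly lower entropy, which is the sense in which the PME gives equal intrinsic weight to every state compatible with the constraints.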
Were the data observations $D_k = \sum_i R_{ki} f_i$ to be exact, the PME could be used to assign a corresponding f, just as in statistical mechanics. Even if the data are noisy and thus subsumed into a likelihood function Pr(D|f), the PME can still be used to assign a particular f from among those with some acceptable fit $\Pr(D|f) \geq P_0$ to the data. Visually, distributions assigned by maximum entropy are often of high quality and utility. Entropy is a good regulariser.
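To see entropy acting as a regulariser, consider a toy deconvolution in the spirit of the direct application above: maximise $\alpha S(f) - \chi^2(f)/2$ over positive images f, using the entropy $S = \sum_i (f_i - m_i - f_i \log(f_i/m_i))$ measured relative to a flat model m. The Python sketch below is an illustration added here, not Skilling's algorithm: the Gaussian blur, noise level, fixed $\alpha$, and plain gradient ascent are all assumptions of the example, and the adjoint of the blur is only approximate at the array edges.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    truth = np.zeros(n)
    truth[20], truth[40] = 5.0, 3.0                # two point sources
    kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
    kernel /= kernel.sum()                         # normalised Gaussian blur

    def forward(f):                                # response R acting on f
        return np.convolve(f, kernel, mode="same")

    sigma = 0.05
    data = forward(truth) + sigma * rng.normal(size=n)

    m = np.full(n, 0.1)                            # flat prior model
    f = m.copy()
    alpha, step = 1.0, 0.002
    for _ in range(10000):
        resid = (forward(f) - data) / sigma**2
        grad_chi2 = np.convolve(resid, kernel[::-1], mode="same")  # ~ R^T residual
        grad_S = -np.log(f / m)                    # dS/df_i = -log(f_i / m_i)
        f = np.maximum(f + step * (alpha * grad_S - grad_chi2), 1e-8)

    print("f[20] =", round(float(f[20]), 2), " f[40] =", round(float(f[40]), 2))

The entropy term pulls unconstrained pixels toward the model m and keeps f positive, while the likelihood term concentrates mass near the two sources; the result is typically much sharper than the blurred data, without the noise amplification of naive inversion.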
