
EEG/MEG Source Reconstruction: Textbook for Electro- and Magnetoencephalography

429 Pages·2022·24.613 MB·English


Thomas R. Knösche · Jens Haueisen
EEG/MEG Source Reconstruction: Textbook for Electro- and Magnetoencephalography

Thomas R. Knösche, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Jens Haueisen, Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany

ISBN 978-3-030-74916-3
ISBN 978-3-030-74918-7 (eBook)
https://doi.org/10.1007/978-3-030-74918-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

At the time of the seminal discovery of the human electroencephalogram (EEG) by Hans Berger in the 1920s, this technology was the first and only possibility to non-invasively watch the human brain at work at its intrinsic time resolution. It remained so for several decades until the late 1960s, when the human magnetoencephalogram (MEG) was discovered by David Cohen, adding a complementary view to the EEG. In the initial phases of EEG and MEG, a huge treasure of empirical knowledge about the interpretation of these data in terms of the general state of the brain and its diseases was accumulated. However, the relationship of these signals to their exact origins in the brain remained mostly vague and indirect. Starting in the 1970s, novel neuroimaging methods emerged: first, positron emission tomography (PET) and single-photon emission computerized tomography (SPECT), and later, functional magnetic resonance imaging (fMRI). These techniques made it possible, for the first time, to spatially pinpoint active brain areas with relatively high precision. Unfortunately, they reflected neural activity only indirectly, and its dynamics were largely lost. Still, EEG and MEG remained the only ways to directly observe the dynamics of interacting neurons. More powerful computational hardware in the 1980s led to research efforts aiming to endow EEG and MEG with more spatial information by mathematically modeling the mixing of the signals through the head's conductive tissues and the inversion of these models. The field of source reconstruction, or source localization, was born. These techniques allow for studying the spatial and temporal organization of dynamic networks of neurons and thereby greatly facilitate the understanding of the actual mechanisms behind normal and pathological brain function.
Such understanding leads to completely novel approaches in diagnosis and treatment that affect millions of patients with disorders such as Alzheimer's, Parkinson's, epilepsy, schizophrenia, and depression, to name but a few. Source reconstruction techniques have also revolutionized our understanding of the way we think, feel, and behave.

Over the last four decades, a large number of EEG/MEG source reconstruction methods have been developed. However, because of the inherent non-uniqueness of the solutions of the associated inverse problems, none of them could claim to be the ultimate and universal solution to the source reconstruction problem. Instead, each method embodies different assumptions about the underlying neural activity, the geometry and properties of the head tissues, and the noise in the data. These assumptions are often implicit and not directly obvious to the user. This makes it difficult to assess their appropriateness in a particular situation and thereby hampers the selection of suitable methods as well as the adequate interpretation of results.

In this book, we offer a unified perspective on a broad range of EEG/MEG source reconstruction methods, with particular emphasis on their respective assumptions about sources, data, head tissues, and sensor properties. As a unifying principle, we use Bayes' theorem, which states in beautiful simplicity how different sources of information are combined by taking into account their probability distributions. This view allows us to track down the actual assumptions buried in each method and perform fair comparisons between them in the light of particular situations. While the selection of methods treated in this book cannot be complete, the unifying framework should also provide insights into new and currently unknown source reconstruction techniques.

This book is intended as basic reading for everybody who is engaged with EEG/MEG source reconstruction, be it as a method developer or as a user.
In particular, the latter group may gain from a systematic insight into the nature of source reconstruction algorithms and their often implicit assumptions, allowing for an appropriate selection of methods and a more accurate interpretation of results. While no specialized knowledge is required and everything is developed starting from quite basic concepts, some basic understanding of applied mathematics, including linear algebra, differential and integral calculus, and probability theory, is necessary.

Finally, we would like to thank the numerous people without whom this book would not have been possible. Two sources of inspiration and direct input were essential: our joint master's course on inverse problems in bioelectromagnetism and our joint series of international Ph.D. summer schools. We thank our students as well as our many speakers at the summer schools. We also thank our co-workers Shih-Cheng Chien, Christoph Dinh, Lorenz Esch, Eva-Maria Dölker, Patrique Fiedler, Uwe Graichen, Alexander Hunold, Thomas Jochmann, Sascha Klee, Stephan Lau, Bojana Petkovic, Herrmann Sonntag, Daniel Strohmeier, Mirko Fuchs, Tim Kunze, and Konstantin Weise for their valuable input.

Thomas R. Knösche, Leipzig, Germany
Jens Haueisen, Ilmenau, Germany
December 2020

Mathematical Notation and Symbols

This book uses standardized notation for many mathematical and physical expressions and quantities, which are listed and explained in this section. Besides these, there are numerous additional symbols of only local scope, which are explained in the text, in order to keep this section compact. Note that in some cases, the same symbol is assigned different meanings, where common conventions exist about these symbols and the respective contexts are clearly separate.
General Rules and Symbols

The rules concerning lower/upper case letters are sometimes deviated from in order to comply with general conventions, especially for electric/magnetic field quantities (e.g., $\vec{A}$ for the magnetic vector potential).

$a, \phi$: Lower case italic letters (Latin or Greek) denote scalar quantities.
$\vec{a}, \vec{\phi}$: Arrows above the symbols mark (column) vectors in three-dimensional space.
$\bar{a}, \bar{\phi}$: Bars above the symbol mark (column) vectors in any space.
$\bar{\bar{a}}, \bar{\bar{\phi}}$: Double bars mark tensors.
$A, \Phi$: Upper case letters (Latin or Greek) denote matrices.

Vectors and matrices can be represented by bracketed expressions, where omitted elements are replaced by dots. Note that the elements can be vectors or matrices themselves:

$$\bar{x}=\begin{pmatrix}x_1\\\vdots\\x_N\end{pmatrix},\qquad X=\begin{pmatrix}X_{11}&\cdots&X_{1M}\\\vdots&&\vdots\\X_{N1}&\cdots&X_{NM}\end{pmatrix}$$

$a^T, \bar{a}^T, A^T$: Transpose of a vector or matrix.
$A^{-1}$: Inverse of a matrix.
$A^{+}$: Moore-Penrose pseudoinverse of a matrix. If $A$ has linearly independent columns: $A^{+}=(A^TA)^{-1}A^T$; if the rows are independent: $A^{+}=A^T(AA^T)^{-1}$.
$A\cdot B$: Matrix (dot) product. For vectors, identical to the scalar product. The dot is often omitted.
$A\circ B$: Element-wise (Hadamard) product.
$A\otimes B$: Kronecker product, where each element of $A$ is replaced by the product of this element and the entire matrix $B$.
$\vec{a}\times\vec{b}$: Vector (cross) product of two three-dimensional vectors.
$I$: Identity matrix, with 1 in the diagonal and 0 elsewhere.
$\bar{1}$: Identity vector, with every element equal to 1.
$\mathrm{diag}(A)$: Diagonalization of matrix $A$: all non-diagonal elements are set to zero.
$\mathrm{trace}(A)=\sum_{i=1}^{N}A_{ii}$: Trace of a square matrix.
$\det(A)=|A|$: Determinant of a square matrix. The version $|A|$ bears some confusion potential with norms (see below) and is therefore only used where brevity is absolutely mandatory.
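As an illustration not taken from the book, both pseudoinverse identities above, and the difference between the Hadamard and Kronecker products, can be checked numerically. The sketch below assumes NumPy; the matrices are arbitrary examples.

```python
import numpy as np

# Tall matrix with linearly independent columns: A+ = (A^T A)^(-1) A^T
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
A_plus = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(A_plus, np.linalg.pinv(A))

# Wide matrix with linearly independent rows: A+ = A^T (A A^T)^(-1)
B = A.T
B_plus = B.T @ np.linalg.inv(B @ B.T)
assert np.allclose(B_plus, np.linalg.pinv(B))

# Hadamard product keeps the shape; the Kronecker product multiplies shapes
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])
D = np.eye(2)
hadamard = C * D           # element-wise product, shape (2, 2)
kron = np.kron(C, D)       # each C[i, j] becomes a scaled copy of D, shape (4, 4)
```

Both closed-form pseudoinverses agree with NumPy's SVD-based `np.linalg.pinv`, which is the safer choice when the rank condition is uncertain.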
$\mathrm{vec}(A)$: Operator that concatenates the columns of a matrix into a single column vector.
$f(\cdot), g(\cdot)$: General function (other characters are possible); $(\cdot)$ is replaced by a comma-separated list of arguments. If the function returns a vector, it is written with a bar: $\bar{f}(\cdot)$.
$f(x)\big|_{x=x_0}$: Function evaluation for a particular value of the argument. This is often used instead of the simple $f(x_0)$ if the function involves derivatives.
$F(\cdot), G(\cdot)$: General functional (other characters are possible); $(\cdot)$ is replaced by a comma-separated list of function arguments.
$\mathbb{R}^M$: Real vector space of $M$ dimensions; often used to indicate the dimensionality of a vector: $\bar{x}\in\mathbb{R}^M$.
$\partial G$: Boundary of domain $G$. For example, if $G$ is a three-dimensional volume, then $\partial G$ is its surface.
$i$: Imaginary unit.
$\forall$: Universal quantifier, e.g., $y(x)=1,\ \forall x: x<1$, meaning $y(x)$ equals 1 for all $x$ with the property $x<1$.
$\exists$: Existential quantifier, e.g., $\exists x: x<1$, meaning there exists an $x$ with the property $x<1$.

Differential Operators

$\frac{d^n y(x)}{dx^n},\ \frac{\partial^n y(x,z)}{\partial x^n}$: $n$th total/partial derivative of function $y$ with respect to variable $x$.
$\dot{x}(t)=\frac{dx(t)}{dt},\ \ddot{x}(t)=\frac{d^2x(t)}{dt^2}$: First and second temporal derivative.
$\nabla=\left(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right)$: Nabla operator for spatial derivatives (usually applied in three-dimensional space).
$\nabla\cdot\vec{a}$: Divergence of a vector field.
$\nabla a$: Gradient of a scalar field.
$\nabla\times\vec{a}$: Curl of a vector field.
$\Delta=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}$: Laplace (delta) operator for the second spatial derivative in three-dimensional space.

If the operand depends on several location vectors, the $\nabla$ or $\Delta$ operator can be specified by a subscript, e.g., $\nabla_{\vec{r}_1}\cdot(\vec{r}_1-\vec{r}_2)$. This subscript can be omitted if the operand either depends only on a single location vector or if the derivative is with respect to $\vec{r}$. Multiplication dots ($\cdot$) are sometimes omitted in large equations.

$J$: Jacobi matrix of a vector-valued function $\bar{y}=\bar{f}(\bar{x})$, denoting $J_{ij}=\frac{\partial y_i}{\partial x_j}$.
$\mathrm{B}$: Matrix approximating the Laplacian (second spatial derivative) on a grid. Note that the symbol also has other uses.
$G_{\Delta}(\vec{r})$: Green's function of a differential operator (here the $\Delta$ operator, see above).

Norms

$\|\bar{x}\|_p=\left(\sum_{i=1}^{n}|x_i|^p\right)^{1/p}$: $\ell_p$ vector norm.
$\|X\|_p=\left(\sum_{i=1}^{n}\sum_{j=1}^{m}|x_{ij}|^p\right)^{1/p}$: $\ell_p$ matrix norm.

If the subscript is omitted, the $\ell_2$ (Euclidean) norm is meant. Note that upper subscripts at norms denote the power, e.g., $\|\bar{x}\|^2$ is the square of the $\ell_2$ norm.

Integrals

We use the notation $\int f(x)\,dx$, rather than the variant $\int dx\,f(x)$. If the integration variable is a position vector on a surface or in space, double or triple integral symbols can be used:

$\int_a^b f(r)\,dr$: Integral of $f(r)$ over scalar $r$ from $a$ to $b$.
$\iint_S f(\vec{r})\,d^2r$: Integral of $f(\vec{r})$, defined in three-dimensional space, over the surface $S$.
$\iiint_V f(\vec{r})\,d^3r$: Integral of $f(\vec{r})$, defined in three-dimensional space, over the volume $V$.
$\int_V f(\vec{r})\,d^nr$: Integral of $f(\vec{r})$, defined in $n$-dimensional space, over the volume $V$. The index $n$ is omitted when undetermined.

Special Functions

$H(t)=\begin{cases}0:&t<0\\1:&t\ge 0\end{cases}$: Heaviside step function.
$\delta(t)=\begin{cases}\infty:&t=0\\0:&t\ne 0\end{cases}$ with $\int_{-\infty}^{+\infty}\delta(t)\,dt=1$: Dirac delta function.
$\delta_{ij}=\begin{cases}1:&i=j\\0:&i\ne j\end{cases}$: Kronecker delta.
$N(\bar{\mu},\Sigma)$: Multivariate normal (Gaussian) distribution with vector expected value $\bar{\mu}$ and covariance matrix $\Sigma$.
$h(\vec{r})$: Base function, used for BEM and FEM.

Indexing

Indices are written as lower case italic letters, while counting limits are upper case italics, as in $\sum_{k=1}^{K}a_k$. Some letters have (relatively) fixed meanings in this context:

$n, N$: Index of measurement channel, number of channels.
$m, M$: Index of source (mostly dipole) position, number of dipole positions.
$D$: Number of dipoles (in a Bayesian context, in order to avoid confusion with model $M$).
$t, T$: Index of time step, number of time steps.
$N_S$: Number of surfaces in a BEM model.
$N_{E,k}$: Number of elements (usually triangles) on surface $k$.
$N_{V,k}$: Number of vertices on surface $k$.

General Quantities

$y, \bar{y}, Y$: General measurement value(s) (e.g., EEG or MEG), as single scalar, vector, or matrix (columns for time steps). A hat symbol (e.g., $\hat{y}$) indicates that the data is estimated and not measured.
$e, \bar{e}, E$: Error or residual term, as scalar, vector, or matrix.
$n, \bar{n}, N$: Noise term, as scalar, vector, or matrix.
$\vartheta, \bar{\vartheta}, \Theta$: General source parameter(s) (e.g., dipole strength or position), as single scalar, vector, or matrix (columns for time steps).
$\Sigma_{\mathrm{prior}}, \Sigma_{\mathrm{post}}$: Prior/posterior source covariance matrix.
$\bar{\vartheta}_{\mathrm{prior}}, \bar{\vartheta}_{\mathrm{post}}$: Prior/posterior source expected vector.
$\Sigma$: Noise covariance matrix.
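As a side note not from the book, the gradient, divergence, and Laplace operators defined above can be sanity-checked on a discrete grid. The sketch below assumes NumPy and uses the hypothetical scalar field $\phi(x,y)=x^2+y^2$, whose Laplacian is 4 everywhere.

```python
import numpy as np

# Scalar field phi(x, y) = x^2 + y^2 sampled on a regular grid
x = np.linspace(-1.0, 1.0, 201)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = X**2 + Y**2

# Gradient of a scalar field: nabla phi = (2x, 2y)
dphi_dx, dphi_dy = np.gradient(phi, h)

# Divergence of the gradient equals the Laplacian: nabla . (nabla phi) = 4
lap = np.gradient(dphi_dx, h, axis=0) + np.gradient(dphi_dy, h, axis=1)

# Central differences are exact for quadratics away from the boundary,
# where np.gradient falls back to one-sided differences.
assert np.allclose(lap[2:-2, 2:-2], 4.0)
```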

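The matrix B that approximates the Laplacian on a grid can be illustrated in one dimension. The following sketch uses the standard second-difference stencil (an assumption, not the book's construction) and applies it to samples of $f(x)=x^2$, whose second derivative is 2.

```python
import numpy as np

def laplacian_matrix(n, h=1.0):
    """Second-difference approximation of the 1-D Laplacian on n grid points."""
    B = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return B / h**2

x = np.linspace(0.0, 1.0, 11)
h = x[1] - x[0]
B = laplacian_matrix(len(x), h)

curvature = B @ x**2
# Interior rows reproduce f''(x) = 2 exactly for a quadratic; the first and
# last rows lack a neighbour and are not valid approximations there.
assert np.allclose(curvature[1:-1], 2.0)
```

In source reconstruction, such a matrix typically appears as a smoothness constraint on the source distribution rather than as a differential operator on data.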