INSTITUTE OF PHYSICS PUBLISHING    PHYSICS IN MEDICINE AND BIOLOGY
Phys. Med. Biol. 47 (2002) 961–975    PII: S0031-9155(02)29872-3
Iterative solution of dense linear systems arising from
the electrostatic integral equation in MEG
Jussi Rahola 1,3 and Satu Tissari 2,3
1 Simulintu Oy, Niittykummuntie 6 A 8, 02200 Espoo, Finland
2 CSC—Scientific Computing Ltd, PO Box 405, 02101 Espoo, Finland
3 Parts of the work for this paper were carried out in the Parallel Algorithms Project, CERFACS, 42 Av. G Coriolis, 31057 Toulouse Cedex 01, France.
E-mail: satu.tissari@csc.fi
Received 19 October 2001
Published 1 March 2002
Online at stacks.iop.org/PMB/47/961
Abstract
We study the iterative solution of dense linear systems that arise from
boundary element discretizations of the electrostatic integral equation in
magnetoencephalography (MEG). We show that modern iterative methods
can be used to decrease the total computation time by avoiding the time-
consuming computation of the LU decomposition of the coefficient matrix.
More importantly, the modern iterative methods make it possible to avoid
the explicit formation of the coefficient matrix which is needed when a
large number of unknowns are used. To study the convergence of iterative
solvers we examine the eigenvalue distributions of the coefficient matrices.
For the sphere we show how the eigenvalues of the integral operator
are approximated by the eigenvalues of the coefficient matrix when the
collocation and Galerkin methods are used as discretization methods. The
collocation method approximates the eigenvalues of the integral operator
directly. The Galerkin method produces a coefficient matrix that needs to
be preconditioned in order to maintain optimal convergence speed. With
the ILU(0) preconditioner, iterative methods converge fast, independently
of the number of discretization points, for both the collocation and Galerkin
approaches. The preconditioner has no significant effect on the total
computational time.
1. Introduction
In the inverse problem of magnetoencephalography (MEG) (Hämäläinen et al 1993) the source of electrical activity in the brain is to be localized from magnetic field measurements recorded outside the head. When the same method is applied to the heart, it
is called magnetocardiography (MCG). The related electrical recording techniques, electroencephalography (EEG) and electrocardiography (ECG), are in wide clinical use. The source localization using EEG is a much more challenging problem than that of MEG because of the spreading of the electric potential on the scalp caused by the insulating skull. In this paper we will be concentrating on MEG, but some of the results are applicable to the other measurement techniques.
To solve the inverse problem using MEG, a source distribution, such as a current dipole, is assumed for the electric activity in the brain. In addition to this, a conductor model of the brain is needed. In the source localization, a series of forward problems is solved, i.e., the magnetic field outside the head caused by a known source is calculated. The computed and measured magnetic fields are then compared and the source parameters (e.g., location, orientation and strength of the dipole) are adjusted using a nonlinear optimization method.
In the current paper we use a homogeneous single-compartment conductor model for the brain. It is sufficient to solve the integral equation for the electric potential on the surface of the brain as the skull insulates the surrounding tissues (Hämäläinen and Sarvas 1989). For MEG, the magnetic field outside the head can be obtained from the computed potential. For the accurate localization of the electric activity we need to have a realistically shaped model for the surface of the brain given by a triangular mesh. For the EEG source localization problems it is necessary to use a multi-compartment integral equation where piecewise homogeneous conductivity is assumed. The multi-compartment integral equations and their iterative solution will not be discussed in this paper.
In earlier studies of the bioelectromagnetic inverse problems, a direct solver has been used for the dense linear systems arising from the boundary element discretizations. Techniques based on LU factorizations of the coefficient matrix are practical because a series of problems has to be solved with the same coefficient matrix. On the other hand, the LU factorization based on Gaussian elimination is rather expensive to compute and thus forms a computational bottleneck together with the explicit formation of the coefficient matrix.
Modern iterative techniques for dense linear systems have been applied in many application areas. However, in the bioelectromagnetic literature, there have been only some references to the use of the simple Gauss–Seidel method for the inverse problem in MEG (Hämäläinen et al 1993). We apply modern Krylov-subspace iterative methods to this problem and present convergence and performance results for the iterative solvers.
The convergence of iterative solvers depends on the distribution of the eigenvalues of the coefficient matrices. We study the theoretical eigenvalues of the electrostatic integral operator in the case of a spherical source region. We describe how the theoretical eigenvalues are mapped to the eigenvalues of the coefficient matrix when the integral equation is discretized with the collocation and Galerkin methods. We also study the eigenvalues of the coefficient matrices for the realistic brain-shaped conductor models.
The convergence of iterative solvers can be accelerated with the use of proper preconditioning techniques. From the eigenvalue analysis we propose a simple preconditioning strategy and explain why this strategy should be an efficient one.
The most important reason for our study of the modern iterative methods presented in the present paper is not to replace the LU decomposition with the studied iterative methods as such, but to combine a modern iterative method with a method which makes it possible to solve the matrix equation without explicitly forming the matrix. In another paper we have studied a precorrected FFT method combined with a modern iterative method (Tissari and Rahola 2001). The combination of the two methods gives a very efficient tool for the solution of the forward problem with a large number of unknowns.
In the traditional method, the bottleneck is the formation of the coefficient matrix and its LU decomposition, which takes a huge amount of time. The complexities of the phases are O(N²) for the matrix formation, O(N³) for the computation of the LU decomposition, and O(N²) for the solution using the LU decomposition (with a small constant of proportionality). In contrast, the whole complexity for the iterative method and the precorrected FFT method is only O(N log N). Thus for large enough problems, the whole iterative solution is faster than the application of the LU decomposition in the traditional method. In addition, the memory requirements are significantly diminished. A large number of unknowns are needed in the evaluation of the improvements in the source localization accuracy due to the realistically shaped conductor models.
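As a rough illustration of these complexity estimates (the prefactors are problem dependent and are deliberately ignored in this sketch of ours), the asymptotic operation counts can be tabulated for a few problem sizes:

```python
import numpy as np

# Illustrative only: asymptotic operation counts with all constants set to one.
for N in (1_000, 10_000, 100_000):
    direct = N**2 + N**3           # O(N^2) matrix formation + O(N^3) LU factorization
    matrix_free = N * np.log(N)    # O(N log N) for the iterative/precorrected FFT approach
    print(f"N = {N:>7}: direct ~ {direct:.1e} ops, matrix-free ~ {matrix_free:.1e} ops")
```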
2. Boundary element method formulation
In this section we introduce the potential integral equation and discuss the properties of the corresponding integral operators. We introduce some notation that is used when the eigenvalues of the integral operator are computed. We also briefly discuss the discretization schemes and the deflation of the resulting matrix equation.
2.1. Potential integral equation and operator
In the forward problem of MEG we need to compute the potential V(r) on the surface of the brain due to the source. The brain is modelled as a homogeneous conductor of conductivity σ_1 that occupies the volume bounded by the surface S. The potential integral equation is given by Hämäläinen et al (1993):

V(r) = 2\frac{\sigma_0}{\sigma_1} V_\infty(r) + \frac{1}{2\pi} \int_S V(r') \frac{(r'-r)\cdot n(r')}{|r'-r|^3} \, dS', \qquad r \in S    (1)

where V_\infty(r) is the potential caused by the source in an infinite medium of unit conductivity σ_0, n is the outward unit normal and dS' is the element of area.
The above integral equation is well known in potential theory (see, e.g., Sloan 1992) and is related to the exterior Dirichlet problem for the Laplace equation. The equation is an integral equation of the second kind that uses the so-called double-layer representation of the potential. The operator in the equation is called the electrostatic integral operator.
Let us define the integral operator K_0 acting on the potential V(r):

K_0 V(r) = \frac{1}{2\pi} \int_S V(r') \frac{(r'-r)\cdot n(r')}{|r'-r|^3} \, dS'.    (2)

The function K_0 V(r) satisfies the Laplace equation for all functions V(r) and all points r that are not on the surface S (Sloan 1992).
On the surface the integral operator becomes singular. When r is a point outside the domain bounded by S and approaches a point r_0 on S, the so-called jump relations (Sloan 1992) yield

\lim_{r \to r_0 \in S} K_0 V(r) = K V(r_0) - V(r_0)    (3)

where the electrostatic integral operator K is defined as an improper integral by

K V(r) = \lim_{\epsilon \to 0} \frac{1}{2\pi} \int_{S,\, |r'-r| > \epsilon} V(r') \frac{(r'-r)\cdot n(r')}{|r'-r|^3} \, dS'.    (4)

In the above integral a small neighbourhood of r is excluded and subsequently a limit is taken where the size of the neighbourhood decreases to zero (de Munck 1992).
The integral in equation (1) must be understood in the sense of a Cauchy principal value, and the integral equation is thus equivalent to

V(r) = 2\frac{\sigma_0}{\sigma_1} V_\infty(r) + K V(r).    (5)
2.2. Discretization of the potential integral equation
We will now discuss the discretization of the integral equation (5). We will express the potential as a linear combination of N global basis functions h_j(r):

V(r) = \sum_{j=1}^{N} V_j h_j(r).    (6)
The geometry is described by a triangular mesh of the surface of the brain. We use either piecewise constant basis functions, the unknowns being the values of the potential on each triangle, or piecewise linear basis functions, where the unknowns are the values of the potential at the vertices of the triangular mesh. In the latter case, each global basis function is nonzero on the triangles that share the given vertex.
To form the coefficient matrix from the integral equation we use either the collocation or the Galerkin method of weighted residuals. In the collocation approach we require that the integral equation be satisfied at the so-called collocation points c_i. The number of these points is the same as the number of unknowns. For the piecewise constant basis functions the collocation points are at the centres of the triangles. For the piecewise linear basis functions the collocation points are at the vertices of the triangles.
We can now insert the potential expansion (6) into equation (5) and apply the collocation method, noting that a basis function h_j(r) is one at the collocation point c_j and zero at all other collocation points. This leads to the system of linear equations for the unknowns V_i

V_i = 2\frac{\sigma_0}{\sigma_1} V_\infty(c_i) + \sum_{j=1}^{N} V_j \, K h_j(c_i), \qquad i = 1, \ldots, N.    (7)
In matrix notation this reads as

(I - G)x = b    (8)

where x is the vector of unknowns V_i, b is the right-hand side vector given by

b_i = 2\frac{\sigma_0}{\sigma_1} V_\infty(c_i)    (9)

and the elements of the matrix G are given by

g_{ij} = K h_j(c_i).    (10)
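To make the collocation assembly concrete: for piecewise constant basis functions and i ≠ j, the entry g_ij = K h_j(c_i) reduces to the solid angle of triangle j seen from the centroid c_i, divided by 2π. The sketch below is our own illustrative code, not the authors'; it assumes a closed mesh with consistently oriented outward normals, uses the van Oosterom–Strackee solid-angle formula, and omits the deflation described in section 2.3.

```python
import numpy as np

def solid_angle(tri, obs):
    """Solid angle subtended at 'obs' by the triangle with vertex array 'tri' (3 x 3),
    computed with the van Oosterom-Strackee formula; the sign follows the vertex ordering."""
    r1, r2, r3 = tri[0] - obs, tri[1] - obs, tri[2] - obs
    n1, n2, n3 = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
    num = np.dot(r1, np.cross(r2, r3))
    den = (n1 * n2 * n3 + np.dot(r1, r2) * n3
           + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(num, den)

def collocation_matrix(vertices, triangles):
    """Assemble G for piecewise constant basis functions: g_ij is the double-layer
    integral of h_j evaluated at the centroid c_i, i.e. the solid angle of
    triangle j seen from c_i divided by 2*pi."""
    centroids = vertices[triangles].mean(axis=1)
    N = len(triangles)
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue   # self-term vanishes for a flat triangle: (r' - c_i) lies in its plane
            G[i, j] = solid_angle(vertices[triangles[j]], centroids[i]) / (2.0 * np.pi)
    return G
```

The system (8) would then be solved with the right-hand side (9) after applying the deflation of section 2.3.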
The Galerkin approach means that after the potential expansion (6) is substituted into the integral equation (5), the equation is multiplied by each of the basis functions h_i(r) in its turn and after that the result is integrated over the surface S. The Galerkin method was studied by Mosher et al (1999) and independently by the present authors (Tissari and Rahola 1998).

To simplify the notation we introduce an inner product defined by

\langle f(r), g(r) \rangle = \int_S f(r) g(r) \, dS.    (11)

Now the linear equations arising from the Galerkin approach for N triangles (i = 1, \ldots, N) can be written as

\sum_{j=1}^{N} V_j \langle h_i(r), h_j(r) \rangle = 2\frac{\sigma_0}{\sigma_1} \langle h_i(r), V_\infty(r) \rangle + \sum_{j=1}^{N} V_j \langle h_i(r), K h_j(r) \rangle.    (12)
In matrix notation this can be written as

(C - F)x = b    (13)

where the elements of matrix C are given by

c_{ij} = \langle h_i(r), h_j(r) \rangle    (14)

and those of F by

f_{ij} = \langle h_i(r), K h_j(r) \rangle    (15)

while the right-hand side b is given by

b_i = 2\frac{\sigma_0}{\sigma_1} \langle h_i(r), V_\infty(r) \rangle.    (16)
When piecewise linear basis functions are used, the matrix C is a sparse matrix with c_{ij} being nonzero only if vertices i and j share an edge in the triangular mesh. The matrix F is a dense matrix. Each global basis function h_i(r) is nonzero on the triangles sharing the vertex i. In practice, we use local elementwise basis functions that are nonzero only within one triangle, where their value is one at one of the vertices and zero at the two others. The matrices are assembled triangle by triangle using local basis functions and utilizing analytically integrated elements (de Munck 1992).
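As an illustration of this triangle-by-triangle assembly, the sparse matrix C of inner products ⟨h_i, h_j⟩ can be built from the standard local mass matrix of a flat triangle of area A, namely (A/12) [[2,1,1],[1,2,1],[1,1,2]] for local linear basis functions. The sketch below is ours and shows only the loop structure; the dense matrix F would be assembled in the same loop from the analytically integrated double-layer elements.

```python
import numpy as np
from scipy.sparse import lil_matrix

def mass_matrix(vertices, triangles):
    """Assemble the sparse Galerkin matrix C with entries <h_i, h_j> for
    piecewise linear basis functions, one triangle at a time."""
    local = np.array([[2.0, 1.0, 1.0],
                      [1.0, 2.0, 1.0],
                      [1.0, 1.0, 2.0]]) / 12.0
    C = lil_matrix((len(vertices), len(vertices)))
    for tri in triangles:
        v0, v1, v2 = vertices[tri]
        area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))
        for a in range(3):
            for b in range(3):
                C[tri[a], tri[b]] += area * local[a, b]
    return C.tocsr()
```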
2.3. Deflation
The integral equation determines the potential only up to an additive constant. In other words, if the potential V(r) satisfies the integral equation, so does the potential V(r) + v_1. In terms of the coefficient matrices this implies that the linear systems (8) and (13) are singular.

We can remove the singularity by requiring that the sum of the potentials V_i is zero, thereby fixing the constant v_1. In matrix terms, we will add the normalized vector of all ones, e = \frac{1}{\sqrt{N}}(1\,1\,\ldots\,1)^T, multiplied by a constant \gamma, to all equations, so that the modified coefficient matrix will be

A + \gamma e e^T    (17)

where A refers to the original coefficient matrix. This procedure is called a deflation (Barnard et al 1967).
The form of the coefficient matrix implies that e is the normalized eigenvector of the matrix corresponding to the eigenvalue zero: Ae = 0. The deflation has the effect that the simple eigenvalue zero is moved to the value \gamma and the rest of the eigenvalues remain unchanged. We will discuss the choice of \gamma in the next section.
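The rank-one update and the resulting shift of the zero eigenvalue are easy to verify numerically; in the sketch below (ours) the toy matrix is chosen only so that its rows sum to zero, standing in for a coefficient matrix with Ae = 0.

```python
import numpy as np

def deflate(A, gamma=1.0):
    """Add gamma * e e^T to A, where e is the normalized vector of all ones;
    this moves the zero eigenvalue of A to gamma and leaves the others unchanged."""
    n = A.shape[0]
    e = np.ones(n) / np.sqrt(n)
    return A + gamma * np.outer(e, e)

A = np.array([[ 1.0, -0.5, -0.5],
              [-0.5,  1.0, -0.5],
              [-0.5, -0.5,  1.0]])                   # singular: A e = 0
print(np.sort(np.linalg.eigvals(A).real))            # [0.0, 1.5, 1.5]
print(np.sort(np.linalg.eigvals(deflate(A)).real))   # [1.0, 1.5, 1.5]
```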
3. Eigenvalues
3.1. Eigenvalues of the integral operator and different discretizations
In this section we study the eigenvalues of the potential integral operator and the corresponding coefficient matrices. The distribution of the eigenvalues governs the convergence of the iterative solvers, and suggests how to deflate the linear system.

First, we will show how the collocation and Galerkin methods will approximate the true eigenvalues of the integral operator 1 - K appearing in the integral equation (5). For this operator, we will write the eigenvalue equation

(1 - K)V(r) = \lambda V(r)    (18)
where \lambda is an eigenvalue and V(r) is an eigenfunction. To solve this numerically, we use the potential expansion (6). When we apply the collocation method to the previous equation, we obtain the algebraic eigenvalue problem

(I - G)x = \lambda x    (19)

where the matrix G and the vector x are the same as in the previous section. Thus the eigenvalues of the coefficient matrix I - G in equation (8) for the collocation method directly approximate the eigenvalues of the integral equation.
For the Galerkin method the discretized eigenvalue equation is a generalized eigenvalue problem

(C - F)x = \lambda C x.    (20)

This equation can be premultiplied by the matrix C^{-1} to transform it to the ordinary eigenvalue problem

(I - C^{-1}F)x = \lambda x.    (21)

Thus, the eigenvalues of the Galerkin coefficient matrix C - F from equation (13) are not the same as the eigenvalues of the matrix I - C^{-1}F of equation (21) above and hence they do not approximate the eigenvalues of the integral equation directly.
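In practice this comparison can be made without forming C^{-1} explicitly, by solving the generalized problem (20) directly. A minimal sketch (function names are ours; C, F and G are assumed to come from a boundary element assembly such as the ones sketched above) is:

```python
import numpy as np
from scipy.linalg import eig

def collocation_spectrum(G):
    """Eigenvalues of the collocation coefficient matrix I - G."""
    return np.sort_complex(np.linalg.eigvals(np.eye(G.shape[0]) - G))

def galerkin_spectrum(C, F):
    """Eigenvalues of I - C^{-1} F, computed from the generalized problem
    (C - F) x = lambda C x without inverting C."""
    lam = eig(C - F, C, right=False)   # scipy solves A x = lambda B x
    return np.sort_complex(lam)
```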
For a sphere, the eigenvalues of the integral operator 1 - K appearing in equation (5) can be found with a straightforward calculation using the spherical harmonics (Ahner and Arenstorf 1986). The result is given in the following theorem.
Theorem 3.1. When the integral operator K is applied on the surface S of a sphere, the eigenvalues and eigenfunctions of 1 - K are given by

(1 - K)Y_n^m(\theta, \phi) = \frac{2n}{2n+1} Y_n^m(\theta, \phi), \qquad n = 0, 1, \ldots \quad m = -n, \ldots, n

where Y_n^m(\theta, \phi) are the spherical harmonics as defined in Arfken (1985).
3.2. Deflation
We can now return to the question of deflating the coefficient matrix, i.e., the choice of \gamma in the transformation A \to A + \gamma e e^T. From theorem 3.1 we see that the analytical eigenvalues for the sphere converge towards one and thus to move the zero eigenvalue to one would be ideal in terms of the solution of the linear systems. Therefore, we have set \gamma = 1 for the collocation approach.

In the Galerkin approach we would like to deflate the matrix I - C^{-1}F with the value \gamma = 1, because we know the eigenvalue distribution of this matrix for the sphere. This is equivalent to changing the original coefficient matrix C - F to C - F + C e e^T.
3.3. Comparison of analytical and numerical eigenvalues
For a sphere, we present a comparison of the numerically computed and analytic eigenvalues showing the quality of the eigenvalue approximations from different boundary element method formulations. The numerical eigenvalues were computed using the LAPACK library. The first tests were run using a discretization of the sphere with 720 triangles and 362 vertices. In figures 1 and 2 we show the eigenvalues of the coefficient matrix of the collocation method using the piecewise constant and linear basis functions, respectively, together with the analytically computed eigenvalues for the sphere. In the plots we omit the zero eigenvalue which has been deflated from the coefficient matrix.
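For reference, the analytic spectrum of theorem 3.1 is trivial to generate, and the numerical spectrum of the deflated collocation matrix can be obtained with a dense eigenvalue routine. The sketch below is ours and uses numpy, which calls the same LAPACK routines; the matrix G is assumed to be assembled elsewhere.

```python
import numpy as np

def analytic_eigenvalues(n_max):
    """Eigenvalues 2n/(2n+1) of 1 - K for the sphere, n = 1..n_max, each with
    multiplicity 2n + 1; the n = 0 eigenvalue 0 is the one removed by deflation."""
    return np.concatenate([np.full(2 * n + 1, 2.0 * n / (2 * n + 1))
                           for n in range(1, n_max + 1)])

def numerical_eigenvalues(G, gamma=1.0):
    """Spectrum of the deflated collocation matrix I - G + gamma * e e^T."""
    N = G.shape[0]
    e = np.ones(N) / np.sqrt(N)
    A = np.eye(N) - G + gamma * np.outer(e, e)
    return np.sort(np.linalg.eigvals(A))
```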
Figure 1. Numerical eigenvalues (pluses) for the collocation method using piecewise constant basis functions together with the analytical eigenvalues (circles) when the unit sphere is discretized with 720 triangles. Note the different scales for the real and imaginary axes.
Figure 2. Numerical eigenvalues (pluses) for the collocation method using piecewise linear basis functions together with the analytical eigenvalues (circles) when the unit sphere is discretized with 720 triangles.
It can be seen from the plots that the quality of the eigenvalues is approximately the same for the two collocation methods. The piecewise constant basis functions generate a larger matrix from the same mesh, as the number of unknowns is the number of triangles. For the piecewise linear basis functions, the number of unknowns is the number of vertices.
Figure 3. Numerical eigenvalues (pluses) for the Galerkin method using piecewise linear basis functions together with the analytical eigenvalues (circles) when the unit sphere is discretized with 720 triangles. Note the different scales for the real and imaginary axes.
In figure 3 we show the analytical eigenvalues together with the eigenvalues of the matrix I - C^{-1}F, where C - F is the coefficient matrix for the Galerkin approach using piecewise linear basis functions. Note that the accuracy of the eigenvalues is much better for the Galerkin method than for the corresponding collocation approach.

To get a more quantitative measure for the accuracy of the computed eigenvalues, we computed the absolute value of the error in the smallest nonzero eigenvalue (2/3) for the three discretization schemes and for four different discretizations of the unit sphere. The results are shown in figure 4. It can be seen that the Galerkin method has a much smaller error than the two collocation approaches and that for the Galerkin method the error decreases much faster when the discretization is refined. Also in terms of the solution of the integral equation the Galerkin method gives more accurate results than the collocation methods (Mosher et al 1999). The faster convergence rate of the Galerkin method can also be expected from theoretical error analyses of integral equations (Sloan 1992).
Our final eigenvalue experiment consists of finding the eigenvalues for a brain-shaped mesh of 886 triangles. Figure 5 shows the eigenvalues of the matrix I - C^{-1}F of the piecewise linear Galerkin approach for this realistically shaped mesh. The eigenvalues are more spread out than for the sphere. Deflating the zero eigenvalue to 1 is still a good choice for the realistically shaped model, as plenty of eigenvalues cluster around this point.
4. Iterative methods
4.1. Modern iterative methods
For solving a system of linear equations, iterative methods offer an efficient alternative to direct methods, such as the LU decomposition. Starting from an initial guess the iterative methods produce iterates and the iteration can be stopped when a user-defined convergence criterion is met. Iterative methods can be very effective if the number of iterations needed for convergence is small.
Figure 4. Error in the smallest nonzero eigenvalue as a function of the number of unknowns for the piecewise constant collocation approach (pluses), the piecewise linear collocation approach (stars) and the piecewise linear Galerkin approach (circles).
Figure 5. Computed eigenvalues (pluses) for the Galerkin method using piecewise linear basis functions for a brain-shaped mesh of 886 triangles. Note the different scales for the real and imaginary axes.
Historically, the first iterative methods were stationary in the sense that the iterates were updated in the same way in each iteration. Stationary iterative methods such as the Jacobi or Gauss–Seidel methods are not used much nowadays because of their slow convergence. In modern nonstationary Krylov subspace methods the iterates are updated in each iteration
based on the convergence history. Such methods access the matrix only by matrix–vector products. Thus the whole matrix need not be formed explicitly; only its product with a given vector needs to be computed.
For symmetric linear systems the method of choice is the conjugate gradient method because at each iteration the error between its iterates and the true solution is the smallest possible in a certain norm (Barrett et al 1993). In addition, the new iterate of the conjugate gradient method is easy to compute from the previous one.
For nonsymmetric linear systems the situation is more complicated. The generalized minimal residual method (GMRES) produces iterates x_k such that the norm of the residual \|A x_k - b\| is the smallest possible at each iteration (Saad and Schultz 1986). However, GMRES needs all the previous iterates in the computation of the new iterate and therefore the cost of the method increases with the iteration number. A less expensive but suboptimal method (GMRES(k)) is obtained if GMRES is restarted every k iterations.
Other efficient methods for nonsymmetric linear systems include the quasi-minimal residual method (QMR) (Freund and Nachtigal 1991) and the bi-conjugate gradient stabilized method (Bi-CGSTAB) (van der Vorst 1992). Both these methods are simple to apply as they only require the storage of one previous iterate. However, both methods produce iterates that have only an approximate minimization property.
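Because Krylov methods only need matrix–vector products, they can be driven through a matrix-free interface. The sketch below is ours and uses scipy's restarted GMRES; the dense multiplication in the usage example stands in for a fast routine such as the precorrected FFT product of Tissari and Rahola (2001).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_forward(apply_matrix, b, restart=20):
    """Solve A x = b with restarted GMRES, given only a routine returning A @ x.
    'apply_matrix' may be a dense product or a matrix-free fast method."""
    N = b.size
    op = LinearOperator((N, N), matvec=apply_matrix)
    x, info = gmres(op, b, restart=restart)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return x

# usage with an explicit dense matrix A; a matrix-free product replaces the lambda
# x = solve_forward(lambda v: A @ v, b)
```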
4.2. Convergence of iterative methods
The convergence of iterative solvers is governed by the eigenvalue distribution of the coefficient matrices. Iterative solvers converge quickly if the eigenvalues are clustered and are all away from zero. In contrast, if there are both small and large eigenvalues, iterative solvers may converge very slowly.
The eigenvalues of the electrostatic integral operator are ideally suited for iterative solvers. If the deflation is carried out using the value \gamma = 1, the eigenvalues of a spherical conductor are located on the positive real axis on the interval [2/3, 1]. The coefficient matrices for the collocation method inherit this eigenvalue distribution directly, and thus the convergence should be very fast. The number of iterations can be expected to be independent of the number of unknowns, because as the discretization is refined, the distribution of eigenvalues of the coefficient matrices and thus the behaviour of iterative solvers remains practically constant. For the realistically shaped brain model, the eigenvalues are more spread out, but the distribution is still very good for iterative solvers.
4.3. Iterative methods for the realistic brain-shaped model
To choose an iterative method for realistic conductor models, we studied the performance of unpreconditioned iterative solvers. As a stopping criterion and error measure we use the normwise backward error (Chaitin-Chatelin and Frayssé 1996)

\eta(x_m) = \frac{\|b - A x_m\|_2}{\alpha \|x_m\|_2 + \|b\|_2}    (22)

where the constant \alpha should approximate the norm of the matrix A. In the code we use the value \alpha = 1, which is the two-norm of the coefficient matrix in the case of a spherical conductor model. The normwise backward error measures the relative error in the solution and is independent of the scaling of the right-hand side vector b.
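A direct implementation of this stopping criterion needs only one extra matrix–vector product per check; a small sketch (ours) is:

```python
import numpy as np

def backward_error(apply_matrix, x, b, alpha=1.0):
    """Normwise backward error eta(x) = ||b - A x||_2 / (alpha ||x||_2 + ||b||_2);
    alpha should approximate ||A||, and alpha = 1 is used for the spherical model."""
    r = b - apply_matrix(x)
    return np.linalg.norm(r) / (alpha * np.linalg.norm(x) + np.linalg.norm(b))
```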
In figure 6 we compare the convergence of QMR, Bi-CGSTAB and restarted GMRES for a matrix arising from a realistically shaped mesh of 886 triangles and using the piecewise constant collocation technique. For GMRES we used the restart parameters 5, 10 and 20.