MODEL REDUCTION OF DESCRIPTOR SYSTEMS BY INTERPOLATORY PROJECTION METHODS

SERKAN GUGERCIN*, TATJANA STYKEL†, AND SARAH WYATT‡

arXiv:1301.4524v1 [math.NA] 18 Jan 2013

* Serkan Gugercin is with the Department of Mathematics, Virginia Tech, Blacksburg, VA 24061-0123, USA, e-mail: [email protected].
† Tatjana Stykel is with Institut für Mathematik, Universität Augsburg, Universitätsstraße 14, 86159 Augsburg, Germany, e-mail: [email protected].
‡ Sarah Wyatt is with the Department of Mathematics, Indian River State College, Fort Pierce, FL 34981, USA, e-mail: [email protected].

Abstract. In this paper, we investigate the interpolatory projection framework for model reduction of descriptor systems. With a simple numerical example, we first illustrate that employing the subspace conditions from the standard state space setting to descriptor systems generically leads to unbounded $\mathcal{H}_2$ or $\mathcal{H}_\infty$ errors due to the mismatch of the polynomial parts of the full and reduced-order transfer functions. We then develop modified interpolatory subspace conditions based on the deflating subspaces that guarantee a bounded error. For the special cases of index-1 and index-2 descriptor systems, we also show how to avoid computing these deflating subspaces explicitly while still enforcing interpolation. The question of how to choose interpolation points optimally arises naturally, as in the standard state space setting. We answer this question in the framework of the $\mathcal{H}_2$-norm by extending the Iterative Rational Krylov Algorithm (IRKA) to descriptor systems. Several numerical examples are used to illustrate the theoretical discussion.

Key words. interpolatory model reduction, differential-algebraic equations, $\mathcal{H}_2$ approximation

AMS subject classifications. 41A05, 93A15, 93C05, 37M99

1. Introduction. We discuss interpolatory model reduction of differential-algebraic equations (DAEs), or descriptor systems, given by

    $E\dot{x}(t) = Ax(t) + Bu(t),$
    $y(t) = Cx(t) + Du(t),$                                                  (1.1)

where $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$ and $y(t) \in \mathbb{R}^p$ are the states, inputs and outputs, respectively, $E \in \mathbb{R}^{n \times n}$ is a singular matrix, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, and $D \in \mathbb{R}^{p \times m}$. Taking the Laplace transform of system (1.1) with zero initial condition $x(0) = 0$, we obtain $\hat{y}(s) = G(s)\hat{u}(s)$, where $\hat{u}(s)$ and $\hat{y}(s)$ denote the Laplace transforms of $u(t)$ and $y(t)$, respectively, and $G(s) = C(sE - A)^{-1}B + D$ is the transfer function of (1.1). Following the standard abuse of notation, we will denote both the dynamical system and its transfer function by $G$.

Systems of the form (1.1) with extremely large state space dimension $n$ arise in various applications such as electrical circuit simulation, multibody dynamics, or semidiscretized partial differential equations. Simulation and control in these large-scale settings impose a huge computational burden; efficient model utilization becomes crucial, and model reduction offers a remedy. The goal of model reduction is to replace the original dynamics in (1.1) by a model of the same form but with much smaller state space dimension, such that the reduced model is a high-fidelity approximation of the original one. Hence, we seek a reduced-order model

    $\tilde{E}\dot{\tilde{x}}(t) = \tilde{A}\tilde{x}(t) + \tilde{B}u(t),$
    $\tilde{y}(t) = \tilde{C}\tilde{x}(t) + \tilde{D}u(t),$                  (1.2)

where $\tilde{E}, \tilde{A} \in \mathbb{R}^{r \times r}$, $\tilde{B} \in \mathbb{R}^{r \times m}$, $\tilde{C} \in \mathbb{R}^{p \times r}$, and $\tilde{D} \in \mathbb{R}^{p \times m}$ with $r \ll n$, such that the error $y - \tilde{y}$ is small with respect to a specific norm over a wide range of inputs $u(t)$ with bounded energy.
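To make the objects above concrete for readers who want to experiment, the following short Python/NumPy sketch evaluates the transfer function $G(s) = C(sE-A)^{-1}B + D$ of a descriptor system at a given point. The matrices are small illustrative placeholders chosen for this snippet; they are not taken from the paper.

```python
import numpy as np

def transfer_function(E, A, B, C, D, s):
    """Evaluate G(s) = C (sE - A)^{-1} B + D without forming the inverse."""
    return C @ np.linalg.solve(s * E - A, B) + D

# Tiny illustrative descriptor system with a singular E (n = 3, m = p = 1).
E = np.diag([1.0, 1.0, 0.0])                 # rank(E) = 2 < n
A = np.array([[-1.0,  0.5,  0.0],
              [ 0.0, -2.0,  0.0],
              [ 0.0,  0.0,  1.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 0.0, 1.0]])
D = np.zeros((1, 1))

print(transfer_function(E, A, B, C, D, s=1.0j))   # frequency response at omega = 1
```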
In the frequency domain, the requirement that $y - \tilde{y}$ be small means that the transfer function of (1.2), given by $\tilde{G}(s) = \tilde{C}(s\tilde{E} - \tilde{A})^{-1}\tilde{B} + \tilde{D}$, approximates $G(s)$ well, i.e., the error $G(s) - \tilde{G}(s)$ is small in a certain system norm. The reduced-order model (1.2) can be obtained via projection as follows. We first construct two $n \times r$ matrices $V$ and $W$, approximate the full-order state $x(t)$ by $V\tilde{x}(t)$, and then enforce the Petrov-Galerkin condition

    $W^T\bigl(EV\dot{\tilde{x}}(t) - AV\tilde{x}(t) - Bu(t)\bigr) = 0, \qquad \tilde{y}(t) = CV\tilde{x}(t) + Du(t).$

As a result, we obtain the reduced-order model (1.2) with the system matrices

    $\tilde{E} = W^TEV, \quad \tilde{A} = W^TAV, \quad \tilde{B} = W^TB, \quad \tilde{C} = CV, \quad \tilde{D} = D.$        (1.3)

The projection matrices $V$ and $W$ determine the subspaces of interest and can be computed in many different ways.

In this paper, we consider projection-based interpolatory model reduction methods, where the choice of $V$ and $W$ enforces certain tangential interpolation of the original transfer function. These methods are presented in Section 2 in more detail. Projection-based interpolation with multiple interpolation points was initially proposed by Skelton et al. in [7, 31, 32]. Grimme [10] later developed a numerically efficient framework using the rational Krylov subspace method of Ruhe [24]. The tangential rational interpolation framework we use here is due to recent work by Gallivan et al. [9].

Unfortunately, it is often assumed that extending interpolatory model reduction from standard state space systems with $E = I$ to descriptor systems with singular $E$ is as simple as replacing $I$ by $E$. In Section 2, we present an example showing that this naive approach may lead to a poor approximation with an unbounded error $G(s) - \tilde{G}(s)$, even though the classical interpolatory subspace conditions are satisfied. In Section 3, we modify these conditions in order to enforce a bounded error. The theoretical result takes advantage of spectral projectors. Then, using the new subspace conditions, we extend in Section 4 the optimal $\mathcal{H}_2$ model reduction method of [15] to descriptor systems. Sections 3 and 4 make explicit use of deflating subspaces, which can be numerically demanding to compute for general problems. Thus, for the special cases of index-1 and index-2 descriptor systems, we show in Sections 5 and 6, respectively, how to apply interpolatory model reduction without explicitly computing the deflating subspaces. The theoretical discussion is supported by several numerical examples. In particular, in Section 5.2, we present an example where the balanced truncation approach [26] is prone to failing due to problems in solving the generalized Lyapunov equations, while (optimal) interpolatory model reduction can be applied effectively.

2. Model reduction by tangential rational interpolation. The goal of model reduction by tangential interpolation is to construct a reduced-order model (1.2) such that its transfer function $\tilde{G}(s)$ interpolates the original one, $G(s)$, at selected points in the complex plane along selected directions.
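Throughout, the reduced-order model is always formed by the projection step (1.3). A minimal Python sketch of that step is given below; here $V$ and $W$ are random placeholders, whereas the rest of the paper is about choosing them so that interpolation (and, later, the polynomial part) is enforced.

```python
import numpy as np

def petrov_galerkin(E, A, B, C, D, V, W):
    """Form the reduced-order matrices (1.3) from projection matrices V, W."""
    return W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V, D   # D is simply inherited

# Usage with arbitrary (non-interpolatory) projection matrices of size n x r.
rng = np.random.default_rng(0)
n, m, p, r = 10, 2, 2, 3
E = np.diag(np.r_[np.ones(n - 2), np.zeros(2)])          # singular E
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = np.zeros((p, m))
V = rng.standard_normal((n, r))
W = rng.standard_normal((n, r))

Et, At, Bt, Ct, Dt = petrov_galerkin(E, A, B, C, D, V, W)
print(Et.shape, At.shape, Bt.shape, Ct.shape)            # (3,3) (3,3) (3,2) (2,3)
```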
We will use the notation of [1] to define this problem more precisely. Given $G(s) = C(sE-A)^{-1}B + D$, the left interpolation points $\{\mu_i\}_{i=1}^q$, $\mu_i \in \mathbb{C}$, together with the left tangential directions $\{c_i\}_{i=1}^q$, $c_i \in \mathbb{C}^p$, and the right interpolation points $\{\sigma_j\}_{j=1}^r$, $\sigma_j \in \mathbb{C}$, together with the right tangential directions $\{b_j\}_{j=1}^r$, $b_j \in \mathbb{C}^m$, we seek to find a reduced-order model $\tilde{G}(s) = \tilde{C}(s\tilde{E}-\tilde{A})^{-1}\tilde{B} + \tilde{D}$ that is a tangential interpolant to $G(s)$, i.e.,

    $c_i^TG(\mu_i) = c_i^T\tilde{G}(\mu_i), \quad i = 1, \ldots, q,$
    $G(\sigma_j)b_j = \tilde{G}(\sigma_j)b_j, \quad j = 1, \ldots, r.$        (2.1)

Throughout the paper, we assume $q = r$, meaning that the same number of left and right interpolation points are used. In addition to interpolating $G(s)$, one might also ask for matching the higher-order derivatives of $G(s)$ along the tangential directions. This scenario will also be handled.

By combining the projection-based reduced-order modeling technique with the interpolation framework, we want to find $n \times r$ matrices $W$ and $V$ such that the reduced-order model (1.2), (1.3) satisfies the tangential interpolation conditions (2.1). This approach is called projection-based interpolatory model reduction. How to enforce the interpolation conditions via projection is shown in the following theorem, where the $\ell$-th derivative of $G(s)$ with respect to $s$ evaluated at $s = \sigma$ is denoted by $G^{(\ell)}(\sigma)$.

Theorem 2.1. [1, 9] Let $\sigma, \mu \in \mathbb{C}$ be such that $sE-A$ and $s\tilde{E}-\tilde{A}$ are both invertible for $s = \sigma, \mu$, and let $b \in \mathbb{C}^m$ and $c \in \mathbb{C}^p$ be fixed nontrivial vectors.
1. If
    $\bigl((\sigma E - A)^{-1}E\bigr)^{j-1}(\sigma E - A)^{-1}Bb \in \mathrm{Im}(V), \quad j = 1, \ldots, N,$        (2.2)
then $G^{(\ell)}(\sigma)b = \tilde{G}^{(\ell)}(\sigma)b$ for $\ell = 0, 1, \ldots, N-1$.
2. If
    $\bigl((\mu E - A)^{-T}E^T\bigr)^{j-1}(\mu E - A)^{-T}C^Tc \in \mathrm{Im}(W), \quad j = 1, \ldots, M,$        (2.3)
then $c^TG^{(\ell)}(\mu) = c^T\tilde{G}^{(\ell)}(\mu)$ for $\ell = 0, 1, \ldots, M-1$.
3. If both (2.2) and (2.3) hold, and if $\sigma = \mu$, then $c^TG^{(\ell)}(\sigma)b = c^T\tilde{G}^{(\ell)}(\sigma)b$ for $\ell = 0, 1, \ldots, M+N+1$.

One can see that to solve the rational tangential interpolation problem via projection, all one has to do is construct the matrices $V$ and $W$ as in Theorem 2.1. The dominant cost is solving sparse linear systems. We also note that in Theorem 2.1 the values that are interpolated are never explicitly computed. This is crucial, since that computation is known to be poorly conditioned [8].

To illustrate the result of Theorem 2.1 for the special case of Hermite bi-tangential interpolation, we take the same right and left interpolation points $\{\sigma_i\}_{i=1}^r$, left tangential directions $\{c_i\}_{i=1}^r$, and right tangential directions $\{b_i\}_{i=1}^r$. Then, for the projection matrices

    $V = \bigl[(\sigma_1E-A)^{-1}Bb_1, \ \cdots, \ (\sigma_rE-A)^{-1}Bb_r\bigr],$        (2.4)
    $W = \bigl[(\sigma_1E-A)^{-T}C^Tc_1, \ \cdots, \ (\sigma_rE-A)^{-T}C^Tc_r\bigr],$    (2.5)

the reduced-order model $\tilde{G}(s) = \tilde{C}(s\tilde{E}-\tilde{A})^{-1}\tilde{B} + \tilde{D}$ as in (1.3) satisfies

    $G(\sigma_i)b_i = \tilde{G}(\sigma_i)b_i, \quad c_i^TG(\sigma_i) = c_i^T\tilde{G}(\sigma_i), \quad c_i^TG'(\sigma_i)b_i = c_i^T\tilde{G}'(\sigma_i)b_i$        (2.6)

for $i = 1, \cdots, r$, provided $\sigma_iE-A$ and $\sigma_i\tilde{E}-\tilde{A}$ are both nonsingular.

Note that Theorem 2.1 does not distinguish between the singular-$E$ case and the standard state space case with $E = I$: the interpolation conditions hold as long as the matrices $\sigma_iE-A$ and $\sigma_i\tilde{E}-\tilde{A}$ are invertible.
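The following sketch (dense linear algebra, generic random data, and the standard state space case $E = I$ so that no polynomial-part issues arise yet) builds $V$ and $W$ as in (2.4)-(2.5) and numerically checks the zeroth-order tangential conditions in (2.6); for large sparse problems one would replace the dense solves by sparse factorizations.

```python
import numpy as np

def hermite_bases(E, A, B, C, sigmas, b_dirs, c_dirs):
    """Build V and W as in (2.4)-(2.5): one shifted linear solve per point."""
    V = np.column_stack([np.linalg.solve(s * E - A, B @ b)
                         for s, b in zip(sigmas, b_dirs)])
    W = np.column_stack([np.linalg.solve((s * E - A).T, C.T @ c)
                         for s, c in zip(sigmas, c_dirs)])
    return V, W

def G(E, A, B, C, D, s):
    return C @ np.linalg.solve(s * E - A, B) + D

rng = np.random.default_rng(1)
n, m, p = 12, 2, 2
E = np.eye(n)                                        # standard case E = I
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)
B, C = rng.standard_normal((n, m)), rng.standard_normal((p, n))
D = np.zeros((p, m))

sigmas = [1.0, 2.0, 4.0]
b_dirs = [rng.standard_normal(m) for _ in sigmas]
c_dirs = [rng.standard_normal(p) for _ in sigmas]

V, W = hermite_bases(E, A, B, C, sigmas, b_dirs, c_dirs)
Et, At, Bt, Ct = W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V

# Tangential interpolation G(sigma_i) b_i = Gt(sigma_i) b_i, cf. (2.6).
for s, b in zip(sigmas, b_dirs):
    print(np.allclose(G(E, A, B, C, D, s) @ b, G(Et, At, Bt, Ct, D, s) @ b))
```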
The fact that Theorem 2.1 makes no such distinction is the precise reason why it is often assumed that extending interpolation-based model reduction from $G(s) = C(sI-A)^{-1}B + D$ to $G(s) = C(sE-A)^{-1}B + D$ is as simple as replacing $I$ by $E$. However, as the following example shows, this is not the case.

Example 2.1. Consider an RLC circuit modeled by an index-2 SISO descriptor system (1.1) of order $n = 765$ (see, e.g., [20] for a definition of index). We approximate this system with a model (1.2) of order $r = 20$ using Hermite interpolation. The carefully chosen interpolation points were taken as the mirror images of the dominant poles of $G(s)$. Since these interpolation points are known to be good points for model reduction [11, 13], one would expect the interpolant to be a good approximation as well. However, the situation is in fact the opposite. Figure 2.1 shows the amplitude plots of the frequency responses $G(\imath\omega)$ and $\tilde{G}(\imath\omega)$ (upper plot) and of the error $G(\imath\omega) - \tilde{G}(\imath\omega)$ (lower plot). One can see that the error $G(\imath\omega) - \tilde{G}(\imath\omega)$ grows unbounded as the frequency $\omega$ increases; hence, the approximation is extremely poor, with unbounded $\mathcal{H}_2$ and $\mathcal{H}_\infty$ error norms, even though it satisfies Hermite interpolation at carefully selected, effective interpolation points.

[Fig. 2.1. Example 2.1: amplitude plots of $G(\imath\omega)$ and $\tilde{G}(\imath\omega)$ (upper); the absolute error $|G(\imath\omega) - \tilde{G}(\imath\omega)|$ (lower). Frequency axis from $10^8$ to $10^{16}$ rad/sec.]

The reason is simple. Even though $E$ is singular, $\tilde{E} = W^TEV$ will generically be a nonsingular matrix, assuming $r \le \mathrm{rank}(E)$. In this case, the transfer function $\tilde{G}(s)$ of the reduced-order model (1.2) is proper, i.e., $\lim_{s\to\infty}\tilde{G}(s) < \infty$, although $G(s)$ might be improper. Hence, special care needs to be taken in order to match the polynomial part of $G(s)$. We note that the polynomial part of $\tilde{G}(s)$ has to match that of $G(s)$ exactly; otherwise, regardless of how good the interpolation points are, the error will always grow unbounded. For the very special descriptor systems with proper transfer functions, and only for interpolation around $s = \infty$, a solution is offered in [5]. For descriptor systems of index 1, where the polynomial part of $G(s)$ is a constant matrix, a remedy is also suggested in [1] via an appropriate choice of $\tilde{D}$. However, the general case has remained unsolved. We will tackle precisely this problem, where (1.1) is a descriptor system of higher index, its transfer function $G(s)$ may have a higher-order polynomial part, and interpolation is at arbitrary points in the complex plane. Thereby, the spectral projectors onto the left and right deflating subspaces of the pencil $\lambda E - A$ corresponding to the finite eigenvalues will play a vital role. Moreover, we will show how to choose interpolation points and tangential directions optimally for interpolatory model reduction of descriptor systems.

3. Interpolatory projection methods for descriptor systems. As stated above, in order to have bounded $\mathcal{H}_\infty$ and $\mathcal{H}_2$ errors, the polynomial part of $\tilde{G}(s)$ has to match the polynomial part of $G(s)$ exactly. Let $G(s)$ be additively decomposed as

    $G(s) = G_{\rm sp}(s) + P(s),$        (3.1)

where $G_{\rm sp}(s)$ and $P(s)$ denote, respectively, the strictly proper part and the polynomial part of $G(s)$. We enforce the reduced-order model $\tilde{G}(s)$ to have the decomposition

    $\tilde{G}(s) = \tilde{G}_{\rm sp}(s) + \tilde{P}(s)$        (3.2)

with $\tilde{P}(s) = P(s)$.
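As a concrete illustration of the polynomial part and of the mechanism behind Example 2.1, consider the following tiny hand-built index-2 example (a sketch only, unrelated to the RLC model): its transfer function is $G(s) = -s$, i.e., purely polynomial, while $W^TEV$ is generically nonzero for any one-dimensional projection, so a reduced model of the form (1.2)-(1.3) is proper and cannot follow the growth of $G(\imath\omega)$.

```python
import numpy as np

# Index-2 toy DAE: E is nilpotent of index 2 and G(s) = C (sE - A)^{-1} B = -s.
E = np.array([[0.0, 1.0],
              [0.0, 0.0]])
A = np.eye(2)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def G(s):
    return (C @ np.linalg.solve(s * E - A, B)).item()

for w in (1e0, 1e2, 1e4):
    print(abs(G(1j * w)))      # grows like |omega|: the transfer function is improper

# A generic one-dimensional projection gives a nonzero (hence nonsingular) W^T E V,
# so the corresponding reduced model is proper and its error must grow with frequency.
rng = np.random.default_rng(0)
V = rng.standard_normal((2, 1))
W = rng.standard_normal((2, 1))
print((W.T @ E @ V).item())
```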
With $\tilde{P}(s) = P(s)$, the error transfer function contains no polynomial part, i.e.,

    $G_{\rm err}(s) = G(s) - \tilde{G}(s) = G_{\rm sp}(s) - \tilde{G}_{\rm sp}(s)$

is strictly proper, meaning $\lim_{s\to\infty}G_{\rm err}(s) = 0$. Hence, by making $\tilde{G}_{\rm sp}(s)$ interpolate $G_{\rm sp}(s)$, we will be able to enforce that $\tilde{G}(s)$ interpolates $G(s)$. This leads to the following construction of $\tilde{G}(s)$: given $G(s)$, we create $W$ and $V$ satisfying new subspace conditions such that the reduced-order model $\tilde{G}(s)$ obtained by projection as in (1.3) not only satisfies the interpolation conditions but also matches the polynomial part of $G(s)$.

Theorem 3.1. Given a full-order model $G(s) = C(sE-A)^{-1}B + D$, define $P_l$ and $P_r$ to be the spectral projectors onto the left and right deflating subspaces of the pencil $\lambda E - A$ corresponding to the finite eigenvalues. Let the columns of $W_\infty$ and $V_\infty$ span the left and right deflating subspaces of $\lambda E - A$ corresponding to the eigenvalue at infinity. Let $\sigma, \mu \in \mathbb{C}$ be interpolation points such that $sE-A$ and $s\tilde{E}-\tilde{A}$ are nonsingular for $s = \sigma, \mu$, and let $b \in \mathbb{C}^m$ and $c \in \mathbb{C}^p$. Define $V_f$ and $W_f$ such that

    $\mathrm{Im}(V_f) = \mathrm{span}\bigl\{\bigl((\sigma E - A)^{-1}E\bigr)^{j-1}(\sigma E - A)^{-1}P_lBb,\ j = 1, \ldots, N\bigr\},$        (3.3)
    $\mathrm{Im}(W_f) = \mathrm{span}\bigl\{\bigl((\mu E - A)^{-T}E^T\bigr)^{j-1}(\mu E - A)^{-T}P_r^TC^Tc,\ j = 1, \ldots, M\bigr\}.$       (3.4)

Then, with the choice of $W = [W_f,\ W_\infty]$ and $V = [V_f,\ V_\infty]$, the reduced-order model $\tilde{G}(s) = \tilde{C}(s\tilde{E}-\tilde{A})^{-1}\tilde{B} + \tilde{D}$ obtained via projection as in (1.3) satisfies
1. $\tilde{P}(s) = P(s)$,
2. $G^{(\ell)}(\sigma)b = \tilde{G}^{(\ell)}(\sigma)b$ for $\ell = 0, 1, \ldots, N-1$,
3. $c^TG^{(\ell)}(\mu) = c^T\tilde{G}^{(\ell)}(\mu)$ for $\ell = 0, 1, \ldots, M-1$.
If $\sigma = \mu$, we have, additionally, $c^TG^{(\ell)}(\sigma)b = c^T\tilde{G}^{(\ell)}(\sigma)b$ for $\ell = 0, \ldots, M+N+1$.

Proof. Let the pencil $\lambda E - A$ be transformed into the Weierstrass canonical form

    $E = S\begin{bmatrix} I_{n_f} & 0 \\ 0 & N \end{bmatrix}T^{-1}, \qquad A = S\begin{bmatrix} J & 0 \\ 0 & I_{n_\infty} \end{bmatrix}T^{-1},$        (3.5)

where $S$ and $T$ are nonsingular and $N$ is nilpotent. Then the projectors $P_l$ and $P_r$ can be represented as

    $P_l = S\begin{bmatrix} I_{n_f} & 0 \\ 0 & 0 \end{bmatrix}S^{-1}, \qquad P_r = T\begin{bmatrix} I_{n_f} & 0 \\ 0 & 0 \end{bmatrix}T^{-1}.$        (3.6)

Let $T = [T_1,\ T_2]$ and $S^{-1} = [S_1,\ S_2]^T$ be partitioned according to $E$ and $A$ in (3.5). Then the matrices $W_\infty$ and $V_\infty$ take the form

    $W_\infty = S_2R_S = (I - P_l^T)W_\infty, \qquad V_\infty = T_2R_T = (I - P_r)V_\infty$

with nonsingular $R_S$ and $R_T$. Furthermore, the strictly proper and polynomial parts of $G(s)$ in (3.1) are given by

    $G_{\rm sp}(s) = CT_1(sI_{n_f} - J)^{-1}S_1^TB \quad \text{and} \quad P(s) = CT_2(sN - I_{n_\infty})^{-1}S_2^TB + D,$

respectively. It follows from (3.5) and (3.6) that

    $EP_r = P_lE, \qquad AP_r = P_lA, \qquad (sE-A)^{-1}P_l = P_r(sE-A)^{-1},$

and, hence,

    $W_f = P_l^TW_f \quad \text{and} \quad V_f = P_rV_f.$        (3.7)

Then the system matrices of the reduced-order model have the form

    $\tilde{E} = W^TEV = \begin{bmatrix} W_f^TEV_f & W_f^TEV_\infty \\ W_\infty^TEV_f & W_\infty^TEV_\infty \end{bmatrix} = \begin{bmatrix} W_f^TEV_f & 0 \\ 0 & W_\infty^TEV_\infty \end{bmatrix},$

    $\tilde{A} = W^TAV = \begin{bmatrix} W_f^TAV_f & W_f^TAV_\infty \\ W_\infty^TAV_f & W_\infty^TAV_\infty \end{bmatrix} = \begin{bmatrix} W_f^TAV_f & 0 \\ 0 & W_\infty^TAV_\infty \end{bmatrix},$

    $\tilde{B} = W^TB = \begin{bmatrix} W_f^TB \\ W_\infty^TB \end{bmatrix}, \qquad \tilde{C} = CV = [CV_f,\ CV_\infty], \qquad \tilde{D} = D.$

Thus, the strictly proper and polynomial parts of $\tilde{G}(s)$ are given by

    $\tilde{G}_{\rm sp}(s) = CV_f(sW_f^TEV_f - W_f^TAV_f)^{-1}W_f^TB,$
    $\tilde{P}(s) = CV_\infty(sW_\infty^TEV_\infty - W_\infty^TAV_\infty)^{-1}W_\infty^TB + D = CT_2(sN - I_{n_\infty})^{-1}S_2^TB + D = P(s).$

One can see that the polynomial parts of $G(s)$ and $\tilde{G}(s)$ coincide, and the proof of the interpolation result reduces to proving the interpolation conditions for the strictly proper parts of $G(s)$ and $\tilde{G}(s)$.
To prove this, we first note that (3.5) and (3.6) imply

    $CP_r(\sigma E - A)^{-1}P_lB = CT\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \sigma I - J & 0 \\ 0 & \sigma N - I \end{bmatrix}^{-1}\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}S^{-1}B = CT_1(\sigma I - J)^{-1}S_1^TB = G_{\rm sp}(\sigma).$

Furthermore, it follows from the relations (3.7) that

    $CP_rV_f = CV_f, \qquad W_f^TP_lB = W_f^TB.$

Due to the definitions of $V_f$ and $W_f$ in (3.3) and (3.4), respectively, Theorem 2.1 gives

    $G_{\rm sp}(\sigma)b = CP_rV_f(\sigma W_f^TEV_f - W_f^TAV_f)^{-1}W_f^TP_lBb = \tilde{G}_{\rm sp}(\sigma)b,$
    $c^TG_{\rm sp}(\mu) = c^TCP_rV_f(\mu W_f^TEV_f - W_f^TAV_f)^{-1}W_f^TP_lB = c^T\tilde{G}_{\rm sp}(\mu).$

Since both parts 1 and 2 of Theorem 2.1 hold, we have $c^T\tilde{G}_{\rm sp}'(\sigma)b = c^TG_{\rm sp}'(\sigma)b$ for $\sigma = \mu$. The other interpolatory relations for the derivatives of the transfer function can be proved analogously.

Next, we illustrate that even though Theorem 3.1 has a structure very similar to that of Theorem 2.1, the subtle difference between these two results makes a big difference in the resulting reduced-order model. Towards this goal, we revisit Example 2.1. We reduce the same full-order model using the same interpolation points, but now imposing the subspace conditions of Theorem 3.1. Figure 3.1 depicts the resulting amplitude plots of $G(\imath\omega)$ and $\tilde{G}(\imath\omega)$ (upper plot) and of the error $G(\imath\omega) - \tilde{G}(\imath\omega)$ (lower plot) when the new subspace conditions of Theorem 3.1 are used. Unlike the case in Example 2.1, where the error $G(\imath\omega) - \tilde{G}(\imath\omega)$ grew unbounded, for the new reduced-order model the maximum error is below $10^{-2}$, and the error decays to zero as $\omega$ approaches $\infty$, since the polynomial part is captured exactly.

[Fig. 3.1. Amplitude plots of $G(\imath\omega)$ and $\tilde{G}(\imath\omega)$ (upper); the absolute error $|G(\imath\omega) - \tilde{G}(\imath\omega)|$ (lower). Frequency axis from $10^8$ to $10^{16}$ rad/sec.]

In some applications, the deflating subspaces of $\lambda E - A$ corresponding to the eigenvalues at infinity may have large dimension $n_\infty$. However, the order of the system can still be reduced if it contains states that are uncontrollable and unobservable at infinity. Such states can be removed from the system without changing its transfer function, thereby preserving the interpolation conditions as in Theorem 3.1. In this case, the projection matrices $W_\infty$ and $V_\infty$ can be determined as proposed in [26] by solving the projected discrete-time Lyapunov equations

    $AXA^T - EXE^T = -(I-P_l)BB^T(I-P_l)^T, \qquad X = (I-P_r)X(I-P_r)^T,$        (3.8)
    $A^TYA - E^TYE = -(I-P_r)^TC^TC(I-P_r), \qquad Y = (I-P_l)^TY(I-P_l).$        (3.9)

Let $X_C$ and $Y_C$ be the Cholesky factors of $X = X_CX_C^T$ and $Y = Y_CY_C^T$, respectively, and let $Y_C^TAX_C = [U_1,\ U_0]\,\mathrm{diag}(\Sigma, 0)\,[V_1,\ V_0]^T$ be a singular value decomposition, where $[U_1,\ U_0]$ and $[V_1,\ V_0]$ are orthogonal and $\Sigma$ is nonsingular. Then the projection matrices $W_\infty$ and $V_\infty$ can be taken as $W_\infty = Y_CU_1$ and $V_\infty = X_CV_1$. Note that the Cholesky factors $X_C$ and $Y_C$ can be computed directly using the generalized Smith method [27]. In this method, only $\nu - 1$ linear systems need to be solved, where $\nu$ is the index of the pencil $\lambda E - A$ or, equivalently, the nilpotency index of $N$ in (3.5). The computation of the projectors $P_l$ and $P_r$ is, in general, a difficult problem. However, for some structured problems arising in circuit simulation, multibody systems and computational fluid dynamics, these projectors can be constructed in explicit form, which significantly reduces the computational complexity of the method; see [27] for details.
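The construction in Theorem 3.1 can be summarized in a short routine. The Python sketch below assumes the spectral projectors $P_l$, $P_r$ and bases $W_\infty$, $V_\infty$ of the infinite deflating subspaces are already available (e.g., from the explicit formulas of [27] or via (3.8)-(3.9)) and restricts attention to first-order interpolation, $N = M = 1$, at each point; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def dae_interpolatory_rom(E, A, B, C, D, Pl, Pr, W_inf, V_inf,
                          sigmas, b_dirs, c_dirs):
    """Reduced-order model per Theorem 3.1 with N = M = 1 at each point.

    Columns of V_f are (sigma_i E - A)^{-1} Pl B b_i        (cf. (3.3)),
    columns of W_f are (sigma_i E - A)^{-T} Pr^T C^T c_i    (cf. (3.4)).
    """
    Vf = np.column_stack([np.linalg.solve(s * E - A, Pl @ (B @ b))
                          for s, b in zip(sigmas, b_dirs)])
    Wf = np.column_stack([np.linalg.solve((s * E - A).T, Pr.T @ (C.T @ c))
                          for s, c in zip(sigmas, c_dirs)])
    V = np.column_stack([Vf, V_inf])
    W = np.column_stack([Wf, W_inf])
    # Projection (1.3); by Theorem 3.1 the polynomial part of G is matched exactly.
    return W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V, D
```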
4. Interpolatory optimal $\mathcal{H}_2$ model reduction for descriptor systems. The choice of interpolation points and tangential directions is the central issue in interpolatory model reduction; this choice determines whether or not the reduced-order model is of high fidelity. Until recently, the selection of interpolation points was largely ad hoc and required several model reduction attempts to arrive at a reasonable approximation. However, Gugercin et al. [15] introduced an interpolatory model reduction method for generating a reduced model $\tilde{G}$ of order $r$ which is an optimal $\mathcal{H}_2$ approximation to the original system $G$ in the sense that it minimizes the $\mathcal{H}_2$-norm error, i.e.,

    $\|G - \tilde{G}\|_{\mathcal{H}_2} = \min_{\dim(\tilde{G}_r) = r}\|G - \tilde{G}_r\|_{\mathcal{H}_2},$        (4.1)

where

    $\|G\|_{\mathcal{H}_2} := \Bigl(\frac{1}{2\pi}\int_{-\infty}^{+\infty}\|G(\imath\omega)\|_F^2\,d\omega\Bigr)^{1/2}$        (4.2)

and $\|\cdot\|_F$ denotes the Frobenius matrix norm. Since this is a non-convex optimization problem, the computation of a global minimizer is a very difficult task. Hence, one instead tries to find high-fidelity reduced models that satisfy first-order necessary optimality conditions. There exist, in general, two approaches for solving this problem: the Lyapunov-based optimal $\mathcal{H}_2$ methods presented in [16, 18, 25, 29, 30, 33] and the interpolation-based optimal $\mathcal{H}_2$ methods considered in [3, 4, 6, 12, 14, 15, 19, 23, 28]. While the Lyapunov-based approaches require solving a series of Lyapunov equations, which becomes costly and sometimes intractable in large-scale settings, the interpolatory approaches only require solving a series of sparse linear systems and have proved to be numerically very effective. Moreover, as shown in [15], both frameworks are theoretically equivalent, which further motivates the use of interpolatory model reduction techniques for optimal $\mathcal{H}_2$ approximation.

For SISO systems, interpolation-based $\mathcal{H}_2$ optimality conditions were originally developed by Meier and Luenberger [23]. Based on these conditions, an effective algorithm for interpolatory optimal $\mathcal{H}_2$ approximation, called the Iterative Rational Krylov Algorithm (IRKA), was introduced in [12, 14]. This algorithm has recently been extended to MIMO systems using the tangential interpolation framework; see [6, 15, 28] for more details. The model reduction methods mentioned above, however, only deal with systems $G(s) = C(sE-A)^{-1}B + D$ having a nonsingular matrix $E$. In this section, we extend IRKA to descriptor systems.
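As an aside, the $\mathcal{H}_2$-norm in (4.2) can be estimated numerically by truncating and discretizing the frequency integral, which is a convenient sanity check when comparing reduced models. A crude sketch follows; it assumes the supplied error transfer function is strictly proper and that the chosen frequency range captures its response (assumptions of this snippet, not of the paper).

```python
import numpy as np

def h2_norm_estimate(G_err, w_min=1e-3, w_max=1e6, n_pts=4000):
    """Rough trapezoidal estimate of the H2-norm (4.2) of G_err.

    G_err(s) must return a (p x m) complex matrix. For real systems the
    integrand is even in omega, so we integrate over [w_min, w_max] and double.
    """
    w = np.logspace(np.log10(w_min), np.log10(w_max), n_pts)
    f = np.array([np.linalg.norm(G_err(1j * wk), 'fro') ** 2 for wk in w])
    integral = 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))   # trapezoidal rule
    return np.sqrt(integral / (2.0 * np.pi))
```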
First, we establish the interpolatory $\mathcal{H}_2$ optimality conditions in the new setting.

Theorem 4.1. Let $G(s) = G_{\rm sp}(s) + P(s)$ be decomposed into the strictly proper and polynomial parts, and let $\tilde{G}(s) = \tilde{G}_{\rm sp}(s) + \tilde{P}(s)$ have an $r$th-order strictly proper part $\tilde{G}_{\rm sp}(s) = \tilde{C}_{\rm sp}(s\tilde{E}_{\rm sp} - \tilde{A}_{\rm sp})^{-1}\tilde{B}_{\rm sp}$.
1. If $\tilde{G}(s)$ minimizes the $\mathcal{H}_2$-error $\|G - \tilde{G}\|_{\mathcal{H}_2}$ over all reduced-order models with an $r$th-order strictly proper part, then $\tilde{P}(s) = P(s)$ and $\tilde{G}_{\rm sp}(s)$ minimizes the $\mathcal{H}_2$-error $\|G_{\rm sp} - \tilde{G}_{\rm sp}\|_{\mathcal{H}_2}$.
2. Suppose that the pencil $s\tilde{E}_{\rm sp} - \tilde{A}_{\rm sp}$ has distinct eigenvalues $\{\tilde{\lambda}_i\}_{i=1}^r$. Let $y_i$ and $z_i$ denote the left and right eigenvectors associated with $\tilde{\lambda}_i$, so that $\tilde{A}_{\rm sp}z_i = \tilde{\lambda}_i\tilde{E}_{\rm sp}z_i$, $y_i^*\tilde{A}_{\rm sp} = \tilde{\lambda}_iy_i^*\tilde{E}_{\rm sp}$, and $y_i^*\tilde{E}_{\rm sp}z_j = \delta_{ij}$. Then, for $c_i = \tilde{C}_{\rm sp}z_i$ and $b_i^T = y_i^*\tilde{B}_{\rm sp}$, we have

    $G(-\tilde{\lambda}_i)b_i = \tilde{G}(-\tilde{\lambda}_i)b_i, \qquad c_i^TG(-\tilde{\lambda}_i) = c_i^T\tilde{G}(-\tilde{\lambda}_i),$        (4.3)

and $c_i^TG'(-\tilde{\lambda}_i)b_i = c_i^T\tilde{G}'(-\tilde{\lambda}_i)b_i$ for $i = 1, \cdots, r$.

Proof. 1. The polynomial parts of $G(s)$ and $\tilde{G}(s)$ coincide since, otherwise, the $\mathcal{H}_2$-norm of the error $G(s) - \tilde{G}(s)$ would be unbounded. It then readily follows that $\tilde{G}_{\rm sp}(s)$ minimizes $\|G_{\rm sp}(s) - \tilde{G}_{\rm sp}(s)\|_{\mathcal{H}_2}$, since $G(s) - \tilde{G}(s) = G_{\rm sp}(s) - \tilde{G}_{\rm sp}(s)$.
2. Since $\tilde{P}(s) = P(s)$, the $\mathcal{H}_2$ optimal model reduction problem for $G(s)$ reduces to the $\mathcal{H}_2$ optimal problem for the strictly proper transfer function $G_{\rm sp}(s)$. Hence, the optimal $\mathcal{H}_2$ conditions of [15] require that $\tilde{G}_{\rm sp}(s)$ be a bi-tangential Hermite interpolant to $G_{\rm sp}(s)$ with $\{-\tilde{\lambda}_i\}_{i=1}^r$ being the interpolation points and $\{c_i\}_{i=1}^r$ and $\{b_i\}_{i=1}^r$ being the corresponding left and right tangential directions, respectively. Thus, the interpolation conditions (4.3) hold since $\tilde{P}(s) = P(s)$.

Unfortunately, the $\mathcal{H}_2$ optimal interpolation points and associated tangential directions are not known a priori, since they depend on the reduced-order model to be computed. To overcome this difficulty, the iterative algorithm IRKA, based on successive substitution, was developed in [12, 14]. In IRKA, the interpolation points are corrected iteratively by choosing the mirror images of the poles of the current reduced-order model as the next interpolation points. The tangential directions are corrected in a similar way; see [1, 15] for details.

The situation in the case of descriptor systems is similar: the optimal interpolation points and the corresponding tangential directions depend on the strictly proper part of the reduced-order model to be computed. Moreover, we need to make sure that the final reduced model has the same polynomial part as the original one. Hence, we will modify IRKA to meet these challenges. In particular, at each successive iteration step we will correct not the poles and tangential directions of the intermediate reduced-order model but those of its strictly proper part. As in the case of Theorem 3.1, the spectral projectors $P_l$ and $P_r$ will be used to construct the required interpolatory subspaces. A sketch of the resulting model reduction method is given in Algorithm 4.1.

Algorithm 4.1. Interpolatory $\mathcal{H}_2$ optimal model reduction method for descriptor systems
1) Make an initial selection of the interpolation points $\{\sigma_i\}_{i=1}^r$ and the tangential directions $\{b_i\}_{i=1}^r$ and $\{c_i\}_{i=1}^r$.
2) $V_f = \bigl[(\sigma_1E-A)^{-1}P_lBb_1, \ \ldots, \ (\sigma_rE-A)^{-1}P_lBb_r\bigr],$
   $W_f = \bigl[(\sigma_1E-A)^{-T}P_r^TC^Tc_1, \ \ldots, \ (\sigma_rE-A)^{-T}P_r^TC^Tc_r\bigr].$
3) while (not converged)
   a) $\tilde{A}_{\rm sp} = W_f^TAV_f$, $\tilde{E}_{\rm sp} = W_f^TEV_f$, $\tilde{B}_{\rm sp} = W_f^TB$, and $\tilde{C}_{\rm sp} = CV_f$.
   b) Compute $\tilde{A}_{\rm sp}z_i = \tilde{\lambda}_i\tilde{E}_{\rm sp}z_i$ and $y_i^*\tilde{A}_{\rm sp} = \tilde{\lambda}_iy_i^*\tilde{E}_{\rm sp}$ with $y_i^*\tilde{E}_{\rm sp}z_j = \delta_{ij}$, where $y_i$ and $z_i$ are the left and right eigenvectors associated with $\tilde{\lambda}_i$.
   c) $\sigma_i \leftarrow -\tilde{\lambda}_i$, $b_i^T \leftarrow y_i^*\tilde{B}_{\rm sp}$ and $c_i \leftarrow \tilde{C}_{\rm sp}z_i$ for $i = 1, \ldots, r$.
   d) $V_f = \bigl[(\sigma_1E-A)^{-1}P_lBb_1, \ \ldots, \ (\sigma_rE-A)^{-1}P_lBb_r\bigr],$
      $W_f = \bigl[(\sigma_1E-A)^{-T}P_r^TC^Tc_1, \ \ldots, \ (\sigma_rE-A)^{-T}P_r^TC^Tc_r\bigr].$
   end while
4) Compute $W_\infty$ and $V_\infty$ such that $\mathrm{Im}(W_\infty) = \mathrm{Im}(I-P_l^T)$ and $\mathrm{Im}(V_\infty) = \mathrm{Im}(I-P_r)$.
5) Set $V = [V_f,\ V_\infty]$ and $W = [W_f,\ W_\infty]$.
6) $\tilde{E} = W^TEV$, $\tilde{A} = W^TAV$, $\tilde{B} = W^TB$, $\tilde{C} = CV$, $\tilde{D} = D$.
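A compact Python sketch of Algorithm 4.1 follows (dense linear algebra, a naive fixed-point loop with a simple stopping test, and randomly chosen start data; the projectors $P_l$, $P_r$ and the bases $V_\infty$, $W_\infty$ are assumed given, and $\tilde{E}_{\rm sp} = W_f^TEV_f$ is assumed nonsingular at every iteration). It is meant to mirror the steps above, not to replace a careful implementation.

```python
import numpy as np

def irka_dae(E, A, B, C, D, Pl, Pr, V_inf, W_inf, r,
             max_iter=50, tol=1e-6, seed=0):
    """Sketch of Algorithm 4.1: interpolatory H2 model reduction of a DAE."""
    rng = np.random.default_rng(seed)
    m, p = B.shape[1], C.shape[0]

    # Step 1: initial interpolation points and tangential directions.
    sigmas = np.sort(rng.uniform(1.0, 10.0, r)).astype(complex)
    b_dirs = rng.standard_normal((r, m)).astype(complex)
    c_dirs = rng.standard_normal((r, p)).astype(complex)

    def bases(sigmas, b_dirs, c_dirs):                       # Steps 2 and 3d
        Vf = np.column_stack([np.linalg.solve(s * E - A, Pl @ (B @ b))
                              for s, b in zip(sigmas, b_dirs)])
        Wf = np.column_stack([np.linalg.solve((s * E - A).T, Pr.T @ (C.T @ c))
                              for s, c in zip(sigmas, c_dirs)])
        return Vf, Wf

    Vf, Wf = bases(sigmas, b_dirs, c_dirs)
    for _ in range(max_iter):                                # Step 3
        E_sp, A_sp = Wf.T @ E @ Vf, Wf.T @ A @ Vf            # 3a
        B_sp, C_sp = Wf.T @ B, C @ Vf
        lam, Z = np.linalg.eig(np.linalg.solve(E_sp, A_sp))  # 3b: right eigenvectors
        Y = np.linalg.inv(E_sp @ Z).conj().T                 # left ones, y_i^* E_sp z_j = delta_ij
        sigmas_new = -lam                                    # 3c: mirror the poles
        b_dirs = Y.conj().T @ B_sp                           # row i holds b_i^T = y_i^* B_sp
        c_dirs = (C_sp @ Z).T                                # row i holds c_i^T, c_i = C_sp z_i
        if np.max(np.abs(np.sort(sigmas_new) - np.sort(sigmas))) < tol:
            sigmas = sigmas_new
            break
        sigmas = sigmas_new
        Vf, Wf = bases(sigmas, b_dirs, c_dirs)               # 3d
    V = np.column_stack([Vf, V_inf])                         # Steps 4 and 5
    W = np.column_stack([Wf, W_inf])
    return W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V, D       # Step 6
```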
Note that until Step 4 of Algorithm 4.1, the polynomial part is not included, since the interpolation parameters result from the strictly proper part $\tilde{G}_{\rm sp}(s)$. In a sense, Step 3 runs the optimal $\mathcal{H}_2$ iteration on $G_{\rm sp}(s)$; hence, at the end of Step 3, we construct an optimal $\mathcal{H}_2$ interpolant to $G_{\rm sp}(s)$. However, in Step 5 we append the interpolatory subspaces with $V_\infty$ and $W_\infty$ (which can be computed as described at the end of Section 3), so that the final reduced-order model in Step 6 has the same polynomial part as $G(s)$ and, consequently, satisfies the optimality conditions of Theorem 4.1. One can see this from Step 3c: upon convergence, the interpolation points are the mirror images of the poles of $\tilde{G}_{\rm sp}(s)$ and the tangential directions are the residue directions from $\tilde{G}_{\rm sp}(s)$, as the optimality conditions require. Since Algorithm 4.1 uses the projected quantities $P_lB$ and $CP_r$, theoretically iterating on a strictly proper dynamical system, the convergence behavior of this algorithm will follow the same pattern as that of IRKA, which has been observed to converge rapidly in numerous numerical applications.

Summarizing, we have shown so far how to reduce descriptor systems such that the transfer function of the reduced descriptor system is a tangential interpolant to the original one and matches its polynomial part, preventing unbounded $\mathcal{H}_\infty$ and $\mathcal{H}_2$ error norms. However, this model reduction approach involves the explicit computation of the spectral projectors or the corresponding deflating subspaces, which could be numerically infeasible for general large-scale problems. In the next two sections, we will show that for certain important classes of descriptor systems the same can be achieved without explicitly forming the spectral projectors.

5. Semi-explicit descriptor systems of index 1. We consider the following semi-explicit descriptor system

    $E_{11}\dot{x}_1(t) + E_{12}\dot{x}_2(t) = A_{11}x_1(t) + A_{12}x_2(t) + B_1u(t),$
    $0 = A_{21}x_1(t) + A_{22}x_2(t) + B_2u(t),$        (5.1)
    $y(t) = C_1x_1(t) + C_2x_2(t) + Du(t),$
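For concreteness, the sketch below shows how the block structure in (5.1) maps onto the general descriptor form (1.1); the block sizes and entries are arbitrary placeholders supplied by the caller, and no index assumption is made here.

```python
import numpy as np

def assemble_semi_explicit(E11, E12, A11, A12, A21, A22, B1, B2, C1, C2, D):
    """Assemble the semi-explicit system (5.1) in the descriptor form (1.1).

    The second block row of E is zero, reflecting the algebraic constraint
    0 = A21 x1 + A22 x2 + B2 u.
    """
    n1, n2 = A11.shape[0], A22.shape[0]
    E = np.block([[E11, E12],
                  [np.zeros((n2, n1)), np.zeros((n2, n2))]])
    A = np.block([[A11, A12],
                  [A21, A22]])
    B = np.vstack([B1, B2])
    C = np.hstack([C1, C2])
    return E, A, B, C, D
```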
