Vector space of linearizations for the quadratic two-parameter matrix polynomial

Bibhas Adhikari*
Indian Institute of Technology Rajasthan, Jodhpur, India
* E-mail: [email protected]

Abstract: Given a quadratic two-parameter matrix polynomial Q(λ,µ), we develop a systematic approach to generating a vector space of linear two-parameter matrix polynomials. We identify a set of linearizations of Q(λ,µ) that lie in the vector space. Finally, we determine a class of linearizations for a quadratic two-parameter eigenvalue problem.

Key words: matrix polynomial, two-parameter matrix polynomial, quadratic two-parameter eigenvalue problem, two-parameter eigenvalue problem, linearization

AMS classification: 65F15, 15A18, 15A69, 15A22

1 Introduction

We consider two-parameter quadratic matrix polynomials of the form

    Q(λ,µ) = A + λB + µC + λ²D + λµE + µ²F                                   (1)

where λ, µ are scalars and the coefficient matrices are real or complex matrices of order n×n. If (λ,µ) ∈ C×C and nonzero x ∈ Cⁿ satisfy Q(λ,µ)x = 0, then x is said to be an eigenvector of Q(λ,µ) corresponding to the eigenvalue (λ,µ).

The classical approach to solving spectral problems for matrix polynomials is to first perform a linearization, that is, to transform the given polynomial into a linear matrix polynomial, and then work with this linear polynomial (see [16, 19, 17, 24, 23, 21] and the references therein). Therefore, given a quadratic two-parameter matrix polynomial Q(λ,µ), we seek linear two-parameter matrix polynomials L(λ,µ) = λÂ_1 + µÂ_2 + Â_3, called linearizations, which have the same eigenvalues as Q(λ,µ).

One-parameter matrix polynomials have been an active area of research in numerical linear algebra [21, 27, 28]. In [21], Mackey et al.
have investigated the one-parameter polynomial eigenvalue problem extensively and have produced vector spaces of linearizations for a one-parameter matrix polynomial by generalizing the companion forms of the one-parameter polynomial. Adopting a similar approach, we derive a set of linearizations of a quadratic two-parameter matrix polynomial.

The quadratic two-parameter eigenvalue problem is concerned with finding scalars λ, µ ∈ C and nonzero vectors x_1 ∈ C^{n_1}, x_2 ∈ C^{n_2} such that

    Q_1(λ,µ)x_1 = (A_1 + λB_1 + µC_1 + λ²D_1 + λµE_1 + µ²F_1)x_1 = 0
    Q_2(λ,µ)x_2 = (A_2 + λB_2 + µC_2 + λ²D_2 + λµE_2 + µ²F_2)x_2 = 0        (2)

where A_i, B_i, ..., F_i ∈ C^{n_i×n_i}, i = 1, 2. A pair (λ,µ) satisfying (2) is called an eigenvalue of (2), and x_1 ⊗ x_2, where ⊗ is the Kronecker product, is the corresponding eigenvector. This problem appears in the stability analysis of different systems, for example, time-delay systems with a single delay [9, 10, 13, 24]. The standard approach to solving (2) is to linearize the problem into a two-parameter eigenvalue problem of larger size and then convert it into an equivalent coupled generalized eigenvalue problem, which is then solved by numerical methods; see [24, 23, 9]. Given (2), we seek a two-parameter eigenvalue problem

    L_1(λ,µ)w_1 = (A^(1) + λB^(1) + µC^(1))w_1 = 0
    L_2(λ,µ)w_2 = (A^(2) + λB^(2) + µC^(2))w_2 = 0                          (3)

with the same eigenvalues, where w_i ∈ C^{3n_i} \ {0} and A^(i), B^(i), C^(i) ∈ C^{3n_i×3n_i}, i = 1, 2. In such a case (3) is called a linearization of (2).

The choice of a linearization may have an adverse effect on the sensitivity of the eigenvalues. Therefore, it is important to identify potential linearizations and describe their constructions. In this paper, we develop a systematic approach that enables us to generate a class of linearizations for a quadratic two-parameter eigenvalue problem.
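To make the setting concrete, here is a small numerical sketch (not from the paper; the 2×2 coefficients and the eigenpair are invented for illustration). It evaluates Q(λ,µ) from (1) and verifies the eigenvector condition Q(λ,µ)x = 0, after choosing A by a rank-one correction so that a prescribed (λ,µ,x) becomes an eigenpair:

```python
import numpy as np

def eval_Q(lam, mu, A, B, C, D, E, F):
    """Evaluate Q(lam, mu) = A + lam*B + mu*C + lam^2*D + lam*mu*E + mu^2*F, as in (1)."""
    return A + lam*B + mu*C + lam**2*D + lam*mu*E + mu**2*F

rng = np.random.default_rng(0)
n = 2
B, C, D, E, F = (rng.standard_normal((n, n)) for _ in range(5))

# Prescribe an eigenpair (lam, mu, x) and force it by a rank-one choice of A,
# so that A x = -(lam*B + mu*C + lam^2*D + lam*mu*E + mu^2*F) x.
lam, mu = 0.5, -1.0
x = np.array([1.0, 2.0])
rest = lam*B + mu*C + lam**2*D + lam*mu*E + mu**2*F
A = -np.outer(rest @ x, x) / (x @ x)

residual = np.linalg.norm(eval_Q(lam, mu, A, B, C, D, E, F) @ x)  # ~ 0
```

The rank-one trick is only a convenient way to manufacture a test problem; any coefficients with a known eigenpair would serve equally well.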
2 Linearizations for the quadratic two-parameter matrix polynomial

In this section we construct a set of linearizations of a quadratic two-parameter matrix polynomial.

Definition 2.1 ([24]) An ln×ln linear matrix polynomial L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 is a linearization of an n×n matrix polynomial Q(λ,µ) if there exist polynomials P(λ,µ) and R(λ,µ), whose determinants are nonzero constants independent of λ and µ, such that

    [ Q(λ,µ)      0      ]
    [    0    I_{(l-1)n} ]  =  P(λ,µ) L(λ,µ) R(λ,µ).

Let Q(λ,µ) be a quadratic two-parameter matrix polynomial given by

    Q(λ,µ) = λ²A_20 + µ²A_02 + λµA_11 + λA_10 + µA_01 + A_00

where the coefficient matrices are of order n×n. Assume that x is the eigenvector corresponding to an eigenvalue (λ,µ) of Q(λ,µ), that is, Q(λ,µ)x = 0. With a view to constructing linearizations of Q(λ,µ), we write x_00 = x, x_10 = λx_00, x_01 = µx_00. Then we have

    A_20(λx_10) + A_02(µx_01) + A_11(λx_01) + A_10 x_10 + A_01 x_01 + A_00 x_00 = 0.     (4)

Consequently we have

    (     [ A_20  A_11  0 ]       [ 0  A_02  0 ]   [ A_10  A_01  A_00 ] )  [ x_10 ]
    ( λ   [  0     0    0 ]  + µ  [ 0   0    I ] + [  0    -I     0   ] )  [ x_01 ]  =  0,   (5)
    (     [  0     0    I ]       [ 0   0    0 ]   [ -I     0     0   ] )  [ x_00 ]

where the three block matrices are denoted Â_1, Â_2 and Â_3, respectively. Observe that

    [ x_10 ]   [ λx ]   [ λ ]                          [ λ ]
    [ x_01 ] = [ µx ] = [ µ ] ⊗ x.      We denote Λ :=  [ µ ].
    [ x_00 ]   [  x ]   [ 1 ]                          [ 1 ]

Thus x is the eigenvector corresponding to an eigenvalue (λ,µ) of Q(λ,µ) if and only if L(λ,µ)w = 0, where w = Λ ⊗ x and L(λ,µ) = λÂ_1 + µÂ_2 + Â_3; that is, w is the eigenvector corresponding to an eigenvalue (λ,µ) of L(λ,µ). We show that L(λ,µ) is a linearization of Q(λ,µ). Define

               [ λI_n   I_n    0  ]                [ I_n   µA_02 + λA_11 + A_01   λA_20 + A_10 ]
    E(λ,µ) :=  [ µI_n    0    I_n ],    F(λ,µ) :=  [  0              0                -I_n     ].
               [ I_n     0     0  ]                [  0            -I_n                0       ]

Notice that E, F are unimodular two-parameter matrix polynomials, that is, the determinants of E and F are nonzero constants. Then one can check that

    F(λ,µ) L(λ,µ) E(λ,µ)  =  [ Q(λ,µ)    0    ].                              (6)
                             [    0    I_{2n} ]

Thus we have det Q(λ,µ) = γ det L(λ,µ) for some γ ≠ 0.
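As a numerical sanity check of the construction above (a sketch with randomly generated coefficients, not part of the paper), the following code assembles Â_1, Â_2, Â_3 from (5), confirms the identity L(λ,µ)(Λ ⊗ I_n) = e_1 ⊗ Q(λ,µ), and confirms that det Q(λ,µ)/det L(λ,µ) is the same constant γ at two different points, as (6) implies:

```python
import numpy as np

def standard_linearization(A20, A02, A11, A10, A01, A00):
    """Coefficients (hatA1, hatA2, hatA3) of the standard linearization in (5)."""
    n = A00.shape[0]
    I, O = np.eye(n), np.zeros((n, n))
    hatA1 = np.block([[A20, A11, O], [O, O, O], [O, O, I]])
    hatA2 = np.block([[O, A02, O], [O, O, I], [O, O, O]])
    hatA3 = np.block([[A10, A01, A00], [O, -I, O], [-I, O, O]])
    return hatA1, hatA2, hatA3

rng = np.random.default_rng(1)
n = 3
A20, A02, A11, A10, A01, A00 = (rng.standard_normal((n, n)) for _ in range(6))
hatA1, hatA2, hatA3 = standard_linearization(A20, A02, A11, A10, A01, A00)

def Lp(lam, mu):
    return lam*hatA1 + mu*hatA2 + hatA3

def Qp(lam, mu):
    return lam**2*A20 + mu**2*A02 + lam*mu*A11 + lam*A10 + mu*A01 + A00

lam, mu = 0.7, -0.3
Lam = np.array([[lam], [mu], [1.0]])
e1 = np.array([[1.0], [0.0], [0.0]])
# L(lam, mu) (Lambda ⊗ I_n) should equal e1 ⊗ Q(lam, mu)
err = np.linalg.norm(Lp(lam, mu) @ np.kron(Lam, np.eye(n))
                     - np.kron(e1, Qp(lam, mu)))
# det Q = gamma * det L with gamma independent of (lam, mu)
g1 = np.linalg.det(Qp(0.7, -0.3)) / np.linalg.det(Lp(0.7, -0.3))
g2 = np.linalg.det(Qp(-1.2, 0.4)) / np.linalg.det(Lp(-1.2, 0.4))
```

The sample points (0.7, -0.3) and (-1.2, 0.4) are arbitrary; any points where det L does not vanish would do.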
This implies that L(λ,µ) preserves the eigenvalues of Q(λ,µ) and hence is a linearization of order 3n×3n. We call this linearization the standard linearization of Q(λ,µ). It is interesting to observe that the linearization proposed in [24] is, up to some permutations of block rows and columns, the standard linearization.

For the standard linearization we have

    L(λ,µ)·(Λ ⊗ x) = [ (Q(λ,µ)x)ᵀ  0  ···  0 ]ᵀ   for all x ∈ Cⁿ,            (7)

and therefore any solution of (5) gives a solution of the original problem Q(λ,µ)x = 0. Further, by (7) we have

                              [ λI_n ]   [ Q(λ,µ) ]                          [ 1 ]
    L(λ,µ)·(Λ ⊗ I_n) = L(λ,µ) [ µI_n ] = [   0    ] = e_1 ⊗ Q(λ,µ),    e_1 = [ 0 ].   (8)
                              [ I_n  ]   [   0    ]                          [ 0 ]

We restrict our attention to equation (8), which is satisfied by the standard linearization. It would be worthwhile to find linear two-parameter matrix polynomials L(λ,µ) that satisfy

    L(λ,µ)·(Λ ⊗ I_n) = v ⊗ Q(λ,µ)                                            (9)

for some vector v ∈ C³. Therefore, we introduce the notation

    V_Q = { v ⊗ Q(λ,µ) : v ∈ K³ }                                            (10)

and define

    L(Q(λ,µ)) := { L(λ,µ) = λÂ_1 + µÂ_2 + Â_3, Â_i ∈ K^{3n×3n} : L(λ,µ)·(Λ ⊗ I_n) ∈ V_Q }.   (11)

Note that L(Q(λ,µ)) ≠ ∅, as the standard linearization L(λ,µ) ∈ L(Q(λ,µ)). It is easy to check that L(Q(λ,µ)) is a vector space. If L(λ,µ) ∈ L(Q(λ,µ)) satisfies (9) for some v ∈ C³, then we call v an ansatz vector associated with L(λ,µ). To investigate the structure of each L(λ,µ) ∈ L(Q(λ,µ)), we define a "box-addition" for three 3n×3n block matrices as follows.

Definition 2.2 Let X, Y, Z ∈ C^{3n×3n} be three block matrices of the form

        [ X_11 X_12 X_13 ]        [ Y_11 Y_12 Y_13 ]        [ Z_11 Z_12 Z_13 ]
    X = [ X_21 X_22 X_23 ],   Y = [ Y_21 Y_22 Y_23 ],   Z = [ Z_21 Z_22 Z_23 ].
        [ X_31 X_32 X_33 ]        [ Y_31 Y_32 Y_33 ]        [ Z_31 Z_32 Z_33 ]

Define

                  [ X_11 X_12 0 X_13 0 0 ]   [ 0 Y_11 Y_12 0 Y_13 0 ]   [ 0 0 0 Z_11 Z_12 Z_13 ]
    X ⊞ Y ⊞ Z  =  [ X_21 X_22 0 X_23 0 0 ] + [ 0 Y_21 Y_22 0 Y_23 0 ] + [ 0 0 0 Z_21 Z_22 Z_23 ]
                  [ X_31 X_32 0 X_33 0 0 ]   [ 0 Y_31 Y_32 0 Y_33 0 ]   [ 0 0 0 Z_31 Z_32 Z_33 ]

where '+' is the usual matrix addition.

For the standard linearization L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 ∈ L(Q(λ,µ)) we have

                      [ A_20 A_11 0 ]   [ 0 A_02 0 ]   [ A_10 A_01 A_00 ]
    Â_1 ⊞ Â_2 ⊞ Â_3 = [  0    0   0 ] ⊞ [ 0   0  I ] ⊞ [  0    -I    0  ]
                      [  0    0   I ]   [ 0   0  0 ]   [  -I    0    0  ]

                      [ A_20 A_11 0 0 0 0 ]   [ 0 0 A_02 0 0 0 ]   [ 0 0 0 A_10 A_01 A_00 ]
                    = [  0    0   0 0 0 0 ] + [ 0 0   0  0 I 0 ] + [ 0 0 0   0   -I    0  ]
                      [  0    0   0 I 0 0 ]   [ 0 0   0  0 0 0 ]   [ 0 0 0  -I    0    0  ]

                      [ A_20 A_11 A_02 A_10 A_01 A_00 ]
                    = [  0    0    0    0    0    0   ]
                      [  0    0    0    0    0    0   ]

                    = e_1 ⊗ [ A_20  A_11  A_02  A_10  A_01  A_00 ].

Thus we have the following lemma.

Lemma 2.3 Let Q(λ,µ) = λ²A_20 + µ²A_02 + λµA_11 + λA_10 + µA_01 + A_00 be a quadratic two-parameter matrix polynomial with real or complex coefficient matrices of order n×n, and let L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 be a 3n×3n two-parameter linear matrix polynomial. Then

    L(λ,µ)·(Λ ⊗ I_n) = v ⊗ Q(λ,µ)   ⇔   Â_1 ⊞ Â_2 ⊞ Â_3 = v ⊗ [ A_20  A_11  A_02  A_10  A_01  A_00 ].

Proof: Computational and easy to check. ∎

Example 2.4 Consider a quadratic two-parameter polynomial Q(λ,µ) = λ²A_20 + µ²A_02 + λµA_11 + λA_10 + µA_01 + A_00 where A_ij ∈ C^{n×n}, and

               [ A_20    A_11 + A_20    A_10 + A_01 ]     [    -A_20       A_02    A_01 ]   [    -A_01       0      A_00 ]
    L(λ,µ) = λ [ A_20        A_00            0      ] + µ [ -A_00 + A_11   A_02      0  ] + [    A_10       A_01    A_00 ].
               [ 2A_20   A_02 + 2A_11        I      ]     [    -A_02      2A_02    A_01 ]   [ -I + 2A_10    A_01   2A_00 ]

Then L(λ,µ) ∈ L(Q(λ,µ)) since Â_1 ⊞ Â_2 ⊞ Â_3 = [1 1 2]ᵀ ⊗ [ A_20  A_11  A_02  A_10  A_01  A_00 ].

Using Lemma 2.3 we characterize the structure of any L(λ,µ) ∈ L(Q(λ,µ)).
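Before doing so, Definition 2.2 and Lemma 2.3 are easy to check numerically. The sketch below (an illustration, with the block-column placement read off from Definition 2.2: X's block columns land in positions 1, 2, 4, Y's in 2, 3, 5, and Z's in 4, 5, 6) implements ⊞ and verifies the lemma's criterion for the standard linearization, whose ansatz vector is e_1:

```python
import numpy as np

def box_add(X, Y, Z, n):
    """The box-addition X ⊞ Y ⊞ Z of Definition 2.2; result is 3n x 6n."""
    R = np.zeros((3*n, 6*n))
    R[:, 0:2*n]   += X[:, 0:2*n]     # X11, X12 -> block columns 1, 2
    R[:, 3*n:4*n] += X[:, 2*n:3*n]   # X13      -> block column  4
    R[:, n:3*n]   += Y[:, 0:2*n]     # Y11, Y12 -> block columns 2, 3
    R[:, 4*n:5*n] += Y[:, 2*n:3*n]   # Y13      -> block column  5
    R[:, 3*n:6*n] += Z               # Z11..Z13 -> block columns 4, 5, 6
    return R

rng = np.random.default_rng(2)
n = 2
A20, A02, A11, A10, A01, A00 = (rng.standard_normal((n, n)) for _ in range(6))
I, O = np.eye(n), np.zeros((n, n))
hatA1 = np.block([[A20, A11, O], [O, O, O], [O, O, I]])
hatA2 = np.block([[O, A02, O], [O, O, I], [O, O, O]])
hatA3 = np.block([[A10, A01, A00], [O, -I, O], [-I, O, O]])

row = np.hstack([A20, A11, A02, A10, A01, A00])
e1 = np.array([[1.0], [0.0], [0.0]])
err = np.linalg.norm(box_add(hatA1, hatA2, hatA3, n) - np.kron(e1, row))
```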
Theorem 2.5 Let Q(λ,µ) = λ²A_20 + µ²A_02 + λµA_11 + λA_10 + µA_01 + A_00 be a quadratic two-parameter matrix polynomial with real or complex coefficient matrices of order n×n, and let v ∈ C³. Then a linear two-parameter matrix polynomial L(λ,µ) ∈ L(Q(λ,µ)) corresponding to the ansatz vector v is of the form L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 where

    Â_1 = [ v ⊗ A_20    -Y_1 + v ⊗ A_11    -Z_1 + v ⊗ A_10 ]
    Â_2 = [ Y_1          v ⊗ A_02          -Z_2 + v ⊗ A_01 ]
    Â_3 = [ Z_1          Z_2                v ⊗ A_00       ]

and

          [ Y_11 ]         [ Z_11 ]         [ Z_12 ]
    Y_1 = [ Y_21 ],  Z_1 = [ Z_21 ],  Z_2 = [ Z_22 ]  ∈ C^{3n×n}
          [ Y_31 ]         [ Z_31 ]         [ Z_32 ]

are arbitrary.

Proof: Let M : L(Q(λ,µ)) → V_Q be the multiplicative map defined by L(λ,µ) ↦ L(λ,µ)(Λ ⊗ I_n). It is easy to see that M is linear. First we show that M is surjective. Let v ⊗ Q(λ,µ) be an arbitrary element of V_Q. Construct L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 where

    Â_1 = [ v ⊗ A_20    v ⊗ A_11    v ⊗ A_10 ]
    Â_2 = [ 0           v ⊗ A_02    v ⊗ A_01 ]
    Â_3 = [ 0           0           v ⊗ A_00 ].

Then obviously we have Â_1 ⊞ Â_2 ⊞ Â_3 = v ⊗ [ A_20  A_11  A_02  A_10  A_01  A_00 ], so by Lemma 2.3 L(λ,µ) is an M-preimage of v ⊗ Q(λ,µ). The set of all M-preimages of v ⊗ Q(λ,µ) is L(λ,µ) + Ker M, so all that remains is to compute Ker M. By Lemma 2.3, Ker M consists of those L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 that satisfy Â_1 ⊞ Â_2 ⊞ Â_3 = 0. The definition of "box-addition" implies that Â_1, Â_2, Â_3 are of the form

    Â_1 = [ 0     -Y_1    -Z_1 ]
    Â_2 = [ Y_1    0      -Z_2 ]
    Â_3 = [ Z_1    Z_2     0   ]

where Y_1, Z_1, Z_2 ∈ C^{3n×n} are arbitrary. This completes the proof. ∎

Example 2.6 In Example 2.4 we obtain the linear two-parameter polynomial L(λ,µ) ∈ L(Q(λ,µ)) by choosing

           [    -A_20     ]          [    -A_01    ]          [   0  ]
    Y_1 := [ -A_00 + A_11 ],  Z_1 := [    A_10    ],   Z_2 := [ A_01 ].
           [    -A_02     ]          [ -I + 2A_10  ]          [ A_01 ]

The standard linearization L(λ,µ) ∈ L(Q(λ,µ)) is obtained by choosing

           [ 0 ]          [ A_10 ]          [ A_01 ]
    Y_1 := [ 0 ],  Z_1 := [  0   ],  Z_2 := [  -I  ].
           [ 0 ]          [  -I  ]          [   0  ]

Corollary 2.7 The dimension of L(Q(λ,µ)) is 9n² + 3.
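Theorem 2.5 is constructive, so membership in L(Q(λ,µ)) can be tested directly. The sketch below (random coefficients and random free blocks Y_1, Z_1, Z_2, chosen only for illustration) builds L(λ,µ) for an ansatz vector v and verifies condition (9):

```python
import numpy as np

def ansatz_polynomial(v, coeffs, Y1, Z1, Z2):
    """Build (hatA1, hatA2, hatA3) with the block structure of Theorem 2.5;
    v is the ansatz vector and Y1, Z1, Z2 in C^{3n x n} are free blocks."""
    A20, A02, A11, A10, A01, A00 = coeffs
    v = v.reshape(3, 1)
    k = lambda X: np.kron(v, X)   # v ⊗ X, a 3n x n block column
    hatA1 = np.hstack([k(A20), -Y1 + k(A11), -Z1 + k(A10)])
    hatA2 = np.hstack([Y1, k(A02), -Z2 + k(A01)])
    hatA3 = np.hstack([Z1, Z2, k(A00)])
    return hatA1, hatA2, hatA3

rng = np.random.default_rng(3)
n = 2
coeffs = tuple(rng.standard_normal((n, n)) for _ in range(6))
A20, A02, A11, A10, A01, A00 = coeffs
v = np.array([1.0, -2.0, 0.5])
Y1, Z1, Z2 = (rng.standard_normal((3*n, n)) for _ in range(3))
hatA1, hatA2, hatA3 = ansatz_polynomial(v, coeffs, Y1, Z1, Z2)

lam, mu = 0.4, 1.3
L = lam*hatA1 + mu*hatA2 + hatA3
Q = lam**2*A20 + mu**2*A02 + lam*mu*A11 + lam*A10 + mu*A01 + A00
Lam = np.array([[lam], [mu], [1.0]])
# Membership condition (9): L(lam, mu)(Lambda ⊗ I_n) = v ⊗ Q(lam, mu)
err = np.linalg.norm(L @ np.kron(Lam, np.eye(n)) - np.kron(v.reshape(3, 1), Q))
```

Note that err stays small for any choice of the free blocks, which is exactly the content of the theorem.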
Remark 2.8 For the quadratic one-parameter matrix polynomial

    P(λ) = λ²A_20 + λA_10 + A_00,   A_i0 ∈ C^{n×n}, i = 0, 1, 2,             (12)

a vector space L_1(P) of matrix pencils of the form L(λ) = X + λY ∈ C^{2n×2n} is obtained in [21]. Setting µ = 0 in Q(λ,µ) we have Q(λ,0) = P(λ). Then from the construction of linear two-parameter polynomials given in Theorem 2.5 it is easy to check that L(Q(λ,0)) = L_1(P). In fact, if µ = 0 then L(Q(λ,µ)) contains matrix pencils L(λ) = λÂ_1 + Â_3 ∈ C^{2n×2n} where

    Â_1 = [ v ⊗ A_20    -Z_1 + v ⊗ A_10 ],    Â_3 = [ Z_1    v ⊗ A_00 ],

v ∈ C² and Z_1 ∈ C^{2n×n} is arbitrary. Thus we obtain the same vector space of matrix pencils obtained in [21] for a given quadratic one-parameter matrix polynomial Q(λ,0) = λ²A_20 + λA_10 + A_00 = P(λ).

2.1 Construction of linearizations

It is not immediately clear whether all linear two-parameter matrix polynomials in the space L(Q(λ,µ)) are linearizations of Q(λ,µ); consider, for example, any L(λ,µ) ∈ L(Q(λ,µ)) corresponding to the ansatz vector v = 0. Thus, given a quadratic two-parameter matrix polynomial Q(λ,µ), we need to identify which L(λ,µ) in L(Q(λ,µ)) are linearizations. We begin with a result concerning the special case of the ansatz vector v = αe_1, where e_1 = [1 0 0]ᵀ and 0 ≠ α ∈ C.

Theorem 2.9 Let Q(λ,µ) = λ²A_20 + λµA_11 + µ²A_02 + λA_10 + µA_01 + A_00 be a quadratic two-parameter matrix polynomial with real or complex coefficient matrices of order n×n. Suppose L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 ∈ L(Q(λ,µ)) with respect to the ansatz vector v = αe_1 ∈ C³, where

    Â_1 = [ αe_1 ⊗ A_20    -Y_1 + αe_1 ⊗ A_11    -Z_1 + αe_1 ⊗ A_10 ]
    Â_2 = [ Y_1             αe_1 ⊗ A_02          -Z_2 + αe_1 ⊗ A_01 ]
    Â_3 = [ Z_1             Z_2                   αe_1 ⊗ A_00       ],

          [ Y_11 ]         [ Z_11 ]         [ Z_12 ]
    Y_1 = [  0   ],  Z_1 = [ Z_21 ],  Z_2 = [ Z_22 ]  ∈ C^{3n×n},    det [ Z_21  Z_22 ] ≠ 0.
          [  0   ]         [ Z_31 ]         [ Z_32 ]                     [ Z_31  Z_32 ]

Then L(λ,µ) is a linearization of Q(λ,µ).
Proof: By Theorem 2.5, any linear two-parameter matrix polynomial L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 ∈ L(Q(λ,µ)) corresponding to the ansatz vector v = αe_1 is of the form

               [ αA_20   -Y_11 + αA_11   -Z_11 + αA_10 ]     [ Y_11   αA_02   -Z_12 + αA_01 ]   [ Z_11   Z_12   αA_00 ]
    L(λ,µ) = λ [   0         -Y_21           -Z_21     ] + µ [ Y_21     0         -Z_22     ] + [ Z_21   Z_22     0   ].
               [   0         -Y_31           -Z_31     ]     [ Y_31     0         -Z_32     ]   [ Z_31   Z_32     0   ]

Thus we have

               [   W_1(λ,µ)         W_2(λ,µ)           W_3(λ,µ)      ]
    L(λ,µ) =   [ µY_21 + Z_21    -λY_21 + Z_22    -λZ_21 - µZ_22 ]
               [ µY_31 + Z_31    -λY_31 + Z_32    -λZ_31 - µZ_32 ]

where W_1(λ,µ) = αλA_20 + µY_11 + Z_11, W_2(λ,µ) = αµA_02 + αλA_11 - λY_11 + Z_12 and W_3(λ,µ) = αλA_10 - λZ_11 + αµA_01 - µZ_12 + αA_00. Define

               [ (λ/α)I   I   0 ]
    E(λ,µ) =   [ (µ/α)I   0   I ].
               [ (1/α)I   0   0 ]

Consequently, we have

                       [ Q(λ,µ)     W_1(λ,µ)        W_2(λ,µ)     ]
    L(λ,µ)E(λ,µ) =     [   0      µY_21 + Z_21   -λY_21 + Z_22 ].
                       [   0      µY_31 + Z_31   -λY_31 + Z_32 ]

Setting Y_21 = 0 = Y_31 we have

    L(λ,µ)E(λ,µ) = [ Q(λ,µ)   W(λ,µ) ]
                   [   0        Z    ]

where W(λ,µ) = [ W_1(λ,µ)  W_2(λ,µ) ] ∈ C^{n×2n} and Z = [ Z_21  Z_22; Z_31  Z_32 ] ∈ C^{2n×2n}. Since Z is nonsingular, we define

    F(λ,µ) = [ I   -W(λ,µ)Z⁻¹ ].
             [ 0       Z⁻¹    ]

Then we have

    F(λ,µ)L(λ,µ)E(λ,µ) = [ Q(λ,µ)    0    ].
                         [   0     I_{2n} ]

Note that both E(λ,µ) and F(λ,µ) are unimodular polynomials. Hence we have det L(λ,µ) = γ det Q(λ,µ) for some nonzero γ ∈ C. Thus L(λ,µ) is a linearization. This completes the proof. ∎

Let Q(λ,µ) be a quadratic two-parameter matrix polynomial and L(λ,µ) ∈ L(Q(λ,µ)) corresponding to an ansatz vector 0 ≠ v ∈ C³. Then the following is a procedure for determining a set of linearizations of Q(λ,µ).

Procedure to determine linearizations in L(Q(λ,µ)):

1. Suppose Q(λ,µ) is a quadratic two-parameter matrix polynomial and L(λ,µ) = λÂ_1 + µÂ_2 + Â_3 ∈ L(Q(λ,µ)) corresponds to the ansatz vector v ∈ C³, i.e. L(λ,µ)(Λ ⊗ I_n) = v ⊗ Q(λ,µ).

2. Select any nonsingular matrix M = [ m_11 m_12 m_13; m_21 m_22 m_23; m_31 m_32 m_33 ] such that Mv = αe_1 ∈ C³, α ≠ 0. A list of nonsingular matrices M depending on the entries of v is given in the Appendix.

3. Apply the corresponding block transformation M ⊗ I_n to L(λ,µ). Then we have L̃(λ,µ) = (M ⊗ I_n)L(λ,µ) = λÃ_1 + µÃ_2 + Ã_3 such that

    Ã_1 = [ αe_1 ⊗ A_20    -Ỹ_1 + αe_1 ⊗ A_11    -Z̃_1 + αe_1 ⊗ A_10 ]
    Ã_2 = [ Ỹ_1             αe_1 ⊗ A_02          -Z̃_2 + αe_1 ⊗ A_01 ]
    Ã_3 = [ Z̃_1             Z̃_2                   αe_1 ⊗ A_00       ]

where (taking Y_1 = [ Y_11; 0; 0 ])

                            [ Y_11 ]   [ m_11 Y_11 ]
    Ỹ_1 = (M ⊗ I_n)Y_1 = (M ⊗ I_n) [  0   ] = [ m_21 Y_11 ],
                            [  0   ]   [ m_31 Y_11 ]

                            [ Z_11 ]   [ m_11 Z_11 + m_12 Z_21 + m_13 Z_31 ]
    Z̃_1 = (M ⊗ I_n)Z_1 = (M ⊗ I_n) [ Z_21 ] = [ m_21 Z_11 + m_22 Z_21 + m_23 Z_31 ],
                            [ Z_31 ]   [ m_31 Z_11 + m_32 Z_21 + m_33 Z_31 ]

                            [ Z_12 ]   [ m_11 Z_12 + m_12 Z_22 + m_13 Z_32 ]
    Z̃_2 = (M ⊗ I_n)Z_2 = (M ⊗ I_n) [ Z_22 ] = [ m_21 Z_12 + m_22 Z_22 + m_23 Z_32 ]
                            [ Z_32 ]   [ m_31 Z_12 + m_32 Z_22 + m_33 Z_32 ]

are arbitrary.

4. For L̃(λ,µ) to be a linearization we need to choose Y_1, Z_1, Z_2 as follows. If m_21 = m_31 = 0 then choose Y_11 arbitrary; otherwise choose Y_11 = 0. Further, we need to choose Z_1 = [ Z_11; Z_21; Z_31 ], Z_2 = [ Z_12; Z_22; Z_32 ] for which

    det [ m_21 Z_11 + m_22 Z_21 + m_23 Z_31    m_21 Z_12 + m_22 Z_22 + m_23 Z_32 ]  ≠ 0.   (13)
        [ m_31 Z_11 + m_32 Z_21 + m_33 Z_31    m_31 Z_12 + m_32 Z_22 + m_33 Z_32 ]

From the construction of M given in the Appendix it is easy to check that we can always choose suitable Z_1, Z_2 for which condition (13) is satisfied.

3 Linearization of the two-parameter quadratic eigenvalue problem

The quadratic two-parameter eigenvalue problem is concerned with finding a pair (λ,µ) ∈ C×C and nonzero vectors x_i ∈ C^{n_i} for which

    Q_i(λ,µ)x_i = 0,   i = 1, 2,                                             (14)

where

    Q_i(λ,µ) = A_i + λB_i + µC_i + λ²D_i + λµE_i + µ²F_i,                    (15)

A_i, B_i, ..., F_i ∈ C^{n_i×n_i}. The pair (λ,µ) is called an eigenvalue of (14) and x_1 ⊗ x_2 is called the corresponding eigenvector. The spectrum of a quadratic two-parameter eigenvalue problem is the set

    σ_Q := { (λ,µ) ∈ C×C : det Q_i(λ,µ) = 0, i = 1, 2 }.                     (16)

In the generic case, we observe that (14) has 4n_1n_2 eigenvalues by using the following theorem.
Theorem 3.1 (Bezout’s Theorem, [3]) Let f(x,y) = g(x,y) = 0 be a system of two polynomial equations in two unknowns. If it has only finitely many common complex zeros (x,y) ∈ C×C, then the numberof those zeros is at most degree(f)· degree(g). The usual approach to solving (14) is to linearize it as a two-parameter eigenvalue problem given by L (λ,µ)w = (A(1))+λB(1) +µC(1))w = 0 1 1 1 (17) L (λ,µ)w = (A(2))+λB(2) +µC(2))w = 0 2 2 2 (cid:26) where A(i),B(i),C(i) ∈ Cmi×mi,m ≥ 2n ,i = 1,2, and w = Λ ⊗ x . A pair i i i i (λ,µ) is called an eigenvalue of (17) if L (λ,µ)w = 0 for a nonzero vector w i i i for i = 1,2, and w ⊗w is the corresponding eigenvector. Thus the spectrum 1 2 of the linearized two-parameter eigenvalue problem is given by σ := {(λ,µ) ∈ C×C : detL (λ,µ) = 0,i = 1,2}. (18) L i Therefore, in the generic case, the problem (17) has m m ≥ 4n n eigenvalues. 1 2 1 2 A standard approach to solve a two-parameter eigenvalue problem (17) is by converting it into a coupled generalized eigenvalue problem given by △ z = λ△ z and △ z = µ△ z (19) 1 0 2 0 where z = w ⊗w and 1 2 △ = B(1) ⊗C(2) −C(1) ⊗B(2) 0 △ = C(1) ⊗A(2) −A(1) ⊗C(2) 1 △ = A(1) ⊗B(2) −B(1) ⊗A(2). 2 10
