
REDUCIBILITY OF MATRIX WEIGHTS

JUAN TIRAO AND IGNACIO ZURRIÁN

Abstract. In this paper we discuss the notion of reducibility for matrix weights and introduce a real vector space C_R which encodes all information about the reducibility of W. In particular, a weight W reduces if and only if there is a non-scalar matrix T such that TW = WT*. We also prove that reducibility can be studied by looking at the commutant of the monic orthogonal polynomials, or at the coefficients of the corresponding three-term recursion relation. A matrix weight may not be expressible as a direct sum of irreducible weights, but it is always equivalent to a direct sum of irreducible weights. We also establish that the decompositions of two equivalent weights as sums of irreducible weights have the same number of terms and that, up to a permutation, the summands are equivalent. We consider the algebra D(W) of right-hand-side matrix differential operators of a reducible weight W, giving its general structure. Finally, we make a change of emphasis by considering reducibility of polynomials instead of reducibility of matrix weights.

2010 Mathematics Subject Classification: 42C05, 47L80, 33C45.
Key words and phrases: matrix orthogonal polynomials, reducible weights, complete reducibility, the algebra of a reducible weight.

1. Introduction

The subject of matrix valued orthogonal polynomials was introduced by M. G. Krein more than sixty years ago, see [21, 22]. In the scalar valued context, the orthogonal polynomials satisfying second order differential equations have played a very important role in many areas of mathematics and its applications. In [11], A. J. Durán started the study of matrix weights whose Hermitian bilinear form admits a symmetric second order matrix differential operator, following similar considerations by S. Bochner in the scalar case in [2].

Motivated by previous works on the theory of matrix valued spherical functions, see [27], one finds in [14, 15, 17] the first examples of the classical pairs considered in [11]. In [8] one can find the development of a method that leads to some other examples of classical pairs. In recent years a growing number of papers have been devoted to different aspects of this question; see for instance [28, 9, 4, 16, 24, 25], as well as [13, 10, 23, 26, 30, 3, 31, 1], among others.

When faced with a concrete matrix weight, one should ask oneself: is it a new example, or is it equivalent to a direct sum of known weights of smaller dimensions? In this work we prove that a weight W is reducible to weights of smaller size if and only if there exists a non-scalar constant matrix T such that TW = WT*, see Theorem 2.8. We also study the form in which a matrix weight reduces; in particular, we show that this reduction is unique.

Given a matrix weight W and a sequence of matrix orthogonal polynomials {Q_n}_{n∈N_0}, it is natural to consider the matrix differential operators D such that Q_n is an eigenfunction of D for every n ∈ N_0. The set of these operators is a noncommutative *-algebra D(W) (see [5, 18, 29, 32]). In Section 3 we study the structure of the algebra D(W) for a reducible weight, see Proposition 3.3.

Finally, we give some irreducibility criteria, showing that one can decide the irreducibility of a weight by considering the commutant of the monic orthogonal polynomials {P_n}_{n∈N_0} (see Corollary 4.4), or by looking at the commutant of the coefficients of the three-term recursion relations satisfied by them (see Theorem 4.5).
We prove in Theorem 4.12 that if a sequence of polynomials is orthogonal with respect to an irreducible weight W and with respect to a weight V, both with bounded support, then W = λV with λ > 0. This is no longer true when the support is unbounded; even more, one can have a sequence of polynomials which are orthogonal with respect to two weights W and V, with W irreducible and V reducible (see Example 4.13). Thus, at the end of the paper we make a change of emphasis by considering reducibility of polynomials instead of reducibility of matrix weights.

It is known that the classical scalar orthogonal polynomials do not satisfy linear differential equations of odd order. Even more, the algebras D(W) are generated by one operator of order two, see [24]. The situation in the matrix context is very different, see [7]. Nevertheless, deciding whether there is an irreducible weight W such that the algebra D(W) contains an operator of first order is a problem that is still unsolved. We believe that the results in the present paper may be useful for giving an answer in the future.

Acknowledgement. This paper was partially supported by CONICET PIP 112-200801-01533 and by the programme Oberwolfach Leibniz Fellows 2015.

2. Reducibility of Matrix Weights

An inner product space will be a finite dimensional complex vector space V together with a specified inner product ⟨·,·⟩. If T is a linear operator on V, the adjoint of T will be denoted T*.

By a weight W̃ = W̃(x) of linear operators on V we mean an integrable function on an interval (a,b) of the real line such that W̃(x) is a (self-adjoint) positive semidefinite operator on V for all x ∈ (a,b), which is positive definite almost everywhere and has finite moments of all orders: for all n ∈ N_0 we have

    ∫_a^b x^n W̃(x) dx ∈ End(V).

More generally, we could assume that W̃ is a Borel measure on the real line of linear operators on V, such that: W̃(X) ∈ End(V) is positive semidefinite for any Borel set X, W̃ has finite moments of every order, and W̃ is nondegenerate, that is, for P in the polynomial ring End(V)[x],

    (P,P) = ∫_R P(x) dW̃(x) P(x)* = 0

only when P = 0. In any case, the size of W̃ is the dimension of V.

If we select an orthonormal basis {e_1, ..., e_N} in V, we may represent each linear operator W̃(x) on V by a square matrix W(x) of size N, obtaining a weight matrix of size N on the real line. By this we mean a complex N×N matrix valued integrable function on the interval (a,b) such that W(x) is a (self-adjoint) positive semidefinite complex matrix for all x ∈ (a,b), which is positive definite almost everywhere and has finite moments of all orders: for all n ∈ N_0 we have

    M_n = ∫_a^b x^n W(x) dx ∈ Mat_N(C).

Of course, any result involving weights of linear operators implies an analogous result for matrix weights, and vice versa. Throughout the paper we shall work either with weights of linear operators or with matrix weights, depending on what we consider better for the reader's understanding.

Definition 2.1. Two weights W̃ = W̃(x) and W̃′ = W̃′(x) of linear operators on V and V′, respectively, defined on the same interval are equivalent, W̃ ∼ W̃′, if there exists an isomorphism T of V onto V′ such that W̃′(x) = T W̃(x) T* for all x. Moreover, if T is unitary then they are unitarily equivalent.

Two matrix weights W = W(x) and W′ = W′(x) of the same size and defined on the same interval are equivalent, W ∼ W′, if there exists a nonsingular matrix M such that W′(x) = M W(x) M* for all x. Moreover, if the matrix M is unitary then they are unitarily equivalent.
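As a concrete illustration of the moment condition above, here is a minimal numerical sketch (our illustration, not part of the paper; it assumes Python with numpy and scipy) that approximates the moments M_n entrywise by quadrature for the 2×2 weight W(x) = [[x²+x, x], [x, x]] on (0,1), which reappears in Example 2.6 below, and checks that M_0 is positive definite, as it must be for a weight that is positive definite almost everywhere.

```python
# Sketch: approximate M_n = \int_a^b x^n W(x) dx entrywise by quadrature
# for the 2x2 weight W(x) = [[x^2 + x, x], [x, x]] on (0, 1).
import numpy as np
from scipy.integrate import quad

def W(x):
    return np.array([[x**2 + x, x], [x, x]])

def moment(n, a=0.0, b=1.0):
    """Entrywise quadrature for M_n; W must be integrable on (a, b)."""
    M = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            M[i, j] = quad(lambda x: x**n * W(x)[i, j], a, b)[0]
    return M

M0 = moment(0)
# W(x) is positive definite a.e. on (0, 1), so M_0 is positive definite.
assert np.all(np.linalg.eigvalsh(M0) > 0)
print(M0)  # [[5/6, 1/2], [1/2, 1/2]]
```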
Let S be an arbitrary set of elements. By an S-module over a field K we mean a pair (W,V) formed by a finite dimensional vector space V over K and a mapping W which assigns to every element x ∈ S a linear operator W(x) on V. It follows that an S-module is an additive group with two domains of operators, one being the field K and the other being the set S. In particular, if V is an inner product space and W = W(x) is a weight of linear operators on V, then (W,V) is an S-module, where S is the support of W. It is important to notice that the equivalence of weights of linear operators does not coincide with the notion of isomorphism among S-modules.

If W = W(x) is a weight of linear operators on V, we say that W is the orthogonal direct sum W = W_1 ⊕ ··· ⊕ W_d if V is the orthogonal direct sum V = V_1 ⊕ ··· ⊕ V_d, where each subspace V_i is W(x)-invariant for all x and W_i(x) is the restriction of W(x) to V_i.

Definition 2.2. We say that a weight W̃ = W̃(x) of linear operators on V reduces to smaller-size weights, or simply reduces, if W̃ is equivalent to a direct sum W′ = W′_1 ⊕ ··· ⊕ W′_N of orthogonal smaller dimensional weights.

We say that a matrix weight W = W(x) reduces to smaller size if its corresponding weight W̃ = W̃(x) of linear operators reduces.

Theorem 2.3. A matrix weight W reduces to scalar weights if and only if there exists a positive definite matrix P such that for all x, y we have

(1)    W(x) P W(y) = W(y) P W(x).

Proof. Suppose that W(x) = M Λ(x) M*, with Λ = Λ(x) a diagonal weight. Then

    W(x)(MM*)^{-1} W(y) = (MΛ(x)M*)(MM*)^{-1}(MΛ(y)M*) = MΛ(x)Λ(y)M*
                        = MΛ(y)Λ(x)M* = (MΛ(y)M*)(MM*)^{-1}(MΛ(x)M*)
                        = W(y)(MM*)^{-1} W(x).

Hence P = (MM*)^{-1} is a positive definite matrix which satisfies (1).

Conversely, assume (1). First of all take x_0 such that W(x_0) is nonsingular. Let A be a symmetric positive definite matrix such that A² = W(x_0). Take W′(x) = A^{-1} W(x) A^{-1} and P′ = A P A. Then W′ is equivalent to W, P′ is positive definite, W′(x) P′ W′(y) = W′(y) P′ W′(x) for all x, y, and W′(x_0) = I. Thus, without loss of generality, we may assume from the start that W(x_0) = I.

Now the hypothesis implies that W(x) P = P W(x) for all x. Let E be any eigenspace of P. Then E is W(x)-invariant. Hence W(x) and W(y) restricted to E commute by (1) and are self-adjoint. Therefore the W(x) restricted to E are simultaneously diagonalizable for all x, through a unitary operator (cf. [19], Corollary to Theorem 30, p. 283). Since this happens for all eigenspaces of P and they are orthogonal, we have proved that W(x) is unitarily equivalent to a diagonal weight, hence W reduces to scalar weights. This completes the proof of the theorem for a matrix weight. □

Theorem 2.4. The following conditions are equivalent:

(i) A matrix weight W is unitarily equivalent to a diagonal weight;
(ii) For all x, y we have W(x) W(y) = W(y) W(x);
(iii) There exists a positive definite matrix P such that for all x, y we have W(x) P W(y) = W(y) P W(x) and W(x) P = P W(x).

Proof. (i) implies (ii): If U is unitary and W(x) = U Λ(x) U* where Λ(x) is diagonal for all x, then W(x) W(y) = (UΛ(x)U*)(UΛ(y)U*) = UΛ(x)Λ(y)U* = UΛ(y)Λ(x)U* = (UΛ(y)U*)(UΛ(x)U*) = W(y) W(x).

(ii) implies (iii): It is obvious.

(iii) implies (i): Let E be any eigenspace of P. Then E is W(x)-invariant. Hence W(x) and W(y) restricted to E commute and are self-adjoint. Therefore the W(x) restricted to E are simultaneously diagonalizable for all x, through a unitary operator (cf. [19], Corollary to Theorem 30, p. 283).
Since this happens for all eigenspaces of P and they are orthogonal, we have proved that W(x) is unitarily equivalent to a diagonal weight, hence W reduces to scalar weights. This completes the proof of the theorem. □

The following corollary can also be found in [8, p. 463].

Corollary 2.5. Let W = W(x) be a matrix weight such that W(x_0) = I for some x_0 in the support of W. Then W reduces to scalar weights if and only if W is unitarily equivalent to a diagonal weight.

Proof. It is obvious from the definition that if W is unitarily equivalent to a diagonal weight, then W reduces to scalar weights. Conversely, let us assume that W reduces to scalar weights. By Theorem 2.3 there exists a positive definite matrix P such that W(x) P W(y) = W(y) P W(x) for all x, y. If we put y = x_0 we get W(x) P = P W(x) for all x. Then Theorem 2.4 implies that W is unitarily equivalent to a diagonal weight. □

Example 2.6. Let

    W(x) = [ x²+x  x ]
           [ x     x ]

be supported in (0,1). Then

    W(x) = [ 1  1 ] [ x²  0 ] [ 1  0 ]
           [ 0  1 ] [ 0   x ] [ 1  1 ].

Therefore W reduces to scalar weights and W(x) P W(y) = W(y) P W(x) for all x, y ∈ (0,1), with M = [ 1 1 ; 0 1 ] and

    P = (MM*)^{-1} = ( [ 1  1 ] [ 1  0 ] )^{-1} = [  1  -1 ]
                     ( [ 0  1 ] [ 1  1 ] )        [ -1   2 ].

However, W is not unitarily equivalent to a diagonal weight. In fact,

    W(x) W(y) = [ x²+x  x ] [ y²+y  y ] = [ (x²+x)(y²+y)+xy  (x²+2x)y ]
                [ x     x ] [ y     y ]   [ (y²+2y)x         2xy      ].

Since the entries (1,2) and (2,1) are not symmetric in x and y, W(x) W(y) ≠ W(y) W(x) for all x, y.

Definition 2.7. We say that a matrix weight W is the direct sum of the matrix weights W_1, W_2, ..., W_j, and write W = W_1 ⊕ W_2 ⊕ ··· ⊕ W_j, if

    W(x) = [ W_1(x)    0     ···    0    ]
           [   0     W_2(x)  ···    0    ]
           [   ⋮        ⋮     ⋱     ⋮    ]
           [   0        0    ···  W_j(x) ].

Theorem 2.8. Let W = W(x) be a matrix weight function with support S. Then the following conditions are equivalent:

(i) W is equivalent to a direct sum of matrix weights of smaller size;
(ii) there is an idempotent matrix Q ≠ 0, I such that Q W(x) = W(x) Q* for all x;
(iii) the space C_R ≡ { T ∈ Mat_N(C) : T W(x) = W(x) T* for a.e. x ∈ S } ⊋ RI.

Proof. (i) implies (ii): Suppose that W̃(x) = M W(x) M* = W̃_1(x) ⊕ W̃_2(x), where W̃_1 is of size k and W̃_2(x) is of size n−k. Let P be the orthogonal projection onto the subspace of C^n generated by {e_1, ..., e_k}. Then P W̃(x) = W̃(x) P. Hence P(M W(x) M*) = (M W(x) M*) P and

    (M^{-1} P M) W(x) = W(x) (M* P (M*)^{-1}) = W(x) (M^{-1} P M)*.

Therefore, if we take Q = M^{-1} P M, then Q is idempotent, Q ≠ 0, I, and Q W(x) = W(x) Q* for all x.

(ii) implies (iii): Clearly C_R is a real vector space such that I ∈ C_R. Now the implication is obvious.

(iii) implies (i): Our first observation is that if T ∈ C_R then its eigenvalues are real. In fact, by changing W by an equivalent weight function we may assume that T is in Jordan canonical form. Thus T = ⊕_{1≤i≤s} J_i is the direct sum of elementary Jordan matrices J_i of size d_i with characteristic value λ_i, of the form

    J_i = [ λ_i   0   ···   0    0  ]
          [  1   λ_i  ···   0    0  ]
          [  ⋮    ⋮    ⋱    ⋮    ⋮  ]
          [  0    0   ···  λ_i   0  ]
          [  0    0   ···   1   λ_i ].

Let us write W(x) as an s×s matrix of blocks W_{ij}(x) of d_i rows and d_j columns. Then, by hypothesis, we have J_i W_{ii}(x) = W_{ii}(x) J_i*. Thus, if 1 ≤ k ≤ n is the index corresponding to the first row of W_{ii}(x), we have λ_i w_{kk}(x) = w_{kk}(x) λ̄_i. But there exists x such that W(x) is positive definite, and so w_{kk}(x) > 0. Therefore λ_i = λ̄_i.
Another important property of the real vector space C_R is the following one: if T ∈ C_R and p ∈ R[x], then p(T) ∈ C_R.

Let L and N be, respectively, the semi-simple and the nilpotent parts of T. The minimal polynomial of T is (x−λ_1)^{r_1} ··· (x−λ_j)^{r_j}, where λ_1, ..., λ_j are the different characteristic values of T and r_i is the greatest dimension of the elementary Jordan matrices with characteristic value λ_i. A careful look at the proof of Theorem 8, Chapter 7 of [19] reveals that L and N are real polynomials in T. Therefore L, N ∈ C_R.

The minimal polynomial of L is p = (x−λ_1) ··· (x−λ_j). Let us consider the Lagrange polynomials

    p_k = ∏_{i≠k} (x−λ_i)/(λ_k−λ_i).

Since p_k(λ_i) = δ_{ik} and L = L*, it follows that P_i = p_i(L) is the orthogonal projection of C^n onto the λ_i-eigenspace of L. Therefore P_i ∈ C_R.

Let us now consider the nilpotent part N of T: N is the direct sum of the nilpotent parts N_i of each elementary Jordan block J_i, i.e.

    N_i = [ 0  0  ···  0  0 ]
          [ 1  0  ···  0  0 ]
          [ ⋮  ⋮   ⋱   ⋮  ⋮ ]
          [ 0  0  ···  0  0 ]
          [ 0  0  ···  1  0 ].

Since N ∈ C_R, we have N_i W_{ii}(x) = W_{ii}(x) N_i*. If N_i were a matrix of size larger than one, and if 1 ≤ k ≤ n is the index corresponding to the first row of W_{ii}(x), we would have w_{kk}(x) = 0 for all x, which is a contradiction. Therefore all elementary Jordan matrices of T are one dimensional, hence N = 0 and T = L.

If C_R ⊋ RI, then there exists T ∈ C_R with j > 1 different eigenvalues. Let M ∈ GL(N,C) be such that L = M T M^{-1} is a diagonal matrix, and let P_i, 1 ≤ i ≤ j, be the orthogonal projections onto the eigenspaces of L. Let W̃(x) = M W(x) M*. Then I = P_1 + ··· + P_j, P_r P_s = P_s P_r = δ_{rs} P_r for 1 ≤ r, s ≤ j, P_r* = P_r, and P_r W̃(x) = W̃(x) P_r for all 1 ≤ r ≤ j. Therefore

    W̃(x) = (P_1 + ··· + P_j) W̃(x) (P_1 + ··· + P_j) = Σ_{1≤r,s≤j} P_r W̃(x) P_s
          = P_1 W̃(x) P_1 + ··· + P_j W̃(x) P_j = W̃_1(x) ⊕ ··· ⊕ W̃_j(x),

completing the proof that (iii) implies (i). Hence the theorem is proved. □

For future reference we introduce the following definition.

Definition 2.9. If W = W(x) is a matrix weight function with support S, then we define the real vector space

    C_R = { T ∈ Mat_N(C) : T W(x) = W(x) T* for a.e. x ∈ S }.

Any matrix weight W is equivalent to a matrix weight W̃ with zero-order moment M̃_0 = I. In fact, it suffices to take W̃ = M_0^{-1/2} W M_0^{-1/2}.

Proposition 2.10. Let W be a weight matrix with support S such that M_0 = I. Then its real vector space C_R is a real form of the commutant algebra

    C = { T ∈ Mat_N(C) : T W(x) = W(x) T for a.e. x ∈ S }.

Proof. If T ∈ C_R, then integrating T W(x) = W(x) T* over S gives T M_0 = M_0 T*, and since M_0 = I we get T = T*. Hence T W(x) = W(x) T* = W(x) T, and the proposition follows. □

It is worth stating the following immediate corollary of Theorem 2.8, which resembles Schur's lemma.

Corollary 2.11. Let W = W(x) be a matrix weight. Then W is irreducible if and only if its real vector space C_R = RI.

The following result can also be found in a recent work announced in [20].

Corollary 2.12. A matrix weight reduces if and only if the commutant of W̃ = M_0^{-1/2} W M_0^{-1/2} contains non-scalar matrices.

Let (W,V) be an abstract S-module. Then (W,V) is said to be simple if it is of positive dimension and if the only invariant subspaces of V are {0} and V. An S-module is called semi-simple if it can be represented as a sum of simple submodules. Instead, when W = W(x) is a matrix weight (resp. a weight of linear operators on V), one says that W is irreducible when it is not equivalent to a direct sum of matrix (resp. an orthogonal direct sum of operator) weights of smaller size.
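Before moving on, note that Corollary 2.11 turns irreducibility into finite dimensional linear algebra: C_R is the nullspace of the real-linear map T ↦ T W(x) − W(x) T*, imposed for x in the support of W. Below is a minimal computational sketch of this test (our illustration, not part of the paper; it assumes Python with numpy), applied to the weight of Example 2.6. Since the entries of that weight are polynomials of degree at most two, imposing the equation at a handful of sample points is already exact; for a general weight, sampling only yields a space containing C_R.

```python
# Sketch: compute dim_R C_R = dim_R { T : T W(x) = W(x) T^* } by imposing
# the equation at finitely many sample points of the support.
import numpy as np

def W(x):
    # The weight of Example 2.6, supported on (0, 1).
    return np.array([[x**2 + x, x], [x, x]], dtype=complex)

def c_r_dimension(weight, xs, n, tol=1e-9):
    # Real basis of Mat_n(C): E_ij and i*E_ij, so dim_R = 2 n^2.
    basis = []
    for i in range(n):
        for j in range(n):
            for s in (1.0, 1.0j):
                B = np.zeros((n, n), dtype=complex)
                B[i, j] = s
                basis.append(B)
    # Column k holds the stacked, realified values of T -> T W - W T^*
    # on the k-th basis matrix; C_R is the nullspace of this real matrix.
    cols = []
    for B in basis:
        pieces = []
        for x in xs:
            Wx = weight(x)
            R = B @ Wx - Wx @ B.conj().T
            pieces.append(R.real.ravel())
            pieces.append(R.imag.ravel())
        cols.append(np.concatenate(pieces))
    A = np.array(cols).T
    sv = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(sv < tol))  # nullity = dim_R C_R

xs = np.linspace(0.1, 0.9, 7)
print(c_r_dimension(W, xs, 2))  # prints 2 > 1: by Corollary 2.11, W reduces
```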
Furthermore, an operator weight will be called simple if it has no proper invariant subspace.

A semi-simple abstract S-module (W,V) can be represented as the direct sum V = V_1 ⊕ ··· ⊕ V_j of a finite collection Φ = {V_i} of simple S-submodules. Moreover, if we have a representation of this kind, and if V′ is any invariant subspace of V, then there exists a subcollection Φ_0 of Φ such that V is the direct sum of V′ and of the sum of the submodules belonging to Φ_0.

Conversely, let V be an S-module which has the following property: if V′ is any invariant subspace of V, there exists an invariant subspace V″ such that V = V′ ⊕ V″. Then V is semi-simple (cf. Propositions 1, 2 in Ch. VI, §I of [6]).

Proposition 2.13. Let W = W(x) be a matrix weight with support S. Then the S-module (W,V) is semi-simple.

Proof. Let V′ be any invariant subspace of V and let V″ be its orthogonal complement. Then, since W(x) is self-adjoint for all x, V″ is invariant. Therefore the S-module (W,V) is semi-simple. □

Let V = V_1 ⊕ ··· ⊕ V_j = V′_1 ⊕ ··· ⊕ V′_{j′} be two representations of an abstract semi-simple S-module V as a direct sum of simple submodules. Then we have j = j′ and there exists a permutation σ of the set {1, ..., j} such that V_i is isomorphic to V′_{σ(i)} for all 1 ≤ i ≤ j (cf. Proposition 3 in Ch. VI, §I of [6]). Clearly, this uniqueness result can be generalized to the following one: if V = V_1 ⊕ ··· ⊕ V_j and V′ = V′_1 ⊕ ··· ⊕ V′_{j′} are two isomorphic S-modules represented as direct sums of simple submodules, then we have j = j′ and there exists a permutation σ of the set {1, ..., j} such that V_i is isomorphic to V′_{σ(i)} for all 1 ≤ i ≤ j.

This last statement is not true for matrix weights or operator weights on an inner product space V because, as we pointed out at the beginning, the equivalence among weights has a different meaning than the isomorphism of S-modules. In fact, Example 2.6 illustrates this: the matrix weight

    W(x) = [ x²+x  x ]
           [ x     x ]

has no proper invariant subspace of C², i.e. it is simple, but it is equivalent to

    W′(x) = [ x²  0 ]
            [ 0   x ],

which is the direct sum of two scalar weights. Nevertheless, the following holds.

Theorem 2.14. Let W = W_1 ⊕ ··· ⊕ W_j and W′ = W′_1 ⊕ ··· ⊕ W′_{j′} be representations of the operator weights W and W′ as orthogonal direct sums of simple (irreducible) weights. If W and W′ are unitarily equivalent, then we have j = j′ and there exists a permutation σ of the set {1, ..., j} such that W_i is unitarily equivalent to W′_{σ(i)} for all 1 ≤ i ≤ j.

Proof. It is clear that it is enough to consider the case W = W′. We shall construct the permutation σ. Suppose that σ(i) is already defined for i < k (where 1 ≤ k ≤ j) and has the following properties: a) σ(i) ≠ σ(i′) for i < i′ < k; b) V_i is unitarily equivalent to V′_{σ(i)} for i < k; c) we have the orthogonal direct sum

    V = (⊕_{i<k} V′_{σ(i)}) ⊕ (⊕_{i≥k} V_i).

We consider the invariant subspace

    E = (⊕_{i<k} V′_{σ(i)}) ⊕ (⊕_{i>k} V_i).

Let E⊥ be the orthogonal complement of E, which is, by Proposition 1 in Ch. VI, §I of [6] mentioned above, the direct sum of a certain number of the spaces V′_i. On the other hand, E⊥ is unitarily isomorphic to V/E, i.e. to V_k. It follows that E⊥ is simple, and therefore E⊥ is one of the V′_i, say E⊥ = V′_{i_0}. Since V′_{σ(i)} ⊂ E for i < k, we have i_0 ≠ σ(i) for i < k. We define σ(k) = i_0. It is clear that the function σ(i), now defined for i < k+1, satisfies conditions a), b), c) above, with k replaced by k+1.

Because we can define the injective function σ on the set {1, ..., j}, we must have j′ ≥ j.
Since the two decompositions play symmetric roles, we also have j ≥ j′, hence j = j′. The theorem is proved. □

If we come back to our Example 2.6, we realize that a matrix weight may not be expressible as a direct sum of irreducible matrix weights. But such a weight is equivalent to one that is the direct sum of irreducible weights. Taking into account this fact and Theorem 2.8, we make the following definition.

Definition 2.15. We say that a matrix weight (resp. an operator weight) is completely reducible if it is equivalent to a direct sum of irreducible matrix weights (resp. an orthogonal direct sum of irreducible operator weights).

Observe that Theorem 2.8 implies that every weight is completely reducible.

Theorem 2.16. Let W = W_1 ⊕ ··· ⊕ W_j and W′ = W′_1 ⊕ ··· ⊕ W′_{j′} be representations of the operator weights W and W′ as orthogonal direct sums of irreducible weights. If W and W′ are equivalent, then we have j = j′ and there exists a permutation σ of the set {1, ..., j} such that W_i is equivalent to W′_{σ(i)} for all 1 ≤ i ≤ j.

Proof. Modulo unitary equivalence, we may assume from the beginning that W and W′ are matrix weights of the same size. Let W′(x) = M W(x) M* for all x, with M a nonsingular matrix. We may write M = UP in a unique way with U unitary and P positive definite, and P = VDV* where V is unitary and D is a positive diagonal matrix. Then W′(x) = (UV) D (V* W(x) V) D (UV)*. Modulo unitary equivalences, we may assume that W′(x) = D W(x) D. If we write D = D_1 ⊕ ··· ⊕ D_j, where D_i is a diagonal matrix block of the same size as the matrix block W_i, then W′ = (D_1 W_1 D_1) ⊕ ··· ⊕ (D_j W_j D_j). From the hypothesis we also have the representation of W′ = W′_1 ⊕ ··· ⊕ W′_{j′} as an orthogonal direct sum of irreducible weights. Now we are ready to apply Theorem 2.14 to conclude that j = j′ and that there exists a permutation σ such that D_i W_i D_i is unitarily equivalent to W′_{σ(i)} for all 1 ≤ i ≤ j. Hence W_i ∼ W′_{σ(i)} for all 1 ≤ i ≤ j, completing the proof of the theorem. □

Theorem 2.17. Let W = W(x) be an operator weight equivalent to an orthogonal direct sum W_1 ⊕ ··· ⊕ W_d of irreducible weights. If j(T) is the number of distinct eigenvalues of T ∈ C_R, then d = max{ j(T) : T ∈ C_R }. Moreover, W is equivalent to a matrix weight W′ = W′_1 ⊕ ··· ⊕ W′_d, where W′_i is the restriction of W′ to one of the eigenspaces of a diagonal matrix D ∈ C_R. Besides, j(T) is the degree of the minimal polynomial of T.

Proof. Suppose that W = W_1 ⊕ ··· ⊕ W_d and let P_1, ..., P_d be the corresponding orthogonal projections. Define T = λ_1 P_1 + ··· + λ_d P_d with λ_1, ..., λ_d all different. Then clearly T ∈ C_R. Hence d ≤ max{ j(T) : T ∈ C_R }.

Conversely, let T ∈ C_R be such that j(T) = max{ j(T) : T ∈ C_R }. Modulo unitary equivalence, we may assume that W is a matrix weight. In the proof of Theorem 2.8 we established that T is semi-simple. Thus we may write T = A^{-1} D A with D a diagonal matrix. Let W′(x) = A W(x) A*. Then W′ ∼ W and D ∈ C_R(W′). We may assume that D = D_1 ⊕ ··· ⊕ D_{j(T)}, where D_i is the diagonal block corresponding to the eigenvalue λ_i of D. Then W′ = W′_1 ⊕ ··· ⊕ W′_{j(T)}, where the size of the block W′_i is equal to the size of the block D_i for all 1 ≤ i ≤ j(T), because D W′(x) = W′(x) D. If some W′_i were not irreducible, we could replace it, modulo equivalence, by a direct sum of irreducible matrix weights. Thus there exists a matrix weight W″ ∼ W′ such that W″ = W″_1 ⊕ ··· ⊕ W″_j is a direct sum of irreducible matrix weights with j(T) ≤ j. By hypothesis W = W_1 ⊕ ··· ⊕ W_d. Hence Theorem 2.16 implies that d = j.
Therefore d = j(T) and the W′_i are in fact irreducible. Moreover, there exists a permutation σ of the set {1, ..., d} such that W′_i = W_{σ(i)}. The theorem is proved. □

3. The algebra D(W) of a reducible weight

We come now to the notion of a differential operator with matrix coefficients acting on matrix valued polynomials, i.e. elements of Mat_N(C)[x]. These operators could be made to act on our functions either on the left or on the right. One finds a discussion of these two actions in [11]. The conclusion is that if one wants to have matrix weights that do not reduce to scalar weights and that have matrix polynomials as their eigenfunctions, one should settle for right-hand-side differential operators. We agree now to say that D, given by

    D = Σ_{i=0}^{s} ∂^i F_i(x),    ∂ = d/dx,

acts on Q(x) by means of

    QD = Σ_{i=0}^{s} ∂^i(Q)(x) F_i(x).

Given a sequence of matrix orthogonal polynomials {Q_n}_{n∈N_0} with respect to a weight matrix W = W(x), we introduced in [18] the algebra D(W) of all right-hand-side differential operators with matrix valued coefficients that have the polynomials Q_n as their eigenfunctions. Thus

(2)    D(W) = { D : Q_n D = Γ_n(D) Q_n, Γ_n(D) ∈ Mat_N(C) for all n ∈ N_0 }.

The definition of D(W) depends only on the weight matrix W and not on the sequence {Q_n}_{n∈N_0}.

Remark 1. If W′ ∼ W, say W′ = M W M*, then the map D ↦ M D M^{-1} establishes an isomorphism between the algebras D(W) and D(W′). In fact, if {Q_n}_{n∈N_0} is a sequence of matrix orthogonal polynomials with respect to W, then {Q′_n = M Q_n M^{-1}}_{n∈N_0} is a sequence of matrix orthogonal polynomials with respect to W′. Moreover, if Q_n D = Γ_n(D) Q_n, then

    Q′_n (M D M^{-1}) = (M Γ_n(D) M^{-1}) Q′_n.

Hence D′ = M D M^{-1} ∈ D(W′) and Γ_n(D′) = M Γ_n(D) M^{-1}.

It is worth observing that each algebra D(W) is a subalgebra of the Weyl algebra D over Mat_N(C) of all linear right-hand-side ordinary differential operators with coefficients in Mat_N(C)[x]:

    D = { D = Σ_i ∂^i F_i : F_i ∈ Mat_N(C)[x] }.
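Since the coefficients F_i act by multiplication on the right, the action QD is easy to get wrong in computations. The following is a minimal symbolic sketch of the right action QD = Σ_i ∂^i(Q) F_i (our illustration, not part of the paper; it assumes Python with sympy, and the operator and the polynomial Q below are arbitrary examples chosen only to show the mechanics).

```python
# Sketch: the right action Q D = sum_i (d^i Q / dx^i) F_i(x) of a
# right-hand-side differential operator D = sum_i \partial^i F_i.
import sympy as sp

x = sp.symbols('x')

def apply_right(Q, Fs):
    """Return Q D, where Fs[i] is the matrix coefficient F_i(x) of D."""
    out = sp.zeros(*Q.shape)
    dQ = Q
    for F in Fs:               # i-th term: (d^i Q) * F_i, with F_i on the RIGHT
        out += dQ * F
        dQ = sp.diff(dQ, x)    # next derivative of Q
    return out.applyfunc(sp.expand)

Q = sp.Matrix([[x**2, x], [0, 1]])    # an arbitrary matrix polynomial
F0 = sp.zeros(2, 2)                   # order-0 coefficient
F1 = x * sp.eye(2)                    # F_1(x) = x I
F2 = sp.Matrix([[1, 0], [0, 0]])      # a constant order-2 coefficient

print(apply_right(Q, [F0, F1, F2]))   # = Q'(x) (x I) + Q''(x) F_2
```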
