Semidefinite descriptions of separable matrix cones*

Roland Hildebrand

February 2, 2008

*LJK, Tour IRMA, 51 rue des Mathématiques, 38400 St. Martin d'Hères, France ([email protected]). This paper presents research results of the Belgian Programme on Interuniversity Poles of Attraction, Phase V, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture; and of the Action Concertée Incitative "Masses de données" of CNRS, France. The scientific responsibility rests with its author.

Abstract

Let $K \subset E$, $K' \subset E'$ be convex cones residing in finite-dimensional real vector spaces. An element $y$ in the tensor product $E \otimes E'$ is $K \otimes K'$-separable if it can be represented as a finite sum $y = \sum_l x_l \otimes x'_l$, where $x_l \in K$ and $x'_l \in K'$ for all $l$. Let $\mathcal{S}(n)$, $\mathcal{H}(n)$, $\mathcal{Q}(n)$ be the spaces of $n \times n$ real symmetric, complex hermitian and quaternionic hermitian matrices, respectively. Let further $S_+(n)$, $H_+(n)$, $Q_+(n)$ be the cones of positive semidefinite matrices in these spaces. If a matrix $A \in \mathcal{H}(mn) = \mathcal{H}(m) \otimes \mathcal{H}(n)$ is $H_+(m) \otimes H_+(n)$-separable, then it fulfills also the so-called PPT condition, i.e. it is positive semidefinite and has a positive semidefinite partial transpose. The same implication holds for matrices in the spaces $\mathcal{S}(m) \otimes \mathcal{S}(n)$, $\mathcal{H}(m) \otimes \mathcal{S}(n)$, and for $m \le 2$ in the space $\mathcal{Q}(m) \otimes \mathcal{S}(n)$. We provide a complete enumeration of all pairs $(n,m)$ when the inverse implication is also true for each of the above spaces, i.e. the PPT condition is sufficient for separability. We also show that a matrix in $\mathcal{Q}(n) \otimes \mathcal{S}(2)$ is $Q_+(n) \otimes S_+(2)$-separable if and only if it is positive semidefinite.

Keywords: positive partial transpose, separability

AMS Subject Classification: 15A48

1 Introduction

Let $K, K'$ be regular convex cones (closed convex cones, containing no lines, with non-empty interior), residing in finite-dimensional real vector spaces $E, E'$. Then an element $w \in E \otimes E'$ of the tensor product space is called $K \otimes K'$-separable (or just separable, if it is clear which cones $K, K'$ are meant) if it can be represented as a convex combination of product elements $v \otimes v'$, where $v \in K$, $v' \in K'$. It is not hard to show that the set of separable elements is itself a regular convex cone. This cone is called the $K \otimes K'$-separable cone.

The notion of separability is intimately linked with the notion of positive maps [3],[9]. Cones of positive maps appear frequently in applications [10],[6] and are dual to separable cones [5]. Separability itself plays an increasingly important role in quantum information theory [8].

A particularly important case in optimization and Mathematical Programming is when the cones $K, K'$ are standard self-scaled cones such as Lorentz cones or cones of positive semidefinite (PSD) matrices [1]. Let $\mathcal{S}(n)$, $\mathcal{H}(n)$, $\mathcal{Q}(n)$ be the spaces of $n \times n$ real symmetric, complex hermitian and quaternionic hermitian matrices, respectively. Let further $S_+(n)$, $H_+(n)$, $Q_+(n)$ be the cones of PSD matrices in these spaces. If the elements of a pair of matrix spaces commute, then the tensor product of these spaces can be represented by the Kronecker product space and is itself a subset of such a matrix space. If such a product matrix is separable with respect to two PSD matrix cones, then it is necessarily PSD itself. In the case of $H_+(m) \otimes H_+(n)$-separability, where $n, m \in \mathbb{N}_+$, there exists another simple necessary condition for separability, the so-called PPT condition [8]. A matrix in $\mathcal{H}(mn) = \mathcal{H}(m) \otimes \mathcal{H}(n)$ fulfills the PPT condition if it is positive semidefinite and has a positive semidefinite partial transpose.

In the spaces $\mathcal{S}(m) \otimes \mathcal{S}(n)$ and $\mathcal{H}(m) \otimes \mathcal{S}(n)$ the PPT condition reduces just to positivity, i.e. inclusion in the cone $S_+(mn)$ or $H_+(mn)$, respectively. This is because the positivity property of real symmetric or complex hermitian matrices is preserved under transposition. However, the cone $Q_+(n)$ is invariant under transposition only for $n \le 2$. Therefore for matrices in $\mathcal{Q}(m) \otimes \mathcal{S}(n)$ the PPT condition is stronger than just positivity.
Moreover, it follows that the PPT condition is necessary for $Q_+(m) \otimes S_+(n)$-separability only for $m \le 2$, while positivity is necessary for arbitrary $(n,m)$.

The importance of the PPT condition and the positivity condition is based on the fact that these conditions are semidefinite representable (i.e. expressible in the form of linear matrix inequalities) and hence easily verifiable algorithmically, in contrast to separability. It is then important to know in which cases these conditions are actually equivalent to separability, rather than only necessary. In these cases one then obtains semidefinite descriptions of the corresponding separable cones.

The theorem of Woronowicz-Peres states that in the case $m = 2$, $n = 3$ the PPT condition is not only necessary, but also sufficient for $H_+(m) \otimes H_+(n)$-separability [15],[8]. However, there exist matrices in $\mathcal{H}(2) \otimes \mathcal{H}(4)$ which fulfill the PPT condition, but are not separable [15]. Similarly, Terpstra [10] has shown that positivity is sufficient for $S_+(m) \otimes S_+(n)$-separability for $\min(n,m) \le 2$, but not for $n = m = 3$. In [10] this was formulated in the equivalent form of sums of squares representability of biquadratic forms.

One can then conclude that the positivity condition is equivalent to $S_+(m) \otimes S_+(n)$-separability if and only if $\min(n,m) \le 2$, and that the PPT condition is equivalent to $H_+(m) \otimes H_+(n)$-separability if and only if $\min(n,m) = 1$ or $m + n \le 5$.

In this contribution we provide a similar classification for the spaces $\mathcal{H}(m) \otimes \mathcal{S}(n)$ and $\mathcal{Q}(m) \otimes \mathcal{S}(n)$. We show that positivity is equivalent to $H_+(m) \otimes S_+(n)$-separability if and only if $m = 1$ or $n \le 2$ or $m + n \le 5$, and that the PPT condition is equivalent to $Q_+(2) \otimes S_+(n)$-separability if and only if $n \le 3$. Further, we show that for $m \ge 3$ positivity is equivalent to $Q_+(m) \otimes S_+(n)$-separability if and only if $n \le 2$. In addition, we enumerate all pairs $(n,m)$ for which the positivity property in $\mathcal{Q}(m) \otimes \mathcal{S}(n)$ is preserved by the operation of matrix transposition, namely the cases $m = 1$, $n$ arbitrary, and the cases $m = 2$, $n \le 2$. This involves mainly the following new and nontrivial results.

First, we show that a matrix in $\mathcal{Q}(2) \otimes \mathcal{S}(3)$ is $Q_+(2) \otimes S_+(3)$-separable if and only if it fulfills the PPT condition. Second, we provide an example of a matrix in $\mathcal{H}(2) \otimes \mathcal{S}(4)$ which fulfills the PPT condition, but is not $H_+(2) \otimes S_+(4)$-separable, thus sharpening the counterexample provided in [15] for the $H_+(2) \otimes H_+(4)$ case. Third, we show that if a matrix in $\mathcal{Q}(n) \otimes \mathcal{S}(2)$ is positive semidefinite, then it is $Q_+(n) \otimes S_+(2)$-separable. In addition, we provide examples of matrices in $\mathcal{Q}(3)$ and $\mathcal{Q}(2) \otimes \mathcal{S}(3)$ which are PSD but whose transpose is not PSD.

The remainder of the paper is structured as follows. In the next section we provide exact definitions of separability and of the PPT condition and consider some of their basic properties. In Section 3 we consider low-dimensional cases and relations between the cones we deal with. In the next two sections we prove the sufficiency of the PPT condition for separability in the space $\mathcal{Q}(2) \otimes \mathcal{S}(3)$. In Section 5 we also provide an example of a PSD matrix in $\mathcal{Q}(2) \otimes \mathcal{S}(3)$ whose transpose is not PSD.
In Section 6 we provide a counterexample against sufficiency of the PPT condition for $H_+(2) \otimes S_+(4)$-separability. In Section 7 we prove the equivalence of positivity and separability in the space $\mathcal{Q}(n) \otimes \mathcal{S}(2)$. Finally we summarize our results in the last section. In the appendix we list facts about quaternions and quaternionic matrices which we use for the proof of the main results of the paper. There we also provide an example of a matrix in $Q_+(3)$ whose transpose is not PSD.

2 Definitions and preliminaries

In this section we introduce the cones we deal with and provide definitions of separability and the partial transpose. For basic information related to quaternions and quaternionic matrices we refer the reader to the appendix.

Throughout the paper, $i, j, k$ denote the imaginary units of the quaternions and the overbar $\bar{\ }$ the complex or quaternion conjugate. Let further $e_0, \dots, e_{m-1}$ be the canonical basis vectors of $\mathbb{R}^m$. By $\mathrm{id}_E$ we denote the identity operator on the space $E$, by $I_n$ the $n \times n$ identity matrix, by $0_{n \times m}$ a zero matrix of size $n \times m$, and by $0_n$ a zero matrix of size $n \times n$. Let further $\mathrm{diag}(A,B)$ denote a block-diagonal matrix with blocks $A$ and $B$. For a matrix $A$ with real, complex or quaternionic entries, $A^*$ will denote the transpose, complex conjugate transpose or quaternionic conjugate transpose of $A$, respectively, and $\mathrm{rk}\,A$ the rank of $A$. Further we denote by $GL_n(R)$ the set of invertible matrices of size $n \times n$ with entries in the ring $R$.

We now introduce several convex cones we deal with. Let $L_n$ be the $n$-dimensional standard Lorentz cone, or second order cone,

$$L_n = \left\{ (x_0, \dots, x_{n-1})^T \in \mathbb{R}^n \;\middle|\; x_0 \ge \sqrt{x_1^2 + \cdots + x_{n-1}^2} \right\}.$$

Let $\mathcal{S}(n)$ be the space of real symmetric $n \times n$ matrices and $S_+(n)$ the cone of positive semidefinite (PSD) matrices in $\mathcal{S}(n)$; $\mathcal{H}(n)$ the space of complex hermitian $n \times n$ matrices and $H_+(n)$ the cone of PSD matrices in $\mathcal{H}(n)$; $\mathcal{Q}(n)$ the space of quaternionic hermitian $n \times n$ matrices and $Q_+(n)$ the cone of PSD matrices in $\mathcal{Q}(n)$; $\mathcal{Q}^k(n)$ the space of quaternionic hermitian $n \times n$ matrices with zero $k$-component and $Q_+^k(n)$ the intersection of the PSD cone $Q_+(n)$ with the space $\mathcal{Q}^k(n)$. All these cones are regular, i.e. closed, containing no line, and with nonempty interior.

Let $E, E'$ be real vector spaces and $K \subset E$, $K' \subset E'$ regular convex cones in these spaces.

Definition 2.1. [5] An element of the tensor product $E \otimes E'$ is called $K \otimes K'$-separable if it can be written as a sum $\sum_{l=1}^N p_l\, x_l \otimes x'_l$, where $N \in \mathbb{N}$ and $p_l > 0$, $x_l \in K$, $x'_l \in K'$ for all $l = 1, \dots, N$.

The set of $K \otimes K'$-separable elements forms a regular convex cone in $E \otimes E'$.

We are interested in the case when the spaces $E, E'$ are spaces of hermitian matrices, and the cones $K, K'$ are the corresponding cones of PSD matrices in these spaces. If the elements in the factor spaces commute, then we can represent tensor products of matrices by Kronecker products, which will again be hermitian matrices. If the elements do not commute, then the Kronecker products will in general not be hermitian. Since we are interested in describing the separable cone by a matrix inequality, we do not consider this latter case in this contribution.

In particular, for $m, n \in \mathbb{N}_+$ we consider the space $\mathcal{S}(m) \otimes \mathcal{S}(n)$ as a subspace of $\mathcal{S}(mn)$, the spaces $\mathcal{H}(m) \otimes \mathcal{S}(n)$ and $\mathcal{H}(m) \otimes \mathcal{H}(n)$ as subspaces of $\mathcal{H}(mn)$, and the space $\mathcal{Q}(m) \otimes \mathcal{S}(n)$ as a subspace of $\mathcal{Q}(mn)$. As can easily be seen, exchanging the factors in the tensor product is equivalent to applying a certain permutation of rows and columns to the corresponding Kronecker product. Hence exchanging the factors in the tensor product leads to a canonically isomorphic space, and we can restrict our consideration to the cases listed above.
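Definition 2.1 and the factor-exchange permutation are easy to experiment with numerically. The following Python sketch is our own illustration and not part of the original text; the helper `random_psd` and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    """A random matrix in S_+(n), built as X X^T."""
    X = rng.standard_normal((n, n))
    return X @ X.T

# An S_+(2) (x) S_+(3)-separable element per Definition 2.1:
# a positive combination of Kronecker products of PSD factors.
m, n, N = 2, 3, 5
C = sum(np.kron(random_psd(m), random_psd(n)) for _ in range(N))

# Separable matrices are in particular PSD (cf. Proposition 2.9 below).
assert np.linalg.eigvalsh(C).min() >= -1e-9

# Exchanging the tensor factors amounts to a simultaneous permutation
# of the rows and columns of the Kronecker product.
P = np.zeros((m * n, m * n))
for i in range(m):
    for j in range(n):
        P[j * m + i, i * n + j] = 1.0
A, B = random_psd(m), random_psd(n)
assert np.allclose(P @ np.kron(A, B) @ P.T, np.kron(B, A))
```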
Definition 2.2. An automorphism $A$ of $E$ is called an automorphism of $K$ if $A[K] = K$.

The automorphisms of a cone $K$ form a group, which will be called $\mathrm{Aut}(K)$. For regular convex cones $K, K'$ let $g \in \mathrm{Aut}(K)$, $g' \in \mathrm{Aut}(K')$ be elements of their automorphism groups. Since $g, g'$ are linear automorphisms of the underlying spaces $E, E'$, we can consider their tensor product $g \otimes g'$, which will be a linear automorphism of the space $E \otimes E'$. The following assertion is trivial but nevertheless very useful.

Proposition 2.3. Let $\mathcal{G}_{K \otimes K'}$ be the set of maps $\{ g \otimes g' : E \otimes E' \to E \otimes E' \mid g \in \mathrm{Aut}(K),\ g' \in \mathrm{Aut}(K') \}$. Then $\mathcal{G}_{K \otimes K'}$ is a group. It is canonically isomorphic to the factor group $[\mathrm{Aut}(K) \times \mathrm{Aut}(K')] / \{ (\alpha\,\mathrm{id}_E,\ \alpha^{-1}\mathrm{id}_{E'}) \mid \alpha \in \mathbb{R}_+ \}$. The cone of $K \otimes K'$-separable elements is invariant with respect to the action of $\mathcal{G}_{K \otimes K'}$, and $\mathcal{G}_{K \otimes K'}$ is a subgroup of the automorphism group of this cone.

Definition 2.4. A face $F$ of a convex cone $K$ is a subset of $K$ with the following property: if $x, y \in K$ and $x + y \in F$, then $x, y \in F$. For $x \in K$, the face of $x$ in $K$ is the minimal face of $K$ containing $x$.

It is not hard to see that a face $F$ is the face of a point $x$ if and only if $x$ is contained in the relative interior of $F$. If $x \in K$ and $y \in K'$, then the face of $(x,y)$ in $K \times K' \subset E \times E'$ is $F_x \times F_y$, where $F_x$ is the face of $x$ in $K$ and $F_y$ is the face of $y$ in $K'$.

Definition 2.5. An extreme ray of a regular convex cone $K$ is a 1-dimensional face. Any non-zero point on an extreme ray is a generator of that extreme ray.

A convex cone is the convex hull of its extreme rays.

Lemma 2.6. Let $K \subset \mathbb{R}^N$ be a regular convex cone and $L \subset \mathbb{R}^N$ a linear subspace of dimension $n$. Let $K' = K \cap L$ and let $x$ be the generator of an extreme ray of $K'$. Then the face of $x$ in $K$ has at most dimension $N - n + 1$.

Proof. Let $d$ be the dimension of the face $F$ of $x$ in $K$. Then the intersection $F \cap L$ has dimension $d' \ge d + n - N$, and $x$ is in the relative interior of $F \cap L \subset K'$. On the other hand, $d' = 1$, since $x$ generates an extreme ray of $K'$. It follows that $d \le N - n + 1$.

Any face of $S_+(n)$, $H_+(n)$, or $Q_+(n)$ is isomorphic to the cone $S_+(l)$, $H_+(l)$, or $Q_+(l)$, respectively, for some $l \in \{0, \dots, n\}$, has dimension $\frac{l(l+1)}{2}$, $l^2$, or $l(2l-1)$, respectively, and consists of matrices of rank not exceeding $l$. The proof of this assertion is similar for all three cases; for the quaternionic case we refer the reader to Proposition A.6 in the appendix.

Denote the spaces $\mathcal{S}(2)$, $\mathcal{H}(2)$, $\mathcal{Q}^k(2)$, $\mathcal{Q}(2)$ by $E_3, E_4, E_5, E_6$, and the cones $S_+(2)$, $H_+(2)$, $Q_+^k(2)$, $Q_+(2)$ by $K_3, K_4, K_5, K_6$, respectively. The index denotes the real dimension of the corresponding space or cone. Let $T_m : E_m \to E_m$, $m = 3,4,5,6$, be the matrix transposition, or equivalently the complex or quaternion conjugation in $E_m$. Note that $T_m$ is in $\mathrm{Aut}(K_m)$.

We have the inclusions $E_m \subset E_n$ and $K_m \subset K_n$ for $m \le n$. For $n \in \mathbb{N}_+$ we have $E_3 \otimes \mathcal{S}(n) \subset \mathcal{S}(2n)$, $E_4 \otimes \mathcal{S}(n) \subset \mathcal{H}(2n)$, $E_5 \otimes \mathcal{S}(n) \subset \mathcal{Q}^k(2n)$ and $E_6 \otimes \mathcal{S}(n) \subset \mathcal{Q}(2n)$. The space $E_m \otimes \mathcal{S}(n)$, $m = 3, \dots, 6$, consists of matrices composed of 4 symmetric $n \times n$ blocks.

Definition 2.7. Let $m, n \in \mathbb{N}_+$ and let $A$ be an $mn \times mn$ matrix, with real, complex or quaternionic entries. Partition $A$ in $m \times m$ blocks $A_{\alpha\beta}$ ($\alpha, \beta = 1, \dots, m$) of size $n \times n$. Then the partial transpose of $A$, denoted by $A^\Gamma$, will be defined as the result of exchanging the off-diagonal blocks $A_{\alpha\beta}$ and $A_{\beta\alpha}$ for all $\alpha \neq \beta$:

$$A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1m} \\ A_{21} & A_{22} & \cdots & A_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mm} \end{pmatrix}, \qquad A^\Gamma = \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{m1} \\ A_{12} & A_{22} & \cdots & A_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1m} & A_{2m} & \cdots & A_{mm} \end{pmatrix}.$$

Note that if $A$ is hermitian, i.e. $A = A^*$, then so is $A^\Gamma$.

Definition 2.8. Let $A$ be a hermitian $mn \times mn$ matrix, with real, complex or quaternionic entries. Then $A$ is said to fulfill the PPT condition, or to be a PPT matrix, if both $A$ and $A^\Gamma$ are positive semidefinite.
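Definitions 2.7 and 2.8 are straightforward to implement for real or complex matrices. The sketch below is our own illustration (numpy only); on the complex hermitian matrices of $\mathcal{H}(m) \otimes \mathcal{H}(n)$ the block exchange coincides with the usual partial transpose of quantum information.

```python
import numpy as np

def partial_transpose(A, m, n):
    """A^Gamma of Definition 2.7: exchange the off-diagonal n x n blocks
    A_{alpha beta} <-> A_{beta alpha} of an (mn) x (mn) matrix A."""
    B = A.reshape(m, n, m, n)      # B[al, :, be, :] is the block A_{al be}
    return B.transpose(2, 1, 0, 3).reshape(m * n, m * n)

def is_ppt(A, m, n, tol=1e-9):
    """The PPT condition of Definition 2.8: A and A^Gamma both PSD."""
    psd = lambda M: np.linalg.eigvalsh(M).min() >= -tol
    return psd(A) and psd(partial_transpose(A, m, n))

# Example: a diagonal (hence trivially PPT) matrix in H(2) (x) H(2).
rho = np.diag([1.0, 2.0, 3.0, 4.0]).astype(complex)
assert is_ppt(rho, 2, 2)
```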
The set of PPT matrices in the spaces $\mathcal{S}(m) \otimes \mathcal{S}(n)$, $\mathcal{H}(m) \otimes \mathcal{S}(n)$, $\mathcal{Q}(m) \otimes \mathcal{S}(n)$, $\mathcal{H}(m) \otimes \mathcal{H}(n)$ for fixed $m, n \in \mathbb{N}_+$ is a regular convex cone.

It is well-known that for matrices in $\mathcal{H}(mn)$ being PPT is a necessary condition for $H_+(m) \otimes H_+(n)$-separability [8]. We can generalize this result in the following way.

Proposition 2.9. Let $V_m$ be one of the matrix spaces $\mathcal{S}(m)$, $\mathcal{H}(m)$, $\mathcal{Q}(m)$ and $V_n$ one of the spaces $\mathcal{S}(n)$, $\mathcal{H}(n)$, $\mathcal{Q}(n)$, such that the elements of $V_m$ and $V_n$ commute. Let $K_+(m), K_+(n)$ be the corresponding positive matrix cones. Then any $K_+(m) \otimes K_+(n)$-separable matrix in $V_m \otimes V_n$ is PSD. Assume further that $K_+(m)$ is invariant with respect to transposition. Then any $K_+(m) \otimes K_+(n)$-separable matrix in $V_m \otimes V_n$ is a PPT matrix.

Proof. It is sufficient to prove the assertions for the extreme rays of the $K_+(m) \otimes K_+(n)$-separable cone. Let the matrix $A$ generate such an extreme ray. Then $A$ can be written as a Kronecker product $A_m \otimes A_n$, where $A_m = xx^* \in K_+(m)$, $A_n = yy^* \in K_+(n)$ generate extreme rays of the corresponding PSD cones, and $x, y$ are appropriate column vectors of size $m, n$, respectively. Hence $A = (x \otimes y)(x \otimes y)^*$ is PSD, which proves the first part of the proposition.

Let us prove the second part. We have $A^\Gamma = A_m^T \otimes A_n$. By the assumption of the proposition $A_m^T$ also generates an extreme ray of $K_+(m)$ and can hence be expressed as $A_m^T = x'x'^*$, where $x'$ is an appropriate column vector of length $m$. Then $A^\Gamma = (x' \otimes y)(x' \otimes y)^*$ is also PSD and $A$ is a PPT matrix.

In particular, the PPT property is necessary for $S_+(m) \otimes S_+(n)$-, $H_+(m) \otimes S_+(n)$-, and $H_+(m) \otimes H_+(n)$-separability for any $m, n \in \mathbb{N}_+$, and for $Q_+(m) \otimes S_+(n)$-separability for $m \le 2$ and $n \ge 1$ arbitrary (cf. Corollary A.8 in the appendix).

Note also that if $V_n = \mathcal{S}(n)$, then the operation of partial transposition is equivalent to full transposition. If, in addition, transposition preserves positivity of matrices in $V_m \otimes V_n$, then a matrix has the PPT property if and only if it is PSD. In particular, this holds for the spaces $\mathcal{S}(m) \otimes \mathcal{S}(n)$ and $\mathcal{H}(m) \otimes \mathcal{S}(n)$.

Let now $3 \le m \le 6$ and $n \ge 1$. Denote by $\Sigma_{m,n}$ the cone of $K_m \otimes S_+(n)$-separable matrices and by $\Gamma_{m,n}$ the cone of PPT matrices in the space $E_m \otimes \mathcal{S}(n)$. Observe that $K_m = \Sigma_{m,1} = \Gamma_{m,1}$ for all $m$. Proposition 2.9 yields the following result.

Corollary 2.10. For any $3 \le m \le 6$ and $n \in \mathbb{N}_+$, we have the inclusion $\Sigma_{m,n} \subset \Gamma_{m,n}$.

Let $\mathcal{G}_{m,n}$ be the group $\{ g \otimes g' \mid g \in \mathrm{Aut}(K_m),\ g' \in \mathrm{Aut}(S_+(n)) \}$. By Proposition 2.3 it is a subgroup of $\mathrm{Aut}(\Sigma_{m,n})$. Note that the operator $T_m \otimes \mathrm{id}_{\mathcal{S}(n)}$ of partial transposition is in $\mathcal{G}_{m,n}$ and amounts to complex or quaternion conjugation.

Let us define isomorphisms $\mathcal{I}_m : \mathbb{R}^m \to E_m$, $3 \le m \le 6$:

$$\mathcal{I}_3 : (x_0,x_1,x_2)^T \mapsto \begin{pmatrix} x_0 + x_1 & x_2 \\ x_2 & x_0 - x_1 \end{pmatrix}$$
$$\mathcal{I}_4 : (x_0,x_1,x_2,x_3)^T \mapsto \begin{pmatrix} x_0 + x_1 & x_2 + ix_3 \\ x_2 - ix_3 & x_0 - x_1 \end{pmatrix}$$
$$\mathcal{I}_5 : (x_0,x_1,x_2,x_3,x_4)^T \mapsto \begin{pmatrix} x_0 + x_1 & x_2 + ix_3 + jx_4 \\ x_2 - ix_3 - jx_4 & x_0 - x_1 \end{pmatrix}$$
$$\mathcal{I}_6 : (x_0,x_1,x_2,x_3,x_4,x_5)^T \mapsto \begin{pmatrix} x_0 + x_1 & x_2 + ix_3 + jx_4 + kx_5 \\ x_2 - ix_3 - jx_4 - kx_5 & x_0 - x_1 \end{pmatrix} \qquad (1)$$

It is not hard to check the following result (cf. Corollary A.8 in the appendix).

Lemma 2.11. $K_m = \mathcal{I}_m[L_m]$ for all $m = 3,4,5,6$.
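Lemma 2.11 can be spot-checked numerically for $m = 4$: under $\mathcal{I}_4$ of (1) one has $\det \mathcal{I}_4(x) = x_0^2 - x_1^2 - x_2^2 - x_3^2$ and $\mathrm{tr}\,\mathcal{I}_4(x) = 2x_0$, so membership in $L_4$ should coincide with positive semidefiniteness of the image. A short randomized check of our own, with a crude numerical tolerance:

```python
import numpy as np

rng = np.random.default_rng(1)

def I4(x):
    """The isomorphism I_4 of (1): R^4 -> E_4 = H(2)."""
    x0, x1, x2, x3 = x
    return np.array([[x0 + x1, x2 + 1j * x3],
                     [x2 - 1j * x3, x0 - x1]])

for _ in range(1000):
    y = rng.standard_normal(4)
    margin = y[0] - np.linalg.norm(y[1:])
    if abs(margin) < 1e-6:
        continue                # skip numerically ambiguous boundary points
    in_L4 = margin > 0
    in_K4 = np.linalg.eigvalsh(I4(y)).min() > -1e-9
    assert in_L4 == in_K4       # x in L_4 iff I_4(x) in K_4 = H_+(2)
```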
As a consequence, the group $\mathrm{Aut}(K_m)$ is isomorphic to the automorphism group of the Lorentz cone $L_m$.

By virtue of (1) the map $\mathcal{I}_m \otimes \mathrm{id}_{\mathcal{S}(n)}$ is an isomorphism between the spaces $\mathbb{R}^m \otimes \mathcal{S}(n)$ and $E_m \otimes \mathcal{S}(n)$. Hence we can represent any element of $E_m \otimes \mathcal{S}(n)$ in a unique way as an image $(\mathcal{I}_m \otimes \mathrm{id}_{\mathcal{S}(n)})(\sum_{l=0}^{m-1} e_l \otimes B_l)$, where $B_0, \dots, B_{m-1} \in \mathcal{S}(n)$.

Definition 2.12. For any element $B = (\mathcal{I}_m \otimes \mathrm{id}_{\mathcal{S}(n)})(\sum_{l=0}^{m-1} e_l \otimes B_l) \in E_m \otimes \mathcal{S}(n)$, the matrices $B_0, \dots, B_{m-1} \in \mathcal{S}(n)$ will be called the components of $B$.
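For $m = 4$ the passage between an element of $E_4 \otimes \mathcal{S}(n) \subset \mathcal{H}(2n)$ and its components is a simple block computation, which the following sketch makes explicit. The helper functions are our own and are used only for illustration.

```python
import numpy as np

def components(B, n):
    """Components B_0,...,B_3 (Definition 2.12) of B in E_4 (x) S(n) c H(2n),
    where, by the isomorphism I_4 of (1),
        B = [[B_0 + B_1, B_2 + i B_3], [B_2 - i B_3, B_0 - B_1]]
    with all B_l real symmetric n x n."""
    B11, B22, B12 = B[:n, :n].real, B[n:, n:].real, B[:n, n:]
    return (B11 + B22) / 2, (B11 - B22) / 2, B12.real, B12.imag

def assemble(B0, B1, B2, B3):
    """The inverse map (I_4 (x) id_S(n))(sum_l e_l (x) B_l)."""
    return np.block([[B0 + B1, B2 + 1j * B3],
                     [B2 - 1j * B3, B0 - B1]])
```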
Proposition 2.13. The cone $\Gamma_{m,n}$ is invariant with respect to the action of $\mathcal{G}_{m,n}$, i.e. $\mathcal{G}_{m,n} \subset \mathrm{Aut}(\Gamma_{m,n})$.

Before proceeding to the proof of this proposition, we define $\gamma_m$, $m = 3, \dots, 6$, to be the set of all appropriate matrices $S$ such that the mapping $A \mapsto SAS^*$ is an automorphism of the space $E_m$. More precisely, define $\gamma_3 = GL_2(\mathbb{R})$, $\gamma_4 = GL_2(\mathbb{C})$, $\gamma_6 = GL_2(\mathbb{H})$, and let $\gamma_5$ be the set of all matrices $S \in GL_2(\mathbb{H})$ such that $SAS^* \in E_5$ whenever $A \in E_5$. The set $\gamma_5$ is a matrix group, its Lie algebra given by all quaternionic $2 \times 2$ matrices for which: i) the trace has zero $i$- and $j$-components, ii) the $k$-components of the off-diagonal elements are zero, and iii) the $k$-components of the diagonal elements are equal.

Observe that mappings of the form $A \mapsto SAS^*$ preserve the PSD matrix cone $K_m$. Let then $H_m^\gamma : \gamma_m \to \mathrm{Aut}(K_m)$ be the group homomorphism assigning to any matrix $S \in \gamma_m$ the automorphism $H_m^\gamma(S) : A \mapsto SAS^*$. Denote the image of $H_m^\gamma$ by $G_m$. Hence $G_m$ is the subgroup of all automorphisms of $E_m$ of the form $A \mapsto SAS^*$. Let also $H_n^R : GL_n(\mathbb{R}) \to \mathrm{Aut}(S_+(n))$ be the group homomorphism assigning to any matrix $S \in GL_n(\mathbb{R})$ the automorphism $H_n^R(S) : A \mapsto SAS^T$. This group homomorphism is surjective.

Proof of Proposition 2.13. Assume the notations of the proposition.

The group $\mathrm{Aut}(L_m)$ is a Lie group of dimension $\frac{m(m-1)}{2} + 1$ and consists of two connected components, corresponding to automorphisms with positive and negative determinants, respectively. Hence $\mathrm{Aut}(K_m)$ is generated by the connected component of its neutral element and a single automorphism with negative determinant. By computing the rank of the Lie algebra of $G_m$ one easily determines that the dimension of $G_m$ equals that of $\mathrm{Aut}(L_m)$. It follows [12] that $G_m$ contains the connected component of the neutral element of $\mathrm{Aut}(K_m)$. For $m = 3,5$ the group $G_m$ contains automorphisms with negative determinants and hence equals $\mathrm{Aut}(K_m)$. For $m = 4,6$ the group $G_m$ is connected, because $\gamma_m$ is connected. However, in this case the matrix transposition $T_m$ has negative determinant as an automorphism of $E_m$. Thus $\mathrm{Aut}(K_m)$ is generated by $G_m$ and $T_m$ for all $m = 3, \dots, 6$.

Therefore $\mathcal{G}_{m,n}$ is generated by the following elements: first, elements of the form $H_m^\gamma(S) \otimes \mathrm{id}_{\mathcal{S}(n)}$, where $S \in \gamma_m$; second, the element $T_m \otimes \mathrm{id}_{\mathcal{S}(n)}$; and third, elements of the form $\mathrm{id}_{E_m} \otimes H_n^R(S)$, where $S \in GL_n(\mathbb{R})$. Let us now consider the action of these generators on the cone $\Gamma_{m,n}$. Let $K_+ \subset E_m \otimes \mathcal{S}(n)$ be the cone of PSD matrices in $E_m \otimes \mathcal{S}(n)$.

Let $S \in \gamma_m$. Then $T_m^{-1} \circ H_m^\gamma(S) \circ T_m$ is an element of $G_m$, and there exists a matrix $S' \in \gamma_m$ such that $T_m^{-1} \circ H_m^\gamma(S) \circ T_m = H_m^\gamma(S')$. The element $H_m^\gamma(S) \otimes \mathrm{id}_{\mathcal{S}(n)}$ acts on $E_m \otimes \mathcal{S}(n)$ as $A \mapsto (S \otimes I_n)A(S \otimes I_n)^*$ and hence preserves the cone $K_+$. Moreover, we have $(H_m^\gamma(S) \otimes \mathrm{id}_{\mathcal{S}(n)}) \circ (T_m \otimes \mathrm{id}_{\mathcal{S}(n)}) = (H_m^\gamma(S) \circ T_m) \otimes \mathrm{id}_{\mathcal{S}(n)} = (T_m \circ H_m^\gamma(S')) \otimes \mathrm{id}_{\mathcal{S}(n)} = (T_m \otimes \mathrm{id}_{\mathcal{S}(n)}) \circ (H_m^\gamma(S') \otimes \mathrm{id}_{\mathcal{S}(n)})$. If we denote the set $\{ A^\Gamma \mid A \in K_+ \}$ by $K_+^\Gamma$, then we get $(H_m^\gamma(S) \otimes \mathrm{id}_{\mathcal{S}(n)})[K_+^\Gamma] = (H_m^\gamma(S) \otimes \mathrm{id}_{\mathcal{S}(n)}) \circ (T_m \otimes \mathrm{id}_{\mathcal{S}(n)})[K_+] = (T_m \otimes \mathrm{id}_{\mathcal{S}(n)}) \circ (H_m^\gamma(S') \otimes \mathrm{id}_{\mathcal{S}(n)})[K_+] = (T_m \otimes \mathrm{id}_{\mathcal{S}(n)})[K_+] = K_+^\Gamma$. Hence $H_m^\gamma(S) \otimes \mathrm{id}_{\mathcal{S}(n)}$ preserves also the cone $K_+^\Gamma$, and therefore also $\Gamma_{m,n} = K_+ \cap K_+^\Gamma$.

The automorphism $T_m \otimes \mathrm{id}_{\mathcal{S}(n)}$ is the operator of partial transposition on $E_m \otimes \mathcal{S}(n)$ and hence preserves $\Gamma_{m,n}$ by definition.

Let now $S \in GL_n(\mathbb{R})$. Then $\mathrm{id}_{E_m} \otimes H_n^R(S)$ acts on $E_m \otimes \mathcal{S}(n)$ like $A \mapsto (I_2 \otimes S)A(I_2 \otimes S)^T$ and hence preserves $K_+$. Moreover, $\mathrm{id}_{E_m} \otimes H_n^R(S)$ commutes with the operator $T_m \otimes \mathrm{id}_{\mathcal{S}(n)}$ of partial transposition and hence preserves also $K_+^\Gamma$. Therefore it preserves $\Gamma_{m,n}$.

Thus all generators of $\mathcal{G}_{m,n}$ preserve the cone $\Gamma_{m,n}$, and so do all other elements.

Proposition 2.13 allows us, when looking for elements in $\Gamma_{m,n} \setminus \Sigma_{m,n}$, to restrict the consideration to elements that are in some canonical form with respect to the action of the symmetry group $\mathcal{G}_{m,n}$, since the inclusions in $\Sigma_{m,n}$ or $\Gamma_{m,n}$ hold or do not hold for all elements of an orbit simultaneously.

3 Relations between different cones and trivial cases

The next three sections aim at proving that the cones $\Sigma_{6,3}$ and $\Gamma_{6,3}$ are equal.

Proposition 3.1. Let $3 \le m' \le m \le 6$ and $n \ge n' > 0$, and suppose that $\Gamma_{m,n} = \Sigma_{m,n}$. Then $\Gamma_{m',n'} = \Sigma_{m',n'}$.

Proof. Assume the conditions of the proposition. It is sufficient to prove the assertion for $n' = n-1$, $m' = m$ and for $n' = n$, $m' = m-1$.

Let first $n' = n-1$, $m' = m$. Define the linear mapping $L_{n',n} : \mathcal{S}(n') \to \mathcal{S}(n)$ as follows. For $S \in \mathcal{S}(n')$, let $L_{n',n}(S)$ be the matrix which has its upper left $n' \times n'$ submatrix equal to $S$ and all of whose other elements are zero. It is not hard to see that if a matrix $A \in E_m \otimes \mathcal{S}(n)$ is in the image of $\mathrm{id}_{E_m} \otimes L_{n',n}$, then $A \in \Sigma_{m,n}$ if and only if $A \in (\mathrm{id}_{E_m} \otimes L_{n',n})[\Sigma_{m,n'}]$, and $A \in \Gamma_{m,n}$ if and only if $A \in (\mathrm{id}_{E_m} \otimes L_{n',n})[\Gamma_{m,n'}]$. Injectivity of $L_{n',n}$ now yields the desired result.

Let now $n' = n$, $m' = m-1$ and let $C \in \Gamma_{m',n}$. Then we have also $C \in \Gamma_{m,n}$ and by assumption $C \in \Sigma_{m,n}$. Therefore we can represent $C$ as a sum $\sum_{l=1}^N A_l \otimes B_l$, where $A_l \in K_m$ and $B_l \in S_+(n)$. Let now $\pi_m : \mathbb{R}^m \to \mathbb{R}^{m'}$ be the projection that assigns to any vector $(x_0, \dots, x_{m-2}, x_{m-1})^T \in \mathbb{R}^m$ the vector $(x_0, \dots, x_{m-2})^T \in \mathbb{R}^{m'}$, and let $\pi_m^E = \mathcal{I}_{m'} \circ \pi_m \circ \mathcal{I}_m^{-1} : E_m \to E_{m'}$. Since $\pi_m[L_m] = L_{m'}$, we have $\pi_m^E[K_m] = K_{m'}$. Moreover, the restriction of $\pi_m^E \otimes \mathrm{id}_{\mathcal{S}(n)}$ to $E_{m'} \otimes \mathcal{S}(n)$ is just $\mathrm{id}_{E_{m'} \otimes \mathcal{S}(n)}$. Therefore $C = (\pi_m^E \otimes \mathrm{id}_{\mathcal{S}(n)})(C) = \sum_{l=1}^N A'_l \otimes B_l$ with $A'_l = \pi_m^E(A_l)$. But $A'_l$ is an element of $K_{m'}$, because $A_l \in K_m$. Therefore $C$ is $K_{m'} \otimes S_+(n)$-separable and hence in $\Sigma_{m',n}$.

As one should expect, the equality $\Gamma_{m,n} = \Sigma_{m,n}$ is thus easier to prove for smaller $n, m$. By a similar reasoning we can prove the following results.

Lemma 3.2. Let $n, m \ge 3$. Then the positivity of a matrix in $\mathcal{H}(m) \otimes \mathcal{S}(n)$ is not sufficient for $H_+(m) \otimes S_+(n)$-separability. The PPT condition in $\mathcal{Q}(m) \otimes \mathcal{S}(n)$ is not sufficient for $Q_+(m) \otimes S_+(n)$-separability.

Proof. Let $n, m \ge 3$. Then the cone of positive semidefinite matrices in the space $\mathcal{S}(m) \otimes \mathcal{S}(n)$ does not coincide with the cone of $S_+(m) \otimes S_+(n)$-separable matrices. This is a consequence of the fact that not every nonnegative definite biquadratic form $F(x,y)$, where $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$ are two vectors of variables, is representable as a sum of squares of bilinear forms [10]. Let then $C \in \mathcal{S}(m) \otimes \mathcal{S}(n)$ be a PSD matrix which is not $S_+(m) \otimes S_+(n)$-separable. Then $C$ is also PSD if considered as an element of $\mathcal{H}(m) \otimes \mathcal{S}(n)$, and fulfills the PPT condition if considered as an element of $\mathcal{Q}(m) \otimes \mathcal{S}(n)$.

Now suppose that $C$ is $H_+(m) \otimes S_+(n)$-separable. Then there exist an integer $N \in \mathbb{N}$ and matrices $A_1, \dots, A_N \in H_+(m)$, $B_1, \dots, B_N \in S_+(n)$ such that $C = \sum_{l=1}^N A_l \otimes B_l$. Since $C$ is real, we then have $C = \sum_{l=1}^N \mathrm{Re}(A_l) \otimes B_l$. But $\mathrm{Re}(A_l) \in S_+(m)$ for all $l$. Therefore $C$ is $S_+(m) \otimes S_+(n)$-separable, contrary to our assumption.

In the same way one shows that $Q_+(m) \otimes S_+(n)$-separability of $C$ leads to a contradiction.
It now happens that an element of $\Sigma_{m,n}$ can be reduced to elements of cones in lower dimensions. We formalize this in the following definitions.

Definition 3.3. Let $3 \le m \le 6$ and $n \ge 1$. Let $B \in E_m \otimes \mathcal{S}(n)$ have components $B_0, \dots, B_{m-1} \in \mathcal{S}(n)$. If there exist a matrix $S \in GL_n(\mathbb{R})$ and positive integers $n_1, n_2$ with $n_1 + n_2 = n$ such that the matrices $S B_l S^T$ are block-diagonal with blocks $B_l^1, B_l^2$ of sizes $n_1 \times n_1$, $n_2 \times n_2$ for all $l = 0, \dots, m-1$, then we call the element $B$ decomposable. We call the elements $B^1 = (\mathcal{I}_m \otimes \mathrm{id}_{\mathcal{S}(n_1)})(\sum_{l=0}^{m-1} e_l \otimes B_l^1)$, $B^2 = (\mathcal{I}_m \otimes \mathrm{id}_{\mathcal{S}(n_2)})(\sum_{l=0}^{m-1} e_l \otimes B_l^2)$ block components of $B$.

An element of $E_m \otimes \mathcal{S}(n)$ may have several decompositions, and its block components are not uniquely defined.

Definition 3.4. Let $4 \le m \le 6$ and $n \ge 1$. If an element $B \in E_m \otimes \mathcal{S}(n)$ has linearly dependent components, then we call it reducible. If there exists $A' \in \mathrm{Aut}(K_m)$ such that $B' = (A' \otimes \mathrm{id}_{\mathcal{S}(n)})(B) \in E_{m-1} \otimes \mathcal{S}(n)$, then we call $B'$ a reduction of $B$.

Lemma 3.5. Let $B \in \Gamma_{m,n}$ be reducible. Then $B$ has a reduction.

Proof. Assume the conditions of the lemma and let $B_0, \dots, B_{m-1}$ be the components of $B$. Then there exists a nonzero vector $\lambda = (\lambda_0, \dots, \lambda_{m-1})^T \in \mathbb{R}^m$ such that $\sum_{l=0}^{m-1} \lambda_l B_l = 0$. If $B = 0$, then $B \in E_{m-1} \otimes \mathcal{S}(n)$ and $B$ is a reduction of itself. Hence let us assume that $B \neq 0$. Since $B \in \Gamma_{m,n}$, the diagonal elements of the components $B_0, \dots, B_{m-1}$ cannot all be zero simultaneously. Without restriction of generality let us assume that the $(1,1)$-elements of $B_0, \dots, B_{m-1}$ form a nonzero vector $b = (b_0, \dots, b_{m-1})^T \in \mathbb{R}^m$. Define $b' = (b_1, \dots, b_{m-1})^T, \lambda' = (\lambda_1, \dots, \lambda_{m-1})^T \in \mathbb{R}^{m-1}$.

The element $\mathcal{I}_m(b) \in E_m$ is a principal $2 \times 2$ submatrix of $B$. Since $B \in \Gamma_{m,n}$, we have $\mathcal{I}_m(b) \succeq 0$ and hence $\mathcal{I}_m(b) \in K_m$, $b \in L_m$, and $0 \neq b_0 \ge \|b'\|$. Moreover, $\langle b, \lambda \rangle = b_0 \lambda_0 + \langle b', \lambda' \rangle = 0$. It follows that $|b_0 \lambda_0| \le \|b'\| \|\lambda'\| \le b_0 \|\lambda'\|$ and $|\lambda_0| \le \|\lambda'\|$. We distinguish two cases.

i) $|\lambda_0| = \|\lambda'\| \neq 0$. Let without restriction of generality $\lambda_0 = 1$. Now choose an orthogonal $(m-1) \times (m-1)$ matrix $U$ such that its first row is given by $\lambda'^T$. Define the automorphism $A' \in \mathrm{Aut}(L_m)$ by $A' = \mathrm{diag}(1, U)$ and let $A = \mathcal{I}_m A' \mathcal{I}_m^{-1}$ be the corresponding automorphism of $K_m$. Denote the components of $B' = (A \otimes \mathrm{id}_{\mathcal{S}(n)})(B)$ by $B'_0, \dots, B'_{m-1}$. We now compute $B'_0$ and $B'_1$. We have $B'_0 = B_0$, $B'_1 = \sum_{l=1}^{m-1} \lambda_l B_l = -\lambda_0 B_0$. Hence $B'_0 + B'_1 = 0$. Note that this sum is the upper left $n \times n$ block of the matrix $B'$. But $A \otimes \mathrm{id}_{\mathcal{S}(n)} \in \mathcal{G}_{m,n}$, hence $B' \in \Gamma_{m,n}$ by Proposition 2.13 and in particular $B'$ is PSD. It follows that the first $n$ rows and columns of $B'$ are zero. In particular, we get $B'_{m-1} = 0$, and $B' \in E_{m-1} \otimes \mathcal{S}(n)$.

ii) $|\lambda_0| < \|\lambda'\|$. Let without restriction of generality $\|\lambda'\|^2 = \lambda_0^2 + 1$. Then there exist a number $\xi$ and a unit length vector $v \in \mathbb{R}^{m-1}$ such that $\lambda_0 = \sinh\xi$, $\lambda'^T = \cosh\xi\, v^T$. Let now $A'_1 \in \mathrm{Aut}(L_m)$ be a hyperbolic rotation in the $(e_0, e_{m-1})$-plane by the angle $\xi$, and let $A'_2 \in \mathrm{Aut}(L_m)$ be a rotation in the linear subspace spanned by $\{e_1, \dots, e_{m-1}\}$, given by an orthogonal $(m-1) \times (m-1)$ matrix which has $v$ as its last row. The last row of the matrix $A' = A'_1 A'_2 \in \mathrm{Aut}(L_m)$ is then given by $(\sinh\xi, \cosh\xi\, v_1, \dots, \cosh\xi\, v_{m-1}) = \lambda^T$. Define now the automorphism $A = \mathcal{I}_m A' \mathcal{I}_m^{-1} \in \mathrm{Aut}(K_m)$ and let $B' = (A \otimes \mathrm{id}_{\mathcal{S}(n)})(B)$. The last component of $B'$ is then given by $B'_{m-1} = \sum_{l=0}^{m-1} \lambda_l B_l = 0$. Hence $B' \in E_{m-1} \otimes \mathcal{S}(n)$.
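The construction in case ii) of the proof of Lemma 3.5 is concrete enough to verify numerically. The sketch below is our own illustration; the random seed and the QR-based completion of $v$ to an orthogonal matrix are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 6

# A vector lambda, scaled so that ||lambda'||^2 = lambda_0^2 + 1
# (case ii in the proof of Lemma 3.5).
lam = rng.standard_normal(m)
lam[1:] *= np.sqrt(lam[0] ** 2 + 1) / np.linalg.norm(lam[1:])

xi = np.arcsinh(lam[0])            # lambda_0 = sinh(xi)
v = lam[1:] / np.cosh(xi)          # unit vector with lambda' = cosh(xi) v

# A'_1: hyperbolic rotation in the (e_0, e_{m-1})-plane by the angle xi.
A1 = np.eye(m)
A1[[0, m - 1], [0, m - 1]] = np.cosh(xi)
A1[0, m - 1] = A1[m - 1, 0] = np.sinh(xi)

# A'_2: rotation of span{e_1,...,e_{m-1}} by an orthogonal matrix
# having v as its last row (completed via a QR decomposition).
Q, _ = np.linalg.qr(np.column_stack([v, rng.standard_normal((m - 1, m - 2))]))
U = np.vstack([Q[:, 1:].T, v])
A2 = np.eye(m)
A2[1:, 1:] = U

A = A1 @ A2
J = np.diag([1.0] + [-1.0] * (m - 1))
assert np.allclose(A.T @ J @ A, J)   # A preserves the Lorentz form
assert np.allclose(A[-1], lam)       # last row of A' equals lambda^T
```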
Thme−l1ast component of B′ is then given by Bm′ −1 I=m ′mlI=−m−01λ∈lBl =0.mHence ′ m 1 ∈ − ⊗S P We can hence reduce decomposable elements to elements in spaces with smaller n, and reducible elements to elements in spaces with smaller m. By using Proposition 2.13 and applying the lines of reasoning in the proof of Proposition 3.1 we arrive at the following proposition. Proposition 3.6. A decomposable element of Γ is in Σ if and only if its block components are m,n m,n in Σ and Σ , respectively. A reducible element of Γ is in Σ if and only if its reduction m,n1 m,n2 m,n m,n is in Σ . m 1,n − Corollary 3.7. Suppose that Γ =Σ for some n,m. Then all decomposable elements of Γ m,n m,n m,n+1 are in Σ and all reducible elements of Γ are in Σ . m,n+1 m+1,n m+1,n Let us now consider the cases m=3 and n=2. Theorem 3.8. Γ =Σ for any n 1. 3,n 3,n ≥ Proof. TheconeΓ istheconeofrealsymmetricPSDblock-Hankelmatricesofsize2n 2n. However, 3,n × such matrices are known to be separable. This follows from the spectral factorization theorem for quadratic matrix-valued polynomials in one variable [16]. Byasimilarreasoningappliedtocomplexhermitianblock-Hankelmatricesweobtainthe following result. Theorem 3.9. Let n 1. A matrix in (2) (n) is PSD if and only if it is S (2) H (n)- + + ≥ S ⊗ H ⊗ separable. Theorem 3.10. Γ =Σ for m=3,...,6. m,2 m,2 Proof. The space (2) is 3-dimensional. Hence any quadruple of real symmetric 2 2 matrices is S × linearly dependent, and for m 4 the space E (2) consists of reducible elements only. Since m ≥ ⊗S Γ = Σ by the previous theorem, we have Γ = Σ for arbitrary m = 3,...,6 by repeated 3,2 3,2 m,2 m,2 application of Corollary 3.7. Lemma 3.11. Let B Γ be partitioned in symmetric 3 3 blocks as follows 4,3 ∈ × B B B= 11 12 . (2) B B 12 22 (cid:18) (cid:19) Then either B is reducible or decomposable, and hence in Σ ; or in the orbit of B with respect to the 4,3 action of the group there exists an element B such that its upper left 3 3 subblock B equals G4,3 ′ × 1′1 I . 3 Proof. Let the assumptions of the lemma hold. Then we have B 0. Hence if a real vector v R3 is (cid:23) ∈ in the kernel of one of the matrices B ,B , then it is also in the kernels of B ,B . 11 22 12 12 Suppose that neither B nor B are of full rank. 11 22 If the kernels of B ,B have a nontrivial intersection, then B is decomposable, because this 11 22 intersection will be in the kernel of all four components of B. If the intersection of the kernels is trivial,then the kernelof B contains two linearly independent 12 real vectors. Then the real and imaginary parts of B must be linearly dependent, because they are 12 symmetric, sharing a 2-dimensional kernel and of rank not exceeding 1. Hence B is reducible. 7 By Corollary 3.7 and Theorems 3.8 and 3.10 any reducible or decomposable element of Γ is in 4,3 Σ . 4,3 Let us now suppose that at least one of the matrices B ,B has full rank and is hence positive 11 22 definite (PD).We canassumewithoutlossofgeneralitythatthisis B ,otherwisewepasstoamatrix 11 inthesameorbitbyfirstapplyingtheautomorphismHγ(σ ) id ,whereσ isthenon-trivial 2 2 permutation matrix. Then the matrix B =(id 3 H2 R⊗((BS(3)) ∈1/G2)4),3(B) is in t2he orbit of B and × ′ E4⊗ n 11 − we have B =I . 1′1 3 Theorem 3.12. Γ =Σ . 4,3 4,3 Proof. We shall show that any extreme ray of Γ is in Σ . 4,3 4,3 Let B generate an extreme ray of Γ . Note that Γ is isomorphic to the intersection of the 4,3 4,n 4n2-dimensional cone H (2n) with the 2n(n+1)-dimensional subspace of matrices consisting of four + symmetric n n blocks. 
Theorem 3.12. $\Gamma_{4,3} = \Sigma_{4,3}$.

Proof. We shall show that any extreme ray of $\Gamma_{4,3}$ is in $\Sigma_{4,3}$.

Let $B$ generate an extreme ray of $\Gamma_{4,3}$. Note that $\Gamma_{4,n}$ is isomorphic to the intersection of the $4n^2$-dimensional cone $H_+(2n)$ with the $2n(n+1)$-dimensional subspace of matrices consisting of four symmetric $n \times n$ blocks. Then the face of $B$ in $H_+(6)$ has at most dimension $36 - 24 + 1 = 13$ by Lemma 2.6. Hence the PSD matrix $B \in H_+(6)$ has at most rank 3.

By Lemma 3.11 we can assume without restriction of generality that $B$ is partitioned as in (2) with $B_{11} = I_3$. Since $B$ has rank 3, we can factorize it as

$$\begin{pmatrix} I_3 & B_{12} \\ B_{12}^* & B_{22} \end{pmatrix} = \begin{pmatrix} I_3 & W \end{pmatrix}^* \begin{pmatrix} I_3 & W \end{pmatrix}$$

with $W = B_{12} = W^T$. It follows that $\overline{W} W = W^* W = B_{22} = (\overline{W} W)^T = W \overline{W}$. Therefore the real and imaginary parts of $W$ are symmetric and commute. In particular they can be simultaneously diagonalized by applying an orthogonal transformation $U$. Applying the automorphism $\mathrm{id}_{E_4} \otimes H_n^R(U)$ to $B$, we thus simultaneously diagonalize all four matrices $B_{11}, B_{12}, \overline{B_{12}}, B_{22}$, and $B$ is decomposable and hence separable.

We have proven that all extreme rays of $\Gamma_{4,3}$ are in $\Sigma_{4,3}$. But then we have $\Gamma_{4,3} = \Sigma_{4,3}$.

In this section we have investigated the relationship between the cones $\Sigma_{m,n}, \Gamma_{m,n}$ for different dimensions $m, n$. We have defined two properties of elements in $E_m \otimes \mathcal{S}(n)$, namely those of being decomposable and reducible. Our next goal is to show that $\Gamma_{m,n} \subset \Sigma_{m,n}$ for $n = 3$, $m = 5,6$ (the converse inclusion being trivial). We have shown in this section that this inclusion is valid for elements possessing the above-cited properties, provided the relation $\Gamma_{m,n} = \Sigma_{m,n}$ holds for the respective cones of smaller dimension. We have proven this relation for $n \le 2$; $m \le 3$; and $m = 4$, $n = 3$. This allows us to concentrate on non-decomposable and non-reducible elements in the proofs of the main results in the next two sections, which essentially amounts to imposing certain non-degeneracy conditions.

4 $\Gamma_{5,3} = \Sigma_{5,3}$

The structure of the proof of this equality resembles that of the proof of Theorem 3.12. We show that every element of $\Gamma_{5,3}$ generating an extreme ray is in $\Sigma_{5,3}$.

First we derive some properties of real $3 \times 3$ matrices. Let $\mathcal{A}(n)$ be the space of real skew-symmetric matrices of size $n \times n$. The space $\mathcal{A}(3)$ is isomorphic to $\mathbb{R}^3$. We define an isomorphism $\mathcal{V} : \mathcal{A}(3) \to \mathbb{R}^3$ by

$$\mathcal{V} : \begin{pmatrix} 0 & a_{12} & -a_{31} \\ -a_{12} & 0 & a_{23} \\ a_{31} & -a_{23} & 0 \end{pmatrix} \mapsto \begin{pmatrix} a_{23} \\ a_{31} \\ a_{12} \end{pmatrix}.$$

If $v, w \in \mathbb{R}^3$ are column vectors, then this isomorphism maps the skew-symmetric matrix $vw^T - wv^T$ to the cross-product $v \times w$.

Define now a group homomorphism $H^Q : GL_3(\mathbb{R}) \to GL_3(\mathbb{R})$ by $[H^Q(C)](v) = \mathcal{V}(C[\mathcal{V}^{-1}(v)]C^T)$ for any $v \in \mathbb{R}^3$, $C \in GL_3(\mathbb{R})$. Note that $H^Q(-C) = H^Q(C)$ for all $C \in GL_3(\mathbb{R})$. One also easily checks that the induced Lie algebra homomorphism has a trivial kernel. Therefore the image of $H^Q$ is the connected component of the identity matrix $I_3$ and consists of the real $3 \times 3$ matrices with positive determinant. Direct calculus shows that $H^Q(C) = (\det C)\, C^{-T}$.

For 3 column vectors $u, v, w \in \mathbb{R}^3$, let $(u,v,w)$ be the $3 \times 3$ matrix composed of these column vectors. The following result can be checked by direct computation.

Lemma 4.1. For any three vectors $u, v, w \in \mathbb{R}^3$ we have $(u \times v) \times (v \times w) = v \det(u,v,w)$. Moreover, the following assertions are equivalent:
i) $\det(u,v,w) = 0$;
ii) $u, v, w$ are linearly dependent;
iii) $u \times v$, $v \times w$, $w \times u$ are linearly dependent;
iv) $u \times v$, $v \times w$, $w \times u$ are all proportional.

Corollary 4.2. Let $u, v, w \in \mathbb{R}^3$ be linearly independent vectors. Then $\det(v \times u, w \times v, u \times w) < 0$.

Proof. By repeated application of the previous lemma we get

$$v \times w = [(u \times v) \times (v \times w)\, {\det}^{-1}(u,v,w)] \times [(v \times w) \times (w \times u)\, {\det}^{-1}(v,w,u)]$$
$$= {\det}^{-2}(u,v,w)\, [(u \times v) \times (v \times w)] \times [(v \times w) \times (w \times u)]$$
$$= {\det}^{-2}(u,v,w)\, [(v \times w) \det(u \times v, v \times w, w \times u)].$$

It follows that $\det(v \times u, w \times v, u \times w) = -\det(u \times v, v \times w, w \times u) = -\det^2(u,v,w) < 0$.
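The isomorphism $\mathcal{V}$, the identity $H^Q(C) = (\det C)\,C^{-T}$, Lemma 4.1 and Corollary 4.2 all lend themselves to a quick numerical check. The following sketch is our own; the random data is almost surely in general position.

```python
import numpy as np

rng = np.random.default_rng(5)

def V(A):
    """The isomorphism A(3) -> R^3: V(A) = (a_23, a_31, a_12)."""
    return np.array([A[1, 2], A[2, 0], A[0, 1]])

def Vinv(x):
    return np.array([[0, x[2], -x[1]],
                     [-x[2], 0, x[0]],
                     [x[1], -x[0], 0]])

u, v, w = (rng.standard_normal(3) for _ in range(3))
assert np.allclose(V(np.outer(v, w) - np.outer(w, v)), np.cross(v, w))

# H^Q(C) = (det C) C^{-T}, with H^Q assembled column by column.
C = rng.standard_normal((3, 3))
HQ = np.column_stack([V(C @ Vinv(e) @ C.T) for e in np.eye(3)])
assert np.allclose(HQ, np.linalg.det(C) * np.linalg.inv(C).T)

# Lemma 4.1: (u x v) x (v x w) = v det(u, v, w).
lhs = np.cross(np.cross(u, v), np.cross(v, w))
assert np.allclose(lhs, v * np.linalg.det(np.column_stack([u, v, w])))

# Corollary 4.2: det(v x u, w x v, u x w) = -det(u, v, w)^2 < 0.
M = np.column_stack([np.cross(v, u), np.cross(w, v), np.cross(u, w)])
assert np.linalg.det(M) < 0
```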
We now investigate 2-dimensional traceless linear subspaces $\mathcal{L}$ in $\mathcal{S}(3)$. To each such subspace $\mathcal{L}$ we will assign a sign $\sigma(\mathcal{L}) \in \{-1, 0, +1\}$ in the following way. Let $\{S_1, S_2, S_3, I_3\}$ be a basis of the orthogonal complement $\mathcal{L}^\perp$ of $\mathcal{L}$. Consider the vectors $v_{12} = \mathcal{V}(S_1 S_2 - S_2 S_1)$, $v_{23} = \mathcal{V}(S_2 S_3 - S_3 S_2)$, $v_{31} = \mathcal{V}(S_3 S_1 - S_1 S_3) \in \mathbb{R}^3$. Now define the sign of $\mathcal{L}$ as $\sigma(\mathcal{L}) = \mathrm{sgn} \det(v_{23}, v_{31}, v_{12})$.

Lemma 4.3. The sign $\sigma(\mathcal{L})$ is well-defined, i.e. it does not depend on the choice of the basis of $\mathcal{L}^\perp$.

Proof. Let us first remark that $\mathcal{L}^\perp$ has dimension 4 and contains the identity matrix $I_3$; hence the claimed choice of its basis is possible.

Let now $\{S_1, S_2, S_3, I_3\}$ and $\{S'_1, S'_2, S'_3, I_3\}$ be two bases of $\mathcal{L}^\perp$. Observe that $[A + \alpha I_n, B + \beta I_n] = [A,B]$ for any two $n \times n$ matrices and any real scalars $\alpha, \beta$. Since $\sigma(\mathcal{L})$ depends only on the pairwise commutators of the basis elements $S_l, S'_l$, we can assume without loss of generality that these matrices are traceless. Note that then both $S_l$ and $S'_l$ span the same 3-dimensional space, namely the orthogonal complement of $\mathcal{L}$ in the subspace of traceless symmetric matrices. We hence find a regular $3 \times 3$ matrix $C$ such that $S'_l = \sum_{\alpha=1}^3 C_{l\alpha} S_\alpha$ for $l = 1,2,3$. Here the indexation of $C$ denotes its elements. Define $v_{\alpha\beta} = \mathcal{V}([S_\alpha, S_\beta])$, $v'_{\alpha\beta} = \mathcal{V}([S'_\alpha, S'_\beta])$, $\alpha, \beta = 1,2,3$. Then we have by the bilinearity of the matrix commutator that $v'_{\alpha\beta} = \sum_{l,m=1}^3 C_{\alpha l}\, v_{lm}\, C^T_{m\beta}$ for $\alpha, \beta = 1,2,3$. It follows that $(v'_{23}, v'_{31}, v'_{12}) = (v_{23}, v_{31}, v_{12})(H^Q(C))^T = (\det C)(v_{23}, v_{31}, v_{12}) C^{-1}$ and $\det(v'_{23}, v'_{31}, v'_{12}) = (\det C)^2 \det(v_{23}, v_{31}, v_{12})$. Thus the signs of $\det(v'_{23}, v'_{31}, v'_{12})$ and $\det(v_{23}, v_{31}, v_{12})$ are equal.

Definition 4.4. We call two 2-dimensional traceless subspaces $\mathcal{L}, \mathcal{L}' \subset \mathcal{S}(3)$ equivalent if there exists an orthogonal $3 \times 3$ matrix $U$ such that $\mathcal{L}' = \{ USU^T \mid S \in \mathcal{L} \}$.

Lemma 4.5. For equivalent subspaces $\mathcal{L}, \mathcal{L}'$ we have $\sigma(\mathcal{L}) = \sigma(\mathcal{L}')$.

Proof. Let $U$ be the orthogonal matrix realizing the equivalence. Let $\{S_1, S_2, S_3, I_3\}$ be a basis of $\mathcal{L}^\perp$ and define $S'_l = U S_l U^T$, $l = 1,2,3$. Then $\{S'_1, S'_2, S'_3, I_3\}$ is a basis of $\mathcal{L}'^\perp$. Let further $v_{\alpha\beta} = \mathcal{V}([S_\alpha, S_\beta])$, $v'_{\alpha\beta} = \mathcal{V}([S'_\alpha, S'_\beta])$, $\alpha, \beta = 1,2,3$. Since we have $[S'_\alpha, S'_\beta] = U[S_\alpha, S_\beta]U^T$ for all $\alpha, \beta = 1,2,3$, it follows that $v'_{\alpha\beta} = [H^Q(U)] v_{\alpha\beta}$ and $(v'_{23}, v'_{31}, v'_{12}) = [H^Q(U)](v_{23}, v_{31}, v_{12}) = (\det U)\, U (v_{23}, v_{31}, v_{12})$. This finally yields $\sigma(\mathcal{L}') = \sigma(\mathcal{L})$.

Let us now consider symmetric tensors $S_{\alpha\beta\gamma}$ of order 3 on $\mathbb{R}^3$. Since there are 10 independent components, these tensors form a 10-dimensional real vector space.

Definition 4.6. We shall say that a symmetric tensor $S_{\alpha\beta\gamma}$ of order 3 satisfies the $\delta$-condition if $\sum_{\kappa=1}^3 (S_{\alpha\beta\kappa} S_{\gamma\eta\kappa} - S_{\gamma\beta\kappa} S_{\alpha\eta\kappa}) = \delta_{\gamma\beta}\delta_{\alpha\eta} - \delta_{\alpha\beta}\delta_{\gamma\eta}$ for all $\alpha, \beta, \gamma, \eta = 1,2,3$, where $\delta$ is the Kronecker symbol ($\delta_{\alpha\beta} = 1$ if $\alpha = \beta$ and $\delta_{\alpha\beta} = 0$ otherwise).

Definition 4.7. Let $S_{\alpha\beta\gamma}$ be a symmetric tensor. Let the matrix components of $S_{\alpha\beta\gamma}$ be the three matrices $S^1, S^2, S^3 \in \mathcal{S}(3)$ defined elementwise by $S^l_{\alpha\beta} = S_{\alpha\beta l}$, $\alpha, \beta, l = 1,2,3$.

Remark 4.8. The $\delta$-condition is equivalent to the condition $\sum_{l=1}^3 (\det S^l)(S^l)^{-1} = -I_3$, where $S^l$ are the matrix components of the tensor. Here the function $A \mapsto (\det A) A^{-1}$ is understood to be extended by continuity to singular matrices $A \in \mathcal{S}(3)$.

Lemma 4.9. The $\delta$-condition is rotationally invariant, i.e. if $U$ is an orthogonal $3 \times 3$ matrix, then the tensor $S'_{\alpha\beta\gamma} = \sum_{\eta,\varphi,\xi=1}^3 U_{\alpha\eta} U_{\beta\varphi} U_{\gamma\xi} S_{\eta\varphi\xi}$ satisfies the $\delta$-condition if and only if the tensor $S_{\alpha\beta\gamma}$ does so.

The proof is by direct calculation using the relation $\sum_{\gamma=1}^3 U_{\alpha\gamma} U_{\beta\gamma} = \delta_{\alpha\beta}$.
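The $\delta$-condition can be tested directly. In the sketch below we use as a test case the symmetric tensor equal to 1 on all permutations of $(1,2,3)$ and 0 otherwise; this tensor is our own choice rather than an example from the paper, but it appears to satisfy both Definition 4.6 and the adjugate form of Remark 4.8.

```python
import numpy as np
from itertools import permutations, product

# Candidate tensor: S_{alpha beta gamma} = 1 on permutations of (1,2,3).
S = np.zeros((3, 3, 3))
for p in permutations((0, 1, 2)):
    S[p] = 1.0

# Raw delta-condition of Definition 4.6 (0-indexed).
d = np.eye(3)
assert all(
    np.isclose(sum(S[a, b, k] * S[c, e, k] - S[c, b, k] * S[a, e, k]
                   for k in range(3)),
               d[c, b] * d[a, e] - d[a, b] * d[c, e])
    for a, b, c, e in product(range(3), repeat=4))

# Equivalent form of Remark 4.8: sum_l adj(S^l) = -I_3, where the
# adjugate adj(A) = (det A) A^{-1} is extended by continuity.
def adj(A):
    return np.array([[(-1) ** (i + j)
                      * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                      for i in range(3)] for j in range(3)])

matrix_components = [S[:, :, l] for l in range(3)]
assert np.allclose(sum(adj(Sl) for Sl in matrix_components), -np.eye(3))
```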
Lemma 4.10. Let $S_{\alpha\beta\gamma}$ be a symmetric tensor satisfying the $\delta$-condition, with matrix components $S^l$, $l = 1,2,3$. Then the matrices $\{S^1, S^2, S^3, I_3\}$ are linearly independent.

Proof. We prove the lemma by contradiction. Assume the conditions of the lemma and suppose that there exist $c_0, c_1, c_2, c_3 \in \mathbb{R}$, not all equal to zero, such that $c_0 I_3 + \sum_{l=1}^3 c_l S^l = 0$. By Lemma 4.9 we can assume without loss of generality that $c_2 = c_3 = 0$. Then there exists $c \in \mathbb{R}$ such that $S^1 = cI_3$, or $S_{1\beta\gamma} = c\,\delta_{\beta\gamma}$ for all $\beta, \gamma = 1,2,3$. Since $S_{\alpha\beta\gamma}$ satisfies the $\delta$-condition, we have in particular $\sum_{\kappa=1}^3 (S_{11\kappa} S_{22\kappa} - S_{21\kappa} S_{12\kappa}) = \delta_{21}\delta_{12} - \delta_{11}\delta_{22}$. But the left-hand side of this equation simplifies to 0, whereas the right-hand side simplifies to $-1$, which leads to a contradiction.

We now come to a result linking the sign of a traceless 2-dimensional subspace $\mathcal{L} \subset \mathcal{S}(3)$ to symmetric tensors satisfying the $\delta$-condition.

Theorem 4.11. 2-dimensional traceless subspaces of $\mathcal{S}(3)$ with sign $-1$ are in correspondence with symmetric tensors satisfying the $\delta$-condition. Namely, if $\mathcal{L}$ is such a subspace, then there exists a symmetric tensor $S_{\alpha\beta\gamma}$ with matrix components $S^l$, $l = 1,2,3$, satisfying the $\delta$-condition such that $\{S^1, S^2, S^3, I_3\}$ is a basis of the space $\mathcal{L}^\perp$. Conversely, if $S_{\alpha\beta\gamma}$ is such a tensor, then the orthogonal complement of the linear span of the set $\{S^1, S^2, S^3, I_3\}$ has sign $-1$.

Proof. Let $e_1, e_2, e_3$ be the canonical orthonormal basis vectors of $\mathbb{R}^3$.

Let $S_{\alpha\beta\gamma}$ be a tensor with matrix components $S^l$, $l = 1,2,3$, satisfying the $\delta$-condition. Then we have $S^1 S^2 - S^2 S^1 = e_2 e_1^T - e_1 e_2^T$ and $\mathcal{V}([S^1, S^2]) = e_2 \times e_1 = -e_3$. Similarly, $\mathcal{V}([S^2, S^3]) = -e_1$ and $\mathcal{V}([S^3, S^1]) = -e_2$. Let $\mathcal{L}$ be the orthogonal complement of the linear span of the set $\{S^1, S^2, S^3, I_3\}$. Then by definition $\sigma(\mathcal{L}) = \mathrm{sgn} \det(-e_1, -e_2, -e_3) = -1$, which proves the second part of the theorem.

Let us prove the first part. Let $\mathcal{L} \subset \mathcal{S}(3)$ be a 2-dimensional traceless subspace with sign $\sigma(\mathcal{L}) = -1$. Any point in $\mathcal{S}(3)$ can be viewed as a homogeneous quadratic form on $\mathbb{R}^3$, or equivalently, as a quadratic map from $\mathbb{RP}^2$ to $\mathbb{R}$. We are interested in the number of points in $\mathbb{RP}^2$ which are mapped to zero by all elements of $\mathcal{L}$. Denote $N_\mathcal{L} = \{ x \in \mathbb{R}^3 \mid x^T S x = 0\ \forall\, S \in \mathcal{L} \}$.

The determinant as a scalar function on $\mathcal{L}$ is odd and hence possesses zeros on $\mathcal{L} \setminus \{0\}$. Since the matrices in $\mathcal{L}$ are traceless, we can find a matrix in $\mathcal{L}$ with eigenvalues $-1, 0, +1$. By conjugation with an appropriate orthogonal matrix $U$ we can transform it to the matrix

$$N_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$

Let $\mathcal{L}' = \{ USU^T \mid S \in \mathcal{L} \}$. Then the subspaces $\mathcal{L}$ and $\mathcal{L}'$ are equivalent and we have $N_1 \in \mathcal{L}'$. A generator of the orthogonal complement of $N_1$ in $\mathcal{L}'$ will be of the form

$$N_2 = \begin{pmatrix} a & 0 & c \\ 0 & b & d \\ c & d & -a-b \end{pmatrix}$$

for some numbers $a, b, c, d \in \mathbb{R}$, which do not equal zero simultaneously. Note that by Lemma 4.5 we have $\sigma(\mathcal{L}') = -1$. Let us first treat several degenerate cases.

1. $a = b = 0$. In this case a basis of $\mathcal{L}'^\perp$ is given by $\{S_1, S_2, S_3, I_3\}$ with

$$S_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad S_2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad S_3 = \begin{pmatrix} 0 & 0 & -d \\ 0 & 0 & c \\ -d & c & 0 \end{pmatrix}.$$

We get $\sigma(\mathcal{L}') = 0$, because $[S_1, S_2] = 0$.

2. $a = -b \neq 0$. In this case a basis of $\mathcal{L}'^\perp$ is given by $\{S_1, S_2, S_3, I_3\}$ with

$$S_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad S_2 = \begin{pmatrix} -ad & 0 & -cd \\ 0 & ad & a^2+c^2 \\ -cd & a^2+c^2 & 0 \end{pmatrix}, \quad S_3 = \begin{pmatrix} -ac & 0 & a^2+d^2 \\ 0 & ac & -cd \\ a^2+d^2 & -cd & 0 \end{pmatrix}.$$

We get $\det(v_{23}, v_{31}, v_{12}) = a^4(a^2+c^2+d^2)^2 > 0$ and $\sigma(\mathcal{L}') = 1$. Here $v_{\alpha\beta} = \mathcal{V}([S_\alpha, S_\beta])$.

Hence these two cases do not satisfy the conditions of the theorem, and we can assume $a + b \neq 0$.

The set $N_{\mathcal{L}'}$ is given by those vectors $x = (x_1, x_2, x_3)^T \in \mathbb{R}^3$ that satisfy $x^T N_1 x = x^T N_2 x = 0$. In particular, $x$ must satisfy $x_1 x_2 = 0$.
If we define

$$N_{21} = \begin{pmatrix} a & c \\ c & -a-b \end{pmatrix}, \qquad N_{22} = \begin{pmatrix} b & d \\ d & -a-b \end{pmatrix},$$