Quasi-translations and singular Hessians

Michiel de Bondt

January 22, 2015

arXiv:1501.05168v1 [math.AG] 21 Jan 2015

Abstract

In 1876 in [8], the authors Paul Gordan and Max Nöther classify all homogeneous polynomials $h$ in at most five variables for which the Hessian determinant vanishes. For that purpose, they study quasi-translations which are associated with singular Hessians.

We will explain what quasi-translations are and formulate some elementary properties of them. Additionally, we classify all quasi-translations with Jacobian rank one and all so-called irreducible homogeneous quasi-translations with Jacobian rank two. The latter is an important result of [8]. Using these results, we classify all quasi-translations in dimension at most three and all homogeneous quasi-translations in dimension at most four.

Furthermore, we describe the connection of quasi-translations with singular Hessians, and as an application, we will classify all polynomials in dimension two and all homogeneous polynomials in dimensions three and four whose Hessian determinant vanishes. More precisely, we will show that up to linear terms, these polynomials can be expressed in $n-1$ linear forms, where $n$ is the dimension, according to an invalid theorem of Hesse.

In the last section, we formulate some known results and conjectures in connection with quasi-translations and singular Hessians.

Key words: Quasi-translation, Hessian, determinant zero, homogeneous, locally nilpotent derivation, algebraic dependence, linear dependence.

MSC 2010: 14R05, 14R10, 14R20, 13N15.

1 Quasi-translations

Let $A$ be a commutative ring with $\mathbb{Q}$ and $n$ be a positive natural number. Write
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = (x_1, x_2, \ldots, x_n)$$
for the identity map from $A^n$ to $A^n$, thus $x_1, x_2, \ldots, x_n$ are variables that correspond to the coordinates of $A^n$. A translation of $A^n$ is a map
$$x + c = \begin{pmatrix} x_1 + c_1 \\ x_2 + c_2 \\ \vdots \\ x_n + c_n \end{pmatrix}$$
where $c \in A^n$ is a fixed vector.
For a translation $x+c$, we have that $x-c$ is the inverse polynomial map, because
$$(x-c)\circ(x+c) = \begin{pmatrix} (x_1+c_1)-c_1 \\ (x_2+c_2)-c_2 \\ \vdots \\ (x_n+c_n)-c_n \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = x$$
is the identity map.

Inspired by this property, we define a quasi-translation as a polynomial map
$$x + H = \begin{pmatrix} x_1 + H_1 \\ x_2 + H_2 \\ \vdots \\ x_n + H_n \end{pmatrix} = \begin{pmatrix} x_1 + H_1(x) \\ x_2 + H_2(x) \\ \vdots \\ x_n + H_n(x) \end{pmatrix}$$
such that $x-H$ is the inverse polynomial map of $x+H$, i.e.
$$(x-H)\circ(x+H) = \begin{pmatrix} \big(x_1+H_1(x)\big) - H_1(x+H) \\ \big(x_2+H_2(x)\big) - H_2(x+H) \\ \vdots \\ \big(x_n+H_n(x)\big) - H_n(x+H) \end{pmatrix} = x$$
The difference between a translation $x+c$ and a quasi-translation $x+H$ is that $c_i \in A$ for all $i$, while $H_i \in A[x] = A[x_1,x_2,\ldots,x_n]$ for all $i$.

For a regular translation $x+c$, we have that applying it $m$ times comes down to the translation $x+mc$. If $f \in \mathbb{C}[x]$ is an invariant of a regular translation $x+c$, i.e. $f(x+c)=f(x)$, then $f(x+(m+1)c)=f(x+mc)$ follows by substituting $x=x+mc$, whence by induction on $m$, $f(x+mc)=f(x)$ for all $m \in \mathbb{N}$. Below, we show similar results for quasi-translations.

Since $(x-H)\circ(x+H)=x$, we see that
$$(x+mH)\circ(x+H) = \big((m+1)x - m(x-H)\big)\circ(x+H) = (m+1)(x+H) - mx = x+(m+1)H$$
Hence it follows by induction on $m$ that applying $x+H$ $m$ times comes down to the map $x+mH$ indeed. Consequently, if $f \in A[x]$ is an invariant of $x+H$, i.e. $f(x+H)=f(x)$, then
$$f(x+mH) = f\big((x+H)\circ\cdots\circ(x+H)\big) = f(x)$$
follows for each $m \in \mathbb{N}$ by applying $f(x+H)=f(x)$ $m$ times. Thus
$$f(x+H)=f(x) \Longrightarrow f(x+mH)=f(x) \text{ for all } m \in \mathbb{N} \tag{1.1}$$
With some techniques 'on the shelf', we can improve (1.1) to the following.

Lemma 1.1. Assume that $x+H$ is a quasi-translation over $A$. Then
$$f(x+H)=f(x) \Longrightarrow f(x+tH)=f(x) \tag{1.2}$$
where $t$ is a new indeterminate.

Proof. Let $d := \deg f$ and write $f(x+tH) - f = c_d t^d + c_{d-1} t^{d-1} + \cdots + c_1 t + c_0$. Since $f(x+mH) - f = 0$ for all $m \in \{0,1,2,\ldots,d\}$ on account of (1.1),
$$\begin{pmatrix} 1 & 0 & 0^2 & \cdots & 0^d \\ 1 & 1 & 1^2 & \cdots & 1^d \\ 1 & 2 & 2^2 & \cdots & 2^d \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & d & d^2 & \cdots & d^d \end{pmatrix} \cdot \begin{pmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_d \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
        1 d d2 dd   cd   0   ···      The matrix on the left hand side is of Hadamard type and hence invertible, becauseA Q. Consequently,c =c =c = =c =0. Hence f(x+tH)= 0 1 2 d ⊇ ··· f, as desired. But GordanandNo¨ther did notuse the termquasi-translation,andcharac- terizedquasi-translationsinanotherway. Todescribeit,wedefinetheJacobian matrix of a polynomial map H =(H ,H ,...,H ): 1 2 m ∂ H ∂ H ∂ H ∂x1 1 ∂x2 1 ··· ∂xn 1 JH := ∂∂x1...H2 ∂∂x2...H2 ·.·.·. ∂∂xn...H2     ∂ H ∂ H ∂ H   ∂x1 m ∂x2 m ··· ∂xn m  Notice that H is a row vector if H is a single polynomial, i.e. m=1. J IfweseeH asacolumnvector,wecanevaluatethematrixproduct H H, J · and the characterization of quasi-translations x + H by Gordan and No¨ther comes down to H H =(01,02,...,0n) (1.3) J · Here,the powersofzeroare onlytakento indicate the lengthofthe zerovector on the right. Gordan and No¨ther derived the following under the assumption of (1.3). Take a polynomial f A[x] such that f H =0. By the chain rule ∈ J · f(x+tH) H =( f) (I +t H) H x=x+tH n J · J | · J · ∂ =( f) H = f(x+tH) x=x+tH J | · ∂t 3 where In is the unit matrix of size n. Here, x=··· means substituting for x. | ··· Since f H =0, it follows from the above that J · ∂ f(x+tH) f(x) H = f(x+tH) (1.4) J − · ∂t (cid:0) (cid:1) Suppose that t divides the right hand side of (1.4) exactly r < times. Then ∞ t divides f(x+tH) f(x) more than r times. Hence t divides the left hand − side of (1.4) more than r times as well, which is a contradiction. So both sides of (1.4) are zero. Since the right hand side of (1.4) is zero and A Q, we get ⊇ f(x+tH)=f, and that is what Gordan and No¨ther derived from (1.3). Lemma 1.2. Assume H is a polynomial map over A, such that H H = J · (01,02,...,0n). Then f H =0= f(x+tH)=f(x) (1.5) J · ⇒ for every f A[x]. ∈ With lemmas 1.1 and 1.2, we can prove the following. Proposition 1.3. Let A be a commutative ring with Q and H :An An be a ⇒ polynomial map. 
Then the following properties are equivalent:

(1) $x+H$ is a quasi-translation,

(2) $H(x+tH) = H$, where $t$ is a new indeterminate,

(3) $\mathcal{J}H \cdot H = (0_1, 0_2, \ldots, 0_n)$.

Furthermore,
$$f(x+H)=f(x) \iff f(x+tH)=f(x) \iff \mathcal{J}f \cdot H = 0 \tag{1.6}$$
holds for all $f \in A[x]$, and additionally
$$(\mathcal{J}H)|_{x=x-tH} = (\mathcal{J}H) + t(\mathcal{J}H)^2 + t^2(\mathcal{J}H)^3 + \cdots \tag{1.7}$$
if any of (1), (2) and (3) is satisfied.

Proof. The middle hand side of (1.6) gives the left hand side by substituting $t=1$ and the right hand side by taking the coefficient of $t^1$. Hence (1.6) follows from (1) and lemma 1.1 and (3) and lemma 1.2. By taking the Jacobian of (2), we get $(\mathcal{J}H)|_{x=x+tH} \cdot (I_n + t\,\mathcal{J}H) = \mathcal{J}H$, which gives (1.7) after substituting $t = -t$. Therefore, it remains to show that (1), (2) and (3) are equivalent.

(1) ⇒ (2) Assume (1). Since $x = (x-H)\circ(x+H) = x+H-H(x+H)$, we see that $H(x+H) = H$, and (2) follows by taking $f = H_i$ in lemma 1.1.

(2) ⇒ (1) Assume (2). Then
$$(x-tH)\circ(x+tH) = (x+tH) - tH(x+tH) = x+tH - tH = x \tag{1.8}$$
which gives (1) after substituting $t=1$.

(2) ⇒ (3) Assume (2). By taking the coefficient of $t^1$ of (2), we get (3).

(3) ⇒ (2) Assume (3). By taking $f = H_i$ in lemma 1.2, we get (2). □

Notice that (1.8) tells us that in some sense, $x+tH$ is a quasi-translation as well.

Remark 1.4. Let $D$ be the derivation $\sum_{i=1}^n H_i \frac{\partial}{\partial x_i}$. Then one can easily verify that
$$\mathcal{J}f \cdot H = 0 \iff Df = 0$$
and
$$\mathcal{J}H \cdot H = 0 \iff D^2 x_1 = D^2 x_2 = \cdots = D^2 x_n = 0$$
Hence quasi-translations correspond to a special kind of locally nilpotent derivations. Furthermore, invariants of the quasi-translation $x+H$ are just kernel elements of $D$.

In addition, we can write $\exp(D)$ and $\exp(tD)$ for the automorphisms corresponding to the maps $x+H$ and $x+tH$ respectively. But in order to make the article more readable for readers that are not familiar with derivations, we will omit the terminology of derivations further in this article.

2 Singular Hessians and Jacobians

Now that we have had some introduction about quasi-translations, it is time to show how they are connected to singular Hessians.
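Before we do, the equivalent characterizations of proposition 1.3 lend themselves to a quick machine check. The following sympy sketch is our own illustration; the map $H = (x_2^2,\,0)$ is hand-picked for the purpose and does not come from the text. It verifies properties (1), (2) and (3) for $x+H$ in dimension two:

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
x = sp.Matrix([x1, x2])
H = sp.Matrix([x2**2, 0])   # hand-picked example map

# (3): JH * H is the zero vector, cf. (1.3)
JH = H.jacobian([x1, x2])
assert JH * H == sp.zeros(2, 1)

# (1): (x - H) o (x + H) = x, so x - H inverts x + H
comp = (x - H).subs({x1: x1 + H[0], x2: x2 + H[1]}, simultaneous=True)
assert comp.expand() == x

# (2): H(x + tH) = H, with t a new indeterminate
Ht = H.subs({x1: x1 + t*H[0], x2: x2 + t*H[1]}, simultaneous=True)
assert Ht.expand() == H
```

Any one of the three checks implies the other two by proposition 1.3; running all three simply confirms the equivalence on this example.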
For that purpose, we define the Hessian matrix of a polynomial $h \in \mathbb{C}[x]$ as follows:
$$\mathcal{H}h := \begin{pmatrix} \frac{\partial}{\partial x_1}\frac{\partial}{\partial x_1}h & \frac{\partial}{\partial x_2}\frac{\partial}{\partial x_1}h & \cdots & \frac{\partial}{\partial x_n}\frac{\partial}{\partial x_1}h \\ \frac{\partial}{\partial x_1}\frac{\partial}{\partial x_2}h & \frac{\partial}{\partial x_2}\frac{\partial}{\partial x_2}h & \cdots & \frac{\partial}{\partial x_n}\frac{\partial}{\partial x_2}h \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial}{\partial x_1}\frac{\partial}{\partial x_n}h & \frac{\partial}{\partial x_2}\frac{\partial}{\partial x_n}h & \cdots & \frac{\partial}{\partial x_n}\frac{\partial}{\partial x_n}h \end{pmatrix}$$
A Hessian is a Jacobian which is symmetric with respect to the main diagonal. Hence each dependence between the columns of a Hessian is also a dependence between the rows of it.

For generality, we consider Jacobians which are not necessarily Hessians. Assume $G : \mathbb{C}^n \to \mathbb{C}^n$ is a polynomial map and $\det \mathcal{J}G = 0$. Let $\operatorname{rk} M$ denote the rank of a matrix $M$. It is known that $\operatorname{rk} \mathcal{J}H = \operatorname{trdeg}_{\mathbb{C}} \mathbb{C}(H)$, see e.g. Proposition 1.2.9 of either [5] or [1]. Hence there exists a nonzero polynomial $R \in \mathbb{C}[y] = \mathbb{C}[y_1, y_2, \ldots, y_n]$ such that
$$R(G_1, G_2, \ldots, G_n) = 0 \tag{2.1}$$
Here, $y_1, y_2, \ldots, y_n$ are also variables that correspond to the coordinates of $\mathbb{C}^n$. Define
$$H_i := \Big(\frac{\partial R}{\partial y_i}\Big)\Big|_{y=G} \tag{2.2}$$
for all $i$. By taking the Jacobian of (2.1), we get
$$0 = \mathcal{J}\big(R(G)\big) = (\mathcal{J}_y R)|_{y=G} \cdot \mathcal{J}G = H^{\mathrm t} \cdot \mathcal{J}G$$
where $H^{\mathrm t} = (H_1\ H_2\ \cdots\ H_n)$ is the transpose of $H$. Hence $H$ is a dependence between the rows of $\mathcal{J}G$. If $\mathcal{J}G$ is a Hessian, $H$ is also a dependence between the columns of $\mathcal{J}G$. Hence it seems a good idea to assume that $\mathcal{J}G \cdot H = 0$. But for the sake of generality, we only assume that $\mathcal{J}G \cdot \tilde{H} = 0$ for some polynomial vector $\tilde{H}$ that does not need to be equal to $H$.

If we write $\nabla_y R$ for the transpose of $\mathcal{J}_y R$, then $H = (\nabla_y R)(G)$, and by the chain rule
$$\mathcal{J}H \cdot \tilde{H} = \mathcal{J}\big((\nabla_y R)(G)\big) \cdot \tilde{H} = (\mathcal{J}_y \nabla_y R)|_{y=G} \cdot \mathcal{J}G \cdot \tilde{H} = 0 \tag{2.3}$$
whence $x+H$ is a quasi-translation on account of (3) ⇒ (1) of proposition 1.3 if $H = \tilde{H}$. Thus we have proved the following.

Proposition 2.1. Assume $H_i$ is defined as in (2.2) for each $i$, with $R$ as in (2.1), where $G : \mathbb{C}^n \to \mathbb{C}^n$ is a polynomial map. Then $H$ is a dependence between the rows of $\mathcal{J}G$.

If $\tilde{H}$ is a dependence between the columns of $\mathcal{J}G$, then $\mathcal{J}H \cdot \tilde{H} = 0$. In particular, if $H$ is a dependence between the columns of $\mathcal{J}G$, then $x+H$ is a quasi-translation.
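The construction behind proposition 2.1 is easy to carry out with a computer algebra system. The following sympy sketch is our own illustration: it takes $h = x_1^2x_3 + x_1x_2x_4 + x_2^2x_5$ (the $r = 1$ case of example 2.2) with $G = \nabla h$, builds $H$ from the relation $R = y_3y_5 - y_4^2$ as in (2.2), and checks that $x+H$ is a quasi-translation:

```python
import sympy as sp

xs = sp.symbols('x1:6')
ys = sp.symbols('y1:6')
x1, x2, x3, x4, x5 = xs
y3, y4, y5 = ys[2], ys[3], ys[4]

h = x1**2*x3 + x1*x2*x4 + x2**2*x5       # r = 1 case of example 2.2
G = sp.Matrix([h.diff(v) for v in xs])   # G = grad h, so JG is the Hessian of h
R = y3*y5 - y4**2                        # algebraic relation R(G) = 0, cf. (2.1)
assert sp.expand(R.subs(dict(zip(ys, G)))) == 0

# H_i := (dR/dy_i)|_{y=G}, as in (2.2)
H = sp.Matrix([R.diff(v).subs(dict(zip(ys, G))) for v in ys])

JG = G.jacobian(list(xs))                     # symmetric, being a Hessian
assert (H.T * JG).expand() == sp.zeros(1, 5)  # dependence between the rows
assert (JG * H).expand() == sp.zeros(5, 1)    # ... hence also between the columns
assert (H.jacobian(list(xs)) * H).expand() == sp.zeros(5, 1)  # JH * H = 0, cf. (1.3)
```

Here $H = (0, 0, x_2^2, -2x_1x_2, x_1^2)$, matching example 2.2 with $r = 1$.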
But $H$ defined as above may be the zero map. This is however not the case if we choose the degree of $R$ as small as possible, without affecting (2.1), because $\frac{\partial}{\partial y_i}R$ is nonzero for some $i$ and of lower degree than $R$ itself.

Example 2.2. Let $p = x_1^2 x_3 + x_1 x_2 x_4 + x_2^2 x_5$, $h = p^r$ and $R = y_3 y_5 - y_4^2$. Then
$$R(\nabla h) = (r p^{r-1} x_1^2)(r p^{r-1} x_2^2) - (r p^{r-1} x_1 x_2)^2 = 0$$
and $\nabla_y R = (0, 0, y_5, -2y_4, y_3)$. So
$$x + H := x + (\nabla_y R)(\nabla h) = x + r p^{r-1}(0, 0, x_2^2, -2x_1 x_2, x_1^2)$$
is a quasi-translation. Indeed, $x_1$, $x_2$ and $p$ are invariants of $x+H$.

Example 2.3. Let $n \ge 6$ be even and
$$h = (Ax_1 - Bx_2)^2 + (Ax_3 - Bx_4)^2 + \cdots + (Ax_{n-1} - Bx_n)^2$$
Then $(By_{2i-1} + Ay_{2i})|_{y=\nabla h} = 0$ for all $i$, because
$$B\frac{\partial h}{\partial x_{2i-1}} = 2AB(Ax_{2i-1} - Bx_{2i}) = -A\frac{\partial h}{\partial x_{2i}}$$
for all $i$. Hence $(B^2 y_{2i-1}^2 - A^2 y_{2i}^2)|_{y=\nabla h} = 0$ for all $i$ as well, and also
$$R := \frac{1}{4AB}\Big(B^2\big(y_1^2 + y_3^2 + \cdots + y_{n-1}^2\big) - A^2\big(y_2^2 + y_4^2 + \cdots + y_n^2\big)\Big)$$
satisfies $R(\nabla h) = 0$. Now one can compute that
$$x + H := x + \begin{pmatrix} \frac{\partial R}{\partial y_1}(\nabla h) \\ \frac{\partial R}{\partial y_2}(\nabla h) \\ \vdots \\ \frac{\partial R}{\partial y_{n-1}}(\nabla h) \\ \frac{\partial R}{\partial y_n}(\nabla h) \end{pmatrix} = x + \begin{pmatrix} B(Ax_1 - Bx_2) \\ A(Ax_1 - Bx_2) \\ \vdots \\ B(Ax_{n-1} - Bx_n) \\ A(Ax_{n-1} - Bx_n) \end{pmatrix}$$
is a quasi-translation, and $\mathcal{H}h \cdot H = (0_1, 0_2, \ldots, 0_n)$.

Now let $a := x_1 x_4 - x_2 x_3$, $b := x_3 x_6 - x_4 x_5$, and $G := (\nabla h)|_{A=a,\,B=b}$. Then $R(G) = 0$ as well. Thus $\tilde{H} := (\nabla_y R)(G) = H|_{A=a,\,B=b}$ is a dependence between the rows of
$$\mathcal{J}G = \Big(\mathcal{H}h + \big(\mathcal{J}_A \nabla h\big) \cdot \mathcal{J}a + \big(\mathcal{J}_B \nabla h\big) \cdot \mathcal{J}b\Big)\Big|_{A=a,\,B=b}$$
Since
$$\mathcal{J}\big(x_{2i-1} x_{2j} - x_{2i} x_{2j-1}\big) \cdot \tilde{H} = (x_{2i-1}\tilde{H}_{2j} - x_{2i}\tilde{H}_{2j-1}) + (x_{2j}\tilde{H}_{2i-1} - x_{2j-1}\tilde{H}_{2i})$$
$$= (ax_{2i-1} - bx_{2i})(ax_{2j-1} - bx_{2j}) + (bx_{2j} - ax_{2j-1})(ax_{2i-1} - bx_{2i}) = 0$$
for all $i,j$, we have $\mathcal{J}a \cdot \tilde{H} = \mathcal{J}b \cdot \tilde{H} = 0$. Consequently,
$$\mathcal{J}G \cdot \tilde{H} = (\mathcal{H}h)|_{A=a,B=b} \cdot \tilde{H} + \big(\mathcal{J}_A \nabla h\big)\big|_{A=a,B=b} \cdot \mathcal{J}a \cdot \tilde{H} + \big(\mathcal{J}_B \nabla h\big)\big|_{A=a,B=b} \cdot \mathcal{J}b \cdot \tilde{H} = \big((\mathcal{H}h) \cdot H\big)\big|_{A=a,B=b} = (0_1, 0_2, \ldots, 0_n)$$
Thus $x+\tilde{H}$ is a quasi-translation as well. In fact, it appears in [4, Th. 2.1] as a homogeneous quasi-translation without linear invariants.

3 Hesse's theorem

A polynomial is homogeneous of degree $d$ if all its terms have degree $d$.
A homogeneous polynomial of degree one is called a linear form. We call a quasi-translation $x+H$ homogeneous if there exists a $d \in \mathbb{N}$ such that each component $H_i$ of $H$ is either zero or homogeneous of degree $d$.

When Gordan and Nöther published their article in 1876, Otto Hesse had died two years earlier. The role of Hesse is that the starting point of Gordan and Nöther was a wrong theorem of Hesse, which they proved in dimensions two, three and four, and disproved for all dimensions greater than four.

Hesse's (invalid) theorem. Assume $n \ge 2$ and $h$ is a homogeneous polynomial in $x_1, x_2, \ldots, x_n$ over $\mathbb{C}$ whose Hessian matrix is singular. Then there is a constant vector $c = (c_1, c_2, \ldots, c_n) \ne 0$ such that
$$c_1 \frac{\partial h}{\partial x_1} + c_2 \frac{\partial h}{\partial x_2} + \cdots + c_n \frac{\partial h}{\partial x_n} = 0 \tag{3.1}$$
or equivalently $\mathcal{J}h \cdot c = 0$.

Gordan and Nöther first observe that (3.1) is equivalent to the existence of a linear transformation which makes $h$ a polynomial in less than $n$ variables, in other words, the existence of an invertible matrix $T$ such that
$$h(Tx) \in \mathbb{C}[x_1, x_2, \ldots, x_{n-1}]$$
In order to see that, we first compute the Jacobian matrix of $h(Tx)$ by way of the chain rule, where $|_{x=Tx}$ means substituting the vector $Tx$ for $x$:
$$\mathcal{J}\big(h(Tx)\big) = \big(\mathcal{J}h(x)\big)\big|_{x=Tx} \cdot T \tag{3.2}$$
If the $i$-th column of $T$ is equal to $c$, we have
$$\frac{\partial}{\partial x_i} h(Tx) = \big(\mathcal{J}h(x)\big)\big|_{x=Tx} \cdot c = \Big(\sum_{j=1}^n c_j \frac{\partial h}{\partial x_j}\Big)\Big|_{x=Tx}$$
whence for all $c_0 \in \mathbb{C}$,
$$\frac{\partial}{\partial x_i} h(Tx) = c_0 \iff \sum_{j=1}^n c_j \frac{\partial h}{\partial x_j} = c_0 \tag{3.3}$$
By taking $i = n$ and $c_0 = 0$ in (3.3), we get the above remark of Gordan and Nöther.

We shall prove Hesse's theorem in dimensions two, three, and four. If we drop the condition that $h$ is homogeneous, we can prove Hesse's theorem in dimension two, but only for polynomials $h$ without linear terms.
For polynomials $h$ with linear terms, we can only get
$$c_1 \frac{\partial h}{\partial x_1} + c_2 \frac{\partial h}{\partial x_2} + \cdots + c_n \frac{\partial h}{\partial x_n} = c_0 \in \mathbb{C}$$
which holds for homogeneous $h$ in dimension one as well ($h = x_1$ is homogeneous, but does not satisfy Hesse's theorem in dimension one). So we will prove the following.

Theorem 3.1. Hesse's theorem is true in dimension $n \le 4$, and for non-homogeneous polynomials in dimension $n \le 2$.

Before we prove our affirmative results about Hesse's theorem, we take a look at the effects of affine transformations of $h$ and of removing linear terms of $h$.

View the vectors $x = (x_1, x_2, \ldots, x_n)$, $c = (c_1, c_2, \ldots, c_n)$ and $\tilde{c} = (\tilde{c}_1, \tilde{c}_2, \ldots, \tilde{c}_n)$ as column matrices of variables and constants respectively. Let $T$ be an invertible matrix of size $n$ over $\mathbb{C}$. Then $h(Tx+c)$ is an affine transformation of $h$. Write $M^{\mathrm t}$ for the transpose of a matrix $M$. Then $\tilde{c}^{\mathrm t} x$, which is the product of a row matrix and a column matrix, is a matrix with only one entry, and we associate it with the value of its only entry. If we define
$$\tilde{h} := h(Tx+c) - \tilde{c}^{\mathrm t} x$$
then it appears that $\det \mathcal{H}\tilde{h} = 0$ if and only if $\det \mathcal{H}h = 0$. To see this, notice that on account of (3.2),
$$\mathcal{J}\tilde{h} = \mathcal{J}\big(h(Tx+c) - \tilde{c}^{\mathrm t} x\big) = (\mathcal{J}h)|_{x=Tx+c} \cdot T - \tilde{c}^{\mathrm t}$$
The transpose of this equals
$$\nabla \tilde{h} = T^{\mathrm t} \cdot (\nabla h)|_{x=Tx+c} - \tilde{c}$$
where $\nabla f$ stands for the transpose of $\mathcal{J}f$. Taking the Jacobian of the above, we get
$$\mathcal{H}\tilde{h} = \mathcal{J}\big(T^{\mathrm t} \cdot (\nabla h)|_{x=Tx+c} - \tilde{c}\big) = T^{\mathrm t} \cdot (\mathcal{H}h)|_{x=Tx+c} \cdot T$$
which is a singular matrix, if and only if $\mathcal{H}h$ is. If $\det \mathcal{H}h = 0$, then $R(\nabla h) = 0$ for some nonzero $R \in \mathbb{C}[y]$ on account of (2.1), and similarly $\tilde{R}(\nabla \tilde{h}) = 0$ for some nonzero $\tilde{R} \in \mathbb{C}[y]$. We shall describe a natural connection between $R$ and $\tilde{R}$.

If $R(\nabla h) = 0$ for some $R \in \mathbb{C}[y]$, then
$$R\Big((T^{\mathrm t})^{-1}\big(\big(T^{\mathrm t} \cdot (\nabla h)|_{x=Tx+c} - \tilde{c}\big) + \tilde{c}\big)\Big) = \big(R(\nabla h)\big)\big|_{x=Tx+c} = 0$$
thus $\tilde{R} := R\big((T^{\mathrm t})^{-1}(y + \tilde{c})\big)$ satisfies $\tilde{R}(\nabla \tilde{h}) = 0$.

If we define $H_i$ as in (2.2) with $G = \nabla h$ for each $i$, then we get $H = (\nabla_y R)(\nabla h)$, and $x+H$ is a quasi-translation on account of proposition 2.1.
In a y ∇ ∇ similar manner, x+H˜ is a quasi-translation if H˜ = ( R˜)( h˜), and again we y ∇ ∇ describe a natural connection. By the chain rule, JyR˜ =JyR (Tt)−1(y−c˜) =(JyR)|y=(Tt)−1(y−c˜)·(Tt)−1 (cid:0) (cid:1) 9 whence yR˜ =T−1( yR)y=(Tt)−1(y−c˜) ∇ ∇ | Combining this with h˜ =Tt ( h) +c˜, we get that x=Tx+c ∇ · ∇ | H˜ =( R˜)( h˜)=T−1H(Tx+c) y ∇ ∇ thus H˜ is a linear conjugation of H if c=0. In general, x+H˜ =x+T−1H(Tx+c)=T−1 Tx+c+H(Tx+c) c (cid:16)(cid:0) (cid:1)− (cid:17) isanaffinelylinearconjugationofx+H. Inthenextsection,weshallshowthat affinely linear conjugations of quasi-translations are again quasi-translations. We are now ready to prove the following. Theorem 3.2. Assume h C[x] and R=0 satisfies (2.1) and is of minimum ∈ 6 degree. Define H =( R)( h). y ∇ ∇ If h is homogeneous, then R and H are homogeneous as well. If the dimension of the linear span of the image of H is at most one, then degR=1 and x+H is a regular translation. Proof. If h is homogeneous, then (2.1) is still satisfied if R is replaced by any homogeneous component of it. Since R was chosen of minimum degree, we see that R and hence also H is homogenous if h is homogeneous. Assume that the dimension of the linear span of the image of H is at most one. Then there are n 1 independent vectors c(i) Cn such that − ∈ (c(1))tH =(c(2))tH = =(c(n−1))tH =0 ··· Suppose that degR = 1. Since (c(i))tH = ((c(i))t R)( h) for each i and R is 6 ∇ ∇ ofminimumdegree,wehave(c(i))t R=0foreachi. Takeaninvertiblematrix T over C such that the i-th colum∇n of (Tt)−1 equals c(i) for each i n 1. ≤ − Then ( R) c(i) =0 for each i n 1. By applying (3.3) with R(y) instead of h(x) anJd c ·=0, we obtain ∂ R≤ (T−t)−1y =0 for all i n 1, so 0 ∂yi ≤ − (cid:0) (cid:1) R˜(y):=R (Tt)−1y C[y ] n ∈ (cid:0) (cid:1) Notice that for h˜ := h(Tx), we have R˜( h˜) = R (Tt)−1Tt h = 0, ∇ ∇ x=Tx whence (cid:0) (cid:1)(cid:12) ∂ (cid:12) ˜h ∂x n isalgebraicoverC. 
This is only possible if this derivative is a constant $c_0$, which means that the left hand side of (3.3) holds for $i = n$. Thus $R$ can be chosen of degree one, and since that is the minimum possible degree for $R$, we have $\deg R = 1$ and $H$ is constant. □
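The restriction to dimension at most four in theorem 3.1 cannot be dropped: the polynomial $p = x_1^2x_3 + x_1x_2x_4 + x_2^2x_5$ of example 2.2 is homogeneous of degree three in five variables with $\det \mathcal{H}p = 0$, yet its five partial derivatives are linearly independent over $\mathbb{C}$ (though algebraically dependent), so $\mathcal{J}p \cdot c = 0$ forces $c = 0$. The following sympy sketch (our own illustration) verifies both claims:

```python
import sympy as sp

xs = sp.symbols('x1:6')
cs = sp.symbols('c1:6')
x1, x2, x3, x4, x5 = xs

# p from example 2.2: homogeneous of degree three in five variables
p = x1**2*x3 + x1*x2*x4 + x2**2*x5

# its Hessian determinant vanishes ...
assert sp.expand(sp.hessian(p, xs).det()) == 0

# ... yet Jp . c = 0 has only the trivial solution c = 0,
# so Hesse's theorem fails for p in dimension five
lin = sp.expand(sum(c * p.diff(v) for c, v in zip(cs, xs)))
eqs = sp.Poly(lin, *xs).coeffs()   # each coefficient is linear in c1..c5
assert sp.linsolve(eqs, *cs) == sp.FiniteSet((0,) * 5)
```

This is in line with Gordan and Nöther's disproof of Hesse's theorem for all dimensions greater than four.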
