Advanced Linear Algebra
MA500-1: Lecture Notes
Semester 1, 2015-2016
Dr Rachel Quinlan
School of Mathematics, Statistics and Applied Mathematics, NUI Galway
March 14, 2017

Contents

1 Three ways to think about a matrix
  1.1 Linear transformations
    1.1.1 Interpreting a matrix as a linear transformation
    1.1.2 Interpreting a linear transformation as a matrix
    1.1.3 Change of Basis
    1.1.4 Similarity
    1.1.5 Rank
  1.2 Bilinear Forms
    1.2.1 Symmetric and alternating forms
    1.2.2 Duality
  1.3 Matrices and Graphs
2 Spectral Properties
  2.1 The determinant
  2.2 The spectrum of a matrix
  2.3 Positive matrices - the Frobenius-Perron Theorem
  2.4 Supplement to Chapter 2: even and odd permutations

Chapter 1

Three ways to think about a matrix

Matrices are ubiquitous in mathematics and arise centrally in many areas that are not necessarily closely related in an obvious way. Matrices are naturally equipped with lots of algebraic structure (for example the set of n×n matrices over a ring R or field F is itself a ring with lots of interesting properties). Which aspects of this extensive algebraic structure are of interest can depend a lot on the context.
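The ring structure mentioned above can be seen in a small numerical computation. This is an illustrative sketch (the matrices A, B, C are made up for the example, not taken from the notes): matrix multiplication is associative, as in any ring, but it is not commutative.

```python
import numpy as np

# Two 2x2 integer matrices chosen for illustration.
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 5]])

# Multiplication is not commutative: AB and BA differ.
print(A @ B)   # [[2 1], [4 3]]
print(B @ A)   # [[3 4], [1 2]]
assert not np.array_equal(A @ B, B @ A)

# But it is associative, as ring multiplication must be.
assert np.array_equal((A @ B) @ C, A @ (B @ C))
```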
There are many ways of thinking about what a matrix is, and it is often helpful, even necessary, to have access to more than one of them. We discuss three different viewpoints (and the algebraic considerations that accompany them) in this chapter.

1.1 Linear transformations

For a field F and positive integer n, we will write F^n for the vector space consisting of all column vectors of length n with entries in F, and M_n(F) for the set of all n×n matrices with entries in F.

1.1.1 Interpreting a matrix as a linear transformation

If A ∈ M_n(F) and v ∈ F^n, we can "multiply" A by v to get another element of F^n.

Example 1.1.1. In M_3(Q), write

    A = [ -1    1    2 ]        and, in Q^3, write  v = [ 1 ]
        [ -12   8    6 ]                                [ 2 ]
        [ 12   -7   -3 ]                                [ 3 ].

Then

    Av = [ -1(1) + 1(2) + 2(3)  ]   [   7 ]
         [ -12(1) + 8(2) + 6(3) ] = [  22 ]
         [ 12(1) - 7(2) - 3(3)  ]   [ -11 ].

Example 1.1.1 demonstrates the process of matrix-vector multiplication. Although this is already familiar it is worthwhile to consider what is going on in slightly more detail. For a matrix A and column vector v, you can calculate the product Av only if the number of columns of A is the same as the number of entries in v. What we are doing when we calculate Av is taking the linear combination of the columns of A that is determined by the entries of v. This means that if v = (a_1, ..., a_n)^T and A has n columns, then the column vector Av is given by

    Av = a_1 (Col 1 of A) + a_2 (Col 2 of A) + ... + a_n (Col n of A).

Exercise 1.1.2. If in doubt, confirm this from Example 1.1.1.

Thus every matrix A ∈ M_n(F) defines a function T_A : F^n → F^n by T_A(v) = Av. This function is a linear transformation, which means that

- For all u, v ∈ F^n, T_A(u + v) = T_A(u) + T_A(v) (since A(u + v) = Au + Av), and
- For all u ∈ F^n and α ∈ F, T_A(αu) = αT_A(u). (The field element α is referred to as a scalar in this context.)

In general a linear transformation is a function that respects addition and scalar multiplication (in a context where that makes sense).

NOTES

1. More generally, a p×n matrix A (p rows, n columns) may be thought of as a linear transformation T_A from F^n to F^p, via matrix-vector multiplication. If v ∈ F^n, then T_A(v) = Av is the linear combination of the columns of A in which the coefficient of Column i is the ith entry of v.

2.
If A ∈ M_{p×n}(F) then Column i of A is the image under T_A of the vector e_i, which has entry 1 in the ith position and zero in all other positions. So different matrices correspond to different linear transformations.

3. If T : F^n → F^p is any linear transformation, let A be the matrix in M_{p×n}(F) whose ith column is T(e_i). Then T(v) = Av for all vectors v ∈ F^n.

Exercise 1.1.3. Prove the statement in Item 3 above.

So we can think of the matrix space M_{p×n}(F) as being the set of linear transformations from F^n to F^p. The presentation given here involves a choice to discuss matrices multiplied (on the right) by column vectors. It could equally well be presented in terms of matrices being multiplied on the left by row vectors.

We let (F^p)^T denote the transpose of F^p, i.e. the space of row vectors of length p with entries in F. Then a p×n matrix A describes a linear transformation from (F^p)^T to (F^n)^T via

    v → vA.

Note that vA, which belongs to (F^n)^T, is the linear combination of the rows of A in which the coefficient of Row i is entry i of v. So a (row-)vector-matrix product gives a linear combination of the rows of the matrix, with coefficients given by the vector entries.

1.1.2 Interpreting a linear transformation as a matrix

Suppose now that V and W are vector spaces of finite dimensions n and p respectively over F, and let f : V → W be a linear transformation. Let B = {b_1, ..., b_n} and C = {c_1, ..., c_p} be bases for V and W respectively. Write M_{f,B,C} for the p×n matrix whose ith column contains the C-coordinates of the element f(b_i) of W.

Theorem 1.1.4. If v ∈ V, then the C-coordinates of f(v) are the entries of the matrix-vector product M_{f,B,C}[v], where [v] is the vector in F^n whose entries are the B-coordinates of v.

Proof. Write v = a_1 b_1 + ... + a_n b_n, so [v] = (a_1, ..., a_n)^T. Then

    f(v) = a_1 f(b_1) + a_2 f(b_2) + ... + a_n f(b_n).
Since the C-coordinates of f(b_i) are written into Column i of M_{f,B,C}, it follows that the C-coordinates of f(v) are the entries of

    a_1 (Col 1 of M_{f,B,C}) + a_2 (Col 2 of M_{f,B,C}) + ... + a_n (Col n of M_{f,B,C}) = M_{f,B,C}[v].

Example 1.1.5. Let V = Q_4[x], the space of polynomials in x of degree at most 4 over Q, with basis B = {1, x, x^2, x^3, x^4}. Let W = Q_3[x], the space of polynomials of degree at most 3 over Q, with basis C = {1, x, x^2, x^3}. Let D : V → W be the differential operator, which maps a polynomial to its derivative. Then D is a linear transformation and

    M_{D,B,C} = [ 0 1 0 0 0 ]
                [ 0 0 2 0 0 ]
                [ 0 0 0 3 0 ]
                [ 0 0 0 0 4 ].

If we want to use this to calculate the derivative of x^4 - 3x^3 + 2x^2 - x, we can calculate the matrix-vector product

    [ 0 1 0 0 0 ] [  0 ]   [ -1 ]
    [ 0 0 2 0 0 ] [ -1 ]   [  4 ]
    [ 0 0 0 3 0 ] [  2 ] = [ -9 ]
    [ 0 0 0 0 4 ] [ -3 ]   [  4 ].
                  [  1 ]

So D(x^4 - 3x^3 + 2x^2 - x) = 4x^3 - 9x^2 + 4x - 1.

Exercise 1.1.6. In the above example, suppose we used the basis C' = {1, 1+x, 1+x+x^2, 1+x+x^2+x^3} for W instead of C. How would the matrix change?

The point here is that given a linear transformation f between two vector spaces of finite dimension, the choice of a basis for each space allows us to consider f as a matrix. It is not exactly true to say that every transformation f corresponds to a matrix in some objective way, because it is not only f but also the choice of two bases that determines the matrix.

Question 1.1.7. Suppose that f : V → W is a linear transformation between different vector spaces. How do the matrices that represent f with respect to different bases resemble each other?

In order to answer this question we need some matrix machinery.

1.1.3 Change of Basis

Let V be an F-vector space of dimension n and suppose that B = {b_1, ..., b_n} and B' = {v_1, ..., v_n} are bases of V. Then each b_j can be written in a unique way as a linear combination of v_1, ..., v_n. For j = 1, ..., n write

    b_j = Σ_{i=1}^{n} a_{ij} v_i,   a_{ij} ∈ F.

Let P denote the n×n matrix whose entry in the (i,j) position is a_{ij}. Possibly a better way to think about the matrix P is that Column j of P is the vector whose entries are the B'-coordinates of b_j. So the columns of P express the elements of the basis B in terms of their B'-coordinates.

Lemma 1.1.8. Let x ∈ V, and suppose that x = Σ_{i=1}^{n} c_i b_i, so that the B-coordinates of x are c_1, ..., c_n. Then the B'-coordinates of x are given by the entries of the matrix-vector product P(c_1, ..., c_n)^T.

Proof. The matrix-vector product is

    c_1 (Col 1 of P) + c_2 (Col 2 of P) + ... + c_n (Col n of P).

Since Column j of P expresses b_j in terms of its B'-coordinates, the entries of this product are the coordinates of c_1 b_1 + ... + c_n b_n = x in terms of B'.

Definition 1.1.9. In view of Lemma 1.1.8, we refer to P as the change of basis matrix from B to B'. Its columns are the elements of B expressed in terms of B'.

Remark. In the context of Section 1.1.2, you can consider P to be the matrix M_{id,B,B'}, where id : V → V is the identity mapping. (Thanks to Ben for this nice observation.)

Now let Q be the change of basis matrix from B' to B, defined in the same way: the columns of Q are the B-column representations of the elements v_1, ..., v_n of B'. As above, we can pass from the B'-column representation of any element of V to its B-column representation by multiplying on the left by Q.

Now let c be any vector in F^n. Then c is the B-column representation of some element v of V, and the B'-column representation of v is Pc. But then the B-column representation of v is given by Q(Pc) = QPc. However this must be equal to c. Thus

    QPc = c for all c ∈ F^n.

Hence QP = I_n, the n×n identity matrix. Similarly PQ = I_n, so P and Q are inverses of each other. In particular, every change of basis matrix is invertible, and its inverse is the reverse change of basis matrix.

On the other hand, every invertible n×n matrix determines a change of basis in F^n. To see this let P ∈ M_n(F) be invertible and let Q be its inverse. This means that PQ = QP = I_n.

We focus on the product QP = I_n. This means that e_1, the first column of I_n, is the linear combination of the columns of Q whose coefficients are the entries of Column 1 of P. Similarly e_2, ..., e_n are linear combinations of the columns of Q with coefficients given by the entries of Columns 2, ..., n of P. Then in particular all of the standard basis vectors of F^n belong to the span of the columns of Q, and so the columns of Q form a spanning set of F^n. Since F^n has dimension n it cannot be spanned by fewer than n columns, and so the columns of Q form a minimal spanning set of F^n, hence they must be linearly independent. Thus the columns of the invertible matrix Q form a basis B_Q of F^n.
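The correspondence between invertible matrices and changes of basis can be checked numerically. The sketch below uses a made-up invertible matrix Q (not an example from the notes): its columns form the basis B_Q, and multiplying by its inverse converts standard coordinates into B_Q-coordinates.

```python
import numpy as np

# Columns of Q are the basis B_Q of R^2 (Q chosen for illustration).
Q = np.array([[1.0,  1.0],
              [1.0, -1.0]])
P = np.linalg.inv(Q)   # converts standard coordinates to B_Q-coordinates

v = np.array([3.0, 1.0])
c = P @ v              # B_Q-coordinates of v

# Reconstructing v as c_1 q_1 + c_2 q_2 recovers the original vector,
# which is exactly the statement that Q and P are mutually inverse
# change of basis matrices.
assert np.allclose(Q @ c, v)
assert np.allclose(P @ Q, np.eye(2))
```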
Moreover, for j = 1, ..., n, the entries of Column j of P are the coordinates of e_j with respect to the basis B_Q. If v is any vector in F^n, then the coordinates of v with respect to B_Q are given by the entries of Pv (or equivalently Q^{-1}v). So P (= Q^{-1}) is the change of basis matrix from the standard basis of F^n to B_Q.

Remarks

1. As above, we could use the fact that PQ = I_n to show that the columns of P form a basis for F^n, and by thinking of the rows of PQ (or QP) as linear combinations of the rows of Q (or P), we can show that the rows of Q (or P) form a basis for (F^n)^T.

2. The above argument shows that if A ∈ M_n(F) has a right inverse (i.e. there exists B ∈ M_n(F) with AB = I_n), then the columns of A form a basis of F^n. It is true that if A has a right inverse then it also has a left inverse and these coincide, but we are not quite in a position to prove that yet. We will be soon.

We are now in a position to answer Question 1.1.7.

Suppose that V and W are F-vector spaces of dimensions n and p respectively and that f : V → W is a linear transformation. Let B and B' be two bases of V and let C and C' be two bases of W. Let the change of basis matrices from B' to B and from C to C' be denoted by P and Q respectively (so P is a nonsingular n×n matrix and Q is a nonsingular p×p matrix). Then

    M_{f,B',C'} = Q M_{f,B,C} P.

Explanation: Suppose that c is the B'-column representation of some element v of V. We want to know what matrix should multiply c on the left in order to give the C'-column representation of f(v). Multiplying c by P gives us the B-column representation of v, multiplying that by the p×n matrix M_{f,B,C} gives us the C-column representation of f(v), and multiplying that by Q gives us the C'-column representation of f(v). Thus, overall we have

    [f(v)]_{C'} = Q M_{f,B,C} P [v]_{B'}.

This motivates the following definition.

Definition 1.1.10. Let A and B be matrices in M_{p×n}(F). Then A and B are said to be equivalent if there exist nonsingular matrices P ∈ M_n(F) and Q ∈ M_p(F) for which

    B = QAP.

If two p×n matrices are equivalent, it means that they represent the same linear transformation from F^n to F^p, possibly with respect to different bases for both spaces.
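The definition of equivalence can be illustrated numerically. In the sketch below the matrices A, P and Q are made up for the example; the point is that B = QAP is equivalent to A, and one basis-independent consequence of representing the same linear map is that equivalent matrices have equal rank.

```python
import numpy as np

# A 3x4 matrix A, a nonsingular 4x4 P and a nonsingular 3x3 Q
# (all chosen for illustration; triangular with nonzero diagonal,
# hence invertible).
A = np.array([[1.0, 0.0, 2.0, -1.0],
              [0.0, 1.0, 1.0,  3.0],
              [2.0, 1.0, 5.0,  1.0]])
P = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
Q = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

B = Q @ A @ P   # equivalent to A in the sense of Definition 1.1.10

# Equivalent matrices represent the same linear map with respect to
# different bases, so in particular their ranks agree.
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A)
```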
1.1.4 Similarity

We now specialize the discussion to the case where V = W. Our set up now is that we have a single vector space V of dimension n, and a linear transformation f : V → V. If B is a basis of V, we write M_{f,B} for the matrix that was called M_{f,B,B} in Section 1.1.2. Suppose that B' is another basis of V, and let P be the change of basis matrix from B' to B (so the columns of P are the B-representations of the elements of B'). Then P^{-1} is the change of basis matrix from B to B' (and its columns are the B'-representations of the elements of B). Then

    M_{f,B'} = P^{-1} M_{f,B} P.

Definition 1.1.11. Suppose that A and B are matrices in M_n(F). Then A and B are said to be similar if there exists an invertible matrix P ∈ M_n(F) for which

    B = P^{-1} A P.

If two matrices are similar, it means that they describe the same linear transformation, with respect to different bases.

Given a linear transformation f : V → V, it is reasonable to ask whether there is some basis of V with respect to which its matrix has a particularly nice form. The nicest form that you can hope for is a diagonal form, in which all entries away from the main diagonal (from upper left to lower right) are zeros. If there is a basis B = {b_1, ..., b_n} of V for which

    M_{f,B} = diag(λ_1, ..., λ_n) = [ λ_1   0  ...   0  ]
                                    [  0  λ_2  ...   0  ]
                                    [          ...      ]
                                    [  0   0   ...  λ_n ],

it means that f(b_i) = λ_i b_i for i = 1, ..., n. This means exactly that each b_i is an eigenvector of f.

Definition 1.1.12. Let V be an F-vector space and let f : V → V be a linear transformation. A non-zero element v of V is called an eigenvector of f if f(v) = λv for some λ ∈ F. In this case λ is called the eigenvalue of f to which v corresponds.

So if v is an eigenvector of f, it means that f(v) is just a scalar multiple of v. The mapping f : V → V is called diagonalizable (or diagonable) if there exists a basis of V with respect to which the matrix of f is diagonal. This means that there exists a basis of V consisting entirely of eigenvectors of f.
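These definitions can be explored numerically. The sketch below (with a made-up symmetric matrix A, not an example from the notes) finds eigenvalues and eigenvectors with numpy.linalg.eig and confirms that assembling the eigenvectors into the columns of P gives a similarity P^{-1}AP that is diagonal.

```python
import numpy as np

# A 2x2 matrix chosen for illustration; it has two distinct
# eigenvalues, so its eigenvectors form a basis of R^2.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors of A

# Each column v of P satisfies the defining equation A v = lambda v.
for lam, v in zip(eigvals, P.T):
    assert np.allclose(A @ v, lam * v)

# P is the change of basis matrix from the eigenvector basis to the
# standard basis, and P^{-1} A P is diagonal with the eigenvalues
# on its main diagonal.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigvals))
```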
The "matrix" versions of (some of) these definitions are given below. There are many concepts and statements in linear algebra that can be expressed either in terms of matrices or in terms of linear transformations.

Definition 1.1.13. Let A ∈ M_n(F). A non-zero column vector v ∈ F^n is a right eigenvector of A if Av = λv for some λ ∈ F. A non-zero row vector w ∈ (F^n)^T is a left eigenvector of A if wA = λw for some λ in F. In each case the scalar λ is the corresponding eigenvalue of A.

It is not always true for a linear transformation f : V → V or for a square matrix A ∈ M_n(F) that there exists a basis of V (or F^n) consisting of eigenvectors. For example let

    A = [ 1 1 ]
        [ 0 1 ]

and investigate right eigenvectors of A. Suppose that

    [ 1 1 ] [ x ]     [ x ]         x + y = λx
    [ 0 1 ] [ y ] = λ [ y ]  =>         y = λy.

For an eigenvector, x and y cannot both be zero. From the second equation, either y = 0 or λ = 1. However if y = 0 then λ = 1 anyway from the first equation, so we must have λ = 1, and 1 is the only eigenvalue of A. Now

    x + y = x  =>  y = 0,

and any vector of the form (x, 0)^T is an eigenvector of A corresponding to the eigenvalue 1. These are the only eigenvectors of A and they are all scalar multiples of (1, 0)^T; they form a 1-dimensional subspace of F^2. So F^2 does not have a basis consisting of eigenvectors of A, and A is not similar to a diagonal matrix.

We do have the following theorem, but its converse is not true (since we can obviously easily write down examples of diagonal matrices that have repeated eigenvalues).

Theorem 1.1.14. Suppose that A ∈ M_n(F) has n distinct eigenvalues λ_1, ..., λ_n, and that v_1, ..., v_n are respective eigenvectors of A. Then B = {v_1, ..., v_n} is a basis of F^n.

Proof. We show that the set B is linearly independent. Suppose it's not, and seek a contradiction. Let k be the least index for which {v_1, v_2, ..., v_k} is linearly dependent. Then k ≤ n since B is linearly dependent, and k ≥ 2 since v_1 is not the zero vector. Then v_1, ..., v_{k-1} are linearly independent and v_k is a linear combination of these, so there exist a_1, ..., a_{k-1} ∈ F, not all zero, with

    v_k = a_1 v_1 + ... + a_{k-1} v_{k-1}.                  (1.1)

Multiplying (1.1) on the left by A gives

    λ_k v_k = a_1 λ_1 v_1 + a_2 λ_2 v_2 + ... + a_{k-1} λ_{k-1} v_{k-1},

and multiplying (1.1) by the scalar λ_k gives

    λ_k v_k = a_1 λ_k v_1 + a_2 λ_k v_2 + ... + a_{k-1} λ_k v_{k-1}.

Equating the right hand sides of these two expressions for λ_k v_k gives

    a_1(λ_k - λ_1) v_1 + a_2(λ_k - λ_2) v_2 + ... + a_{k-1}(λ_k - λ_{k-1}) v_{k-1} = 0.    (1.2)

Since the eigenvalues λ_1, ..., λ_k are distinct, the field elements λ_k - λ_i in (1.2) above are all non-zero. Furthermore the a_i in this expression are not all zero, so (1.2) is a linear dependence relation among v_1, ..., v_{k-1}. This contradicts the choice of k as the least index for which {v_1, ..., v_k} has such a relation.

A matrix A ∈ M_n(F) is diagonalizable (over F) if there exists a basis B of F^n consisting of eigenvectors of A. In this case P^{-1}AP is diagonal, where P is the matrix whose columns are the elements of B (this is the change of basis matrix from B to the standard basis). So a matrix is diagonalizable if and only if it is similar to a diagonal matrix.

There is an issue here that might not be immediately obvious. We demonstrate it with an example.

Example 1.1.15. Let A = [ 0 -1 ; 1 0 ] in M_2(R). To investigate eigenvectors of A we can write

    [ 0 -1 ] [ x ]     [ x ]        -y = λx
    [ 1  0 ] [ y ] = λ [ y ]  =>     x = λy.

Now substituting the second equation into the first gives -y = λx = λ(λy) = λ^2 y. Taking y = 0 is not an option as this would force x = 0 also, and the zero vector cannot be an eigenvector. So from -y = λ^2 y we must conclude λ^2 = -1. There is no real number λ with this property, so A has no eigenvalues in R and has no eigenvectors with real entries. However, if we allow ourselves to consider complex eigenvalues, we can try λ_1 = i and λ_2 = -i. An eigenvector (x, y)^T corresponding to λ_1 must satisfy x = iy, for example (i, 1)^T, and an eigenvector corresponding to λ_2 must satisfy x = -iy, for example (-i, 1)^T. Now (i, 1)^T and (-i, 1)^T form a basis of C^2, so A is diagonalizable if we consider it as an element of M_2(C), and in this case

    P^{-1} A P = [ i   0 ],   where P = [ i  -i ].   (Check this!)
                 [ 0  -i ]              [ 1   1 ]

However, A is not diagonalizable within M_2(R).

Given a matrix A ∈ M_n(F) (or a linear transformation f : V → V of an n-dimensional F-vector space V) we can ask about the existence of "nice" bases for describing A, or of "nice" matrices that are similar to A. The eigenvalues of A may or may not belong to the field F. We can consider two cases:

- If we only want to consider similarity within M_n(F), we can try to identify the "nicest" matrix of the form P^{-1}AP, where P ∈ GL(n, F). This leads to the theory of the rational canonical form.

- If F̄ is a field that has F as a subfield and contains all the eigenvalues of A, then we can consider A to be an element of M_n(F̄) and consider similarity over F̄. Then we would be looking for the "nicest" matrix of the form P^{-1}AP where the entries of the invertible matrix P (and of P^{-1}AP) belong to F̄ but not necessarily to F. This leads to the theory of the Jordan canonical form. The Jordan canonical form of the matrix in Example 1.1.15 above is the diagonal matrix diag(i, -i).

We will revisit the concept of similarity later in the course.

1.1.5 Rank

Let f : V → W be a linear transformation of F-vector spaces. The kernel and image of f are defined by

    ker f = {x ∈ V : f(x) = 0};    im f = {f(x) : x ∈ V}.

Lemma 1.1.16. The kernel and image of f are subspaces of V and W respectively.

The proof is left as an exercise.

Definition 1.1.17. The dimension of im f is called the rank of f.

Let A ∈ M_{p×n}(F). The column space of A is the subspace of F^p that is spanned by the columns of A. The dimension of this space is called the column rank of A. Since linear combinations of the columns of A are precisely equal to matrix-vector products of the form Av with v ∈ F^n, the column space of A is the set of all such vectors. This is also the image of the linear transformation T_A : F^n → F^p defined as left-multiplication by A, and its dimension is the rank of T_A. Thus

    The column rank of A is the rank of the linear transformation T_A.

Analogously, we can define the row rank of A to be the dimension of the subspace of (F^n)^T spanned by the rows of A.
This space is called the row space of A and it is the image of the linear transformation L_A : (F^p)^T → (F^n)^T defined for a row vector w ∈ (F^p)^T by

    L_A(w) = wA.

What precisely the connection is between T_A and L_A is not a particularly easy question to answer, but we will come back to it in Section 1.2. For now we can prove the surprising and non-obvious fact that they have the same rank. Many different proofs of this theorem can be found in books; not all of them offer a lot of insight into why the statement is true - many rely on row and column operations that reduce the matrix to an echelon form.

Theorem 1.1.18. Let A ∈ M_{p×n}(F). Then the row rank and column rank of A are equal.

Proof. Write r for the row rank of A and c for the column rank. We want to show that r = c. Let v_1, ..., v_c be column vectors in F^p that form a basis for the column space of A, and let C_A be the p×c matrix that has v_1, ..., v_c as its columns. Then every column of A can be expressed as a linear combination of v_1, ..., v_c in a unique way, and it follows that there exists a c×n matrix R for which

    C_A R = A,   where C_A is p×c and R is c×n.

The first column of R contains the coordinates of Column 1 of A in terms of v_1, ..., v_c, and so on. Looking at the same product the other way round, we see that Row 1 of A is the linear combination of the rows of R whose coefficients are the entries of Row 1 of C_A, and so on; each row of A is a linear combination of the c rows of R. Thus the c rows of R span the row space of A, and the row rank of A is at most c, so r ≤ c.

We use essentially the same argument to show that c ≤ r and hence that r = c. The row rank of A is r, hence there exists a spanning set w_1, ..., w_r for the row space of A. Let R_A be the r×n matrix whose rows are w_1, ..., w_r. Then every row of A has a unique expression as a linear combination of the rows of R_A, so there exists a p×r matrix C for which

    C R_A = A,   where C is p×r and R_A is r×n.

The entries in Row i of C are the coefficients in the expression for Row i of A as a linear combination of the rows of R_A.
But now every column of A is a linear combination of the r columns of C, and hence the dimension of the column space of A is at most r, so c ≤ r.

Since c ≤ r and r ≤ c, we conclude that c = r and that the row rank and column rank of A are equal.

In view of Theorem 1.1.18, we do not need to distinguish between the row rank and column rank of a matrix, and we can just refer to its rank. If A is a p×n matrix, then rank(A) is an integer between 0 and min(p, n). Here are a few remarks about rank.

1. A matrix has rank 0 if and only if it is the zero matrix.

2. A matrix has rank 1 if and only if it is not the zero matrix and all of its non-zero rows are scalar multiples of each other (same for columns).
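The remarks above can be checked numerically. In the sketch below (matrices chosen for illustration, not examples from the notes), a rank-1 matrix is built as an outer product of a nonzero column and a nonzero row, so every row is a scalar multiple of one row; and Theorem 1.1.18 appears as rank(A) = rank(A^T).

```python
import numpy as np

# A rank-1 matrix: the outer product c r of a nonzero 3x1 column c
# and a nonzero 1x3 row r. Every row of A is a scalar multiple of r.
c = np.array([[1.0], [2.0], [-1.0]])
r = np.array([[4.0, 0.0, 5.0]])
A = c @ r                      # 3x3 matrix of rank 1
assert np.linalg.matrix_rank(A) == 1

# Row rank equals column rank (Theorem 1.1.18), so transposing a
# matrix does not change its rank.
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(B.T)
```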
