Lecture Notes on Operator Algebras

John M. Erdman
Portland State University
Version March 12, 2011
© 2010 John M. Erdman
E-mail address: [email protected]

Contents

Chapter 1. LINEAR ALGEBRA AND THE SPECTRAL THEOREM
 1.1. Vector Spaces and the Decomposition of Diagonalizable Operators
 1.2. Normal Operators on an Inner Product Space

Chapter 2. THE ALGEBRA OF HILBERT SPACE OPERATORS
 2.1. Hilbert Space Geometry
 2.2. Operators on Hilbert Spaces
 2.3. Algebras
 2.4. Spectrum

Chapter 3. BANACH ALGEBRAS
 3.1. Definition and Elementary Properties
 3.2. Maximal Ideal Space
 3.3. Characters
 3.4. The Gelfand Topology
 3.5. The Gelfand Transform
 3.6. The Fourier Transform

Chapter 4. INTERLUDE: THE LANGUAGE OF CATEGORIES
 4.1. Objects and Morphisms
 4.2. Functors
 4.3. Natural Transformations
 4.4. Universal Morphisms

Chapter 5. C∗-ALGEBRAS
 5.1. Adjoints of Hilbert Space Operators
 5.2. Algebras with Involution
 5.3. C∗-Algebras
 5.4. The Gelfand-Naimark Theorem—Version I

Chapter 6. SURVIVAL WITHOUT IDENTITY
 6.1. Unitization of Banach Algebras
 6.2. Exact Sequences and Extensions
 6.3. Unitization of C∗-algebras
 6.4. Quasi-inverses
 6.5. Positive Elements in C∗-algebras
 6.6. Approximate Identities

Chapter 7. SOME IMPORTANT CLASSES OF HILBERT SPACE OPERATORS
 7.1. Orthonormal Bases in Hilbert Spaces
 7.2. Projections and Partial Isometries
 7.3. Finite Rank Operators
 7.4. Compact Operators

Chapter 8. THE GELFAND-NAIMARK-SEGAL CONSTRUCTION
 8.1. Positive Linear Functionals
 8.2. Representations
 8.3. The GNS-Construction and the Third Gelfand-Naimark Theorem

Chapter 9. MULTIPLIER ALGEBRAS
 9.1. Hilbert Modules
 9.2. Essential Ideals
 9.3. Compactifications and Unitizations

Chapter 10. FREDHOLM THEORY
 10.1. The Fredholm Alternative
 10.2. The Fredholm Alternative – Continued
 10.3. Fredholm Operators
 10.4. The Fredholm Alternative – Concluded

Chapter 11. EXTENSIONS
 11.1. Essentially Normal Operators
 11.2. Toeplitz Operators
 11.3. Addition of Extensions
 11.4. Tensor Products of C∗-algebras
 11.5. Completely Positive Maps

Chapter 12. K-THEORY
 12.1. Projections on Matrix Algebras
 12.2. The Grothendieck Construction
 12.3. K₀(A)—the Unital Case
 12.4. K₀(A)—the Nonunital Case
 12.5. Exactness and Stability Properties of the K₀ Functor
 12.6. Inductive Limits
 12.7. Bratteli Diagrams

Bibliography

Index

It is not essential for the value of an education that every idea be understood at the time of its accession. Any person with a genuine intellectual interest and a wealth of intellectual content acquires much that he only gradually comes to understand fully in the light of its correlation with other related ideas. ...Scholarship is a progressive process, and it is the art of so connecting and recombining individual items of learning by the force of one's whole character and experience that nothing is left in isolation, and each idea becomes a commentary on many others.

- NORBERT WIENER

CHAPTER 1

LINEAR ALGEBRA AND THE SPECTRAL THEOREM

1.1. Vector Spaces and the Decomposition of Diagonalizable Operators

1.1.1. Convention. In this course, unless the contrary is explicitly stated, all vector spaces will be assumed to be vector spaces over C. That is, scalar will be taken to mean complex number.
1.1.2. Definition. The triple (V, +, M) is a (complex) vector space if (V, +) is an Abelian group and M : C → Hom(V) is a unital ring homomorphism (where Hom(V) is the ring of group homomorphisms on V). A function T : V → W between vector spaces is linear if T(u + v) = Tu + Tv for all u, v ∈ V and T(αv) = αTv for all α ∈ C and v ∈ V. Linear functions are frequently called linear transformations or linear maps. When V = W we say that T is an operator on V. The collection of all linear maps from V to W is denoted by L(V, W) and the set of operators on V is denoted by L(V). Depending on context we denote the identity operator x ↦ x on V by id_V or I_V or just I.

Recall that if T : V → W is a linear map, then the kernel of T, denoted by ker T, is T←({0}) = {x ∈ V : Tx = 0}. Also, the range of T, denoted by ran T, is T→(V) = {Tx : x ∈ V}.

1.1.3. Definition. A linear map T : V → W between vector spaces is invertible (or is an isomorphism) if there exists a linear map T⁻¹ : W → V such that T⁻¹T = id_V and TT⁻¹ = id_W.

Recall that if a linear map is invertible its inverse is unique. Recall also that for a linear operator T on a finite dimensional vector space the following are equivalent: (a) T is an isomorphism; (b) T is injective; (c) the kernel of T is {0}; and (d) T is surjective.

1.1.4. Definition. Two operators R and T on a vector space V are similar if there exists an invertible operator S on V such that R = S⁻¹TS.

1.1.5. Proposition. If V is a vector space, then similarity is an equivalence relation on L(V).

1.1.6. Definition. Let V be a finite dimensional vector space and B = {e_1, ..., e_n} be a basis for V. An operator T on V is diagonal if there exist scalars α_1, ..., α_n such that Te_k = α_k e_k for each k ∈ N_n (here N_n denotes {1, ..., n}). Equivalently, T is diagonal if its matrix representation [T] = [T_ij] has the property that T_ij = 0 whenever i ≠ j.

Asking whether a particular operator on some finite dimensional vector space is diagonal is, strictly speaking, nonsense. As defined, the operator property of being diagonal is definitely not a vector space concept. It makes sense only for a vector space for which a basis has been specified. This important, if obvious, fact seems to go unnoticed in beginning linear algebra courses, due, I suppose, to a rather obsessive fixation on Rⁿ in such courses. Here is the relevant vector space property.

1.1.7. Definition. An operator T on a finite dimensional vector space V is diagonalizable if there exists a basis for V with respect to which T is diagonal. Equivalently, an operator on a finite dimensional vector space with basis is diagonalizable if it is similar to a diagonal operator.

1.1.8. Definition. Let M and N be subspaces of a vector space V. If M ∩ N = {0} and M + N = V, then V is the (internal) direct sum of M and N. In this case we write
V = M ⊕ N.
We say that M and N are complementary subspaces and that each is a (vector space) complement of the other. The codimension of the subspace M is the dimension of its complement N.

1.1.9. Example. Let C = C[−1, 1] be the vector space of all continuous real valued functions on the interval [−1, 1]. A function f in C is even if f(−x) = f(x) for all x ∈ [−1, 1]; it is odd if f(−x) = −f(x) for all x ∈ [−1, 1]. Let C_o = {f ∈ C : f is odd} and C_e = {f ∈ C : f is even}. Then C = C_o ⊕ C_e.
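The splitting in Example 1.1.9 is given by explicit formulas: f = f_e + f_o, where f_e(x) = (f(x) + f(−x))/2 and f_o(x) = (f(x) − f(−x))/2. Here is a minimal numerical sketch of this decomposition; the helper names are illustrative, not part of the notes.

```python
import numpy as np

def even_part(f):
    """Even part: f_e(x) = (f(x) + f(-x)) / 2."""
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    """Odd part: f_o(x) = (f(x) - f(-x)) / 2."""
    return lambda x: (f(x) - f(-x)) / 2

f = np.exp                          # a continuous function on [-1, 1]
x = np.linspace(-1.0, 1.0, 101)
fe, fo = even_part(f), odd_part(f)

assert np.allclose(fe(x) + fo(x), f(x))   # f = f_e + f_o
assert np.allclose(fe(-x), fe(x))         # f_e is even
assert np.allclose(fo(-x), -fo(x))        # f_o is odd
```

Uniqueness of the splitting corresponds to C_o ∩ C_e = {0}: a function that is both even and odd is identically zero.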
1.1.10. Proposition. If M is a subspace of a vector space V, then there exists a subspace N of V such that V = M ⊕ N.

1.1.11. Proposition. Let V be a vector space and suppose that V = M ⊕ N. Then for every v ∈ V there exist unique vectors m ∈ M and n ∈ N such that v = m + n.

1.1.12. Definition. Let V be a vector space and suppose that V = M ⊕ N. We know from 1.1.11 that for each v ∈ V there exist unique vectors m ∈ M and n ∈ N such that v = m + n. Define a function E_MN : V → V by E_MN v = n. The function E_MN is the projection of V along M onto N. (Frequently we write E for E_MN. But keep in mind that E depends on both M and N.)

1.1.13. Proposition. Let V be a vector space and suppose that V = M ⊕ N. If E is the projection of V along M onto N, then
(i) E is linear;
(ii) E² = E (that is, E is idempotent);
(iii) ran E = N; and
(iv) ker E = M.

1.1.14. Proposition. Let V be a vector space and suppose that E : V → V is a function which satisfies
(i) E is linear, and
(ii) E² = E.
Then V = ker E ⊕ ran E and E is the projection of V along ker E onto ran E.

It is important to note that an obvious consequence of the last two propositions is that a function T : V → V from a finite dimensional vector space into itself is a projection if and only if it is linear and idempotent.

1.1.15. Proposition. Let V be a vector space and suppose that V = M ⊕ N. If E is the projection of V along M onto N, then I − E is the projection of V along N onto M.

As we have just seen, if E is a projection on a vector space V, then the identity operator on V can be written as the sum of two projections E and I − E whose corresponding ranges form a direct sum decomposition of the space: V = ran E ⊕ ran(I − E). We can generalize this to more than two projections.

1.1.16. Definition. Suppose that on a vector space V there exist projection operators E_1, ..., E_n such that
(i) I_V = E_1 + E_2 + ··· + E_n and
(ii) E_i E_j = 0 whenever i ≠ j.
Then we say that I_V = E_1 + E_2 + ··· + E_n is a resolution of the identity.

1.1.17. Proposition. If I_V = E_1 + E_2 + ··· + E_n is a resolution of the identity on a vector space V, then V = ⊕_{k=1}^n ran E_k.

1.1.18. Example. Let P be the plane in R³ whose equation is x − z = 0 and L be the line whose equations are y = 0 and x = −z. Let E be the projection of R³ along L onto P and F be the projection of R³ along P onto L. Then

        ⎡1/2  0  1/2⎤              ⎡ 1/2  0  −1/2⎤
  [E] = ⎢ 0   1   0 ⎥   and  [F] = ⎢  0   0    0 ⎥ .
        ⎣1/2  0  1/2⎦              ⎣−1/2  0   1/2⎦
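One way to arrive at the matrices in Example 1.1.18 is to assemble a basis adapted to the splitting R³ = P ⊕ L and conjugate the obvious coordinate projections by the change of basis. The sketch below illustrates that computation under the stated bases; it is one convenient recipe, not a method prescribed by the notes.

```python
import numpy as np

# A basis for the plane P (x = z) and for the line L (y = 0, x = -z).
p1, p2 = [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]
l1 = [1.0, 0.0, -1.0]

# The columns of S form a basis of R^3 adapted to R^3 = P (+) L.
S = np.column_stack([p1, p2, l1])
S_inv = np.linalg.inv(S)

# E keeps the P-coordinates and kills the L-coordinate; F is the reverse.
E = S @ np.diag([1.0, 1.0, 0.0]) @ S_inv
F = S @ np.diag([0.0, 0.0, 1.0]) @ S_inv

assert np.allclose(E, [[0.5, 0, 0.5], [0, 1, 0], [0.5, 0, 0.5]])
assert np.allclose(F, [[0.5, 0, -0.5], [0, 0, 0], [-0.5, 0, 0.5]])
assert np.allclose(E + F, np.eye(3))          # I = E + F, a resolution of the identity
assert np.allclose(E @ F, np.zeros((3, 3)))   # EF = 0
```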
The eigenspace associated with the negative eigenvalue is span{(1, 0, −1)} and the eigenspace associated with the positive eigenvalue is span{(1, 0, 1), (0, 1, 0)}.

The central fact asserted by the finite dimensional vector space version of the spectral theorem is that every diagonalizable operator on such a space can be written as a linear combination of projection operators where the coefficients of the linear combination are the eigenvalues of the operator and the ranges of the projections are the corresponding eigenspaces. Thus if T is a diagonalizable operator on a finite dimensional vector space V, then V has a basis consisting of eigenvectors of T. Here is a formal statement of the theorem.

1.1.21. Theorem (Spectral Theorem: vector space version). Suppose that T is a diagonalizable operator on a finite dimensional vector space V. Let λ_1, ..., λ_n be the (distinct) eigenvalues of T. Then there exists a resolution of the identity I_V = E_1 + ··· + E_n, where for each k the range of the projection E_k is the eigenspace associated with λ_k, and furthermore
T = λ_1 E_1 + ··· + λ_n E_n.

Proof. A good proof of this theorem can be found in [17] on page 212. □

1.1.22. Example. Let T be the operator on (the real vector space) R² whose matrix representation is

  ⎡ −7   8⎤
  ⎣−16  17⎦ .

(a) The characteristic polynomial for T is c_T(λ) = λ² − 10λ + 9.
(b) The eigenspace M_1 associated with the eigenvalue 1 is span{(1, 1)}.
(c) The eigenspace M_2 associated with the eigenvalue 9 is span{(1, 2)}.
(d) We can write T as a linear combination of projection operators. In particular, T = 1·E_1 + 9·E_2 where

  [E_1] = ⎡2  −1⎤   and   [E_2] = ⎡−1  1⎤
          ⎣2  −1⎦                 ⎣−2  2⎦ .

(e) Notice that the sum of [E_1] and [E_2] is the identity matrix and that their product is the zero matrix.
(f) The matrix
  S = ⎡1  1⎤
      ⎣1  2⎦
diagonalizes [T]. That is,
  S⁻¹[T]S = ⎡1  0⎤
            ⎣0  9⎦ .
(g) A matrix representing √T is
  ⎡−1  2⎤
  ⎣−4  5⎦ .
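The claims in Example 1.1.22 can be verified mechanically. The sketch below (an illustration, not part of the notes) checks that E_1 and E_2 form a resolution of the identity, recovers T = 1·E_1 + 9·E_2, and produces the square root in part (g) by applying √ to the eigenvalues:

```python
import numpy as np

T  = np.array([[-7.0, 8.0], [-16.0, 17.0]])
E1 = np.array([[2.0, -1.0], [2.0, -1.0]])    # projection onto M_1 = span{(1,1)}
E2 = np.array([[-1.0, 1.0], [-2.0, 2.0]])    # projection onto M_2 = span{(1,2)}

# Resolution of the identity: idempotent, mutually annihilating, sum to I.
assert np.allclose(E1 @ E1, E1) and np.allclose(E2 @ E2, E2)
assert np.allclose(E1 @ E2, np.zeros((2, 2)))
assert np.allclose(E1 + E2, np.eye(2))

# Spectral decomposition, and a square root obtained from it.
assert np.allclose(1 * E1 + 9 * E2, T)
sqrtT = 1 * E1 + 3 * E2                       # sqrt applied to the eigenvalues 1 and 9
assert np.allclose(sqrtT, [[-1.0, 2.0], [-4.0, 5.0]])
assert np.allclose(sqrtT @ sqrtT, T)
```

The same recipe gives f(T) = f(λ_1)E_1 + ··· + f(λ_n)E_n for any function f defined on the eigenvalues of a diagonalizable operator, an idea that reappears later in these notes as the functional calculus.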
1.2. Normal Operators on an Inner Product Space

1.2.1. Definition. Let V be a vector space. A function which associates to each pair of vectors x and y in V a complex number ⟨x, y⟩ is an inner product (or a dot product) on V provided that the following four conditions are satisfied:
(a) If x, y, z ∈ V, then ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩.
(b) If x, y ∈ V and α ∈ C, then ⟨αx, y⟩ = α⟨x, y⟩.
(c) If x, y ∈ V, then ⟨x, y⟩ is the complex conjugate of ⟨y, x⟩.
(d) For every nonzero x in V we have ⟨x, x⟩ > 0.

Conditions (a) and (b) show that an inner product is linear in its first variable. Conditions (a) and (b) of proposition 1.2.3 say that an inner product is conjugate linear in its second variable. When a mapping is linear in one variable and conjugate linear in the other, it is often called sesquilinear (the prefix "sesqui-" means "one and a half"). Taken together conditions (a)–(d) say that the inner product is a positive definite conjugate symmetric sesquilinear form.

1.2.2. Notation. If V is a vector space which has been equipped with an inner product and x ∈ V, we introduce the abbreviation
‖x‖ := √⟨x, x⟩
which is read the norm of x or the length of x. (This somewhat optimistic terminology is justified in proposition 1.2.11 below.)

1.2.3. Proposition. If x, y, and z are vectors in an inner product space and α ∈ C, then
(a) ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩,
(b) ⟨x, αy⟩ = ᾱ⟨x, y⟩, and
(c) ⟨x, x⟩ = 0 if and only if x = 0.

1.2.4. Example. For vectors x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) belonging to Cⁿ define
⟨x, y⟩ = ∑_{k=1}^{n} x_k ȳ_k.
Then Cⁿ is an inner product space.

1.2.5. Example. Let l_2 be the set of all square summable sequences of complex numbers. (A sequence x = (x_k)_{k=1}^∞ is square summable if ∑_{k=1}^∞ |x_k|² < ∞.) For vectors x = (x_1, x_2, ...) and y = (y_1, y_2, ...) belonging to l_2 define
⟨x, y⟩ = ∑_{k=1}^∞ x_k ȳ_k.
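As a concrete check of the conventions above (linearity in the first variable, conjugate linearity in the second), here is a small sketch of the inner product of Example 1.2.4; the helper name inner is illustrative only.

```python
import numpy as np

def inner(x, y):
    """<x, y> = sum_k x_k * conj(y_k), as in Example 1.2.4."""
    return np.sum(x * np.conj(y))

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 0 + 1j])
a = 2 + 3j

assert np.isclose(inner(a * x, y), a * inner(x, y))           # linear in the 1st slot
assert np.isclose(inner(x, a * y), np.conj(a) * inner(x, y))  # conjugate linear in the 2nd
assert np.isclose(inner(x, y), np.conj(inner(y, x)))          # conjugate symmetry
assert inner(x, x).real > 0 and np.isclose(inner(x, x).imag, 0)  # positive definiteness
norm_x = np.sqrt(inner(x, x).real)                            # ||x|| = sqrt(<x, x>)
```

Note that NumPy's built-in np.vdot conjugates its first argument, so under this text's convention ⟨x, y⟩ corresponds to np.vdot(y, x), not np.vdot(x, y).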