
Group theoretic structures in the estimation of an unknown unitary transformation

Giulio Chiribella

Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, Ontario, N2L 2Y5, Canada

arXiv:1012.2130v2 [quant-ph] 24 Jan 2011

Abstract. This paper presents a series of general results about the optimal estimation of physical transformations in a given symmetry group. In particular, it is shown how the different symmetries of the problem determine different properties of the optimal estimation strategy. The paper also contains a discussion about the role of entanglement between the representation and multiplicity spaces and about the optimality of square-root measurements.

1. Introduction

The estimation of an unknown unitary transformation in a given symmetry group is a general problem related to many stimulating topics, such as high-precision measurements [1], coherent states and uncertainty relations [2], quantum clocks [3], quantum gyroscopes [4] and quantum reference frames [5]. The aim of this paper is to provide a synthetic account of the general theory of optimal estimation for symmetry groups, presenting new proofs and highlighting the underlying group theoretical and algebraic structures at play.

1.1. Prologue: dense coding

Let us start with a simple example. Suppose that we have at our disposal a quantum system with two-dimensional Hilbert space $\mathcal H = \mathbb C^2$ and a black box that performs an unknown transformation $\mathcal U_{ij}$, $i,j \in \{0,1\}$, defined on the set $\mathsf{St}(\mathcal H)$ of all density matrices on $\mathcal H$ as
$$\mathcal U_{ij}(\rho) := U_{ij}\,\rho\,U_{ij}^\dagger, \qquad \forall \rho \in \mathsf{St}(\mathcal H),$$
where $\{U_{ij}\}_{i,j=0}^{1}$ are the unitary matrices
$$U_{00} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad U_{01} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad U_{10} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad U_{11} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$
The value of the indices $(i,j)$ is unknown to us and our goal is to find it out using the black box only once. Here the natural figure of merit is the minimization of the probability of error.
Clearly, if we apply the black box to a state $\rho \in \mathsf{St}(\mathcal H)$ there will always be an error: the four states $\{\rho_{ij} := \mathcal U_{ij}(\rho)\}_{i,j=0}^{1}$ will never be perfectly distinguishable (in a two-dimensional Hilbert space there cannot be more than two perfectly distinguishable states!). However, if we introduce a second system $\mathcal K \simeq \mathcal H$ and apply the unknown transformation to the projector on the entangled vector $|\Phi\rangle = \frac{1}{\sqrt 2}(|0\rangle|0\rangle + |1\rangle|1\rangle) \in \mathcal H \otimes \mathcal K$, where $\{|0\rangle, |1\rangle\}$ is the standard basis for $\mathbb C^2$, we obtain four states (the projectors on the vectors $|\Phi_{ij}\rangle := (U_{ij} \otimes I)|\Phi\rangle$) that are perfectly distinguishable: $\langle \Phi_{ij} | \Phi_{kl} \rangle = \mathrm{Tr}[U_{ij}^\dagger U_{kl}]/2 = \delta_{ik}\delta_{jl}$. This remarkable fact was observed for the first time by Bennett and Wiesner, who exploited it to construct a protocol known as dense coding [6]. The moral of dense coding is that the use of entanglement with an ancillary system can dramatically improve the discrimination of unknown transformations. Despite the extreme simplicity of the mathematics, this curious example points to structures that are much deeper than they might seem at first sight. The aim of the present paper is to illustrate these structures in the general context of the estimation of an unknown group transformation. Our analysis will include dense coding, where the group of interest is the Klein group $\mathbb Z_2 \times \mathbb Z_2$.

2. General problem: estimation of an unknown group transformation

2.1. The problem

Suppose that we have at our disposal one use of a black box and that we want to identify the action of the black box. Suppose also that we have some prior knowledge about the black box: in particular, we know that

(i) the black box acts on a quantum system with finite dimensional Hilbert space $\mathcal H \simeq \mathbb C^d$, $d < \infty$;

(ii) it performs a deterministic transformation $\mathcal U_g$ belonging to a given representation of a given symmetry group $G$.
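The perfect distinguishability claimed in the dense-coding prologue is easy to check numerically. The following sketch (an editorial addition, not part of the original paper; all variable names are illustrative) builds the four vectors $|\Phi_{ij}\rangle$ and verifies that their Gram matrix is the identity:

```python
import numpy as np

# The four unitaries U_ij from the dense-coding example.
U = {
    (0, 0): np.array([[1, 0], [0, 1]], dtype=complex),   # identity
    (0, 1): np.array([[0, 1], [1, 0]], dtype=complex),   # bit flip
    (1, 0): np.array([[1, 0], [0, -1]], dtype=complex),  # phase flip
    (1, 1): np.array([[0, 1], [-1, 0]], dtype=complex),  # both
}

# Maximally entangled vector |Phi> = (|0>|0> + |1>|1>)/sqrt(2).
phi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

# Output vectors |Phi_ij> = (U_ij ⊗ I)|Phi>.
phis = {k: np.kron(u, np.eye(2)) @ phi for k, u in U.items()}

# Gram matrix <Phi_ij|Phi_kl> = Tr[U_ij† U_kl]/2: should be the 4x4 identity.
keys = sorted(phis)
gram = np.array([[np.vdot(phis[a], phis[b]) for b in keys] for a in keys])
print(np.allclose(gram, np.eye(4)))  # True: the four output states are orthogonal
```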
Mathematically, a deterministic transformation (also known as a quantum channel) is described by a completely positive trace-preserving linear map $\mathcal C$ acting on the set $\mathsf{St}(\mathcal H)$ of quantum states (non-negative matrices with unit trace) on the Hilbert space $\mathcal H$. The representation of the group $G$ is then given by a function $\mathcal U : g \mapsto \mathcal U_g$ from $G$ to the set of quantum channels, with the usual requirements
$$\mathcal U_e = \mathcal I_{\mathcal H}$$
$$\mathcal U_g\, \mathcal U_{g^{-1}} = \mathcal I_{\mathcal H} \qquad \forall g \in G$$
$$\mathcal U_g\, \mathcal U_h = \mathcal U_{gh} \qquad \forall g, h \in G,$$
where $e \in G$ is the identity element of the group and $\mathcal I_{\mathcal H}$ is the identity channel, given by $\mathcal I_{\mathcal H}(\rho) := \rho$, $\forall \rho \in \mathsf{St}(\mathcal H)$. In this paper the group $G$ will always be assumed to be either finite or compact.

2.2. Estimation strategies without ancillary systems

Since the group $G$ and the representation $\{\mathcal U_g\}_{g \in G}$ are both known, the problem here is to identify the group element $g \in G$ that specifies the action of the black box. How can we accomplish this task? A first idea is to prepare the system in a suitable input state $\rho \in \mathsf{St}(\mathcal H)$ and to apply the unknown transformation to it, thus obtaining the output state $\rho_g := \mathcal U_g(\rho)$.

[Diagram: the input state $\rho$ on $\mathcal H$ is fed into the channel $\mathcal U_g$, producing the output state $\rho_g$.]

In this way, the output state $\rho_g$ will carry some information about $g$ and we can try to extract this information with a quantum measurement and to produce an estimate $\hat g \in G$. The combination of the quantum measurement with our classical data processing can be described by a single mathematical object, namely a positive operator-valued measure (POVM, for short).

Let us denote by $\mathsf{Lin}(\mathcal H)$ the set of linear operators on $\mathcal H$ and by $\mathsf{Lin}_+(\mathcal H) \subset \mathsf{Lin}(\mathcal H)$ the set of non-negative operators. If $G$ is a finite group, a POVM with outcomes in $G$ is just a function $P : G \to \mathsf{Lin}_+(\mathcal H)$ sending the element $\hat g \in G$ to the non-negative operator $P_{\hat g} \in \mathsf{Lin}_+(\mathcal H)$ and satisfying the requirement $\sum_{\hat g \in G} P_{\hat g} = I_{\mathcal H}$, where $I_{\mathcal H}$ is the identity on $\mathcal H$. The conditional probability of inferring $\hat g$ when the true value is $g$ is then given by the Born rule $p(\hat g|g) = \mathrm{Tr}[P_{\hat g}\, \rho_g]$.
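The finite-group Born rule can be exercised on the dense-coding example without an ancilla (an editorial sketch, not part of the original paper). The POVM below is an arbitrary illustrative choice, not an optimal one; it satisfies the normalization $\sum_{\hat g} P_{\hat g} = I_{\mathcal H}$, and the resulting guess is correct only half of the time, reflecting the fact that a qubit carries at most two perfectly distinguishable states:

```python
import numpy as np

ket0, ket1 = np.array([1, 0], complex), np.array([0, 1], complex)
proj = lambda v: np.outer(v, v.conj())

# Four Pauli-type channels applied to the input |0><0| (no ancilla).
U = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[1, 0], [0, -1]]), np.array([[0, 1], [-1, 0]])]
rho = [u @ proj(ket0) @ u.conj().T for u in U]

# An (arbitrarily chosen) POVM with one outcome per group element;
# it satisfies sum_g P_g = I, as every POVM must.
P = [proj(ket0) / 2, proj(ket1) / 2, proj(ket0) / 2, proj(ket1) / 2]
assert np.allclose(sum(P), np.eye(2))

# Born rule p(ghat|g) = Tr[P_ghat rho_g]; success = average of the diagonal.
p = np.real([[np.trace(Pg @ r) for r in rho] for Pg in P])
success = np.trace(p) / 4
print(success)  # 0.5 -- without an ancilla the four channels collapse to two states
```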
In general, if $G$ is a compact group and $\sigma(G)$ is the collection of its measurable subsets [7], a POVM with outcomes in $G$ is an operator-valued measure $P : \sigma(G) \to \mathsf{Lin}_+(\mathcal H)$ sending the measurable subset $B \in \sigma(G)$ to the non-negative operator $P_B \in \mathsf{Lin}_+(\mathcal H)$ and satisfying the requirements

• $P_G = I_{\mathcal H}$;
• if $B = \bigcup_{i=1}^\infty B_i$ and $\{B_i\}_{i=1}^\infty$ are disjoint, then $P_B = \sum_{i=1}^\infty P_{B_i}$.

The conditional probability that the estimate $\hat g$ lies in the subset $B \in \sigma(G)$ is given by the Born rule $p(B|g) = \mathrm{Tr}[P_B\, \rho_g]$.

In the following I will frequently use the notation $P(d\hat g)$ to indicate the POVM $P$. Accordingly, I will also write $p(d\hat g|g) = \mathrm{Tr}[P(d\hat g)\, \rho_g]$ and $P_B = \int_B P(d\hat g)$. In the case of finite groups, the notation $P(d\hat g)$ will be understood as a synonym of $P_{\hat g}$ and the integral over a subset $B$ will be understood as a finite sum. The pair $(\rho, P)$ of an input state and a POVM will be referred to as an estimation strategy.

2.3. Estimation strategies with ancillary systems

In the previous paragraph we discussed strategies where the unknown transformation was applied to a suitable state $\rho \in \mathsf{St}(\mathcal H)$. However, these strategies are not the most general ones: we can introduce an ancillary system with Hilbert space $\mathcal K$, prepare a bipartite input state $\rho \in \mathsf{St}(\mathcal H \otimes \mathcal K)$, and then apply the black box on system $\mathcal H$, thus obtaining the output state $\rho_g := (\mathcal U_g \otimes \mathcal I_{\mathcal K})(\rho)$.

[Diagram: the channel $\mathcal U_g$ acts on the $\mathcal H$ part of the bipartite input state $\rho$, while the ancilla $\mathcal K$ is left untouched, producing the output state $\rho_g$.]

In this case, to obtain an estimate $\hat g$ of the unknown group element $g$ we will use a POVM $P$ on the tensor product Hilbert space $\mathcal H \otimes \mathcal K$.

Classically, we would not expect any improvement from the use of an ancillary system, because this system will not carry any information about the unknown transformation. However, here the state $\rho$ can be an entangled state, that is, a state that cannot be written as a convex combination of tensor product states. As the example of dense coding teaches, the use of entanglement can improve the estimation dramatically.
Note, however, that mathematically every estimation strategy with an ancillary system can be reduced to the form of an estimation strategy without ancillary system by choosing $\mathcal H' := \mathcal H \otimes \mathcal K$ and $\mathcal U'_g := \mathcal U_g \otimes \mathcal I_{\mathcal K}$.

2.4. Cost of an estimation strategy

To quantify how far the guess $\hat g$ is from the true value one typically introduces a cost function $c : G \times G \to \mathbb R$, $(\hat g, g) \mapsto c(\hat g, g)$, which takes its minimum value when $\hat g = g$. If the true value is $g$, the expected cost of the estimation strategy $(\rho, P)$ is
$$c(\rho, P|g) := \int_G c(\hat g, g)\, p(d\hat g|g) = \int_G c(\hat g, g)\, \mathrm{Tr}[P(d\hat g)\, \rho_g].$$
Ideally, we would like to minimize the cost simultaneously for every $g$. However, in general this is not possible, because the number of states in the orbit $\{\rho_g\}_{g \in G}$ is larger than the dimension of the Hilbert space and therefore there is no way in which the states $\{\rho_g\}_{g \in G}$ can be perfectly distinguishable.

Since the true value $g \in G$ is unknown, one approach to optimization is to minimize the worst-case cost of the strategy, defined as
$$c_{\mathrm{wc}}(\rho, P) = \max_{g \in G} c(\rho, P|g). \qquad (1)$$
An alternative (and less conservative) approach consists in assuming a prior distribution $\pi(dg)$ for the true value $g \in G$ and in minimizing the average cost. Given that $g$ is completely unknown, a natural choice of prior is the Haar measure $dg$, normalized as $\int_G dg = 1$. With this choice, the average cost is given by
$$c_{\mathrm{ave}}(\rho, P) := \int_G dg\; c(\rho, P|g). \qquad (2)$$
For finite groups we can understand the integral as a finite sum and $dg$ as synonymous with $1/|G|$, where $|G|$ is the cardinality of the group $G$.

In general, the worst-case and the average approach are very different: for example, the functional $c_{\mathrm{ave}}(\rho, P)$ is linear in both arguments while the functional $c_{\mathrm{wc}}(\rho, P)$ is not.
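As a toy illustration of Eqs. (1) and (2) (an editorial sketch, not from the original paper), take the four Pauli-type channels of the prologue acting on $|0\rangle\langle 0|$, a deliberately lopsided POVM, and the error-probability cost $c(\hat g, g) = 1 - \delta_{\hat g, g}$; the worst-case and average costs then come out different:

```python
import numpy as np

ket0, ket1 = np.array([1, 0], complex), np.array([0, 1], complex)
proj = lambda v: np.outer(v, v.conj())

# Orbit of |0><0| under the four Pauli-type unitaries (finite group, |G| = 4).
U = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[1, 0], [0, -1]]), np.array([[0, 1], [-1, 0]])]
rho = [u @ proj(ket0) @ u.conj().T for u in U]

# A deliberately lopsided POVM: it always guesses g = 0 or g = 1.
P = [proj(ket0), proj(ket1), np.zeros((2, 2)), np.zeros((2, 2))]

# Cost c(ghat, g) = 1 - delta(ghat, g), i.e. the probability of error.
cost = lambda ghat, g: 0.0 if ghat == g else 1.0
c_given_g = [sum(cost(ghat, g) * np.real(np.trace(P[ghat] @ rho[g]))
                 for ghat in range(4)) for g in range(4)]

c_wc = max(c_given_g)        # Eq. (1): worst case over g
c_ave = sum(c_given_g) / 4   # Eq. (2): uniform prior dg = 1/|G|
print(c_wc, c_ave)  # 1.0 0.5 -- the two figures of merit disagree
```

The lopsided POVM never errs on $g = 0, 1$ but always errs on $g = 2, 3$, so the worst-case cost is maximal while the average cost is only $1/2$.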
The two approaches may lead to very different optimal strategies: for example, since $c_{\mathrm{ave}}(\rho, P)$ is linear in $\rho$ the optimal input state can be searched without loss of generality in the set of pure states $\rho = |\varphi\rangle\langle\varphi|$, while this reduction is generally not possible for the minimization of $c_{\mathrm{wc}}(\rho, P)$. However, when the cost function enjoys suitable group symmetries it is possible to show that there is a strategy $(\rho, P)$ that is optimal for both the worst-case and the average approach. This point will be illustrated in Section 4.

3. Basic notions and notations

3.1. Bipartite states

Let $\{|\phi_i\rangle\}_{i=1}^{d_{\mathcal H}}$ (resp. $\{|\psi_j\rangle\}_{j=1}^{d_{\mathcal K}}$) be a fixed orthonormal basis for the Hilbert space $\mathcal H$ (resp. $\mathcal K$). The choice of orthonormal bases induces a bijective correspondence between linear operators from $\mathcal K$ to $\mathcal H$ and bipartite vectors in $\mathcal H \otimes \mathcal K$. Following the "double-ket" notation of Ref. [8], if $A \in \mathsf{Lin}(\mathcal K, \mathcal H)$ is a linear operator from $\mathcal K$ to $\mathcal H$ we define the bipartite vector $|A\rangle\rangle \in \mathcal H \otimes \mathcal K$ as
$$|A\rangle\rangle := \sum_{i=1}^{d_{\mathcal H}} \sum_{j=1}^{d_{\mathcal K}} \langle \phi_i | A | \psi_j \rangle\, |\phi_i\rangle |\psi_j\rangle. \qquad (3)$$
Two useful properties of this correspondence are given by the equations
$$\langle\langle A | B \rangle\rangle = \mathrm{Tr}[A^\dagger B] \qquad \forall A, B \in \mathsf{Lin}(\mathcal K, \mathcal H) \qquad (4)$$
$$|A\rangle\rangle = (A \otimes I_{\mathcal K})\, |I_{\mathcal K}\rangle\rangle = (I_{\mathcal H} \otimes A^T)\, |I_{\mathcal H}\rangle\rangle \qquad \forall A \in \mathsf{Lin}(\mathcal K, \mathcal H), \qquad (5)$$
where $A^T$ denotes the transpose of $A$ with respect to the fixed bases. Using the singular value decomposition of $A$, it is easy to see that the state $|A\rangle\rangle$ is of the product form $|A\rangle\rangle = |\phi\rangle|\psi\rangle$ if and only if $A$ is rank-one. Moreover, if the dimension of $\mathcal K$ is larger than $d_{\mathcal H}$, then for every state $|A\rangle\rangle$ there exists a $d_{\mathcal H}$-dimensional subspace $\mathcal K_A$ such that $|A\rangle\rangle \in \mathcal H \otimes \mathcal K_A$. This is clear from Eq. (5): the dimension of the image of $A^T$ cannot exceed the dimension of its domain $\mathcal H$.

3.2. Group representations

The unknown transformation in our estimation problem belongs to a group representation $\{\mathcal U_g\}_{g \in G}$, where each map $\mathcal U_g$ is a completely positive trace-preserving map sending operators in $\mathsf{Lin}(\mathcal H)$ to operators in $\mathsf{Lin}(\mathcal H)$.
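The double-ket correspondence of paragraph 3.1 amounts to row-major vectorization of the matrix of $A$, and Eqs. (4) and (5) can be verified directly (an editorial numerical sketch, not part of the original paper):

```python
import numpy as np

def double_ket(A):
    """Row-major vectorization: |A>> = sum_ij <i|A|j> |i>|j>, as in Eq. (3)."""
    return A.reshape(-1)

rng = np.random.default_rng(0)
dH, dK = 3, 2
A = rng.normal(size=(dH, dK)) + 1j * rng.normal(size=(dH, dK))
B = rng.normal(size=(dH, dK)) + 1j * rng.normal(size=(dH, dK))

# Eq. (4): <<A|B>> = Tr[A† B]
lhs4 = np.vdot(double_ket(A), double_ket(B))
rhs4 = np.trace(A.conj().T @ B)

# Eq. (5): |A>> = (A ⊗ I_K)|I_K>>  =  (I_H ⊗ A^T)|I_H>>
ok5a = np.allclose(double_ket(A),
                   np.kron(A, np.eye(dK)) @ double_ket(np.eye(dK)))
ok5b = np.allclose(double_ket(A),
                   np.kron(np.eye(dH), A.T) @ double_ket(np.eye(dH)))
print(np.isclose(lhs4, rhs4), ok5a, ok5b)  # True True True
```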
Now, the condition $\mathcal U_{g^{-1}}\, \mathcal U_g = \mathcal I_{\mathcal H}$ implies that the map $\mathcal U_g$ must have the form $\mathcal U_g(\rho) = U_g\, \rho\, U_g^\dagger$, where $U_g \in \mathsf{Lin}(\mathcal H)$ is a unitary matrix. In particular, for $g = e$ we can choose $U_e = I_{\mathcal H}$. Moreover, the condition $\mathcal U_g\, \mathcal U_h = \mathcal U_{gh}$, $\forall g, h \in G$, implies that the unitaries $\{U_g\}_{g \in G}$ define a projective unitary representation, that is, a function $U : g \mapsto U_g$ sending elements of the group to unitary operators and satisfying the relation $U_g U_h = \omega(g,h)\, U_{gh}$, $\forall g, h \in G$, where $\omega(g,h) \in \mathbb C$ is a multiplier, satisfying the properties
$$|\omega(g,h)| = 1 \qquad \forall g, h \in G$$
$$\omega(g,h)\,\omega(gh,k) = \omega(g,hk)\,\omega(h,k) \qquad \forall g, h, k \in G$$
$$\omega(g,e) = \omega(e,g) = 1 \qquad \forall g \in G.$$
Since the group $G$ is compact and the space $\mathcal H$ is finite dimensional, with a suitable choice of basis the Hilbert space can be decomposed as
$$\mathcal H = \bigoplus_{\mu \in \mathrm{Irr}(U)} \mathcal H_\mu \otimes \mathbb C^{m_\mu}, \qquad (6)$$
where the sum runs over the set $\mathrm{Irr}(U)$ of all irreducible representations contained in the isotypic decomposition of $U$, $\mathcal H_\mu$ is a representation space of dimension $d_\mu$, carrying the irreducible representation $U^\mu$, and $\mathbb C^{m_\mu}$ is a multiplicity space, where $m_\mu$ is the multiplicity of the irreducible representation $U^\mu$ in the decomposition of $U$. Accordingly, the representation $U$ can be written in the block diagonal form
$$U = \bigoplus_{\mu \in \mathrm{Irr}(U)} U^\mu \otimes I_{m_\mu}. \qquad (7)$$
Note that all irreducible representations $U^\mu$ must be projective unitary representations and must have the same multiplier $\omega$.

In the optimization of the estimation strategy $(\rho, P)$ we can take the multiplicities $m_\mu$ as large as we want. Indeed, introducing an ancillary system $\mathcal K$ means replacing the Hilbert space $\mathcal H$ with the tensor product $\mathcal H' := \mathcal H \otimes \mathcal K$ and replacing the unitary $U_g$ with the unitary $U'_g := U_g \otimes I_{\mathcal K}$ for every $g \in G$. In Eqs. (6) and (7) this means replacing the multiplicity space $\mathbb C^{m_\mu}$ with $\mathbb C^{m_\mu} \otimes \mathcal K$ and the multiplicity $m_\mu$ with $m'_\mu := m_\mu d_{\mathcal K}$. In particular, we are free to choose the ancillary system $\mathcal K$ so that the condition $m_\mu \geq d_\mu$ is met for every $\mu \in \mathrm{Irr}(U)$.

3.3. Bipartite states and group representations

Here we combine the facts observed in paragraphs 3.1 and 3.2.
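As a concrete instance of the multiplier structure described in paragraph 3.2 (an editorial sketch, not from the original paper), the dense-coding unitaries $U_{(a,b)} = Z^a X^b$ form a projective representation of the Klein group $\mathbb Z_2 \times \mathbb Z_2$ with a genuinely non-trivial multiplier:

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Klein group Z2 x Z2 with elements (a, b), represented by U_(a,b) = Z^a X^b.
group = list(product((0, 1), repeat=2))
U = {(a, b): np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b)
     for a, b in group}
mul = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

# Multiplier omega(g,h), defined by U_g U_h = omega(g,h) U_gh;
# extracted as omega = Tr[U_gh† U_g U_h] / 2.
omega = {(g, h): np.trace(U[mul(g, h)].conj().T @ U[g] @ U[h]) / 2
         for g in group for h in group}

all_unimodular = all(np.isclose(abs(w), 1) for w in omega.values())
projective = any(np.isclose(w, -1) for w in omega.values())
print(all_unimodular, projective)  # True True: phases only, but not all trivial
```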
Let us consider a vector $|\Psi\rangle \in \mathcal H$ in a Hilbert space where the representation $U$ acts. Using the isotypic decomposition of Eq. (6) and the correspondence of Eq. (3) for each tensor product $\mathcal H_\mu \otimes \mathbb C^{m_\mu}$, we can write
$$|\Psi\rangle = \bigoplus_{\mu \in \mathrm{Irr}(U)} |\Psi_\mu\rangle\rangle \in \bigoplus_{\mu \in \mathrm{Irr}(U)} \mathcal H_\mu \otimes \mathbb C^{m_\mu},$$
where $\Psi_\mu$ is a linear operator in $\mathsf{Lin}(\mathbb C^{m_\mu}, \mathcal H_\mu)$. In other words, the vector $|\Psi\rangle$ can be written as a linear superposition of bipartite vectors. As we will see in the next sections, the fact that we can have entanglement between each representation space $\mathcal H_\mu$ and the corresponding multiplicity space $\mathbb C^{m_\mu}$ is the key ingredient in the construction of the optimal estimation strategies.

Suppose that $U$ has been chosen so that $m_\mu \geq d_\mu$ for every $\mu \in \mathrm{Irr}(U)$. From the last observation of paragraph 3.1 we know that each vector $|\Psi_\mu\rangle\rangle$ will be contained in a subspace $\mathcal H_\mu \otimes \mathcal K_\mu$, where $\mathcal K_\mu \simeq \mathcal H_\mu$ is a suitable subspace of $\mathbb C^{m_\mu}$. Therefore, every vector $|\Psi\rangle$ belongs to a suitable subspace $\widetilde{\mathcal H} \simeq \bigoplus_{\mu \in \mathrm{Irr}(U)} \mathcal H_\mu \otimes \mathcal H_\mu$. Note that the whole orbit $\{|\Psi_g\rangle := U_g |\Psi\rangle\}_{g \in G}$ belongs to the subspace $\widetilde{\mathcal H}$. In conclusion, as long as we are interested in pure input states we can effectively replace $\mathcal H$ with the space $\widetilde{\mathcal H} := \bigoplus_{\mu \in \mathrm{Irr}(U)} \mathcal H_\mu \otimes \mathcal H_\mu$, where $m_\mu = d_\mu$ for every $\mu \in \mathrm{Irr}(U)$, and we can replace the representation $U$ with the representation $\widetilde U$ defined by $\widetilde U_g := \bigoplus_{\mu \in \mathrm{Irr}(U)} U^\mu_g \otimes I_\mu$, where $I_\mu$ denotes the identity on $\mathcal H_\mu$.

3.4. The class states

An important family of states in $\widetilde{\mathcal H} = \bigoplus_{\mu \in \mathrm{Irr}(U)} \mathcal H_\mu \otimes \mathcal H_\mu$ is the family of states of the form
$$|\Phi\rangle = \bigoplus_{\mu \in \mathrm{Irr}(U)} \frac{c_\mu}{\sqrt{d_\mu}}\, |I_\mu\rangle\rangle, \qquad c_\mu \in \mathbb C, \quad \sum_{\mu \in \mathrm{Irr}(U)} |c_\mu|^2 = 1. \qquad (8)$$
These states are linear superpositions of maximally entangled states in the sectors $\mathcal H_\mu \otimes \mathcal H_\mu$. In the following I will call these states class states. The reason for the name comes from the fact that these states correspond to class functions (see definition below) through the Fourier–Plancherel theory (see e.g. [9]). The remaining part of this paragraph is aimed at making this correspondence clear.
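The Fourier–Plancherel correspondence invoked here rests on the orthonormality of the rescaled irreducible matrix elements (the Peter–Weyl theorem). For a finite abelian group all irreps are one-dimensional characters, so the orthonormality can be checked directly; a sketch for $G = \mathbb Z_5$ with trivial multiplier (an editorial addition, not part of the original paper):

```python
import numpy as np

# For the cyclic group Z_N all irreps are one-dimensional characters
# u^k(g) = exp(2*pi*i*k*g/N), so the rescaled matrix elements (d_mu = 1)
# are the characters themselves.  Check that
# <u^k, u^l> = (1/N) sum_g u^k(g)* u^l(g) = delta_kl.
N = 5
g = np.arange(N)
u = np.exp(2j * np.pi * np.outer(np.arange(N), g) / N)  # row k holds u^k(g)

# Scalar product with the normalized Haar (counting) measure dg = 1/N.
gram = (u.conj() @ u.T) / N
print(np.allclose(gram, np.eye(N)))  # True: the characters are orthonormal
```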
Let us start from Fourier–Plancherel theory. Denote by $\mathrm{Irr}(G,\omega)$ the set of all irreducible representations of $G$ with multiplier $\omega$, and denote by $u^\mu_{ij}(g) := \langle \mu, i | U^\mu_g | \mu, j \rangle$ the matrix element of the operator $U^\mu_g$ with respect to a fixed orthonormal basis $\{|\mu, i\rangle\}_{i=1}^{d_\mu}$ for the representation space $\mathcal H_\mu$. Recall that the functions $\{\tilde u^{\mu*}_{ij}(g) := \sqrt{d_\mu}\, u^{\mu*}_{ij}(g)\}_{\mu \in \mathrm{Irr}(G,\omega);\, i,j = 1,\dots,d_\mu}$ are an orthonormal basis for the Hilbert space $L_2(G, dg)$ endowed with the scalar product $\langle f_1, f_2 \rangle := \int_G dg\, f_1^*(g)\, f_2(g)$. Using this fact, we can expand every function $f \in L_2(G, dg)$ as
$$|f\rangle = \bigoplus_{\mu \in \mathrm{Irr}(G,\omega)} \sum_{i,j=1}^{d_\mu} f^\mu_{ij}\, |\tilde u^{\mu*}_{ij}\rangle. \qquad (9)$$
On the Hilbert space $L_2(G, dg)$ we consider the action of $G$ given by the left-regular representation $T$ with multiplier $\omega$, defined by $(T_g f)(h) := \omega(g, g^{-1}h)\, f(g^{-1}h)$, $\forall f \in L_2(G, dg)$, $\forall g, h \in G$ [10]. In particular, the transformation of the functions $\tilde u^{\mu*}_{ij}(g)$ is given by
$$T_g\, |\tilde u^{\mu*}_{ij}\rangle = \sum_{k=1}^{d_\mu} u^\mu_{ki}(g)\, |\tilde u^{\mu*}_{kj}\rangle, \qquad (10)$$
that is, for every fixed value of $j$ the vectors $\{|\tilde u^{\mu*}_{ij}\rangle\}_{i=1}^{d_\mu}$ span an invariant subspace carrying the irreducible representation $U^\mu$. It is then easy to see that the unitary operator $V : L_2(G, dg) \to \bigoplus_{\mu \in \mathrm{Irr}(G,\omega)} \mathcal H_\mu \otimes \mathcal H_\mu$ defined by
$$V\, |\tilde u^{\mu*}_{ij}\rangle := |\mu, i\rangle |\mu, j\rangle \qquad \forall \mu \in \mathrm{Irr}(G,\omega),\ \forall i,j = 1,\dots,d_\mu \qquad (11)$$
intertwines the two representations $T_g$ and $W_g = \bigoplus_{\mu \in \mathrm{Irr}(G,\omega)} U^\mu_g \otimes I_\mu$: indeed, using Eqs. (10) and (11) one has $V T_g |\tilde u^{\mu*}_{ij}\rangle = U^\mu_g |\mu, i\rangle |\mu, j\rangle = (U^\mu_g \otimes I_\mu)\, V |\tilde u^{\mu*}_{ij}\rangle$ for every $\mu \in \mathrm{Irr}(G,\omega)$, $\forall i,j = 1,\dots,d_\mu$, and therefore $V T_g = W_g V$. Applying the unitary $V$ to the expansion of Eq. (9) we then obtain
$$V|f\rangle = \bigoplus_{\mu \in \mathrm{Irr}(G,\omega)} \sum_{i,j=1}^{d_\mu} f^\mu_{ij}\, |\mu, i\rangle |\mu, j\rangle.$$
Let us come now to class functions. A class function is a function that is constant on conjugacy classes, that is, $f(hgh^{-1}) = f(g)$ for every $g, h \in G$. Any class function can be written as a linear combination of irreducible characters, namely $f(g) = \sum_{\mu \in \mathrm{Irr}(G)} f_\mu\, \chi^*_\mu(g)$, where $\chi^*_\mu(g) = \mathrm{Tr}[U^{\mu*}_g]$. Exploiting the correspondence given by the unitary $V$ we obtain
$$V|f\rangle = \bigoplus_{\mu \in \mathrm{Irr}(G,\omega)} \frac{f_\mu}{\sqrt{d_\mu}}\, |I_\mu\rangle\rangle.$$
It is now clear that the class states of Eq.
(8) are nothing but the projection of the class functions onto the finite dimensional subspace $\widetilde{\mathcal H} \subset \bigoplus_{\mu \in \mathrm{Irr}(G,\omega)} \mathcal H_\mu \otimes \mathcal H_\mu$.

Besides providing a justification for the choice of the name class states, the Fourier–Plancherel theory also provides a hint of the fact that the class states are optimal for estimation. Indeed, if we take the scalar product of two states in the orbit $\{|\Phi_g\rangle := \widetilde U_g |\Phi\rangle\}$ we obtain $\langle \Phi_g | \Phi_h \rangle = \sum_{\mu \in \mathrm{Irr}(U)} \frac{|c_\mu|^2}{d_\mu}\, \chi_\mu(g^{-1}h)$. If $c_\mu$ is chosen so that $c_\mu = \lambda d_\mu$ for some constant $\lambda \in \mathbb C$ we then obtain $\langle \Phi_g | \Phi_h \rangle \propto \sum_{\mu \in \mathrm{Irr}(U)} d_\mu\, \chi_\mu(g^{-1}h)$, which is nothing but the truncated version of the Dirac delta $\delta(g,h) = \sum_{\mu \in \mathrm{Irr}(G,\omega)} d_\mu\, \chi_\mu(g^{-1}h)$. This provides a heuristic argument for the optimality of the class states, although the choice of the coefficients $\{c_\mu\}$ providing the "best" finite dimensional approximation of the Dirac delta will depend on the choice of the cost function $c(\hat g, g)$.

3.5. Notations

If $V \in \mathsf{Lin}(\mathcal H)$ is a unitary matrix, the symbol $\mathcal V$ will always denote the completely positive map $\mathcal V(\rho) = V \rho V^\dagger$. Let $\mathcal C : \mathsf{Lin}(\mathcal H) \to \mathsf{Lin}(\mathcal K)$ be a completely positive map and let $\mathcal C(\rho) = \sum_{i=1}^r C_i\, \rho\, C_i^\dagger$, $C_i \in \mathsf{Lin}(\mathcal H, \mathcal K)$, be a Kraus form for $\mathcal C$. The notation $\mathcal C^\dagger$, $\mathcal C^*$, $\mathcal C^T$ will be used for the following maps:
$$\mathcal C^\dagger(\rho) := \sum_{i=1}^r C_i^\dagger\, \rho\, C_i$$
$$\mathcal C^*(\rho) := \sum_{i=1}^r C_i^*\, \rho\, C_i^T$$
$$\mathcal C^T(\rho) := \sum_{i=1}^r C_i^T\, \rho\, C_i^* = (\mathcal C^*)^\dagger(\rho),$$
where the complex conjugation $*$ and the transpose $T$ are defined with respect to the fixed bases for $\mathcal H$ and $\mathcal K$. Note that the map $\mathcal C^\dagger$ is the adjoint of $\mathcal C$ with respect to the Hilbert–Schmidt scalar product $\langle\langle A | B \rangle\rangle = \mathrm{Tr}[A^\dagger B]$, $A, B \in \mathsf{Lin}(\mathcal H)$: indeed, we have $\langle\langle A | \mathcal C(B) \rangle\rangle = \langle\langle \mathcal C^\dagger(A) | B \rangle\rangle$. Note also that the definition of the maps $\mathcal C^\dagger$, $\mathcal C^*$, $\mathcal C^T$ does not depend on the choice of a particular Kraus form.

Consider the Hilbert space $\widetilde{\mathcal H} = \bigoplus_{\mu \in \mathrm{Irr}(U)} \mathcal H_\mu \otimes \mathcal H_\mu$, the representation $\widetilde U = \bigoplus_{\mu \in \mathrm{Irr}(U)} U^\mu \otimes I_\mu$, and let
$$\widetilde U' = \left\{ \bigoplus_{\mu \in \mathrm{Irr}(U)} I_\mu \otimes B_\mu \ \middle|\ B_\mu \in \mathsf{Lin}(\mathcal H_\mu) \right\}$$
$$\widetilde U'' = \left\{ \bigoplus_{\mu \in \mathrm{Irr}(U)} A_\mu \otimes I_\mu \ \middle|\ A_\mu \in \mathsf{Lin}(\mathcal H_\mu) \right\}$$
be the commutant and the bicommutant of $\widetilde U$, respectively. Every class state in Eq.
(8) then defines a modular conjugation: for every operator $A = \bigoplus_{\mu \in \mathrm{Irr}(U)} A_\mu \otimes I_\mu \in \widetilde U''$ I will denote by $A^R \in \widetilde U'$ its modular conjugate, given by
$$A^R := \bigoplus_{\mu \in \mathrm{Irr}(U)} I_\mu \otimes A^*_\mu. \qquad (12)$$
Every class state $|\Phi\rangle \in \widetilde{\mathcal H}$ then enjoys the symmetry
$$\widetilde U_g\, \widetilde U^R_g\, |\Phi\rangle = |\Phi\rangle \qquad \forall g \in G, \qquad (13)$$
as can be easily verified using Eq. (5) for every $\mu \in \mathrm{Irr}(U)$. The modular conjugate representation will be denoted by $\widetilde U^R$. For a completely positive map $\mathcal C(\rho) = \sum_{i=1}^r C_i\, \rho\, C_i^\dagger$ with $C_i \in \widetilde U''$ for every $i = 1,\dots,r$, we will denote by $\mathcal C^R$ the completely positive map $\mathcal C^R(\rho) := \sum_{i=1}^r C_i^R\, \rho\, C_i^{R\dagger}$. Note that in general we have
$$\mathcal C^\dagger(|\Phi\rangle\langle\Phi|) = \mathcal C^R(|\Phi\rangle\langle\Phi|). \qquad (14)$$
Introducing the swap map $\mathcal S$, defined by $\mathcal S\left( \bigoplus_{\mu \in \mathrm{Irr}(U)} A_\mu \otimes B_\mu \right) := \bigoplus_{\mu \in \mathrm{Irr}(U)} B_\mu \otimes A_\mu$, we obtain the relations
$$\mathcal S(A) = \left( A^R \right)^* \qquad \forall A \in \widetilde U'' \qquad (15)$$
$$\mathcal C = \mathcal S\, \mathcal C^{R*}\, \mathcal S \qquad \forall \mathcal C : \mathcal C(\rho) = \sum_{i=1}^r C_i\, \rho\, C_i^\dagger,\ C_i \in \widetilde U''\ \forall i = 1,\dots,r. \qquad (16)$$

4. Optimal estimation

4.1. Left-invariant cost functions: optimality of covariant measurements

In this paragraph I will briefly review two classic results about the structure of covariant measurements and about their optimality. The proofs of these results can be found in the monographs [11, 12, 13]. Let us start from the definition:

Definition 1 A POVM $P : \sigma(G) \to \mathsf{Lin}_+(\mathcal H)$ is covariant with respect to the representation $\{U_g\}_{g \in G}$ if $P_{gB} = \mathcal U_g(P_B)$, $\forall g \in G$, $\forall B \in \sigma(G)$.

Covariant POVMs have a very simple structure:

Theorem 1 (Structure of covariant POVMs) Let $G$ be a compact group and $U$ a projective unitary representation of $G$ on the Hilbert space $\mathcal H$. A POVM $P : \sigma(G) \to \mathsf{Lin}_+(\mathcal H)$ is covariant with respect to $U$ if and only if it has the form
$$P(d\hat g) = \mathcal U_{\hat g}(\xi)\, d\hat g, \qquad (17)$$
where $d\hat g$ is the normalized Haar measure and $\xi \in \mathsf{Lin}_+(\mathcal H)$ is a suitable operator, called the seed of the covariant POVM.
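Theorem 1 can be illustrated on the dense-coding example (an editorial numerical sketch, not from the original paper): with seed $\xi = |G|\, |\Phi\rangle\langle\Phi|$ the covariant POVM of Eq. (17), with $d\hat g = 1/|G|$, is exactly the Bell measurement, and both the normalization and the covariance condition of Definition 1 hold:

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
group = list(product((0, 1), repeat=2))
# Representation U_g ⊗ I on H ⊗ K, with U_(a,b) = Z^a X^b.
U = {(a, b): np.kron(np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b),
                     np.eye(2)) for a, b in group}
mul = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

phi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
xi = 4 * np.outer(phi, phi.conj())   # seed: |G| times the projector on |Phi>

# Covariant POVM of Eq. (17): P_g = U_g xi U_g† / |G|.
P = {g: U[g] @ xi @ U[g].conj().T / 4 for g in group}

normalized = np.allclose(sum(P.values()), np.eye(4))   # sum_g P_g = I
covariant = all(np.allclose(P[mul(g, h)], U[g] @ P[h] @ U[g].conj().T)
                for g in group for h in group)         # P_gh = U_g(P_h)
print(normalized, covariant)  # True True
```

The multiplier phases drop out of the covariance condition because the seed enters only through conjugation.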
The special form of covariant POVMs provides a great simplification in optimization problems, because it reduces the optimization of the whole POVM to the optimization of a single non-negative operator $\xi \in \mathsf{Lin}_+(\mathcal H)$. Luckily, this simplification can be made without loss of generality in most situations. Precisely, covariant measurements are optimal whenever the cost function $c(\hat g, g)$ is left-invariant, that is, $c(h\hat g, hg) = c(\hat g, g)$, $\forall g, h \in G$.

Theorem 2 (Optimality of covariant POVMs) Let $G$ be a compact group, $\{U_g\}_{g \in G}$ a projective unitary representation of $G$ on the Hilbert space $\mathcal H$, and $c(\hat g, g)$ a left-invariant cost function. Then, for every input state $\rho \in \mathsf{St}(\mathcal H)$ the optimal POVM $P$ for both the worst-case and the uniform average approach can be assumed without loss of generality to be covariant. With this choice one has $c_{\mathrm{ave}}(\rho, P) = c_{\mathrm{wc}}(\rho, P) = c(\rho, P|g)$, $\forall g \in G$.

4.2. Right-invariant cost functions: optimality of class states

In this paragraph I illustrate with a new proof a recent result on the optimality of class states [14]. This result is dual to the classic result on the optimality of covariant POVMs shown in the previous paragraph.

A central feature of quantum theory is the validity of the purification principle [15]: for every mixed state $\rho \in \mathsf{St}(\mathcal H)$ there exists a Hilbert space $\mathcal K$ and a pure state $|\Psi\rangle\rangle \in \mathcal H \otimes \mathcal K$ such that $\rho = \mathrm{Tr}_{\mathcal K}[\,|\Psi\rangle\rangle\langle\langle\Psi|\,]$. The pure state $|\Psi\rangle\rangle$ is referred to as a purification of $\rho$. From Eq. (5) we have $\rho = \Psi \Psi^\dagger$ and $\mathrm{Supp}(\rho) = \mathrm{Rng}(\Psi)$, where $\mathrm{Supp}$ and $\mathrm{Rng}$ denote the support and the range, respectively. The simplest example of a purification of a state $\rho \in \mathsf{St}(\mathcal H)$ is the square-root purification $|\rho^{1/2}\rangle\rangle \in \mathcal H \otimes \mathcal H'$, where $\mathcal H' \simeq \mathcal H$.

A large number of quantum features, such as teleportation and no-cloning, are simple consequences of the purification principle [15]. The structure of the optimal states for the estimation of group parameters is no exception to that. In the following I will give a simple proof of the optimality of the states in Eq.
(8) based on the purification principle. To this purpose I will use a remarkable consequence of purification, namely the fact, first noticed by Schrödinger [16], that every ensemble decomposition of a state can be induced from the purification via a quantum measurement on the purifying system:

Lemma 1 Let $(X, \sigma(X))$ be a measurable space and let $\rho(dx) : \sigma(X) \to \mathsf{Lin}_+(\mathcal H)$, $B \mapsto \rho_B$ be an ensemble, that is, an operator-valued measure such that $\rho_X \in \mathsf{St}(\mathcal H)$. If $|\Psi\rangle\rangle \in \mathcal H \otimes \mathcal K$ is a purification of $\rho_X$ then there exists a POVM $P : \sigma(X) \to \mathsf{Lin}_+(\mathcal K)$, $B \in \sigma(X) \mapsto P_B$, such that
$$\rho_B = \mathrm{Tr}_{\mathcal K}[(I_{\mathcal H} \otimes P_B)\, |\Psi\rangle\rangle\langle\langle\Psi|\,] \qquad \forall B \in \sigma(X). \qquad (18)$$

Proof. Let $\Psi^{-1}$ be the inverse of $\Psi$ on its support and define the POVM $P$ via $P_B := \left[ \Psi^{-1} \rho_B (\Psi^\dagger)^{-1} \right]^T$. The POVM $P$ provides a resolution of the projector on the range of $\Psi^T$ and can be easily extended to a resolution of the identity on $\mathcal K$. Using the relation $\mathrm{Tr}_{\mathcal K}[(I_{\mathcal H} \otimes P_B)\, |\Psi\rangle\rangle\langle\langle\Psi|\,] = \Psi\, P_B^T\, \Psi^\dagger$, which follows from Eq. (5), together with the identity $\Psi \Psi^{-1} = \Pi_{\mathrm{Supp}(\rho_X)}$, where $\Pi_{\mathrm{Supp}(\rho_X)}$ is the projector on the support of $\rho_X$, we have
$$\mathrm{Tr}_{\mathcal K}[(I_{\mathcal H} \otimes P_B)\, |\Psi\rangle\rangle\langle\langle\Psi|\,] = \Psi\, \Psi^{-1} \rho_B (\Psi^\dagger)^{-1}\, \Psi^\dagger = \Pi_{\mathrm{Supp}(\rho_X)}\, \rho_B\, \Pi_{\mathrm{Supp}(\rho_X)} = \rho_B. \qquad \blacksquare$$

Suppose that the cost function $c(\hat g, g)$ is right-invariant, that is, $c(\hat g h, gh) = c(\hat g, g)$, $\forall \hat g, g, h \in G$. Then we have the following

Theorem 3 (Optimality of the purification of invariant states) Let $G$ be a compact group, $U$ a projective unitary representation of $G$ on the Hilbert space $\mathcal H$, and $c(\hat g, g)$ a right-invariant cost function. Then, the optimal input state for both the worst-case and the uniform average approach can be assumed without loss of generality to be the purification of an invariant state $\rho \in U'$. Denoting by $|\Psi\rangle \in \mathcal H \otimes \mathcal L$ such a purification, there exists an optimal POVM $P$ on $\mathcal H \otimes \mathcal L$ such that $c_{\mathrm{ave}}(\Psi, P) = c_{\mathrm{wc}}(\Psi, P) = c(\Psi, P|g)$, $\forall g \in G$.

Proof. Suppose that $\sigma \in \mathsf{St}(\mathcal H \otimes \mathcal K)$ is an optimal input state and that $Q(d\hat g)$ is an optimal POVM for the average approach (resp. for the worst-case approach).
We now prove that there is a Hilbert space $\mathcal L$ and another state $|\Psi\rangle \in \mathcal H \otimes \mathcal L$ that is the purification of an invariant state $\rho \in U'$ and is optimal as well. Consider the orbit $\{\sigma_h := (\mathcal U_h \otimes \mathcal I_{\mathcal K})(\sigma)\}_{h \in G}$ and the operator-valued measure $\sigma(dh) = \sigma_h\, dh$. Clearly the state $\sigma_G = \int_G dh\, \sigma_h$ is invariant under $\mathcal U \otimes \mathcal I_{\mathcal K}$. Let $|\Psi\rangle \in \mathcal H \otimes \mathcal K \otimes \mathcal H' \otimes \mathcal K'$, with $\mathcal H' \simeq \mathcal H$, $\mathcal K' \simeq \mathcal K$, be the square-root purification of $\sigma_G$ and define $\mathcal L := \mathcal K \otimes \mathcal H' \otimes \mathcal K'$. Note that $|\Psi\rangle \in \mathcal H \otimes \mathcal L$ is also a purification of the invariant state $\rho := \mathrm{Tr}_{\mathcal K}[\sigma_G] \in U'$. We now have to show that $|\Psi\rangle$ is optimal for the uniform average (resp. worst-case) approach. By Lemma 1, we know that there is a POVM $M_h\, dh$ on $\mathcal H' \otimes \mathcal K'$ such that $\sigma_h = \mathrm{Tr}_{\mathcal H' \otimes \mathcal K'}[(I_{\mathcal H} \otimes I_{\mathcal K} \otimes M_h)\, |\Psi\rangle\langle\Psi|\,]$, $\forall h \in G$. Define now the POVM $P(d\hat g) := \int_G dh\, Q(d(\hat g h)) \otimes M_h$. It is easy to see that the average cost of the strategy $(\Psi, P)$ is equal to the average cost of the strategy $(\sigma, Q)$ (in the following chains $|\Psi_g\rangle := (U_g \otimes I_{\mathcal L})|\Psi\rangle$ and, with a slight abuse of notation, $U_g$ stands for $U_g \otimes I_{\mathcal K}$ inside the traces):
$$c_{\mathrm{ave}}(\Psi, P) = \int_G dg \int_G c(\hat g, g)\, \langle \Psi_g | P(d\hat g) | \Psi_g \rangle$$
$$= \int_G dg \int_G \int_G dh\, c(\hat g, g)\, \langle \Psi | \left( U_g^\dagger\, Q(d(\hat g h))\, U_g \otimes M_h \right) | \Psi \rangle$$
$$= \int_G dh \int_G dg \int_G c(\hat g, g)\, \mathrm{Tr}[U_g^\dagger\, Q(d(\hat g h))\, U_g\, \sigma_h]$$
$$= \int_G dh \int_G d(gh) \int_G c(\hat g h, gh)\, \mathrm{Tr}[Q(d(\hat g h))\, \sigma_{gh}]$$
$$= c_{\mathrm{ave}}(\sigma, Q),$$
having used the right-invariance of the Haar measure and of the cost function. On the other hand, the worst-case cost of the strategy $(\Psi, P)$ cannot be larger than the worst-case cost of the strategy $(\sigma, Q)$:
$$c_{\mathrm{wc}}(\Psi, P) = \max_{g \in G} \int_G c(\hat g, g)\, \langle \Psi_g | P(d\hat g) | \Psi_g \rangle$$
$$= \max_{g \in G} \int_G \int_G dh\, c(\hat g, g)\, \langle \Psi | \left( U_g^\dagger\, Q(d(\hat g h))\, U_g \otimes M_h \right) | \Psi \rangle$$
$$= \max_{g \in G} \int_G \int_G dh\, c(\hat g, g)\, \mathrm{Tr}[U_g^\dagger\, Q(d(\hat g h))\, U_g\, \sigma_h]$$
$$= \max_{g \in G} \int_G \int_G dh\, c(\hat g h, gh)\, \mathrm{Tr}[Q(d(\hat g h))\, \sigma_{gh}]$$
$$= \max_{g \in G} \int_G dh\, c(\sigma, Q | gh)$$
$$\leq \int_G dh\, \max_{g \in G} c(\sigma, Q | gh)$$
$$= c_{\mathrm{wc}}(\sigma, Q).$$
Hence, we proved that, both for the average and for the worst-case approach, whichever the optimal strategy $(\sigma, Q)$ is, the strategy $(\Psi, P)$ cannot be worse. Finally, we observe that for the strategy $(\Psi, P)$ we have $c(\Psi, P|g) = \int_G dh \int_G c(\hat g h, gh)\, \mathrm{Tr}[Q(d(\hat g h))\, \sigma_{gh}] = \int_G dh\, c(\sigma, Q | gh)$.
This proves that $c(\Psi, P|g)$ is independent of $g \in G$, whence $c(\Psi, P|g) = c_{\mathrm{ave}}(\Psi, P) = c_{\mathrm{wc}}(\Psi, P)$. $\blacksquare$

Theorem 4 (Optimality of the class states) Let $G$ be a compact group, $U$ a projective unitary representation of $G$ on the Hilbert space $\mathcal H$ satisfying the property $d_\mu \leq m_\mu$, $\forall \mu \in \mathrm{Irr}(U)$, and $c(\hat g, g)$ a right-invariant cost function. Then, the optimal input state for both the worst-case and the uniform average approach can be assumed without loss of generality to be a class state of the form
$$|\Phi\rangle = \bigoplus_{\mu \in \mathrm{Irr}(U)} \sqrt{\frac{p_\mu}{d_\mu}}\, |I_\mu\rangle\rangle \in \widetilde{\mathcal H} = \bigoplus_{\mu \in \mathrm{Irr}(U)} \mathcal H_\mu \otimes \mathcal H_\mu \subseteq \mathcal H, \qquad (19)$$
where $p_\mu \geq 0$ and $\sum_\mu p_\mu = 1$.
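As a closing illustration (an editorial addition, not part of the original paper), the square-root purification used in the proof of Theorem 3 can be checked numerically: for a generic mixed state $\rho$, the double-ket of $\rho^{1/2}$ purifies $\rho$, i.e. tracing out the purifying system $\mathcal H'$ returns $\rho = \Psi\Psi^\dagger$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)          # a generic full-rank mixed state on H

# Square-root purification |rho^(1/2)>> in H ⊗ H' (row-major double-ket).
w, V = np.linalg.eigh(rho)
Psi = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T   # rho^(1/2)
ket = Psi.reshape(-1)

# Partial trace over H': contract the H' index of |Psi>><<Psi|.
M = ket.reshape(d, d)
rho_back = np.einsum('ij,kj->ik', M, M.conj())   # equals Psi @ Psi†
print(np.allclose(rho_back, rho))  # True: the double-ket of rho^(1/2) purifies rho
```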
