Analysis of energy transfer in quantum networks using kinetic network approximations

David K. Moser^{1,2}
1: Department of Mathematics, 2: Department of Physics
Northeastern University, Boston MA 02115
[email protected]

January 30, 2012

Abstract

Coherent energy transfer in pigment-protein complexes has been studied by mapping the quantum network to a kinetic network. This gives an analytic way to find parameter values for optimal transfer efficiency. In the case of the Fenna-Matthews-Olson (FMO) complex, the comparison of quantum and kinetic network evolution shows that dephasing-assisted energy transfer is driven by the two-site coherent interaction, and not system-wide coherence. Using the Schur complement, we find a new kinetic network that gives a closer approximation to the quantum network by including all multi-site coherence contributions. Our new network approximation can be expanded as a series with contributions representing different numbers of coherently interacting sites.

For both kinetic networks we study the system relaxation time, the time it takes for the excitation to spread throughout the complex. We make mathematically rigorous estimates of the relaxation time when comparing the kinetic and quantum networks. Numerical simulations comparing the coherent model and the two kinetic network models confirm our bounds, and show that the relative error of the new kinetic network approximation is several orders of magnitude smaller.

Keywords: exciton transfer, quantum efficiency, kinetic networks, FMO, coherent energy transfer, quantum networks, Schur complement.

Contents

1 Introduction
2 The quantum network
  2.1 Master equation
  2.2 Converting to vector equation
  2.3 The coherent evolution matrix M
3 Kinetic networks
  3.1 Extracting the kinetic network N
  3.2 Expanding N
  3.3 The network N_0
  3.4 Including re-emission, recombination and trapping
  3.5 Numerical simulations
4 Preliminaries
  4.1 Norm
  4.2 Conditions
  4.3 Inverse bounds
  4.4 Spectral properties
5 Bounding relaxation time error
  5.1 Relaxation time
  5.2 Resolvent difference
  5.3 Comparing the relaxation time of M and N
  5.4 Comparing the relaxation time of N and N_0
6 Bounding evolution error
  6.1 Comparing the evolution of M and N
  6.2 Comparing the evolution of N and N_0
7 Applications
  7.1 Highly connected network
  7.2 Linear network
  7.3 The FMO-complex
8 Resolvent difference bounds
  8.1 Bounds in the right half plane
  8.2 Bounds parallel to the imaginary axis
9 Conclusion
10 Acknowledgments
A Three sites
B General construction
  B.1 Constructing $\tilde a$ and $\tilde b_0$
  B.2 Constructing $\tilde\nu$
C Calculations for applications
  C.1 Highly connected network
  C.2 Circular chain
References

1 Introduction

Since coherent energy transfer in the Fenna-Matthews-Olson complex (FMO) has been observed [6, 9, 13], extensive experimental and theoretical research has been dedicated to studying coherent resonant transfer [5] and the coherent pigment-protein interaction [12, 8]. In particular, numerical solutions of simple models have shown that dephasing – the destruction of the coherences – at an intermediate rate helps to increase the energy transfer efficiency [10, 11]. This has been called dephasing- or environment-assisted energy transfer, and is analogous to a critically damped oscillator. The dephasing corresponds to damping and causes the exciton to relax to an equal distribution over the pigment sites instead of staying localized due to the energy mismatch between the sites.

The models are based on two assumptions. First, only a single exciton is present, and it is located at any of the seven pigments. The pigment exciton energies and the pigment dipole-dipole interactions [4, 1] then lead to an oscillatory evolution of the system. Second, the site-environment interactions are assumed to be purely Markovian, without any temporal or spatial correlations. The environment interactions are dephasing, recombination and trapping. Dephasing destroys the site coherences without destroying the exciton itself, while phonon recombination or photon re-emission lead to loss of the exciton to the environment. Trapping is the transfer of the exciton to the reaction center, where the electronic energy is converted to chemical energy; in FMO it occurs at pigment 3. The transfer efficiency is the probability that an exciton starting at site 1 or site 6 reaches the reaction center.

For a general system with $n$ pigments, we convert the master equation of the coherent model into vector form
$$\dot{\vec\rho}=M\vec\rho$$
where $\vec\rho\in\mathbb{R}^{n^2}$ is the density matrix in vector form and $M$ is a real $n^2\times n^2$ matrix. Two procedures to find $M$ are presented in 2.2 and 3.5.

[Figure 1: The efficiency of energy transfer in the FMO monomer. Model parameters are described in 7.3. Legend: quantum, kinetic $N_0$, kinetic $N$; vertical axes $f$ and $\log_{10}\Delta f$; horizontal axis $\log_{10}\gamma$ (cm$^{-1}$).]
To study population transfer channels and conditions for optimal transfer, a mapping to kinetic networks has been proposed [3, 7]. A kinetic network is a system where the exciton jumps incoherently between sites according to some fixed rates, i.e. a continuous-time Markov process. In its simplest version this approximation only takes into account the coherent interaction between pairs of sites to derive the transfer rate between them. If the two sites interact with strength $V$, have an energy separation $E$, and both sites experience dephasing at rate $\gamma$ and population loss at rate $\kappa$, then the rate is
$$\mu=\frac{2|V|^2(\gamma+\kappa)}{(\gamma+\kappa)^2+E^2}. \qquad (1)$$
This rate is maximized for the intermediate dephasing rate $\gamma=E-\kappa$, so the phenomenon of dephasing-assisted transfer is maintained in this approximation. For a system with $n$ sites, these rates constitute the off-diagonals of an $n\times n$ rate matrix $N_0$, and the system populations evolve according to
$$\dot{\vec p}=N_0\,\vec p$$
where $\vec p\in\mathbb{R}^n$ is the time-dependent population vector. Figure 1 displays the transfer efficiency for the models $M$ and $N_0$ at different $\gamma$; the dephasing-assisted regime clearly shows as a peak around $\gamma\approx 170\,\mathrm{cm}^{-1}$. At the peak the population evolution of $M$ is well approximated by that of $N_0$, therefore dephasing-assisted energy transfer can be explained by the relatively simple coherent dynamic between pairs of sites that enters the rate $\mu$, and the influence of system-wide coherence is small.
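The following short numerical sketch is illustrative only (it is not part of the analysis, and all parameter values are hypothetical): it evaluates the pairwise rate (1), confirms the maximum at $\gamma=E-\kappa$, and assembles the rate matrix $N_0$ for a small network.

```python
import numpy as np

def pair_rate(V, E, gamma, kappa=0.0):
    """Incoherent transfer rate of Eq. (1) for a single pair of sites."""
    return 2.0 * abs(V) ** 2 * (gamma + kappa) / ((gamma + kappa) ** 2 + E ** 2)

def kinetic_N0(E, V, gamma, kappa=None):
    """Assemble the n x n rate matrix N0: off-diagonal entries are the pairwise
    rates mu_kl, diagonal entries make every column sum to zero
    (minus kappa_k when population loss is included)."""
    n = len(E)
    kappa = np.zeros(n) if kappa is None else np.asarray(kappa, float)
    N0 = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            if k != l:
                g_kl = 0.5 * (gamma[k] + gamma[l])
                k_kl = 0.5 * (kappa[k] + kappa[l])
                N0[k, l] = pair_rate(V[k, l], E[k] - E[l], g_kl, k_kl)
    N0 -= np.diag(N0.sum(axis=0) + kappa)
    return N0

# Two sites (arbitrary units): the rate peaks at the intermediate value gamma = E - kappa.
V, E, kappa = 20.0, 170.0, 1.0
gammas = np.linspace(1.0, 2000.0, 20000)
print(gammas[np.argmax(pair_rate(V, E, gammas, kappa))])   # ~169 = E - kappa

# Loss-free three-site example: columns of N0 sum to zero, populations are conserved.
N0 = kinetic_N0(np.array([0.0, 120.0, 240.0]),
                20.0 * (np.ones((3, 3)) - np.eye(3)),
                np.array([100.0, 100.0, 100.0]))
print(N0.sum(axis=0))                                       # ~[0, 0, 0]
```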
To extract the limit of good approximation we introduce scaling variables: $\Gamma$, which is proportional to the energy separations, dephasing and population loss rates, and $\Theta$, which is proportional to the site interactions. We will show that the approximation of $N_0$ to $M$ becomes good as $\Theta\Gamma^{-1}$ approaches 0. We generalize the procedure of finding a kinetic network approximation in a mathematically appealing way using block matrices. We find a kinetic network matrix $N$ that follows the evolution of $M$ much more closely – it is over three orders of magnitude more precise than the network $N_0$, as shown in Figure 1. Further, it can be expanded in $\Theta\Gamma^{-1}$ as
$$N=\sum_{k=0}^\infty N_k$$
where $N_0$ is the approximation described above, and the $N_k$ are rate corrections due to coherent interactions via $k$ intermediate sites. The expansion terms become smaller with increasing $k$, $N_k\propto\Theta\cdot(\Theta\Gamma^{-1})^k$. By stopping the expansion at a finite $k$, kinetic network approximations of varying accuracy can be formed, allowing the study of coherent interaction at different "scales", i.e. numbers of involved sites. We restrict our further investigation to the dominant contribution $N_0$ and the entire sum $N$.

In our exact bounds we study the system with all population-loss mechanisms removed. Due to dephasing the exciton spreads throughout the system within the exciton relaxation time $\tau$ and all populations become equal. The difference $\Delta\tau$ between the relaxation times of $M$ and $N$ or $N_0$ gives a simple measure of how well the kinetic networks approximate the quantum network. As $\Theta\Gamma^{-1}$ becomes small the kinetic networks approach the quantum network and $\Delta\tau$ becomes small as well.

We define $\tau$ and $\Delta\tau$ as follows, using the Euclidean norm $\|\vec p\|_2=\sqrt{\sum_{i=1}^n p_i^2}$ to compare population vectors.

Definition 1.
1. The map $T:\mathbb{R}^{n^2}\to\mathbb{R}^n$ is the restriction of density vectors $\vec\rho$ to population vectors $\vec p$, and consequently $T^\dagger$ gives the embedding of population vector space in density vector space. In particular, if the first $n$ components of $\vec\rho$ represent the site populations, then $T=(\mathbb{1}_n,\,0_{n\times(n^2-n)})$.
2. The maximum relaxation time is
$$\tau=\max_{\vec p_0}\left\|\int_0^\infty\Big(e^{Nt}\vec p_0-\frac{\vec 1}{n}\Big)\,dt\right\|_2$$
and the corresponding minimal relaxation rate is
$$\mu=\frac{1}{\tau}.$$
3. The maximum deviation of relaxation time between the quantum network $M$ and the kinetic network $N$ is
$$\Delta\tau=\max_{\vec p_0}\left\|\int_0^\infty\big(Te^{Mt}T^\dagger-e^{Nt}\big)\vec p_0\,dt\right\|_2.$$
4. Define $\tau_0$, $\mu_0$ and $\Delta\tau_0$ in the same way, replacing $N$ with $N_0$.

For our bounds we require that every site experiences dephasing. Further, the network has to be connected, meaning that any two sites can exchange populations – directly or indirectly – such that the relaxed state has equal population everywhere. Finally, we also require the site interactions to be real, but it is clear from our proofs that the generalization to complex interactions could be treated in a similar manner.

Our first result shows how fast the relaxation times of the two kinetic networks $N$ and $N_0$ approximate that of the quantum network $M$ as $\Theta\Gamma^{-1}$ gets small.

Theorem 2. There are scaling invariant constants $k_1$ and $k_2$ such that for $\Theta\Gamma^{-1}$ small enough we have the following bounds:
1. The relative difference of relaxation time between the quantum evolution $M$ and the kinetic evolution $N_0$ is bounded by
$$\Delta\tau_{0,\mathrm{rel}}=\Delta\tau_0/\tau_0\le k_1\,\Theta\Gamma^{-1}.$$
2. The relative difference of relaxation time between the quantum evolution $M$ and the kinetic evolution $N$ is bounded by
$$\Delta\tau_{\mathrm{rel}}=\Delta\tau/\tau\le k_2\,\Theta^2\Gamma^{-2}.$$
This theorem follows from Theorem 5 and Corollary 7 in Section 5. We also find the following exponential bounds on the time dependence.

Theorem 3. There are scaling invariant constants $k_3$, $k_4$ and $k_5$ such that for any initial population distribution $\vec p_0$ we have the following bounds, as long as $\Theta\Gamma^{-1}$ is small enough:
1. For all times $t\ge 0$
$$\big\|Te^{Mt}T^\dagger\vec p_0-e^{N_0t}\vec p_0\big\|_2\le k_3\,e^{-\mu_0 t/2}\cdot\Theta\Gamma^{-1}.$$
2. For all times $t\ge 0$
$$\big\|Te^{Mt}T^\dagger\vec p_0-e^{Nt}\vec p_0\big\|_2\le k_4\,e^{-\mu t/2}\cdot\Theta^2\Gamma^{-2}\big(1+k_5\log\Theta\Gamma^{-1}\big).$$
This theorem follows from Theorem 8 and Corollary 10 in Section 6. We expect that more sophisticated methods might yield the same bound without the $\Theta^2\Gamma^{-2}\log\Theta\Gamma^{-1}$ term.

2 The quantum network

We first introduce the master equation for the coherent model. Then we reformulate the equation in vector form and combine the entire dynamic in the real $n^2\times n^2$ matrix $M$. We describe the general structure of $M$ as a preparation for the next section, where we generate kinetic networks from parts of $M$.

2.1 Master equation

We consider the same quantum mechanical system studied in [11] with $n$ sites carrying a single excitation, which is equivalent to a system with $n$ states/levels. The site energies are $E_k\in\mathbb{R}$, so the energy operator is
$$H=\sum_{k=1}^n E_k\,|k\rangle\langle k|.$$
Site $k$ couples to site $l$ with interaction strength $V_{kl}\in\mathbb{C}$, so the interaction operator is
$$V=\sum_{k\ne l}V_{kl}\,|k\rangle\langle l|$$
where $V_{kl}=V_{lk}$. Site trapping, re-emission and recombination can be incorporated by an anti-hermitian operator $A$. Let $\kappa_k$ be the combined rate of exciton loss at site $k$ due to these effects; then $A$ is defined as
$$A=\sum_{k=1}^n-\frac{i}{2}\kappa_k\,|k\rangle\langle k|.$$
Finally, every site is also under the influence of dephasing at rate $\gamma_k\ge 0$, incorporated in the Lindbladian superoperator
$$\mathcal{L}(\rho)=\sum_{k=1}^n\Big(L_k\rho L_k^\dagger-\frac12\{L_k^\dagger L_k,\rho\}\Big)$$
with $L_k=\sqrt{\gamma_k}\,|k\rangle\langle k|$. Setting $\hbar=1$, the single exciton manifold of the quantum network is described by the master equation
$$\dot\rho=-i[H+V,\rho]-i\{A,\rho\}+\mathcal{L}(\rho) \qquad (2)$$
where square and curly brackets represent the commutator and anti-commutator respectively.
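As a concrete illustration of the master equation (2), the following minimal sketch (not code from this paper; the two-site parameter values are arbitrary) evaluates its right-hand side directly on a density matrix. With $\kappa=0$ the trace derivative vanishes, reflecting that only $A$ removes population.

```python
import numpy as np

def lindblad_rhs(rho, E, V, gamma, kappa):
    """Right-hand side of the master equation (2) for an n-site single-exciton
    model: coherent part -i[H+V, rho], loss part -i{A, rho} with
    A = -(i/2) diag(kappa), and pure dephasing with L_k = sqrt(gamma_k)|k><k|."""
    n = len(E)
    Htot = np.diag(E).astype(complex) + V
    A = -0.5j * np.diag(kappa)
    rho_dot = -1j * (Htot @ rho - rho @ Htot)          # -i[H+V, rho]
    rho_dot += -1j * (A @ rho + rho @ A)               # -i{A, rho}
    for k in range(n):                                 # dephasing Lindbladian
        L = np.zeros((n, n), complex)
        L[k, k] = np.sqrt(gamma[k])
        rho_dot += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return rho_dot

# Hypothetical two-site check: with kappa = 0 the trace of rho is conserved.
E = np.array([0.0, 100.0])
V = np.array([[0.0, 20.0], [20.0, 0.0]], complex)
gamma = np.array([50.0, 50.0])
rho0 = np.zeros((2, 2), complex); rho0[0, 0] = 1.0
print(np.trace(lindblad_rhs(rho0, E, V, gamma, np.zeros(2))))   # ~0
```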
For now we set $A=0$, ignoring exciton-depleting processes as explained above; we will mention how to include them in the kinetic network approximations later on. Our approximation becomes exact in the limit where the energy differences between sites are large, the dephasing is large and the interactions are small. To be specific, we introduce scaling parameters $\Gamma$ and $\Theta$ and consider the limit $\Theta\Gamma^{-1}\to 0$. Energies and dephasing scale like $\Gamma$ and interactions scale like $\Theta$:
$$E_k\propto\Gamma,\qquad \gamma_k\propto\Gamma,\qquad V_{kl}\propto\Theta.$$
With these assumptions the master equation turns into
$$\dot\rho=-i[\Gamma H+\Theta V,\rho]+\Gamma\mathcal{L}(\rho). \qquad (3)$$
Because this equation is linear in $\rho$ it can be converted into vector form
$$\dot{\vec\rho}=M\vec\rho$$
where $\vec\rho\in\mathbb{R}^{n^2}$ is the density matrix in vector form and $M$ is a real $n^2\times n^2$ matrix. Two procedures to find $M$ are presented in 2.2 and 3.5.

2.2 Converting to vector equation

We rewrite the master equation (3), skipping the scaling factors $\Theta$ and $\Gamma$; it is easy to reintroduce them at a later point:
$$\dot\rho=-i[H+V,\rho]+\mathcal{L}(\rho). \qquad (4)$$
Our first goal is to convert this into the differential equation
$$\dot{\vec\rho}=M\vec\rho$$
for density "vectors" $\vec\rho\in\mathbb{R}^{n^2}$. Notice that because $\rho=\rho^\dagger$ the space of density matrices has $n^2$ real dimensions, so we are not losing any information when mapping $\rho$ to $\vec\rho$. We use the following conversion:
1. The first $n$ entries of the density vector are the populations – the real diagonal entries of $\rho$.
2. For the entries $n+1$ to $n^2$ we alternate between real and imaginary parts of the coherences – the off-diagonal entries of $\rho$ – starting with the entry $\rho_{kl}$ where $k=1$ and $l=2$, continuing by increasing $l$ until $l=n$, then moving to the entry $\rho_{23}$. We multiply all these entries by $\sqrt2$, a normalization factor useful to achieve simpler expressions later on.
In terms of index equations this is:
1. For $k=1\ldots n$
$$\vec\rho_k=\rho_{kk}.$$
2. For $k,l\in\{1,\ldots,n\}$ with $k<l$
$$\vec\rho_{\,n+2n(k-1)-k(k+1)+2l-1}=\sqrt2\,\mathrm{Re}\,\rho_{kl},\qquad \vec\rho_{\,n+2n(k-1)-k(k+1)+2l}=\sqrt2\,\mathrm{Im}\,\rho_{kl}.$$
Other mappings will yield the same kinetic networks, as long as they allow for an easy separation of population and coherence space. While somewhat tedious, it is now relatively straightforward to find the matrix $M$ such that $\dot{\vec\rho}=M\vec\rho$. To find the rows $k=1\ldots n$ we write out the diagonal components of the RHS of (4), and to find the rows $k=n+1,\ldots,n^2$ we write out the off-diagonals of the RHS of (4). We follow this procedure explicitly for the case $n=3$ in Appendix A. From there it is obvious how the procedure generalizes to larger $n$. Here we will only present the final form.
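A minimal sketch of this conversion (illustrative only; the 1-based indices of the text become 0-based array positions) maps $\rho$ to $\vec\rho$ and back:

```python
import numpy as np

def rho_to_vec(rho):
    """Map a Hermitian n x n density matrix to the real n^2 vector of 2.2:
    first the n populations, then sqrt(2)*Re and sqrt(2)*Im of the
    coherences rho_kl, ordered (1,2), (1,3), ..., (1,n), (2,3), ..."""
    n = rho.shape[0]
    vec = [rho[k, k].real for k in range(n)]
    for k in range(n):
        for l in range(k + 1, n):
            vec.append(np.sqrt(2) * rho[k, l].real)
            vec.append(np.sqrt(2) * rho[k, l].imag)
    return np.array(vec)

def vec_to_rho(vec, n):
    """Inverse of rho_to_vec."""
    rho = np.zeros((n, n), complex)
    for k in range(n):
        rho[k, k] = vec[k]
    idx = n
    for k in range(n):
        for l in range(k + 1, n):
            rho[k, l] = (vec[idx] + 1j * vec[idx + 1]) / np.sqrt(2)
            rho[l, k] = rho[k, l].conjugate()
            idx += 2
    return rho

# Round trip on a random 3-site density matrix.
X = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
rho = X @ X.conj().T; rho /= np.trace(rho)
print(np.allclose(vec_to_rho(rho_to_vec(rho), 3), rho))   # True
```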
2.3 The coherent evolution matrix M

For simple notation, and to simply extract the kinetic networks, we split up the density vector space $\mathbb{R}^{n^2}$. Let $P=\mathbb{R}^n$ be the space of populations and let $C=\mathbb{R}^{n^2-n}$ be the space of coherences. We can then write density vectors as $\vec\rho=\begin{pmatrix}\vec p\\ \vec c\end{pmatrix}$ with $\vec p\in P$ and $\vec c\in C$. With this splitting the matrix $M$ describing the quantum network looks like
$$M=\begin{pmatrix}0 & -a^\dagger\\ a & b\end{pmatrix}$$
where $a:P\to C$ and $b:C\to C$ are real matrices (so $a^\dagger=a^\top$, but we'll keep the more general notation for later). Notice that the populations do not affect each other directly, but only via the coherences.

Matrix $a$ describes how populations couple to coherences; its entries are real and imaginary parts of $V_{kl}$. Naturally, site $k$ only couples to coherences $kl$ with $l\ne k$, thus of the $(n^2-n)$ entries in the $k$-th column of $a$ only $2(n-1)$ are nonzero. Matrix $b$ describes how coherences couple to other coherences; if considered as a block matrix with $2\times2$-blocks, the diagonal block for the coherence between site $k$ and site $l$ is of the form
$$\begin{pmatrix}-\gamma_{kl} & -E_{kl}\\ E_{kl} & -\gamma_{kl}\end{pmatrix} \qquad (5)$$
where
$$\gamma_{kl}=\frac12(\gamma_k+\gamma_l),\qquad E_{kl}=E_k-E_l.$$
The off-diagonal blocks consist of real and imaginary parts of $V_{kl}$.

From the form of $M$, when ignoring the off-diagonal blocks of $b$, we see that site $k$ couples to site $l$ via the coupling strength $V_{kl}$, then some mixture of $\gamma_{kl}$ and $E_{kl}$, and then again via the coupling strength $V_{kl}$. This reminds us of the rates of the form $\mu=\frac{2V^2\gamma}{\gamma^2+E^2}$ described in (1) that make up the matrix $N_0$. We will make this intuition precise in the next subsections.

3 Kinetic networks

In this section we show how the kinetic network $N$ emerges naturally out of the study of the resolvent $(z-M)^{-1}$. We expand $N$ in powers of $\Theta\Gamma^{-1}$, giving the series
$$N=\sum_{k=0}^\infty N_k$$
with the leading order contribution being $N_0$. For some steps involving matrix calculations we only give a simplified version. However, in Appendix A we follow the procedure described below, giving the full expressions in the case $n=3$.

3.1 Extracting the kinetic network N

To extract kinetic networks from $M$ we consider its resolvent $(z-M)^{-1}$. Remember that for any holomorphic function $f$ we have
$$f(M)=\frac{1}{2\pi i}\oint f(z)(z-M)^{-1}\,dz.$$
Therefore, if one can bound the resolvent appropriately, one can also bound the evolution operator $e^{Mt}$ and other related quantities. Because we only care about approximating the population dynamics we restrict our view to the population block of the resolvent of $M$. The Banachiewicz formula [2] gives the inverse of a $2\times2$-block matrix. The first block of the inverse – in our case the population block – is called the Schur complement, and due to its basic nature it has many applications in applied mathematics, statistics and physics [14]. Here we use it to "pull" the coherence dynamic back into population space. Only writing the Schur complement and skipping the other blocks of the resolvent we have
$$(z-M)^{-1}=\begin{pmatrix}\big(z-a^\dagger(b-z)^{-1}a\big)^{-1} & \cdot\\ \cdot & \cdot\end{pmatrix}.$$
Remember the operator $T$, the restriction to population space. With our choice of density vector basis it has the form
$$T=\begin{pmatrix}\mathbb{1}_n & 0_{n\times(n^2-n)}\end{pmatrix}.$$
The difference of evolution for initial conditions $\vec\rho_0=\begin{pmatrix}\vec p_0\\ 0\end{pmatrix}=T^\dagger\vec p_0$ (zero coherences) between the quantum network $M$ and a kinetic network $N$ is thus
$$\big(Te^{Mt}T^\dagger-e^{Nt}\big)\vec p_0=\frac{1}{2\pi i}\oint e^{zt}\Big(\big(z-a^\dagger(b-z)^{-1}a\big)^{-1}-(z-N)^{-1}\Big)\vec p_0\,dz.$$
For a good approximation we require
$$\big(z-a^\dagger(b-z)^{-1}a\big)^{-1}\approx(z-N)^{-1}. \qquad (6)$$
At this point it is a small step to drop the second $z$ on the LHS, in which case the formula becomes an equality if we set
$$N=a^\dagger b^{-1}a.$$
To see intuitively that this approximation is good, consider the following. Matrix $b$ contains terms proportional to $\Gamma$ on its diagonal and terms proportional to $\Theta$ on its off-diagonal, and matrix $a$ is proportional to $\Theta$, therefore
$$N\propto\Theta^2\Gamma^{-1}$$
when $\Theta\Gamma^{-1}$ becomes small. For values of $z$ smaller than the eigenvalues of $b$ the approximation (6) is good because $(b-z)^{-1}\approx b^{-1}$; for values of $z$ larger than the eigenvalues of $b$ it is good because then $z$ is much larger than the eigenvalues of $N$, and so both sides of (6) are approximately $z^{-1}$. This basic insight is what drives our bounds in Section 8.
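The following sketch illustrates this reduction on a toy two-site system (all numbers are arbitrary; the coupling block $a$ is chosen only so that the reduction reproduces a rate of the form (1), the exact $a$ of the full model is derived in Appendix A). It computes $N=a^\dagger b^{-1}a$ and compares $Te^{Mt}T^\dagger\vec p_0$ with $e^{Nt}\vec p_0$.

```python
import numpy as np
from scipy.linalg import expm

def kinetic_from_blocks(a, b):
    """Leading block-matrix reduction of Section 3.1: N = a^dagger b^{-1} a."""
    return a.T @ np.linalg.solve(b, a)

# Toy two-site example: one coherence pair, so a is 2x2 and b is the single
# 2x2 diagonal block of (5). Theta*Gamma^{-1} = 0.025 here.
Theta, Gamma = 5.0, 200.0
V, E, g = Theta, 0.6 * Gamma, 0.8 * Gamma
a = np.array([[-V,  V],
              [ V, -V]], float)    # toy coupling block with opposite +-V columns
b = np.array([[-g, -E],
              [ E, -g]], float)
M = np.block([[np.zeros((2, 2)), -a.T],
              [a,                 b ]])
N = kinetic_from_blocks(a, b)
print(N)                           # [[-mu, mu], [mu, -mu]] with mu = 2 V^2 g / (g^2 + E^2) = 0.2

# Compare population evolution T e^{Mt} T^dagger p0 with e^{Nt} p0.
T = np.hstack([np.eye(2), np.zeros((2, 2))])
p0 = np.array([1.0, 0.0])
t = 0.05
print(T @ expm(M * t) @ T.T @ p0, expm(N * t) @ p0)   # close for Theta*Gamma^{-1} << 1
```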
3.2 Expanding N

As mentioned in 2.3, $b$ consists of $2\times2$-blocks proportional to $\Gamma$ on the diagonal and $2\times2$-blocks proportional to $\Theta$ on the off-diagonal. We separate these contributions by defining
$$b=b_0+\nu$$
where $b_0\propto\Gamma$ and $\nu\propto\Theta$ are the block-diagonal and block-off-diagonal parts of $b$ respectively. If $\Theta\Gamma^{-1}$ is small enough and if $b_0$ is invertible we can expand
$$b^{-1}=\sum_{k=0}^\infty b_0^{-1}\big(-\nu b_0^{-1}\big)^k.$$
This leads to the expansion
$$N=a^\dagger b^{-1}a=\sum_{k=0}^\infty N_k$$
with
$$N_k=a^\dagger b_0^{-1}\big(-\nu b_0^{-1}\big)^k a. \qquad (7)$$
When using explicit forms of $a$, $b_0$ and $\nu$ one can see that the rates in $N_k$ consist of corrections due to interactions via $k$ intermediates. Roughly speaking, every one of the $(k+1)$ sites along the chain contributes a factor of $\Theta$, and every one of the $k$ coherences (links) contributes a factor of $\Gamma^{-1}$, thus $N_k$ scales like $\Theta^{k+1}\Gamma^{-k}$.

3.3 The network N_0

We now present the explicit form of
$$N_0=a^\dagger b_0^{-1}a,$$
the dominant contribution to $N$. We only show the crucial parts of the calculations, which should make clear how to get the result for general $n$. Notice that, from 3.2 and (5), it follows that $b_0$ is an $(n^2-n)\times(n^2-n)$ matrix whose only nonzero entries are the $2\times2$ blocks
$$\begin{pmatrix}-\gamma_{kl} & -E_{kl}\\ E_{kl} & -\gamma_{kl}\end{pmatrix}$$
along the diagonal. With the unitary transformation
$$U_0=\frac{1}{\sqrt2}\begin{pmatrix}i & -i\\ 1 & 1\end{pmatrix}$$
we can diagonalize these $2\times2$ blocks. Hence, the entire matrix $b_0$ can be diagonalized by applying the transformation
$$U=\mathbb{1}_{(n^2-n)/2}\otimes U_0 \qquad (8)$$
and
$$\tilde b_0=U^\dagger b_0U=\mathrm{diag}(\alpha_{12},\bar\alpha_{12},\alpha_{13},\bar\alpha_{13},\ldots,\bar\alpha_{n-1,n}) \qquad (9)$$
with
$$\alpha_{kl}=-\gamma_{kl}+iE_{kl},$$
where diag denotes a diagonal matrix with the given diagonal entries. In fact, $U$ also helps to simplify $a$; consider the case $n=3$:
$$\tilde a=U^\dagger a=\begin{pmatrix}-V_{12} & V_{12} & 0\\ V_{12} & -V_{12} & 0\\ -V_{13} & 0 & V_{13}\\ V_{13} & 0 & -V_{13}\\ 0 & -V_{23} & V_{23}\\ 0 & V_{23} & -V_{23}\end{pmatrix}, \qquad (10)$$
and the same happens for $\tilde\nu=U^\dagger\nu U$ (derivation in Appendix A). Notice that both $\tilde b_0$ and $\tilde a$ are complex matrices; still, we can use the transformed matrices $\tilde a$, $\tilde b_0$ and $\tilde\nu$ when finding explicit expressions for the real matrices $N_k$, because $U$ cancels out. For example
$$N_0=a^\dagger b_0^{-1}a=a^\dagger UU^\dagger b_0^{-1}UU^\dagger a=(U^\dagger a)^\dagger(U^\dagger b_0U)^{-1}(U^\dagger a)=\tilde a^\dagger\tilde b_0^{-1}\tilde a.$$
In the case $n=3$ we get
$$N_0=\begin{pmatrix}-\mu_{12}-\mu_{13} & \mu_{12} & \mu_{13}\\ \mu_{12} & -\mu_{12}-\mu_{23} & \mu_{23}\\ \mu_{13} & \mu_{23} & -\mu_{13}-\mu_{23}\end{pmatrix} \qquad (11)$$
with
$$\mu_{kl}=\frac{2|V_{kl}|^2\gamma_{kl}}{\gamma_{kl}^2+E_{kl}^2}.$$
The following simplified calculation illustrates how the rates $\mu_{kl}$ result from the matrix multiplication $\tilde a^\dagger\tilde b_0^{-1}\tilde a$:
$$\begin{pmatrix}-\bar V\\ V\end{pmatrix}^\dagger\begin{pmatrix}\alpha^{-1} & 0\\ 0 & \bar\alpha^{-1}\end{pmatrix}\begin{pmatrix}\bar V\\ -V\end{pmatrix}=-V\bar V\big(\alpha^{-1}+\bar\alpha^{-1}\big)=\frac{2|V|^2\gamma}{\gamma^2+E^2}.$$
More generally, for any $n$ we have
$$(N_0)_{kl}=\mu_{kl} \qquad (12)$$
for $k\ne l$, and
$$(N_0)_{kk}=-\sum_{l\ne k}\mu_{kl}. \qquad (13)$$
This is just the network described in [3] and in the introduction.

3.4 Including re-emission, recombination and trapping

The population-decreasing effects of re-emission, recombination and trapping can all be described by the rates $\kappa_k$ of the diagonal anti-hermitian operator
$$A=\sum_{k=1}^n-\frac{i}{2}\kappa_k\,|k\rangle\langle k|$$
included in our general master equation (2). The contribution to the rate of change $\dot\rho$ is easily calculated:
$$-i\{A,\rho\}=-\sum_{k,l}\frac12(\kappa_k+\kappa_l)\,\rho_{kl}\,|k\rangle\langle l|,$$
and $M$ becomes
$$M=\begin{pmatrix}c_1 & -a^\dagger\\ a & b+c_2\end{pmatrix}$$
with the new contributions
$$c_1=-\mathrm{diag}(\kappa_1,\kappa_2,\ldots,\kappa_n)$$
and
$$c_2=-\mathrm{diag}(\kappa_{12},\kappa_{12},\kappa_{13},\kappa_{13},\ldots,\kappa_{n-1,n})$$
with $\kappa_{kl}=\frac12(\kappa_k+\kappa_l)$ the rate that decreases the coherence of sites $k$ and $l$. With this the networks become
$$N_0=a^\dagger(b_0+c_2)^{-1}a+c_1,$$
$$N=a^\dagger(b+c_2)^{-1}a+c_1,$$
$$N_k=a^\dagger(b_0+c_2)^{-1}\big(-\nu(b_0+c_2)^{-1}\big)^k a \qquad (14)$$
which also hold with the replacements $a\to\tilde a$, $b\to\tilde b$ and $\nu\to\tilde\nu$, while leaving $c_1$ and $c_2$ unchanged.

The rates in $N_0$ can again be calculated directly:
$$(N_0)_{kl}=\mu_{kl}=\frac{2|V_{kl}|^2(\gamma_{kl}+\kappa_{kl})}{(\gamma_{kl}+\kappa_{kl})^2+E_{kl}^2}$$
for $k\ne l$, and
$$(N_0)_{kk}=-\kappa_k-\sum_{l\ne k}\mu_{kl}.$$
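As a numerical cross-check (a sketch only, using the transformed matrices (9)-(10) in the sign convention reconstructed above; any convention with opposite $\pm V_{kl}$ columns yields the same $N_0$, and all parameter values are arbitrary), the block formula $N_0=\tilde a^\dagger(\tilde b_0+c_2)^{-1}\tilde a+c_1$ reproduces the explicit rates just stated:

```python
import numpy as np

# Arbitrary 3-site parameters (same units throughout).
E = np.array([0.0, 130.0, 260.0])
gamma = np.array([80.0, 100.0, 120.0])
kappa = np.array([1.0, 0.5, 2.0])
V = np.array([[0.0, 20.0, 5.0],
              [20.0, 0.0, 15.0],
              [5.0, 15.0, 0.0]])

pairs = [(0, 1), (0, 2), (1, 2)]
g = {p: 0.5 * (gamma[p[0]] + gamma[p[1]]) for p in pairs}
kp = {p: 0.5 * (kappa[p[0]] + kappa[p[1]]) for p in pairs}
dE = {p: E[p[0]] - E[p[1]] for p in pairs}

# Transformed blocks of 3.3: b0_tilde = diag(alpha, conj(alpha), ...), and
# a_tilde with opposite +-V_kl columns for the two sites of each coherence pair.
alpha = {p: -g[p] + 1j * dE[p] for p in pairs}
b0_t = np.diag(sum([[alpha[p], np.conj(alpha[p])] for p in pairs], []))
a_t = np.zeros((6, 3), complex)
for r, (k, l) in enumerate(pairs):
    a_t[2 * r:2 * r + 2, k] = [-V[k, l], V[k, l]]
    a_t[2 * r:2 * r + 2, l] = [V[k, l], -V[k, l]]
c1 = -np.diag(kappa)
c2 = -np.diag(sum([[kp[p], kp[p]] for p in pairs], []))

N0 = (a_t.conj().T @ np.linalg.solve(b0_t + c2, a_t)).real + c1

# Off-diagonal entries match the closed-form rates mu_kl ...
for (k, l) in pairs:
    x = g[(k, l)] + kp[(k, l)]
    print(N0[k, l], 2 * V[k, l] ** 2 * x / (x ** 2 + dE[(k, l)] ** 2))   # equal
# ... and the diagonal gives -kappa_k - sum_l mu_kl, so columns sum to -kappa.
print(np.allclose(N0.sum(axis=0), -kappa))   # True
```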
3.5 Numerical simulations

According to the last two subsections, the network $N_0$ is easy to calculate directly, while the network $N$ and any $k$-site contribution $N_k$ can be formed from the general definitions of $\tilde a$, $\tilde b_0$ and $\tilde\nu$ (see Appendix B), which can be somewhat tedious. However, there is another approach related to numerical simulations. When running numerical calculations to simulate a complex master equation (2) in software like Octave or Matlab, the need to convert the equation to the form $\dot{\vec\rho}=M\vec\rho$ with a real $M$ arises in any case. This can be done as we describe it in 2.2, or more easily – because we have the help of a computer – by defining an orthonormal density space basis. For example, set
$$\sigma_k=|k\rangle\langle k|,\qquad \xi_{kl}=\big(|k\rangle\langle l|+|l\rangle\langle k|\big)/\sqrt2,\qquad \eta_{kl}=\big(-i|k\rangle\langle l|+i|l\rangle\langle k|\big)/\sqrt2$$
for $k<l$. Then the matrix $M$ can be formed by applying the master equation to those basis vectors and finding their coordinates. For example, the following gives the population space block of $M$:
$$M_{kl}=\mathrm{Tr}\big(\sigma_k^\dagger\,\mathcal{M}(\sigma_l)\big)$$
where $\mathcal{M}$ is the superoperator formed by the RHS of the master equation (2). Once the entire real matrix is found, it is cut into population and coherence blocks
$$M=\begin{pmatrix}m_{PP} & m_{PC}\\ m_{CP} & m_{CC}\end{pmatrix}$$
and a generalized kinetic network of the same form as $N$ is calculated as
$$N=m_{PP}-m_{PC}\,m_{CC}^{-1}\,m_{CP}.$$
Hence, if one has already calculated $M$ in order to simulate a quantum network, it only takes a few steps to find the kinetic network approximation $N$.

4 Preliminaries

In this section we give some definitions and conditions. The conditions allow us to infer basic facts about the spectra of the operators $N_0$, $N$ and $M$, which are required for all our bounds in Sections 5, 6 and 8.