Adiabatic Quantum Computing for Random Satisfiability Problems*

Tad Hogg
HP Labs, Palo Alto, CA 94304

* Appeared in Phys. Rev. A 67, 022314 (2003).

The discrete formulation of adiabatic quantum computing is compared with other search methods, classical and quantum, for random satisfiability (SAT) problems. With the number of steps growing only as the cube of the number of variables, the adiabatic method gives solution probabilities close to 1 for problem sizes feasible to evaluate via simulation on current computers. However, for these sizes the minimum energy gaps of most instances are fairly large, so the good performance scaling seen for small problems may not reflect asymptotic behavior where costs are dominated by tiny gaps. Moreover, the resulting search costs are much higher than for other methods. Variants of the quantum algorithm that do not match the adiabatic limit give lower costs, on average, and slower growth than the conventional GSAT heuristic method.

PACS numbers: 03.67.Lx

I. INTRODUCTION

Quantum computers [1, 2, 3, 4] can rapidly evaluate all search states of nondeterministic polynomial (NP) problems [5], but appear unlikely to give short worst-case solution times [6]. Of more practical interest is whether their average performance improves on conventional heuristics.

Adiabatic quantum computing, using a slowly changing time-dependent Hamiltonian, appears to give polynomial average cost growth for some NP combinatorial search problems [7]. These observations, while encouraging, are limited to small problems for which other methods, both conventional and quantum, can have even lower costs. Furthermore, although adiabatic methods apparently show exponential cost scaling for set partitioning [8] and finding the ground state of spin glasses [9], the typical performance of adiabatic quantum computing for large NP search problems remains an open question. Thus it is of interest to compare the adiabatic method with other techniques for NP problems having a well-studied class of hard instances.

This paper provides such a comparison for k-satisfiability (k-SAT), consisting of n Boolean variables and m clauses. A clause is a logical OR of k variables, each of which may be negated. A solution is an assignment, i.e., a value, true or false, for each variable, satisfying all the clauses. An example 2-SAT instance with 3 variables and 2 clauses is v1 OR (NOT v2) and v2 OR v3, which has 4 solutions, e.g., v1 = v2 = false and v3 = true. For a given instance, let the cost c(s) of an assignment s be the number of clauses it does not satisfy.

For k >= 3, k-SAT is NP-complete [5], i.e., among the most difficult NP problems in the worst case. For average behavior we use the random k-SAT ensemble, in which the m clauses are selected uniformly at random. I.e., for each clause, a set of k variables is selected randomly, and each selected variable is negated with probability 1/2. Thus clauses are selected uniformly from among the M = (n choose k) 2^k possible clauses. We focus on the decision problem, i.e., finding a solution, rather than the related optimization problem, i.e., finding a minimum cost state. This allows direct comparison with prior empirical studies of heuristic methods for SAT. The algorithms we consider are probabilistic, so cannot definitively determine that no solution exists. Thus we use soluble instances: after random generation, we solve the instances with an exhaustive conventional method and only retain those with a solution. This ensemble has a high concentration of hard instances near a phase transition in search difficulty [10, 11, 12, 13]. For 3-SAT, we generate instances near this transition by using mu ≡ m/n = 4.25, though for those n not divisible by 4, half the samples had m = ⌊4.25n⌋ and half had m larger by 1.

The remainder of this paper describes several quantum search algorithms in the context of satisfiability problems, and then compares their behavior.
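For concreteness, the following short Python sketch (not part of the original paper; the function names and the soluble-instance filter are illustrative choices of mine) draws an instance from this random k-SAT ensemble and evaluates the cost c(s) used throughout.

```python
import itertools
import random

def random_ksat(n, m, k=3, seed=None):
    """Draw m clauses uniformly: each picks k distinct variables,
    and each selected literal is negated with probability 1/2."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        chosen = rng.sample(range(n), k)
        clauses.append([(v, rng.random() < 0.5) for v in chosen])  # (variable, negated?)
    return clauses

def cost(assignment, clauses):
    """c(s): number of clauses the assignment (sequence of bools) leaves unsatisfied."""
    return sum(1 for cl in clauses
               if not any(assignment[v] != neg for v, neg in cl))

def soluble(clauses, n):
    """Exhaustive check used to retain only instances with at least one solution."""
    return any(cost(s, clauses) == 0
               for s in itertools.product([False, True], repeat=n))

# Example: an instance near the 3-SAT phase transition, m/n ~ 4.25.
n = 12
clauses = random_ksat(n, m=round(4.25 * n), seed=1)
print(soluble(clauses, n))
```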
II. ALGORITHMS

The adiabatic technique [7] is based on two Hamiltonians H^(0) and H^(c). The first is selected to have a known ground state, while the ground states of H^(c) correspond to the solutions of the problem instance to be solved. The algorithm continuously evolves the state of the quantum computer using H(f) = (1 − f)H^(0) + fH^(c) with f ranging from 0 to 1. Under suitable conditions, i.e., with a nonzero gap between relevant eigenvalues of H(f), the adiabatic theorem guarantees that, with sufficiently slow changes in f, the evolution maps the ground state of H^(0) into a ground state of H^(c), so a subsequent measurement gives a solution. The choices of H^(0), H^(c) and how f varies as a function of time are somewhat arbitrary.

In matrix form, one Hamiltonian with minimal-cost assignments as ground states is H^(c)_{r,s} = c(s) δ_{r,s}, for assignments r and s, where δ_{r,s} is 1 if r = s and 0 otherwise. This Hamiltonian introduces a phase factor in the amplitude of assignment s depending on its associated cost c(s).

For H^(0), we introduce a nonnegative weight w_i for variable i, let ω ≡ Σ_{i=1}^n w_i and take

    H^(0)_{r,s} =  ω/2      if r = s
                  −w_i/2    if r and s differ only for variable i        (1)
                   0        otherwise

    algorithm            T          ∆          phase functions
    adiabatic            T → ∞      ∆ → 0      ρ(0) = 0 = τ(1)
    discrete adiabatic   T → ∞      constant   ρ(0) = 0 = τ(1)
    heuristic            constant   ∆ → 0      suitable ρ, τ

TABLE I: Summary of quantum search algorithms using problem structure. The heuristic method requires finding appropriate choices for the phase functions to give good performance and for the number of steps j to increase with problem size n. The adiabatic methods require sufficiently large values of T = j∆. A constant value for a parameter in this table means it is taken to be independent of n and j.

This Hamiltonian can be implemented with elementary quantum gates by use of the Walsh-Hadamard transform W, with elements W_{r,s} = 2^{−n/2} (−1)^{r·s} (treating the states r and s as vectors of bits so their dot product counts the number of variables assigned the value 1 in both states). Specifically, H^(0) = W D W where D is a diagonal matrix with the value for state r given by the weighted sum of the bits, Σ_{i=1}^n w_i r_i, with r_i representing the value of the ith bit of r. In particular, if all the weights equal 1, D_{r,r} just counts the number of bits equal to 1.

The unweighted H^(0) uses equal weights: w_i = 1 so ω = n. Alternatively, w_i can be the number of times variable i appears in a clause [7], as also used by some conventional heuristics to adjust the importance of changes in each variable. This choice gives ω = mk. By matching H^(0) to the problem instance, one might expect such weights to improve performance. Instead, for random 3-SAT these weights give higher costs C, requiring about n times as many steps to achieve the same P_soln as the unweighted choice. If instead these weights are normalized so their average value is 1, the performance is about the same as in the unweighted case, but still slightly worse. In light of these observations, we use the unweighted H^(0) in this paper.
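As a quick consistency check of this construction, the sketch below (my own illustration using numpy; the function names are mine) builds W and D explicitly for a small n and confirms numerically that W D W reproduces the matrix of Eq. (1), here for the unweighted choice w_i = 1.

```python
import numpy as np

def walsh_hadamard(n):
    """W with entries 2^{-n/2} (-1)^{r.s}, r.s the bitwise dot product."""
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    W = np.array([[1.0]])
    for _ in range(n):
        W = np.kron(W, H1)
    return W

def h0_direct(weights):
    """Eq. (1): omega/2 on the diagonal, -w_i/2 between states differing in bit i only."""
    n = len(weights)
    omega = sum(weights)
    H = np.zeros((2**n, 2**n))
    for r in range(2**n):
        H[r, r] = omega / 2.0
        for i in range(n):
            H[r, r ^ (1 << i)] = -weights[i] / 2.0
    return H

n = 4
w = [1.0] * n                       # unweighted choice used in the paper
W = walsh_hadamard(n)
bits = [[(r >> i) & 1 for i in range(n)] for r in range(2**n)]
D = np.diag([sum(wi * b for wi, b in zip(w, row)) for row in bits])
print(np.allclose(W @ D @ W, h0_direct(w)))    # True: H^(0) = W D W
```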
The adiabatic method is a continuous process. To compare with other algorithms, we use the algorithmically equivalent discrete formulation [14, 15] acting on the amplitude vector initially in the ground state of H^(0), i.e., ψ_s^(0) = 2^{−n/2}. This formulation consists of j steps and a parameter ∆. Step h is a matrix multiplication:

    ψ^(h) = e^{−iτ(f)H^(0)∆} e^{−iρ(f)H^(c)∆} ψ^(h−1)        (2)

with the mixing phase function τ(f) = 1 − f, cost phase function ρ(f) = f and taking ħ = 1. After these steps, the probability to find a solution is P_soln = Σ_s |ψ_s^(j)|², with the sum over all solutions s.

As a simple choice for the evolution, we take f to vary linearly from 0 to 1. We exclude the steps with f = 0 and 1 since they have no effect on P_soln. Specifically, we take f = h/(j + 1) for step h, ranging from 1 to j.

The expected number of steps required to find a solution is C = j/P_soln, providing a commonly used proxy for the computational cost of discrete methods, pending further study of clock rates for the underlying gate operations and the ability of compilers to eliminate redundant operations. As also observed with conventional heuristics, the cost distribution for random k-SAT is highly skewed, so a few instances dominate the mean cost. Instead, we use the median cost to indicate typical behavior. The time for the continuous formulation is T = j∆, so the adiabatic limit is j∆ → ∞. By contrast, in the discrete formulation, ∆ parameterizes the operators of Eq. (2) rather than determining the time required to perform them.

Eq. (2) follows the continuous evolution, ψ^(h+1) ≈ e^{−iH(f)∆} ψ^(h), when ∆||H|| → 0, which holds when ∆ ≪ 1/n [14, 15]. This last condition uses the fact that the norm ||H|| is the largest eigenvalue of H, which is O(n) since we consider k-SAT problems with m ∝ n. As a specific choice, we use ∆ = 1/√j. Other scaling choices ∆ = 1/j^α with 0 < α < 1 give qualitatively similar behaviors to those reported here while maintaining correspondence with the continuous evolution for sufficiently large j.

We compare the adiabatic limit with two other methods, summarized in Table I. First, for the discrete adiabatic case we take ∆ independent of n and j, violating the condition ∆n → 0, so Eq. (2) no longer closely approximates the continuous evolution and does not necessarily give P_soln → 1 as j → ∞. In this case, a discrete version of the adiabatic theorem, described in the appendix, ensures P_soln is close to 1 if ∆ is not too large.

Second, the heuristic method, studied previously [16, 17], has ∆ = 1/j and forms for τ(f) and ρ(f) that do not range between 0 and 1. Instead, these phase functions must be selected appropriately to give good performance. Identifying such choices and characterizing their performance are major issues for this algorithm, though mean-field approximations based on a few problem parameters, e.g., the ratio m/n for k-SAT, can give reasonably good choices. This method does not correspond to the adiabatic limit: P_soln has a limit less than 1 as j → ∞.

For all these techniques, the expected cost C = j/P_soln is minimized for intermediate values of j rather than taking j → ∞ as used in the limits listed in Table I. Identifying parameters and phase functions, ρ(f) and τ(f), giving minimal cost for a given problem instance depends on details of the search space structure unlikely to be available prior to solving that instance. However, as described below, taking j to grow only as a fairly small power of n provides relatively modest costs, on average, for problem sizes feasible to simulate.
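A minimal simulation of this discrete formulation makes the procedure concrete. The sketch below (my own code, not the author's simulator) runs the j steps of Eq. (2) on the small 2-SAT example from Sec. I, applying e^{−iρ(f)H^(c)∆} as a diagonal phase in the computational basis and e^{−iτ(f)H^(0)∆} via two fast Walsh-Hadamard transforms, since H^(0) = W D W and W² = 1. With ∆ = 1/√j, P_soln should approach 1 as j grows.

```python
import numpy as np

# The 2-SAT example from Sec. I: clauses v1 OR (NOT v2) and v2 OR v3 on n = 3 variables.
n = 3
clauses = [[(0, False), (1, True)], [(1, False), (2, False)]]   # (variable, negated?)

def cost(r):
    """c(s) for basis state r, reading bit i of r as the value of variable i."""
    bits = [(r >> i) & 1 for i in range(n)]
    return sum(1 for cl in clauses if not any(bits[v] != neg for v, neg in cl))

c = np.array([cost(r) for r in range(2**n)], dtype=float)            # diagonal of H^(c)
d = np.array([bin(r).count("1") for r in range(2**n)], dtype=float)  # diagonal of D (unweighted)

def hadamard_transform(psi):
    """Apply W = H tensored n times in O(n 2^n) operations (fast Walsh-Hadamard transform)."""
    psi = psi.copy()
    for i in range(n):
        psi = psi.reshape(-1, 2, 2**i)
        a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
        psi[:, 0, :], psi[:, 1, :] = a + b, a - b
        psi = psi.reshape(-1)
    return psi / np.sqrt(2**n)

def p_soln(j, delta):
    """Run the j steps of Eq. (2) with tau(f) = 1 - f, rho(f) = f, f = h/(j+1)."""
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)                   # ground state of H^(0)
    for h in range(1, j + 1):
        f = h / (j + 1)
        psi *= np.exp(-1j * f * c * delta)                            # e^{-i rho(f) H^(c) Delta}
        psi = hadamard_transform(psi)                                 # e^{-i tau(f) H^(0) Delta}
        psi *= np.exp(-1j * (1 - f) * d * delta)                      #   = W e^{-i tau(f) D Delta} W
        psi = hadamard_transform(psi)
    return sum(abs(psi[r])**2 for r in range(2**n) if c[r] == 0)

for j in (4, 16, 64, 256):
    print(j, p_soln(j, delta=1 / np.sqrt(j)))
```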
III. BEHAVIOR

For the adiabatic method, Fig. 1 shows the median P_soln for various growth rates of the number of steps. P_soln → 1 as j increases. At least for n ≲ 20, P_soln ≈ 1 when j = n³, so median costs are O(n³), a substantial improvement over all known classical methods if it continues for larger n. However, for smaller powers of n, P_soln values decrease, but this is only evident for j = n² for n > 20. This raises the possibility of such a decline, at somewhat larger n, for larger j as well. Provided such a decline only leads to P_soln decreasing as a power of n, corresponding to a straight line on the log-log plot of Fig. 1, median costs would still only grow as a power of n. The remainder of this section describes the algorithm behaviors in more detail.

FIG. 1: Log-log plot of median P_soln for the adiabatic method vs. n with the number of steps j equal to n, the integer nearest n^{3/2}, n² and n³ (solid curves, from bottom to top, respectively). We use ∆ = 1/√j. For comparison, the dashed curve shows P_soln for the heuristic method using at most n steps. The error bars show the 95% confidence intervals [18, p. 124] of the medians estimated from the random sample of instances. The same instances were solved with each method. We use 1000 instances for each n up to 20, and 500 for larger n, except only 100 for j = n³ for n ≥ 16.

A. Energy Gap

Asymptotically, the adiabatic method's cost is dominated by the growth of 1/G² where G = min_f g(f) and g(f) is the energy gap in H(f), i.e., the difference between the ground state eigenvalue and the smallest higher eigenvalue corresponding to a non-solution. Evaluation using sparse matrix techniques [19] for n ≤ 20 gives the median G in the range 0.3 to 0.5, as illustrated for one instance in Fig. 2, and, more significantly, it does not decrease over this range of n. This minimum is not much smaller than other values of g(f). Hence, unlike for large n, the cost is not dominated by the minimum gap size and so the values of Fig. 1 may not reflect asymptotic scaling.

FIG. 2: Difference between eigenvalues of the lowest 5 excited states and the ground state vs. f for an instance with n = 20, m = 85 and 5 solutions. The inset shows the actual eigenvalues, with the gray curve showing the expected cost ⟨c⟩ in the ground state.

By contrast, Fig. 3 illustrates the behavior of an instance with a small minimum gap. One characterization of the eigenstates of H(f) is their expected cost, i.e., ⟨c⟩_a = Σ_s c(s) |φ_s^(a)(f)|² where φ^(a)(f) is the ath eigenvector of H(f). In particular, for a = 1 this gives the expected cost in the ground state, which we denote simply as ⟨c⟩. The expected cost in the ground state drops rapidly at the minimum gap location, in contrast to the smooth behavior for instances with larger gaps (as, for example, in Fig. 2).

FIG. 3: Lowest two eigenvalues vs. f for an instance with n = 20, m = 85, one solution and a particularly small minimum gap. The gray curve shows the expected cost ⟨c⟩ in the ground state, equal to m/2^k = 10.625 at f = 0. Note the abrupt drop at the location of the minimum gap. The ground state for f = 1 is the solution, whose cost is zero, so ⟨c⟩ → 0 as f → 1.
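The gap evaluation just described can be sketched as follows (an illustrative reconstruction, not the paper's code): build sparse H^(0) and H^(c), scan f on a grid, and obtain the lowest eigenpairs with ARPACK via scipy. The test that an excited level "corresponds to a non-solution" is approximated here by requiring that less than half of its probability weight lies on solution states; that threshold is my own simplification of the definition above.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def gap_profile(costs, n, num_f=51, k=6):
    """Lowest eigenvalues of H(f) = (1-f) H^(0) + f H^(c) on a grid of f, and the gap
    from the ground state to the lowest excited level not dominated by solutions.
    costs: length-2^n array with c(r) for every assignment r."""
    N = 2**n
    # Unweighted H^(0) of Eq. (1): n/2 on the diagonal, -1/2 between single-bit-flip neighbours.
    rows, cols = [], []
    for r in range(N):
        for i in range(n):
            rows.append(r)
            cols.append(r ^ (1 << i))
    h0 = sp.coo_matrix((np.full(len(rows), -0.5), (rows, cols)), shape=(N, N)).tocsr()
    h0 = h0 + sp.identity(N) * (n / 2.0)
    hc = sp.diags(costs.astype(float))
    solution = costs == 0

    fs, gaps = np.linspace(0.0, 1.0, num_f), []
    for f in fs:
        vals, vecs = eigsh((1 - f) * h0 + f * hc, k=k, which="SA")
        order = np.argsort(vals)
        e0 = vals[order[0]]
        gap = np.inf                       # stays inf if all k levels look like solutions;
        for a in order[1:]:                # increase k in that case
            if np.sum(np.abs(vecs[:, a])**2 * solution) < 0.5:
                gap = vals[a] - e0
                break
        gaps.append(gap)
    return fs, np.array(gaps)

# Usage on any instance: fs, gaps = gap_profile(costs, n); G = gaps.min()
```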
We thus see a difference in behavior of the ground state for instances with small gaps, presumably representative of typical behavior for larger n, and the behavior of more typical instances for n ≈ 20.

With the adiabatic method and T sufficiently large, the actual state of the quantum computer after step h, ψ^(h), closely approximates the ground state eigenvector φ^(1), up to an irrelevant overall phase. Thus the computation will also show the jump in expected cost.

Detailed quantitative comparison of the typical behaviors due to small minimum gaps and conventional heuristics requires larger problem sizes. Nevertheless, we can gain some insight from instances with small gaps for n ≈ 20, which tend to have high costs for both the quantum methods and conventional heuristics, such as GSAT [20], even when restricting comparison to problems with the same numbers of variables and solutions. For the instance shown in Fig. 3, GSAT trials readily reach states with 1 or 2 conflicts, but have a relatively low chance to find the solution. This behavior, typical of conventional heuristics [21], corresponds to the abrupt drop in ⟨c⟩ of Fig. 3. Thus finding assignments with costs below this value dominates the running time of both the quantum and conventional methods. These observations suggest small energy gaps characterize hard problems more generally than just for the adiabatic method, which may provide useful insights into the nature of search along with quantities such as the backbone (i.e., variables with the same values in all solutions [13]).

Simple problems or algorithms ignoring problem structure allow determining the gap for large n [14, 15, 22]. This is difficult for random SAT. For instance, although random k-SAT corresponds to random costs for H^(c) and the extreme eigenvalues of random matrices can be determined when elements are chosen independently [23, 24], the costs of nearby states for SAT instances are highly correlated since they likely conflict with many of the same clauses. Alternatively, upper [25] and lower [26] bounds for eigenvalues can be based on classes of trial vectors. For instance, vectors whose components for state s depend only on c(s) give fairly close upper bounds for the ground state of random 3-SAT, on average, as well as a mean-field approximation for the heuristic method [16].
However, simple lower bounds for higher energy states are below the upper bound for the ground state for some values of f, and so do not give useful estimates for G. Furthermore, typical soluble instances have exponentially many solutions (although still an exponentially small fraction of all states). Thus a full analysis of performance based on energy values must also consider the behavior of the many eigenvalues corresponding to solutions, which can be complicated, as illustrated in Fig. 2.

B. Search Cost

Even if n ≲ 20 does not identify asymptotic behavior, this range of feasible simulations allows comparing algorithm costs. Such comparisons are particularly relevant for quantum computer implementations with relatively few qubits and limited coherence times, which are thus limited to small problems and few steps. Fig. 4 compares the median values of the expected search costs C. For the adiabatic method, using j = n³ gives large costs, far higher than those of conventional heuristics and other quantum methods. Using just enough steps to achieve moderate values of P_soln reduces cost [7], e.g., j = n². Alternatively, for each n, testing various j on a small sample of instances indicates the number of steps required to achieve a fixed value of P_soln, e.g., 1/8. In our case, the latter approach has median costs about 20% lower than the former, but with the same cost growth rate. Because this improvement is minor compared to the differences with other algorithms shown in the figure, and to avoid the additional variability due to estimating j from a sample of instances, we simply take j = n² to illustrate the adiabatic method.

The figure also shows Grover's unstructured search [27] (without prior knowledge of the number of solutions [28]) and the conventional heuristic GSAT [20]. Unlike the quantum methods, conventional heuristics can finish immediately when a solution is found rather than waiting until all j steps are completed. For comparison with the different choices of j in Fig. 1, the median costs at n = 20 for j = n, n^{3/2}, n² and n³ are, respectively, 1741, 879, 1010 and 8222. The unstructured search cost grows as e^{0.32n}. The exponential fit to the adiabatic method is e^{0.13n}. This fit gives a residual about half as large as that from a power-law fit. The growth rate is about the same as that of GSAT.

FIG. 4: Log plot of median search cost vs. n for the heuristic (diamond), unstructured search (box), GSAT with restarts after n steps (circle) and adiabatic search with j = n² (triangle). The values are based on the same instances as in Fig. 1. The lines are exponential fits to the unstructured (dashed) and adiabatic (solid) methods.

Fig. 4 shows the heuristic, using at most n steps, gives low costs due to its fairly high values for P_soln shown in Fig. 1.
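For reference, the GSAT baseline of Fig. 4 can be sketched as below. This is my own reconstruction from the description in [20] together with the restart-after-n-steps policy stated in the figure caption; details such as tie-breaking are assumptions. It expects clauses as (variable, negated?) pairs, as in the instance-generation sketch above.

```python
import random

def gsat(clauses, n, max_flips, max_restarts, seed=None):
    """GSAT [20]: greedily flip the variable giving the largest decrease in the number
    of unsatisfied clauses, restarting from a fresh random assignment.
    Returns (assignment, steps_used) or (None, steps_used) on failure."""
    rng = random.Random(seed)

    def unsat(a):
        return sum(1 for cl in clauses if not any(a[v] != neg for v, neg in cl))

    steps = 0
    for _ in range(max_restarts):
        a = [rng.random() < 0.5 for _ in range(n)]
        for _ in range(max_flips):
            if unsat(a) == 0:
                return a, steps
            steps += 1
            # Score every single-variable flip and take the best (ties broken at random).
            scores = []
            for v in range(n):
                a[v] = not a[v]
                scores.append(unsat(a))
                a[v] = not a[v]
            best = min(scores)
            v = rng.choice([i for i, s in enumerate(scores) if s == best])
            a[v] = not a[v]
        if unsat(a) == 0:
            return a, steps
    return None, steps
```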
The constant-∆ scaling for the discrete adiabatic method also gives large P_soln values for j ≈ n. Thus both ∆ = 1/j and ∆ independent of j make better use of quantum coherence in the discrete formulation than the continuous adiabatic limit (with 1/j ≪ ∆ ≪ 1/n) for hard random 3-SAT. These behaviors are shown in Fig. 5.

Because these quantum methods and GSAT consist of a series of independent trials, they can be combined with amplitude amplification to give an additional quadratic performance improvement [29]. However, this is only a significant benefit when P_soln is fairly small, which is not the case for the heuristic and GSAT methods for these problem sizes.

For the adiabatic method, taking ρ(f) and τ(f) in Eq. (2) to vary according to g(f)² reduces costs [15, 22]. This concentrates steps at values of f close to the minimum gap. While g(f) is costly to evaluate for SAT instances, using average values of g(f) based on a sample of instances gives some benefit. E.g., for j = n², P_soln increases from around 0.4 shown in Fig. 1 to a range of 0.5 to 0.6, but this does not appear to reduce the cost's growth rate.

Similar improvement occurs with constant ∆. Optimizing τ and ρ separately for each step on a sample of instances gives values close to a cubic polynomial in f. Restricting attention to such polynomials, for a set of 100 n = 12 instances the best performance was with ∆ = 1.31275, ρ(f) = p(f), τ(f) = 1 − p(f) where p(f) = 1.92708 f − 2.66179 f² + 1.73471 f³. This cubic is similar to the functional form optimizing the adiabatic method for unstructured search [15, 22]. Fig. 5 shows the resulting cost reduction. Hence, tuning the algorithm to the problem ensemble is beneficial, as also suggested by a mean-field analysis of the heuristic [16].

FIG. 5: Log plot of median search cost vs. n for GSAT (circle), the heuristic method (diamond), both of which are also shown in Fig. 4, and two versions of the discrete adiabatic method: ∆ = 1.2 with linear phase functions (triangle) and the cubic polynomial variation with f (gray box) described in the text. The lines are exponential fits to GSAT (dashed) and the two discrete adiabatic quantum methods. The figure uses the same instances as Fig. 4.

The simulations also show these quantum algorithms have a large performance variance among instances with given n and m, and no single choice for ρ and τ is best for all problem instances. Thus portfolios [30] combining a variety of such choices can give further improvements.

IV. CONCLUSION

In summary, for random SAT, the adiabatic method improves on unstructured search and provides a general technique to exploit readily computed properties of hard search problems through the choice of Hamiltonians. However, nonadiabatic-limit algorithms require fewer steps, comparable to GSAT, and appear to have slower cost growth. As a caveat, small energy gaps appear to be associated with instances difficult to solve with both quantum and classical methods. Thus the simulation results presented here, based on fairly small problem sizes for which most instances have fairly large energy gaps, may not reveal the asymptotic scaling of the typical search cost for hard random SAT problems. Evaluating the behavior of these algorithms and, more generally, identifying better ways to use state costs in quantum algorithms remain open questions.

Quantum computers with only a moderate number of qubits could test algorithms beyond the range of simulators, and hence provide useful insights even if the problem sizes are still readily solved by conventional heuristics. Such studies could help address the question of whether, with suitable tuning based on readily evaluated average properties of search states, the ability to operate on the entire search space allows quantum computers to effectively exploit weak correlations among state costs in ways classical machines cannot.

Acknowledgments

I have benefited from discussions with Rob Schreiber and Wim van Dam. I thank Miles Deegan and the HP High Performance Computing Expertise Center for providing computational resources for the simulations.
APPENDIX A: DISCRETE ADIABATIC BEHAVIOR

When ∆ is held constant, the steps of Eq. (2) do not approximate the continuous evolution induced by H(f), and hence ψ^(h) does not closely follow the ground state of H(f) when T → ∞. Nevertheless, ψ^(h) does closely follow an eigenstate of the unitary operator involved in Eq. (2). This discrete version of the adiabatic theorem ensures good performance of the algorithm provided the continuous change in the eigenvector takes the initial ground state into the final one, rather than into some other eigenvector.

1. The Discrete Adiabatic Limit

Consider a smoothly changing sequence of unitary matrices U(f) defined for 0 ≤ f ≤ 1 and vectors ψ^(h+1) = U(f) ψ^(h) with f = h/j for h = 0, ..., j − 1. Let e^{−iθ_r(f)} and ê_r(f) be the rth eigenvalue and (normalized) eigenvector of U(f).

We start with ψ^(0) equal to the eigenvector ê_1(0) of U(0), which we assume to be nondegenerate for simplicity. Provided the difference between eigenvalues is bounded away from zero, for sufficiently large j, ψ^(j) will be close to an eigenvector of U(1). To see this, let ε = 1/j and expand ψ^(h) = Σ_r c_r(f) Λ_r(f) ê_r(f) in the eigenbasis of U(f), where

    Λ_r(h/j) ≡ exp( −i Σ_{k=0}^{h−1} θ_r(k/j) )

First-order perturbation theory gives the change in the c_r values during one step to be O(ε). After j steps, it might appear that these changes could build up to O(εj) = O(1). However, this is not the case due to the rapid variation in phases when j is large. Specifically, the changes in coefficients for r ≠ 1 are

    dc_r/df = P_{1,r}(f) Φ_r(f)        (A1)

where

    P_{s,r}(h/j) ≡ e^{−ijΘ_{s,r}(f)},    Θ_{s,r}(f) ≡ (1/j) Σ_{k=0}^{h−1} ( θ_s(k/j) − θ_r(k/j) )

and

    Φ_r ≡ ⟨ê_r| dU/df |ê_1⟩ / ( e^{−iθ_r} − e^{−iθ_1} )

Since c_r(0) = 0, Eq. (A1) gives

    c_r(f) = ∫_0^f e^{−ijΘ_{1,r}(κ)} Φ_r(κ) dκ

As j increases, the integrand oscillates increasingly rapidly, so the integral goes to zero as j → ∞ by applying the Riemann-Lebesgue lemma, since dΘ_{1,r}/df = θ_1 − θ_r is nonzero and |Φ_r(f)| is bounded for all f and r ≠ 1, by the assumption of no level crossing. Hence c_r(f) → 0, so ψ^(j) approaches ê_1(1), up to an overall phase factor, as j → ∞.

2. An Example

An important caveat in applying this result to quantum algorithms is that while j → ∞ suffices to ensure ψ^(h) closely follows the evolution of an eigenvector of U(f), this evolution may not lead to the desired eigenvector of U(1), i.e., corresponding to solutions to the search problem. This is because the eigenvalues of U(f) lie on the unit circle in the complex plane and can "wrap around" as ∆ increases. Hence, in addition to ensuring the eigenvalue gap does not get too small, good performance also requires selecting appropriate ∆. Alternatively, one could start from a different eigenvector of U(0), which would be useful if one could determine which eigenvector maps to the solutions.

One guarantee of avoiding this problem is that none of the eigenvalues of U(f) wrap around the unit circle, i.e., ∆||H|| → 0, corresponding to the continuous adiabatic limit. Simulations show performance remains good for moderate values of j even if ∆ does not go to zero, provided ∆ is below some threshold value. For hard random 3-SAT problems with j ∝ n, this threshold appears to be somewhat larger than 1.

To illustrate these remarks, consider the n = 1 example

    H^(0) = (1/2) [  1  −1 ]        H^(c) = [ 0  0 ]
                  [ −1   1 ]                 [ 0  2 ]

so U(f) = e^{−iH^(0)(1−f)∆} e^{−iH^(c)f∆}. Fig. 6 shows the behavior of the two eigenvalues of U(f) for two values of ∆. For ∆ = 4 the initial ground state eigenvector, with eigenvalue 1, evolves into the 2nd eigenvector of U(1) rather than the eigenvector corresponding to the ground state of H^(c).

FIG. 6: Energy values θ_r(f) corresponding to the two eigenvalues of U(f) vs. f for ∆ = 1 (gray) and 4 (black). The values are defined only up to a multiple of 2π, and we take −π < θ ≤ π. The ground states of H^(0) and H^(c) correspond to θ(0) = 0 and θ(1) = 0, respectively. The values for ∆ = 1 are close to those of the combined Hamiltonian H(f) = (1 − f)H^(0) + fH^(c). However, the ∆ = 4 values do not remain close to those of H(f)∆.

Fig. 7 shows the consequence of this behavior: when ∆ is too large, ψ^(h) follows the evolving eigenvector to the wrong state when f = 1, giving P_soln → 0 as j → ∞. As another observation from this figure, P_soln(j) exhibits oscillations (though they are quite small for ∆ = 1). With appropriate phase choices, these oscillations can be quite large, allowing P_soln to approach 1 with only a modest number of steps, even when P_soln approaches 0 for larger j. This observation is the basis of the heuristic method.

FIG. 7: P_soln vs. j for ∆ = 1 (gray) and 4 (black). For comparison, the dashed curve uses ∆ = 1/√j corresponding to the continuous adiabatic limit.
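This two-level example is easy to reproduce numerically. The short sketch below (my own code) builds U(f) for ∆ = 1 and 4 and prints the phases θ_r(f), so the wrap-around of the eigenvalues at the larger ∆ is directly visible.

```python
import numpy as np
from scipy.linalg import expm

# The n = 1 example of the appendix: H^(0) = [[1,-1],[-1,1]]/2, H^(c) = diag(0, 2).
h0 = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])
hc = np.diag([0.0, 2.0])

def thetas(f, delta):
    """Phases theta_r(f) of the eigenvalues e^{-i theta_r} of
    U(f) = exp(-i H0 (1-f) delta) exp(-i Hc f delta), folded into a 2*pi window around 0."""
    U = expm(-1j * h0 * (1 - f) * delta) @ expm(-1j * hc * f * delta)
    ang = -np.angle(np.linalg.eigvals(U))          # eigenvalue = e^{-i theta}
    return np.sort(((ang + np.pi) % (2 * np.pi)) - np.pi)

for delta in (1.0, 4.0):
    print(f"delta = {delta}")
    for f in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"  f = {f:.2f}  theta = {np.round(thetas(f, delta), 3)}")
```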
To see the consequence of this behavior for search, Fig. 8 compares the behavior of several search methods. In this case ∆ = 1 is sufficiently large that the initial eigenstate of the unitary operator evolves into a nonsolution eigenstate. Thus as the number of steps increases, the probability to find a solution goes to zero, as with the large ∆ case in Fig. 7. Nevertheless, for smaller j, most of the amplitude "crosses" the gap to another eigenstate that does evolve to a solution state. Consequently, this discrete adiabatic method gives lower overall search cost, using a moderate number of steps, than the continuous adiabatic method (which has P_soln → 1 as j → ∞). By contrast, for ∆ larger than 2 or so P_soln always remains small.

FIG. 8: P_soln vs. j for several search methods solving a 20-variable 3-SAT instance with 85 clauses and 5 solutions, the same instance used in Fig. 2. The gray curve is the discrete adiabatic method with ∆ = 1, the thick black curve is for ∆ = 1/√j corresponding to the continuous adiabatic limit. For comparison, the thin black curve is the heuristic, with ∆ = 1/j, and the dashed curve is for unstructured search (showing only the first period of its sinusoidal oscillation on this log-log plot).

[1] D. Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. Proc. R. Soc. London A, 400:97–117, 1985.
[2] David P. DiVincenzo. Quantum computation. Science, 270:255–261, 1995.
[3] Richard P. Feynman. Feynman Lectures on Computation. Addison-Wesley, Reading, MA, 1996.
[4] Andrew Steane. Quantum computing. Reports on Progress in Physics, 61:117–173, 1998.
[5] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, 1979.
[6] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh V. Vazirani. Strengths and weaknesses of quantum computing. SIAM Journal on Computing, 26:1510–1523, 1997.
[7] Edward Farhi et al. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292:472–476, 2001.
[8] V. N. Smelyanskiy, U. V. Toussaint, and D. A. Timucin. Simulations of the adiabatic quantum optimization for the set partition problem. Los Alamos preprint quant-ph/0112143, 2002.
[9] Giuseppe E. Santoro et al. Theory of quantum annealing of an Ising spin glass. Science, 295:2427–2430, 2002.
[10] Peter Cheeseman, Bob Kanefsky, and William M. Taylor. Where the really hard problems are. In J. Mylopoulos and R. Reiter, editors, Proceedings of IJCAI91, pages 331–337, San Mateo, CA, 1991. Morgan Kaufmann.
[11] Scott Kirkpatrick and Bart Selman. Critical behavior in the satisfiability of random boolean expressions. Science, 264:1297–1301, 1994.
[12] Tad Hogg, Bernardo A. Huberman, and Colin P. Williams, editors. Frontiers in Problem Solving: Phase Transitions and Complexity, volume 81, Amsterdam, 1996. Elsevier. Special issue of Artificial Intelligence.
[13] Remi Monasson, Riccardo Zecchina, Scott Kirkpatrick, Bart Selman, and Lidror Troyansky. Determining computational complexity from characteristic "phase transitions". Nature, 400:133–137, 1999.
[14] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. Quantum computation by adiabatic evolution. Technical Report MIT-CTP-2936, MIT, Jan. 2000.
[15] Wim van Dam, Michele Mosca, and Umesh Vazirani. How powerful is adiabatic quantum computation? In Proc. of the 42nd Annual Symposium on Foundations of Computer Science (FOCS2001), pages 279–287. IEEE, 2001.
[16] Tad Hogg. Quantum search heuristics. Physical Review A, 61:052311, 2000. Preprint at publish.aps.org/eprint/gateway/eplist/aps1999oct19 002.
Preprint at pub- [10] PeterCheeseman,BobKanefsky,andWilliamM.Taylor. lish.aps.org/eprint/gateway/eplist/aps1999oct19 002. 8 [17] Tad Hogg. Solving random satisfiability problems [25] J. K. L. MacDonald. Successive approximations by with quantum computers. Los Alamos preprint the Rayleigh-Ritz variation method. Physical Review, quant-ph/0104048, 2001. 43:830–833, 1933. [18] George W. Snedecor and William G. Cochran. Statisti- [26] Per-OlovLowdin. Studiesinperturbationtheory: Lower cal Methods. Iowa State Univ. Press, Ames, Iowa, 6th bounds to energy eigenvalues in perturbation-theory edition, 1967. ground state. Physical Review, 139:A357–A372, 1965. [19] R.B. Lehoucq,D. C. Sorensen, and C. Yang. ARPACK [27] Lov K. Grover. Quantum mechanics helps in search- User’s Guide: Solution of Large-Scale Eigenvalue Prob- ing for a needle in a haystack. Physical Review lems with Implicitly Restarted Arnoldi Methods. SIAM, Letters, 79:325–328, 1997. Los Alamos preprint Philadelphia, 1998. quant-ph/9706033. [20] Bart Selman, Hector Levesque, and David Mitchell. A [28] Michel Boyer, Gilles Brassard, Peter Hoyer, and Alain new method for solving hard satisfiability problems. In Tapp. Tightboundsonquantumsearching. InT.Toffoli Proc. of the 10th Natl. Conf. on Artificial Intelligence et al., editors, Proc. of the Workshop on Physics and (AAAI92),pages440–446, MenloPark,CA,1992.AAAI Computation (PhysComp96), pages 36–43, Cambridge, Press. MA, 1996. New England Complex SystemsInstitute. [21] J. Frank, P. Cheeseman, and J. Stutz. When gravity [29] Gilles Brassard, Peter Hoyer, and Alain Tapp. Quan- fails: Local search topology. J. of Artificial Intelligence tum counting. In K. Larsen, editor, Proc. of 25th Intl. Research, 7:249–281, 1997. Colloquium on Automata, Languages, and Programming [22] JeremieRolandandNicolasJ.Cerf. Quantumsearchby (ICALP98), pages 820–831, Berlin, 1998. Springer. Los local adiabatic evolution. Physical Review A,65:042308, Alamos preprint quant-ph/9805082. 2002. Los Alamos preprintquant-ph/0107015. [30] Sebastian M. Maurer, Tad Hogg, and Bernardo A. Hu- [23] S. F. Edwards and Raymund C. Jones. The eigenvalue berman. Portfolios of quantum algorithms. Physical spectrum of a large symmetric random matrix. J. Phys. Review Letters, 87:257901, 2001. Los Alamos preprint A: Math. Gen., 9(10):1595–1603, 1976. quant-ph/0105071. [24] Z. Furedi and K. Komlos. The eigenvalues of random symmetric matrices. Combinatorica, 1:233–241, 1981.
