Private Incremental Regression

Shiva Prasad Kasiviswanathan∗   Kobbi Nissim†   Hongxia Jin‡

arXiv:1701.01093v1 [cs.DS] 4 Jan 2017

∗ Samsung Research America, [email protected]
† Georgetown University, [email protected]. Supported by NSF grant CNS1565387 and grants from the Sloan Foundation.
‡ Samsung Research America, [email protected]

Abstract

Data is continuously generated by modern data sources, and a recent challenge in machine learning has been to develop techniques that perform well in an incremental (streaming) setting. A variety of offline machine learning tasks are known to be feasible under differential privacy, where generic constructions exist that, given a large enough input sample, perform tasks such as PAC learning, Empirical Risk Minimization (ERM), regression, etc. In this paper, we investigate the problem of private machine learning, where, as is common in practice, the data is not given at once, but rather arrives incrementally over time.

We introduce the problems of private incremental ERM and private incremental regression, where the general goal is to always maintain a good empirical risk minimizer for the history observed, under differential privacy. Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on a simple idea of invoking the private batch ERM procedure at some regular time intervals. We take this construction as a baseline for comparison. We then provide two mechanisms for the private incremental regression problem. Our first mechanism is based on privately constructing a noisy incremental gradient function, which is then used in a modified projected gradient procedure at every timestep. This mechanism has an excess empirical risk of ≈ √d, where d is the dimensionality of the data. While from the results of Bassily et al. [2] this bound is tight in the worst-case, we show that certain geometric properties of the input and constraint set can be used to derive significantly better results for certain interesting regression problems. Our second mechanism, which achieves this, is based on the idea of projecting the data to a lower dimensional space using random projections, and then adding privacy noise in this low dimensional space. The mechanism overcomes the issues of adaptivity inherent with the use of random projections in online streams, and uses recent developments in high-dimensional estimation to achieve an excess empirical risk bound of ≈ T^{1/3} W^{2/3}, where T is the length of the stream and W is the sum of the Gaussian widths of the input domain and the constraint set that we optimize over.

1 Introduction

Most modern data such as documents, images, social media data, sensor data, and mobile data naturally arrive in a streaming fashion, giving rise to the challenge of incremental machine learning, where the goal is to build and publish a model that evolves as data arrives. Learning algorithms are frequently run on sensitive data, such as location information in a mobile setting, and the results of such analyses could leak sensitive information. For example, Kasiviswanathan et al. [32] show how the results of many convex ERM problems can be combined to carry out reconstruction attacks in the spirit of Dinur and Nissim [11]. Given this, a natural direction to explore is whether we can carry out incremental machine learning without leaking any significant information about individual entries in the data. For example, a data scientist might want to continuously update the regression parameter of a linear model built on a stream of user profile data gathered from an ongoing survey, but these updates should not reveal whether any one person participated in the survey or not.

Differential privacy [15] is a rigorous notion of privacy that is now widely studied in computer science and statistics. Intuitively, differential privacy requires that datasets differing in only one entry induce similar distributions on the output of a (randomized) algorithm.
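To make this intuition concrete, here is a minimal sketch (our illustration, not from the paper) of releasing a single sum under ǫ-differential privacy via the Laplace mechanism; the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_sum(data, eps, bound=1.0):
    """Release sum(data) under eps-differential privacy via the Laplace
    mechanism. Changing one entry moves the true sum by at most 2*bound
    (the L1 sensitivity), so Laplace noise of scale 2*bound/eps suffices."""
    data = np.clip(data, -bound, bound)
    return np.sum(data) + rng.laplace(scale=2 * bound / eps)

d1 = [0.2, -0.7, 0.5]   # two "neighboring" datasets that
d2 = [0.2, -0.7, 0.9]   # differ in a single entry
# The output densities on d1 and d2 are within a factor e^eps of each
# other pointwise, so any observed output "explains" both datasets.
print(private_sum(d1, eps=0.5), private_sum(d2, eps=0.5))
```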
One of the strengths of differential privacy comes from the large variety of machine learning tasks that it allows. Good generic constructions exist for tasks such as PAC learning [4, 31] and Empirical Risk Minimization [41, 34, 25, 26, 48, 27, 2, 12, 51, 47]. These constructions, however, are typically focused on the batch (offline) setting, where information is first collected and then analyzed. Considering an incremental setting, it is natural to ask whether these tasks can still be performed with high accuracy, under differential privacy.

In this paper, we introduce the problem of private incremental empirical risk minimization (ERM) and provide algorithms for this new setting. Our particular focus will be on the problem of private incremental linear regression. Let us start with a description of the traditional batch convex ERM framework. Given a dataset and a constraint space C, the goal in ERM is to pick a θ ∈ C that minimizes the empirical error (risk). Formally, given n datapoints z_1, ..., z_n from some domain Z, and a closed, convex set C ⊆ R^d, consider the optimization problem:

    min_{θ ∈ C} J(θ; z_1, ..., z_n),  where  J(θ; z_1, ..., z_n) = Σ_{i=1}^n ℓ(θ; z_i).    (1)

The loss function J : C × Z^n → R measures the fit of θ ∈ C to the given data z_1, ..., z_n; the function ℓ : C × Z → R is the loss associated with a single datapoint and is assumed to be convex in the first parameter θ for every z ∈ Z. It is common to assume that the loss function has certain properties, e.g., is positive valued. The M-estimator (true empirical risk minimizer) θ̂ associated with a given function J(θ; z_1, ..., z_n) ≥ 0 is defined as:

    θ̂ ∈ argmin_{θ ∈ C} J(θ; z_1, ..., z_n) = argmin_{θ ∈ C} Σ_{i=1}^n ℓ(θ; z_i).

[Footnote 1: This formulation also captures regularized ERM, in which an additional convex function R(θ) is added to the loss function to penalize certain types of solutions, e.g., to "penalize" for the "complexity" of θ. The loss function J(θ; z_1, ..., z_n) then equals Σ_{i=1}^n ℓ(θ; z_i) + R(θ), which is the same as replacing ℓ(θ; z_i) by ℓ(θ; z_i) + R(θ)/n in (1).]

This type of program captures a variety of empirical risk minimization (ERM) problems, e.g., the MLE (Maximum Likelihood Estimator) for linear regression is captured by setting ℓ(θ; z) = (y − ⟨x, θ⟩)² in (1), where z = (x, y) for x ∈ R^d and y ∈ R. Similarly, the MLE for logistic regression is captured by setting ℓ(θ; z) = ln(1 + exp(−y⟨x, θ⟩)). Another common example is the support vector machine (SVM), where ℓ(θ; z) = hinge(y⟨x, θ⟩), with hinge(a) = 1 − a if a ≤ 1, and 0 otherwise.

The main focus of this paper will be on the particularly important ERM problem of linear regression. Linear regression is a popular statistical technique that is commonly used to model the relationship between the outcome (label) and the explanatory variables (covariates). Informally, in linear regression, given n covariate-response pairs (x_1, y_1), ..., (x_n, y_n) ∈ R^d × R, we wish to find a (regression) parameter vector θ̂ such that ⟨x_i, θ̂⟩ ≈ y_i for most i's. Specifically, let y = (y_1, ..., y_n) ∈ R^n denote the vector of responses, and let X ∈ R^{n×d} be the design matrix where x_i^⊤ is the ith row. Consider the linear model y = Xθ⋆ + w, where w is the noise vector; the goal in linear regression is to estimate the unknown regression vector θ⋆. Assuming that the noise vector w = (w_1, ..., w_n) follows a (sub)Gaussian distribution, estimating the vector θ⋆ amounts to solving the "ordinary least squares" (OLS) problem:

    θ̂ ∈ argmin_θ Σ_{i=1}^n (y_i − ⟨x_i, θ⟩)².

Typically, for additional guarantees such as sparsity, stability, etc., θ is constrained to be from a convex set C ⊂ R^d. Popular choices of C include the L_1-ball (referred to as Lasso regression) and the L_2-ball (referred to as Ridge regression). In an incremental setting, the (x_i, y_i)'s arrive over time, and the goal in incremental linear regression is to maintain over time (an estimate of) the regression parameter. We provide a more detailed background on linear regression in Appendix A.1.
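As a concrete (non-private) reference point for the objective in (1), here is a minimal numpy sketch of the batch least-squares ERM problem; the helper names are ours, and the constraint set C is ignored for simplicity.

```python
import numpy as np

def squared_loss(theta, X, y):
    """Empirical risk J(theta; z_1..z_n) = sum_i (y_i - <x_i, theta>)^2."""
    r = y - X @ theta
    return float(r @ r)

def ols(X, y):
    """Unconstrained minimizer: the ordinary least squares solution."""
    # lstsq also handles rank-deficient design matrices
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta_hat

# Example: n = 100 points in d = 5 dimensions with Gaussian noise w
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
theta_star = rng.normal(size=5)
y = X @ theta_star + 0.1 * rng.normal(size=100)
# The OLS risk can be no larger than the risk of the true parameter
print(squared_loss(ols(X, y), X, y) <= squared_loss(theta_star, X, y))  # True
```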
Incremental Setting. In an incremental setting the data arrives in a stream at discrete time intervals. The incremental setting is a variant of the traditional batch setting, capturing the fact that modern data is rarely collected at one single time, and more commonly data gathering and analysis may be interleaved. An incremental algorithm is modeled as follows: at each timestep the algorithm receives an input from the stream, computes, and produces outputs. Typically, constraints are placed on the algorithm in terms of the availability of some computational resource (such as memory or computation time). In this paper, the challenge in this setting comes from the differential privacy constraint, because frequent releases about the data can lead to privacy loss (see, e.g., [11, 32]).

We focus on the incremental ERM problem. In this problem setting, the data z_1, ..., z_T ∈ Z arrives one point at each timestep t ∈ {1, ..., T}. The goal of an incremental ERM algorithm is to release, at each timestep t, an estimator that minimizes the risk measured on the data z_1, ..., z_t. In more concrete terms, the goal is to output θ̂_t at every t ∈ {1, ..., T}, where:

    Incremental ERM:  θ̂_t ∈ argmin_{θ ∈ C} Σ_{i=1}^t ℓ(θ; z_i).

The goal of a private incremental ERM algorithm is to (differentially) privately estimate θ̂_t at every t ∈ {1, ..., T}. [Footnote 2: A point to note in the above description: while we have the privacy constraint, we have placed no computational constraints on the algorithm. In particular, the above description also allows algorithms that at time t use the whole input history z_1, ..., z_t. However, as we will discuss in Sections 4 and 5, our proposed approaches for private incremental regression are also efficient in terms of their resource requirements.] In this paper, we develop the first private incremental ERM algorithms.

There is a long line of work on designing differentially private algorithms for empirical risk minimization problems in the batch setting [41, 34, 26, 48, 27, 2, 12, 51, 47]. A naive approach to transform the existing batch techniques to work in the incremental model is to use them to recompute the outcome after each datapoint's arrival. However, for achieving an overall fixed level of differential privacy, this would result in an unsatisfactory loss in terms of utility. Precise statements can be obtained using composition properties of differential privacy (as in Theorem A.4), but informally, if a differentially private algorithm is executed T times on the same input, then the privacy parameter (ǫ in Definition 4) degrades by a factor of ≈ √T, and this affects the overall utility of this approach, as the utility bounds typically depend inversely on the privacy parameter. Therefore, the above naive approach will suffer an additional multiplicative factor of ≈ √T over the risk bounds obtained for the batch case. Our goal is to obtain risk bounds in the incremental setting that are comparable to those in the batch setting, i.e., bounds that do not suffer from this additional penalty of √T (a code sketch of this naive baseline appears after the next paragraph).

Generalization. For the kind of problems we consider in this paper, if the process generating the data satisfies the conditions for generalization (e.g., if the data stream contains datapoints sampled independently and identically from an unknown distribution), the incremental ERM would converge to the true risk on the distribution (via uniform convergence and other ideas; refer to [53, 43, 2] for more details). In this case, the model learned in an incremental fashion will have good predictive accuracy on unseen arriving data. If, however, the data does not satisfy these conditions, then θ̂_t can be viewed as a "summarizer" for the data seen so far. Generating these summaries could also be useful in many applications, e.g., the regression parameter can be used to explain the associations between the outcome (y_i's) and the covariates (x_i's). These associations are regularly used in domains such as public health, social sciences, and biological studies to understand whether specific variables are important (e.g., statistically significant) or unimportant predictors of the outcome. In practice, these associations would need to be constantly re-evaluated over time as new data arrives.
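The naive baseline described above can be sketched as follows. This is our illustration: `batch_private_erm` is a placeholder for any private batch solver, and the per-invocation budget follows the usual advanced-composition heuristic rather than the exact constants of Theorem A.4.

```python
import numpy as np

def naive_private_incremental_erm(stream, batch_private_erm, eps, delta):
    """Baseline: rerun a private batch ERM solver after every arrival.
    Every point is then touched T times, so keeping the overall privacy
    at (eps, delta) forces each run to use a budget of roughly
    eps / sqrt(T) -- the ~sqrt(T) utility penalty discussed in the text."""
    T = len(stream)
    eps_step = eps / (2 * np.sqrt(2 * T * np.log(2 / delta)))
    delta_step = delta / (2 * T)
    history, outputs = [], []
    for z in stream:
        history.append(z)
        outputs.append(batch_private_erm(list(history), eps_step, delta_step))
    return outputs
```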
Comparison to Online Learning. Online learning (or sequential prediction) is another well-studied setting for learning when the data arrives in a sequential order. There are differences between the goals of incremental and online learning. In online ERM learning, the aim is to provide an estimator θ̃_t that can be used for future prediction. More concretely, at time t, an online learner chooses θ̃_t, then the adversary picks z_t, and the learner suffers a loss of ℓ(θ̃_t; z_t). Online learners try to minimize regret, defined as the difference between the cumulative loss of the learner and the cumulative loss of the best fixed decision at the end of T rounds [42]. In an incremental setting, the algorithm gets to observe z_t before committing to the estimator, and the goal is to ensure that at each timestep t the algorithm maintains a single estimator that minimizes the risk on the history. There are strong lower bounds on the achievable regret for online ERM learning. In particular, even under the differential privacy constraint, the excess risk bounds on incremental learning that we obtain here have better dependence on T (stream length) than the regret lower bounds for online ERM. The incremental learning model should be viewed as a variant of the batch (offline) learning model, where the data arrives over time, and the goal is to output intermediate results.

1.1 Our Results

Before stating our results, let us define how we measure the success of our algorithms. As is standard in ERM, we measure the quality of our algorithms by the worst-case (over inputs) excess (empirical) risk (defined as the difference from the minimum possible risk over a function class). In an incremental setting, we want this excess risk to be always small for any sequence of inputs. The following definition captures this requirement. All our algorithms are randomized; they take a confidence parameter β and produce bounds that hold with probability at least 1 − β.

Definition 1. A (randomized) incremental algorithm is an (α, β)-estimator for loss function J if, with probability at least 1 − β over the coin flips of the algorithm, for each t ∈ {1, ..., T}, after processing a prefix of the stream of length t, it generates an output θ_t ∈ C that satisfies the following bound on the excess (empirical) risk:

    J(θ_t; z_1, ..., z_t) − J(θ̂_t; z_1, ..., z_t) ≤ α,

where θ̂_t ∈ argmin_{θ ∈ C} J(θ; z_1, ..., z_t) is the true empirical risk minimizer. Here, α is referred to as the bound on the excess risk.
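Definition 1 can be checked empirically for any candidate incremental algorithm. The following small sketch (our own, for the least-squares loss, ignoring the constraint set C for simplicity) computes the excess risk of a sequence of outputs over every prefix of the stream.

```python
import numpy as np

def excess_risks(thetas, xs, ys):
    """For each prefix t, return J(theta_t; z_1..z_t) - J(theta_hat_t; z_1..z_t),
    where theta_hat_t is the (unconstrained) least-squares minimizer."""
    out = []
    for t in range(1, len(xs) + 1):
        X = np.asarray(xs[:t], dtype=float)
        y = np.asarray(ys[:t], dtype=float)
        theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        risk = lambda th: float(np.sum((y - X @ th) ** 2))
        out.append(risk(thetas[t - 1]) - risk(theta_hat))
    return out

# An algorithm is an (alpha, beta)-estimator if, with probability >= 1 - beta,
# max(excess_risks(...)) <= alpha for every input stream.
```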
Table 1: Summary of our results, stating the bound on the excess (empirical) risk (α in Definition 1) under (ǫ, δ)-differential privacy for each incremental ERM problem objective:

  1. Convex Function (using a generic transformation):           (Td)^{1/3} log^{5/2}(1/δ) / ǫ^{2/3}
  2. Strongly Convex Function (using a generic transformation):  √d log⁴(1/δ) / (ν^{1/2} ǫ)
  3. Linear Regression — Mech 1:  √d √(log(1/δ)) / ǫ;
                         Mech 2:  T^{1/3} W^{2/3} √(log(1/δ)) / ǫ + T^{1/6} W^{1/3} √OPT + T^{1/4} W^{1/2} OPT^{1/4}
                         (where W = w(X) + w(C) and OPT is the minimum empirical risk at timestep T)

The stream length is T and d is the dimensionality of the data (the number of covariates in the case of regression). The bounds are stated for the setting where both the Lipschitz constant of the function and ‖C‖ are scaled to 1. The bounds ignore polylog factors in d and T, and the value in the table gives the bound when it is below T, i.e., the bounds should be read as min{T, ·}. ν is the strong convexity parameter. For the regression problem, X is the domain from which the inputs (covariates) are drawn and C is the constraint space. OPT stands for the minimum empirical risk at time T. The exact results are provided in Theorems 3.1, 4.2, and 5.7, respectively.

In this paper, we propose incremental ERM algorithms that provide a differential privacy guarantee on data streams. Informally, differential privacy requires that the output of a data analysis mechanism is not overly affected by any single entry in the input dataset. In the case of the incremental setting, we insist that this guarantee holds at each timestep for all outputs produced up to that timestep (precise definition in Section 2). This would imply that an adversary is unable to determine whether a particular datapoint was present or not in the input stream by observing the output of the algorithm over time. Two parameters, ǫ and δ, control the level of privacy. Very roughly, ǫ is an upper bound on the amount of influence an individual datapoint has on the outcome, and δ is the probability that this bound fails to hold (for a precise interpretation, refer to [30]), so the definition becomes more stringent as ǫ, δ → 0. Therefore, while the parameters α, β measure the accuracy of an incremental algorithm, the parameters ǫ, δ represent its privacy risk. Our private incremental algorithms take ǫ, δ as parameters and satisfy: a) the differential privacy constraint with parameters ǫ, δ, and b) the (α, β)-estimator property (Definition 1) for every β > 0 and some α (the parameter that the algorithm tries to minimize).

There is a trivial differentially private incremental ERM algorithm that completely ignores the input and outputs at every t ∈ {1, ..., T} any θ ∈ C (this scheme is private as the output is always independent of the input). The excess risk of this algorithm is at most 2TL‖C‖, where L is the Lipschitz constant of ℓ (Definition 8) and ‖C‖ is the maximum attained norm in the convex set C (Definition 2). [Footnote 3: This follows as in each timestep t ∈ {1, ..., T}, for any θ ∈ C, J(θ; z_1, ..., z_t) − J(θ̂_t; z_1, ..., z_t) ≤ tL‖θ − θ̂_t‖ ≤ tL(‖θ‖ + ‖θ̂_t‖) ≤ 2tL‖C‖ ≤ 2TL‖C‖.] All bounds presented in this paper, as is also true for all other existing results in the private ERM literature, are only interesting in the regime where they are less than this trivial bound.

For the purposes of this section, we make some simplifying assumptions and omit dependence on all but the key variables (dimension d and stream length T). [Footnote 4: This includes the parameters ǫ, δ, β, ‖C‖, and the Lipschitz constant of the loss function.] Slightly more detailed bounds are stated in Table 1. All our algorithms run in time polynomial in d and T (exact bounds depend on the time needed for Euclidean projection onto the constraint set, which will differ based on the constraint set). Additionally, our private incremental regression algorithms, which utilize the Tree Mechanism of [16, 7], have a space requirement whose dependence on the stream length T is only logarithmic.

(1) A Generic Transformation. A natural first question is whether non-trivial private ERM algorithms exist in general for the incremental setting. Our first contribution is to answer this question in the affirmative: we present a simple generic transformation of private batch ERM algorithms into private incremental ERM algorithms. The construction idea is simple: rather than invoking the batch ERM algorithm every timestep, the batch ERM algorithm is invoked every τ timesteps, where τ is chosen to balance the excess risk factor coming from the stale risk minimizer (because of inaction) with the excess risk factor coming from the increased privacy noise due to reuse of the data. Using this idea along with recent results of Bassily et al. [2] for private batch ERM, we obtain an excess risk bound (α in Definition 1) of Õ(min{(Td)^{1/3}, T}). [Footnote 5: For simplicity of exposition, the Õ(·) notation hides factors polynomial in log T and log d.] Using this same framework, we also show that when the loss function is strongly convex (Definition 9), the excess risk bound can be improved to Õ(min{√d, T}).
A follow-up question is: how much worse is this private incremental ERM risk bound compared to the best known private batch ERM risk bound (with a sample size of T datapoints)? In the batch setting, the results of Bassily et al. [2] establish that for any convex ERM problem it is possible to achieve an excess risk bound of Õ(min{√d, T}) (which is also tight in general). Therefore, our transformation from the batch to the incremental setting causes the excess risk to increase by at most a factor of ≈ max{T^{1/3}/d^{1/6}, 1}. Note that even for a low-dimensional setting (small d case), the factor increase in excess risk of ≈ T^{1/3} (as now max{T^{1/3}/d^{1/6}, 1} ≈ T^{1/3}) is much smaller than the factor increase of ≈ T^{1/2} for the earlier described naive approach based on using a private batch ERM algorithm at every timestep. The situation only improves for higher dimensional data.

(2) Private Incremental Linear Regression Using the Tree Mechanism. We show that we can improve on the generic construction from (1) for the important problem of linear regression. We do so by introducing the notion of a private gradient function (Definition 5) that allows differentially private evaluation of the gradient at any θ ∈ C. [Footnote 6: Intuitively, this would imply that for any θ ∈ C, the output of a private gradient function cannot be used to distinguish two streams that are almost the same (differential privacy guarantee).] More formally, for any θ ∈ C, a private gradient function allows differentially private evaluation of the gradient at θ to within a small error (with high probability). Since the data arrives continually, our algorithm utilizes the Tree Mechanism of [16, 7] to continually update the private gradient function. Now, given access to a private gradient function, we can use any first-order convex optimization technique (such as projected gradient descent) to privately estimate the regression parameter. The idea is that since these optimization techniques operate by iteratively taking steps in the direction opposite to the gradient evaluated at the current point, we can use a private gradient function for evaluating all the required gradients. Using this, we design a private regression algorithm for the incremental setting that achieves an excess risk bound of Õ(min{√d, T}). It is easy to observe that, for private incremental linear regression, this result improves the bounds from the generic construction above for any choice of d, T (as min{√d, T} ≤ min{(Td)^{1/3}, T}). [Footnote 7: A linear regression instance typically does not satisfy strong convexity requirements.]

Ignoring polylog factors, this worst-case bound matches the lower bounds on the excess risk for squared-loss in the batch case [2], implying that this bound cannot be improved in general (an excess risk upper bound for a problem in the incremental setting trivially holds for the same problem in the batch setting).

(3) Private Incremental Linear Regression: Going Beyond the Worst-Case. The noise added in our previous solution (2) grows approximately as the square root of the input dimension (for sufficiently large T), which could be prohibitive for a high-dimensional input. While in a worst-case setup this, as we discussed above, seems unavoidable, we investigate whether certain geometric properties of the input/output space could be used to obtain better results in certain interesting scenarios.

A natural strategy for reducing the dependence of the excess risk bounds on d is to use dimensionality reduction techniques such as the Johnson-Lindenstrauss Transform (JLT): the server performing the computation can choose a random projection matrix Φ ∈ R^{m×d}, which is then used for projecting all the x_i's (covariates) onto a lower-dimensional space. The advantage is that, using the techniques from (2), one could privately minimize the excess risk in the projected subspace, and in doing so, the dependence on the dimension in the excess risk is reduced to ≈ √m (from ≈ √d). [Footnote 8: A point to note is that this use of random projections for linear regression is different from its typical use in prior work [57], where they are used not for reducing dimensionality but rather for reducing the number of samples used in the regression computation, and hence for improving running time.] However, there are two significant challenges in implementing this idea for incremental regression, both of which we overcome using geometric ideas; a sketch of the projection step appears below.
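A minimal sketch of the projection step in this JLT strategy (our illustration; the choice of m here is arbitrary, whereas in the actual mechanism m is dictated by the analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_projection(m, d):
    """Gaussian random projection Phi in R^{m x d}, scaled so that
    E[||Phi x||^2] = ||x||^2 for any fixed x."""
    return rng.normal(size=(m, d)) / np.sqrt(m)

d, m = 10_000, 100            # reduce dimension from d to m << d
Phi = make_projection(m, d)
x = rng.normal(size=d)
x /= np.linalg.norm(x)
# For x fixed *before* Phi is drawn, ||Phi x|| concentrates around ||x|| = 1.
# The adaptivity issue discussed next is that a stream may choose later
# x_i's after Phi is fixed (e.g., near the null space of Phi), breaking this.
print(np.linalg.norm(Phi @ x))
```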
The first challenge is that this only solves the problem in the projected subspace, whereas our goal is to produce an estimate of the true empirical risk minimizer. To achieve this, we would need to "lift" back the solution from the projected subspace to the original space. We do so by using recent developments in the problem of high-dimensional estimation with constraints [54]. The Gaussian width of the constraint space C plays an important role in this analysis. Gaussian width is a well-studied quantity in convex geometry that captures the geometric complexity of any set of vectors. [Footnote 9: For a set S ⊆ R^d, the Gaussian width is defined as w(S) = E_{g ∼ N(0,1)^d}[sup_{a ∈ S} ⟨a, g⟩].] The rough idea here is that a good estimation (lifting) can be done from few observations (small m) as long as the constraint set C has small Gaussian width. Many popular sets have low Gaussian width, e.g., the width of the L_1-ball B_1^d is Θ(√(log d)), and that of the set of all sparse vectors in R^d with at most k non-zero entries is Θ(√(k log(d/k))).

The second challenge comes from the incremental nature of input generation, because it allows for generation of x_i's after Φ is fixed. This is an issue because the guarantees of random projections, such as the JL transform, only hold if the inputs on which they are applied are chosen before choosing the transformation. For example, given a random projection matrix Φ ∈ R^{m×d} with m ≪ d, it is simple to generate x such that the norm of x is substantially different from the norm of Φx. [Footnote 10: Note that this issue arises independently of the differential privacy requirements, and holds even in a non-private incremental setting.] Again, to deal with this kind of adaptive choice of inputs, we rely on the geometric properties of the problem. In particular, we use Gordon's theorem, which states that one can embed a set of points S on a unit sphere into a (much) lower-dimensional space R^m using a Gaussian random matrix Φ such that sup_{a ∈ S} |‖Φa‖² − ‖a‖²| is small (with high probability), provided m is at least the square of the Gaussian width of S [21]. [Footnote 11: Recent results [5] have shown that other distributions for generating Φ also provide similar guarantees.] In a sense, w(S)² can be thought of as the "effective dimension" of S, so projecting the data onto an m ≈ w(S)² dimensional space suffices for guaranteeing the above condition.

Using the above geometric ideas, and the Tree Mechanism to incrementally construct the private gradient function (as in (2)), we present our second private algorithm for incremental regression, with an

    Õ(min{T^{1/3} W^{2/3} + T^{1/6} W^{1/3} √OPT + T^{1/4} W^{1/2} OPT^{1/4}, T})

excess risk bound, where W = w(X) + w(C), X ⊂ R^d is the domain from which the x_i's (covariates) are drawn, and OPT is the true minimum empirical risk at time T. As we discuss in Section 5, for many practically interesting regression instances, such as when X is a domain of sparse vectors, and C is bounded by an L_1-ball (as is the case for the popular Lasso regression) or is a polytope defined by polynomially (in the dimension) many vertices, W ≈ polylog(d), in which case the risk bound can be simplified to Õ(min{T^{1/3} + T^{1/6} √OPT + T^{1/4} OPT^{1/4}, T}). In most practical scenarios, when the covariates and the response satisfy a linear relationship, one would also expect OPT ≪ T. These bounds show that, for certain instances, it is possible to design differentially private risk minimizers in the incremental setting with excess risk that depends only poly-logarithmically on the dimensionality of the data, a desired feature in a high-dimensional setting.
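Since the Gaussian width (footnote 9) drives the choice m ≈ w(S)², it is useful that it can be estimated by simple Monte Carlo for concrete sets. A small sketch of ours for the L_1-ball, where the supremum has the closed form ‖g‖_∞:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_width_l1_ball(d, trials=1000):
    """Monte Carlo estimate of w(B_1^d) = E[sup_{||a||_1 <= 1} <a, g>].
    Over the L1 ball the supremum is attained at a signed basis vector,
    so sup <a, g> = ||g||_inf."""
    g = rng.normal(size=(trials, d))
    return np.abs(g).max(axis=1).mean()

d = 2000
w = gaussian_width_l1_ball(d)
print(w, np.sqrt(np.log(d)))  # same order: w(B_1^d) = Theta(sqrt(log d))
m = int(np.ceil(w ** 2))      # "effective dimension", m ~ w(S)^2
```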
Organization. In Section 1.2, we discuss some related work. In Section 2, we present some preliminaries. Our generic transformation from a private batch ERM algorithm to a private incremental ERM algorithm is given in Section 3. We present techniques that improve upon these bounds for the problem of private incremental regression in Sections 4 and 5. The appendices contain some proof details and supplementary material. In Appendix B, we analyze the convergence rate of a noisy projected gradient descent technique, and in Appendix C, we present the Tree Mechanism of [16, 7].

1.2 Related Work

Private ERM. Starting from the works of Chaudhuri et al. [8, 9], private convex ERM problems have been studied in various settings, including the low-dimensional setting [41, 34], the high-dimensional sparse regression setting [34, 48], the online learning setting [25, 49, 27, 37], the local privacy setting [12], and the interactive setting [26, 51].

Bassily et al. [2] presented algorithms that, for a general convex loss function ℓ(θ; z) which is 1-Lipschitz (Definition 8) for every z, achieve an expected excess risk of ≈ √d under (ǫ, δ)-differential privacy and ≈ d under ǫ-differential privacy (ignoring the dependence on other parameters for simplicity). [Footnote 12: Better risk bounds were achieved under strong convexity assumptions [2, 47].] We use their batch mechanisms in our generic construction to obtain risk bounds for incremental ERM problems (Theorem 3.1). They also showed that these bounds cannot be improved in general, even for the least-squares regression function. However, if the constraint space has low Gaussian width (such as with the L_1-ball), Talwar et al. [47, 46] recently showed that, under (ǫ, δ)-differential privacy, the above bound can be improved by exploiting the geometric properties of the constraint space. An analogous result under ǫ-differential privacy for the class of generalized linear functions (which includes linear and logistic regression) was recently obtained by Kasiviswanathan and Jin [29]. Our excess risk bounds based on Gaussian width (presented in Section 5) use a lifting procedure similar to that used by Kasiviswanathan and Jin [29]. All of the above algorithms operate in the batch setting, and ours is the first work dealing with private ERM problems in an incremental setting.

Private Online Convex Optimization. Differentially private algorithms have also been designed for a large class of online (convex optimization) learning problems, in both the full information and bandit settings [25, 49, 27]. Adapting the popular Follow the Approximate Leader framework [23], Smith and Thakurta [49] obtained regret bounds for private online learning with nearly optimal dependence on T (though the dependence on the dimensionality d in these results is much worse than the known lower bounds). As discussed earlier, incremental learning is a variant of batch learning, with goals different from those of online learning.

Private Incremental Algorithms. Dwork et al. [16] introduced the problem of counting under incremental (continual) observations. The goal is to monitor a stream of T bits and continually release a counter of the number of 1's that have been observed so far, under differential privacy. The elegant Tree Mechanism introduced by Dwork et al. [16] and Chan et al. [7] solves this problem, under ǫ-differential privacy, with error roughly log^{5/2} T. The versatility of this mechanism has been utilized in different ways in subsequent works [49, 17, 24]. We use the Tree Mechanism as a basic building block for computing the private gradient function incrementally. Dwork et al. [16] also achieve pan-privacy for their continual release (which means that the mechanism preserves differential privacy even when an adversary can observe snapshots of the mechanism's internal states), a property that we do not investigate in this paper.
Use of the JL Transform for Privacy. The use of the JL transform for achieving differential privacy with better utility has been well documented for a variety of computational tasks [58, 3, 33, 44, 52]. Blocki et al. [3] have shown that if Φ ∈ R^{m×n} is a Gaussian random matrix of appropriate dimension, then ΦX ∈ R^{m×d} is differentially private if the least singular value of the matrix X ∈ R^{n×d} is "sufficiently" large. The bound on the least singular value was recently improved by Sheffet [44]. Here the privacy comes as a result of the randomization inherent in the transform. However, these results require that the projection matrix be kept private, which is an issue in an incremental setting, where an adversary could learn about Φ over time. Kenthapadi et al. [33] use the Johnson-Lindenstrauss transform to publish a private sketch that enables estimation of the distance between users. Their main idea is based on projecting a d-dimensional user feature vector into a lower m-dimensional space by first applying a random Johnson-Lindenstrauss transform and then adding Gaussian noise to each entry of the resulting vector. None of these results deal with an incremental setting, where applying the JL transform is itself a challenge because of the adaptivity issues.

Traditional Streaming Algorithms. The literature on streaming algorithms is replete with various techniques that can solve linear regression and related problems in various streaming models of computation under various computational resource constraints. We refer the reader to the survey by Woodruff [57] for more details. However, incremental regression under differential privacy poses a different challenge than that faced by traditional streaming algorithms. The issue is that the solution (regression parameter) at each timestep relies on all the datapoints observed in the past, and frequent releases about earlier points can lead to privacy loss.

2 Preliminaries

Notation and Data Normalization. We denote [n] = {1, ..., n}. Vectors are in column-wise fashion, denoted by boldface letters. For a vector v, v^⊤ denotes its transpose, ‖v‖ its Euclidean (L_2-) norm, and ‖v‖_1 its L_1-norm. For a matrix M, ‖M‖ denotes its spectral norm, which equals its largest singular value, and ‖M‖_F its Frobenius norm. We use 0 to denote a d-dimensional vector of all zeros. The d-dimensional unit ball in the L_p-norm centered at the origin is denoted by B_p^d. I_d represents the d × d identity matrix. N(µ, Σ) denotes the Gaussian distribution with mean vector µ and covariance matrix Σ. For a variable n, we use poly(n) to denote a polynomial function of n and polylog(n) to denote poly(log(n)).

We assume all streams are of a fixed length T, which is known to the algorithm. We make this assumption for simplicity of the discussion. In fact, in our presented generic transformation for incremental ERM this assumption can be straightforwardly removed, whereas in the case of our algorithms for private incremental regression this assumption can be removed by using a simple trick introduced by Chan et al. [7]. [Footnote 13: Chan et al. [7] presented a scheme that provides a generic way for converting the Tree Mechanism, which requires prior knowledge of T, into a mechanism that does not. They also showed that this new mechanism (referred to as the Hybrid Mechanism) achieves asymptotically the same error guarantees as the Tree Mechanism. The same ideas work in our case too, and the asymptotic excess risk bounds are not affected.] For a stream Γ, we use Γ_t to denote the stream prefix of length t.

Throughout this paper, we use ℓ and L to indicate the least-squared loss on a single datapoint and on a collection of datapoints, respectively. Namely,

    ℓ(θ; (x, y)) = (y − ⟨x, θ⟩)²  and  L(θ; (x_1, y_1), ..., (x_n, y_n)) = Σ_{i=1}^n ℓ(θ; (x_i, y_i)).

In Appendix A, we review a few additional definitions related to convex functions and Gaussian concentration.

For a set of vectors, we define its diameter as the maximum attained norm in the set.

Definition 2 (Diameter of a Set). The diameter ‖C‖ of a closed set C ⊆ R^d is defined as ‖C‖ = sup_{θ ∈ C} ‖θ‖.
For improving the worst-case dependence on the dimension d, we exploit the geometric properties of the input and constraint space. We use the well-studied quantity of Gaussian width that captures the L_2-geometric complexity of a set S ⊆ R^d.

Definition 3 (Gaussian Width). Given a closed set S ⊆ R^d, its Gaussian width w(S) is defined as:

    w(S) = E_{g ∼ N(0,1)^d}[sup_{a ∈ S} ⟨a, g⟩].

In particular, w(S)² can be thought of as the "effective dimension" of S. Many popular convex sets have low Gaussian width, e.g., the width of both the unit L_1-ball in R^d (B_1^d) and the standard d-dimensional probability simplex equals Θ(√(log d)), and the width of any ball B_p^d for 1 ≤ p ≤ ∞ is ≈ d^{1−1/p}. For a set C contained in B_2^d, w(C) is always O(√d). Another prominent set with low Gaussian width is that made up of sparse vectors. For example, the set of all k-sparse vectors (with at most k non-zero entries) in R^d has Gaussian width Θ(√(k log(d/k))).

Differential Privacy on Streams. We will consider differential privacy on data streams [16]. A stream is a sequence of points from some domain set Z. We say that two streams Γ, Γ′ ∈ Z* of the same length are neighbors if there exist a datapoint z ∈ Γ and a z′ ∈ Z such that if we change z in Γ to z′ we get the stream Γ′. The result of an algorithm processing a stream is a sequence of outputs.

Definition 4 (Event-level differential privacy [15, 16]). Algorithm Alg is (ǫ, δ)-differentially private if for all neighboring streams Γ, Γ′ and for all sets of possible output sequences R ⊆ R^N, we have

    Pr[Alg(Γ) ∈ R] ≤ exp(ǫ) · Pr[Alg(Γ′) ∈ R] + δ,

where the probability is taken over the randomness of the algorithm. When δ = 0, the algorithm Alg is ǫ-differentially private. [Footnote 14: In the practice of differential privacy, we generally think of ǫ as a small non-negligible constant, and of δ as a parameter that is cryptographically small.]

We provide additional background on differential privacy, along with some techniques for achieving it, in Appendix A.2.

3 Private Incremental ERM: A Generic Mechanism

We present a generic transformation for converting any private batch ERM algorithm into a private incremental ERM algorithm. We take this construction as a baseline for comparison for our private incremental regression algorithms.

Mechanism PRIVINCERM describes this simple transformation. At every timestep t, Mechanism PRIVINCERM outputs θ_t^priv, a differentially private approximation of

    θ̂_t ∈ argmin_{θ ∈ C} J(θ; z_1, ..., z_t),  where  J(θ; z_1, ..., z_t) = Σ_{i=1}^t ℓ(θ; z_i).

The idea is to perform "relevant" computations only every τ timesteps, thereby ensuring that no z_i is used in more than T/τ invocations of the private batch ERM algorithm (for simplicity, assume that T is a multiple of τ). This idea is reminiscent of the mini-batch processing ideas commonly used in big data processing [6]. In Theorem 3.1, the parameter τ is set to balance the increase in excess risk due to the lack of updates on the estimator against the increase in excess risk due to the change in the privacy risk parameter ǫ (which arises from multiple interactions with the data).

Mechanism PRIVINCERM invokes a differentially private (batch) ERM algorithm for timesteps t which are a multiple of τ, and in all other timesteps it just outputs the result from the previous timestep. In Step 5 of Mechanism PRIVINCERM any differentially private batch ERM algorithm can be used, and this step dominates the time complexity of this mechanism. Here we present excess risk bounds obtained by invoking Mechanism PRIVINCERM with the differentially private ERM algorithms of Bassily et al. [2] and Talwar et al. [46].

Mechanism 1: PRIVINCERM (ǫ, δ)
  Input: A stream Γ = z_1, ..., z_T, where each z_t is from the domain Z ⊆ R^d, and τ ∈ N
  Output: A differentially private estimate of θ̂_t ∈ argmin_{θ ∈ C} Σ_{i=1}^t ℓ(θ; z_i) at every timestep t ∈ [T]
  1: Set ǫ′ ← ǫ / (2 √((2T/τ) ln(2/δ))) and δ′ ← δτ / (2T)
  2: θ_0^priv ← 0
  3: for all t ∈ [T] do
  4:   if t is a multiple of τ then
  5:     θ_t^priv ← output of an (ǫ′, δ′)-differentially private algorithm minimizing J(θ; z_1, ..., z_t)
  6:   else
  7:     θ_t^priv ← θ_{t−1}^priv
  8:   end
  9:   Return θ_t^priv
  10: end
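A runnable rendering of Mechanism PRIVINCERM (our sketch; `batch_private_erm` is a placeholder for the private batch solver used in Step 5, and the (ǫ′, δ′) setting copies Step 1):

```python
import numpy as np

def priv_inc_erm(stream, tau, eps, delta, batch_private_erm, d):
    """Mechanism PRIVINCERM sketch: invoke a private batch ERM solver only
    at timesteps that are multiples of tau; otherwise repeat the previous
    output. Each point is used in at most T/tau invocations."""
    T = len(stream)
    eps_p = eps / (2 * np.sqrt(2 * (T / tau) * np.log(2 / delta)))  # Step 1
    delta_p = delta * tau / (2 * T)                                 # Step 1
    theta = np.zeros(d)          # theta_0^priv
    outputs = []
    for t in range(1, T + 1):
        if t % tau == 0:         # Step 5: fresh private batch computation
            theta = batch_private_erm(stream[:t], eps_p, delta_p)
        outputs.append(theta)    # Steps 7-9: otherwise reuse stale estimate
    return outputs
```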
As mentioned earlier, the bounds of Bassily et al. [2] are tight in the worst-case. But as shown by Talwar et al. [46], if the constraint space C has a small Gaussian width, then these bounds can be improved. The bounds of Talwar et al. depend on the curvature constant C_ℓ of ℓ, defined as:

    C_ℓ = sup_{z ∈ Z; θ_a, θ_b ∈ C; l ∈ (0,1]; θ_c = θ_a + l(θ_b − θ_a)} (2/l²) (ℓ(θ_c; z) − ℓ(θ_a; z) − ⟨θ_c − θ_a, ∇ℓ(θ_a; z)⟩).

For linear regression, where ℓ(θ; z) = ℓ(θ; (x, y)) = (y − ⟨x, θ⟩)², with ‖x‖ ≤ 1 and |y| ≤ 1, it follows that C_ℓ ≤ ‖C‖² [10].

We now show that Mechanism PRIVINCERM is event-level differentially private (Definition 4), and analyze its utility under various invocations in Step 5.

Theorem 3.1. Mechanism PRIVINCERM is (ǫ, δ)-differentially private with respect to a single datapoint change in the stream Γ. Also,

1. (Using Theorem 2.4 of Bassily et al. [2].) If the function ℓ(θ; z) : C × Z → R is a positive-valued function that is convex with respect to θ over the domain C ⊆ R^d, then for any β > 0, with probability at least 1 − β, for each t ∈ [T], the θ_t^priv generated by Mechanism PRIVINCERM with τ = ⌈(Td)^{1/3}/ǫ^{2/3}⌉ satisfies:

    J(θ_t^priv; Γ_t) − min_{θ ∈ C} J(θ; Γ_t) = O(min{(Td)^{1/3} L ‖C‖ log^{5/2}(1/δ) polylog(T/β) / ǫ^{2/3}, T L ‖C‖}),

where L is the Lipschitz parameter of the function ℓ.

2. (Using Theorem 2.4 of Bassily et al. [2].) If the function ℓ(θ; z) : C × Z → R is a positive-valued function which is ν-strongly convex with respect to θ over the domain C ⊆ R^d, then for any β > 0, with probability at least 1 − β, for each t ∈ [T], the θ_t^priv generated by Mechanism PRIVINCERM with τ = ⌈√d L^{1/2} / (ν^{1/2} ǫ ‖C‖^{1/2})⌉ satisfies:

    J(θ_t^priv; Γ_t) − min_{θ ∈ C} J(θ; Γ_t) = O(min{√d L^{3/2} ‖C‖^{1/2} log⁴(1/δ) polylog(T/β) / (ν^{1/2} ǫ), T L ‖C‖}),

where L is the Lipschitz parameter of the function ℓ.

3. (Using Theorem 2.6 of Talwar et al. [46].) If the function ℓ(θ; z) : C × Z → R is a positive-valued function that is convex with respect to θ over the domain C ⊆ R^d, then for any β > 0, with probability at least 1 − β, for each t ∈ [T], the θ_t^priv generated by Mechanism PRIVINCERM with τ = ⌈√(T w(C)) C_ℓ^{1/4} / ((L‖C‖)^{1/4} ǫ^{1/2})⌉ satisfies:

    J(θ_t^priv; Γ_t) − min_{θ ∈ C} J(θ; Γ_t) = O(min{√(T w(C)) C_ℓ^{1/4} (L‖C‖)^{3/4} log^{7/3}(1/δ) polylog(T/β) / ǫ^{1/2}, T L ‖C‖}),

where L is the Lipschitz parameter and C_ℓ is the curvature constant of the function ℓ.

Proof. Privacy Analysis. Each z_i is accessed at most T/τ times by the algorithm invoked in Step 5. Let l = T/τ. By using the composition Theorem A.4 with δ* = δ/2, it follows that the entire algorithm is (ǫ′√(2l ln(2/δ)) + 2lǫ′², l · (δ/2l) + δ/2)-differentially private. We set ǫ′ such that ǫ/2 = ǫ′√(2l ln(2/δ)). Note that with this setting of ǫ′, 2lǫ′² ≤ ǫ/2; therefore, ǫ′√(2l ln(2/δ)) + 2lǫ′² ≤ ǫ. Hence, Mechanism PRIVINCERM is (ǫ, δ)-differentially private.
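The budget setting in the privacy analysis above can be sanity-checked numerically; a small sketch (ours) of the arithmetic, using the advanced-composition form ǫ′√(2l ln(2/δ)) + 2lǫ′²:

```python
import numpy as np

def total_privacy(eps, delta, T, tau):
    """Overall epsilon after l = T/tau invocations, each (eps', delta')-DP,
    with eps' chosen so that eps' * sqrt(2 l ln(2/delta)) = eps/2."""
    l = T / tau
    eps_p = eps / (2 * np.sqrt(2 * l * np.log(2 / delta)))
    return eps_p * np.sqrt(2 * l * np.log(2 / delta)) + 2 * l * eps_p ** 2

# The first term is eps/2 by construction; the second equals
# eps^2 / (4 ln(2/delta)), which is at most eps/2 whenever eps <= 2 ln(2/delta).
print(total_privacy(eps=0.5, delta=1e-6, T=10_000, tau=100))  # ~0.25 <= 0.5
```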
Utility Analysis. If T < τ, the algorithm does not access the data, and in that case the excess risk can be bounded by TL‖C‖. Now assume T ≥ τ. Note that the algorithm performs no computation when t is not a multiple of τ. Let Γ_t denote the prefix of stream Γ till time t. Let t lie in the interval [jτ, (j+1)τ] for some j ∈ N. The total loss accumulated by the algorithm at time t can be split as:

    Σ_{i=1}^t ℓ(θ_t^priv; z_i) = Σ_{i=1}^{jτ} ℓ(θ_{jτ}^priv; z_i) + Σ_{i=jτ+1}^t ℓ(θ_{jτ}^priv; z_i) ≤ Σ_{i=1}^{jτ} ℓ(θ_{jτ}^priv; z_i) + τL‖C‖,

as θ_t^priv = θ_{jτ}^priv.

Let θ̂_t ∈ argmin_{θ ∈ C} Σ_{i=1}^t ℓ(θ; z_i). As ℓ is positive-valued,

    Σ_{i=1}^t ℓ(θ̂_t; z_i) ≥ Σ_{i=1}^{jτ} ℓ(θ̂_t; z_i) ≥ Σ_{i=1}^{jτ} ℓ(θ̂_{jτ}; z_i).

Hence, we get

    Σ_{i=1}^t ℓ(θ_t^priv; z_i) − Σ_{i=1}^t ℓ(θ̂_t; z_i) ≤ Σ_{i=1}^{jτ} ℓ(θ_{jτ}^priv; z_i) − Σ_{i=1}^{jτ} ℓ(θ̂_{jτ}; z_i) + τL‖C‖.

Using the results from Bassily et al. [2] or Talwar et al. [46] to bound Σ_{i=1}^{jτ} ℓ(θ_{jτ}^priv; z_i) − Σ_{i=1}^{jτ} ℓ(θ̂_{jτ}; z_i), setting τ to balance the various opposing terms, and a final union bound provide the claimed bounds.

4 Private Incremental Linear Regression using the Tree Mechanism

We now focus on the problem of private incremental linear regression. Our first approach for this problem is based on a private incremental computation of the gradient. The algorithm is particularly effective in the regime of large T and small d. A central idea of our approach is the construction of a private gradient function, defined as follows.

Definition 5. Let C ⊆ R^d. Algorithm Alg computes an (α, β)-accurate gradient of the loss function J(θ; z_1, ..., z_t) with respect to θ ∈ C if, given z_1, ..., z_t, it outputs a function g_t : C → R^d such that:

(i) Privacy: Alg is (ǫ, δ)-differentially private (as in Definition 4), i.e., for all neighboring streams Γ, Γ′ ∈ Z* and all sets R of functions from C to R^d,

    Pr[Alg(Γ) ∈ R] ≤ exp(ǫ) · Pr[Alg(Γ′) ∈ R] + δ.

(ii) Utility: The function g_t is an (α, β)-approximation to the true gradient, in that

    Pr_Alg[max_{z_1, ..., z_t ∈ Z, θ ∈ C} ‖g_t(θ) − ∇J(θ; z_1, ..., z_t)‖ ≥ α] ≤ β.

Note that the output of Alg in the above definition is a function g_t. The first requirement on Alg specifies that it satisfies the differential privacy condition (Definition 4). The second requirement on Alg specifies that for any θ ∈ C, it gives a "sufficiently" accurate estimate of the true gradient ∇J(θ; z_1, ..., z_t).

Let Γ = (x_1, y_1), ..., (x_T, y_T) represent the stream of covariate-response pairs; we use Γ_t to denote (x_1, y_1), ..., (x_t, y_t). Consider the gradient of the loss function L(θ; Γ_t), where X_t ∈ R^{t×d} is the matrix with rows x_1^⊤, ..., x_t^⊤ and y_t = (y_1, ..., y_t):

    ∇L(θ; Γ_t) = 2(X_t^⊤ X_t θ − X_t^⊤ y_t) = 2(Σ_{i=1}^t x_i x_i^⊤ θ − Σ_{i=1}^t x_i y_i).    (2)

A simple observation from the gradient form of (2) is that if we can maintain the streaming sum of x_1 y_1, ..., x_T y_T and the streaming sum of x_1 x_1^⊤, ..., x_T x_T^⊤, then we can maintain the necessary ingredients needed to compute ∇L(θ; Γ_t) for any t ∈ [T]. We use this observation to construct a private gradient function g_t : C → R^d at every timestep t. The idea is to privately maintain Σ_{i=1}^T x_i x_i^⊤ and Σ_{i=1}^T x_i y_i over the stream using the Tree Mechanism of [16, 7]. We present the entire construction of the Tree Mechanism in Appendix C. The rough idea behind this mechanism is to build a binary tree where the leaves are the actual inputs from the stream, and the internal nodes store the partial sums of all the leaves in their sub-tree. For a stream of vectors υ_1, ..., υ_T in the unit ball, the Tree Mechanism releases every prefix sum as a combination of at most ⌈log T⌉ noisy partial sums, so the error it introduces grows only polylogarithmically in T.
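A compact sketch (ours, with schematic Gaussian noise rather than the paper's calibration) of the Tree Mechanism idea for maintaining the private streaming sums needed in (2): each element touches at most log₂(T) dyadic nodes, each node is released once with noise, and every prefix sum is assembled from at most log₂(T) noisy nodes.

```python
import numpy as np

rng = np.random.default_rng(3)

class TreeSum:
    """Tree Mechanism sketch for private prefix sums over a stream."""
    def __init__(self, T, shape, sigma):
        self.T, self.shape, self.sigma = T, shape, sigma
        self.noisy = {}   # (start, length) -> released noisy node sum
        self.buf = {}     # partial (not yet released) node sums

    def _intervals(self, t):
        """Dyadic intervals covering the prefix [1, t] (binary expansion of t)."""
        out, start = [], 1
        while start <= t:
            length = 1 << int(np.log2(t - start + 1))
            out.append((start, length))
            start += length
        return out

    def update(self, t, v):
        """Add stream element v (arriving at time t) into every dyadic node
        containing position t; release a node once it is complete."""
        length = 1
        while length <= self.T:
            start = ((t - 1) // length) * length + 1
            node = (start, length)
            self.buf[node] = self.buf.get(node, 0) + v
            if start + length - 1 == t:   # node complete: add noise, release
                self.noisy[node] = self.buf[node] + \
                    rng.normal(scale=self.sigma, size=self.shape)
            length *= 2

    def prefix(self, t):
        """Noisy prefix sum from <= log2(T) released nodes."""
        return sum(self.noisy[iv] for iv in self._intervals(t))

# Usage for the gradient in (2), with d-dimensional covariates:
#   S_xy = TreeSum(T, (d,), sigma); S_xx = TreeSum(T, (d, d), sigma)
#   at time t: S_xy.update(t, x * y); S_xx.update(t, np.outer(x, x))
#   noisy gradient at theta: 2 * (S_xx.prefix(t) @ theta - S_xy.prefix(t))
```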
