The Price of Information in Combinatorial Optimization∗

Sahil Singla†

November 2, 2017

Abstract

Consider a network design application where we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To learn the precise cost of any edge, we have to conduct a study that incurs a price. Our goal is to find a spanning tree while minimizing the disutility, which is the sum of the tree cost and the total price that we spend on the studies. In a different application, each edge gives a stochastic reward value. Our goal is to find a spanning tree while maximizing the utility, which is the tree reward minus the prices that we pay.

Situations such as the above two often arise in practice where we wish to find a good solution to an optimization problem, but we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility/disutility?

A classical example of the above setting is Weitzman's "Pandora's box" problem, where we are given probability distributions on the values of n independent random variables. The goal is to choose a single variable with a large value, but we can find the actual outcomes only after paying a price. Our work is a generalization of this model to other combinatorial optimization problems such as matching, set cover, facility location, and prize-collecting Steiner tree. We give a technique that reduces such problems to their non-price counterparts, and use it to design exact/approximation algorithms to optimize our utility/disutility. Our techniques extend to situations where there are additional constraints on which parameters can be probed, or where we can simultaneously probe a subset of the parameters.

∗Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing. We are grateful to Anupam Gupta and Viswanath Nagarajan for several discussions on this project.
†Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA. Email: [email protected]. Research supported in part by a CMU Presidential Fellowship and NSF awards CCF-1319811, CCF-1536002, CCF-1540541, and CCF-1617790.

1 Introduction

Suppose we want to purchase a house. We have some idea about the value of every available house in the market, say based on its location, size, and photographs. However, to find the exact value of a house we have to hire a house inspector and pay her a price. Our utility is the difference between the value of the best house that we find and the total inspection prices that we pay. We want to design a strategy to maximize our utility.

The above problem can be modeled as Weitzman's "Pandora's box" problem [Wei79]. Given probability distributions of n independent random variables X_i and given their probing prices π_i, the problem is to adaptively probe a subset Probed ⊆ [n] to maximize the expected utility
\[
\mathbb{E}\Big[\max_{i \in \mathrm{Probed}} \{X_i\} \;-\; \sum_{i \in \mathrm{Probed}} \pi_i\Big].
\]
Weitzman gave an optimal adaptive strategy that maximizes the expected utility (naïve greedy algorithms can behave arbitrarily badly: see Section A.1). However, suppose that instead of probing values of elements we probe weights of edges in a graph, and our utility is the weight of the maximum-weight matching that we find minus the total probing prices that we pay. What strategy should we adopt to maximize our expected utility?
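Weitzman's optimal strategy is a reservation-value (index) policy: for each box i, compute the grade σ_i solving E[(X_i − σ_i)^+] = π_i, probe boxes in decreasing order of grade, and stop as soon as the best value observed so far is at least the grade of the next unprobed box. The sketch below illustrates this rule for boxes given as finite (value, probability) distributions; the function names, the bisection bounds, and the toy instance are our own illustrative choices rather than anything taken from the paper.

```python
import random

def grade(dist, price):
    """Bisection for the index sigma solving E[(X - sigma)^+] = price.
    dist is a list of (value, probability) pairs of a non-negative variable."""
    top = max(v for v, _ in dist)
    excess = lambda s: sum(p * max(v - s, 0.0) for v, p in dist)
    lo, hi = -(price + top + 1.0), top        # excess(lo) > price >= 0 = excess(hi)
    for _ in range(100):                      # excess is decreasing in s, so bisect
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if excess(mid) > price else (lo, mid)
    return (lo + hi) / 2.0

def pandora(boxes):
    """boxes: list of (dist, price). Returns the realized utility of one run of the
    index policy (probe in decreasing grade order, stop when the best value wins)."""
    order = sorted(range(len(boxes)), key=lambda i: -grade(*boxes[i]))
    best, spent = 0.0, 0.0
    for i in order:
        dist, price = boxes[i]
        if best >= grade(dist, price):        # best value seen beats every remaining grade
            break
        spent += price                        # pay pi_i to open box i ...
        values, probs = zip(*dist)
        best = max(best, random.choices(values, probs)[0])  # ... and observe X_i
    return best - spent

# toy instance: a sure value of 1 with free inspection vs. a risky box costing 1 to open
print(pandora([([(1.0, 1.0)], 0.0), ([(0.0, 0.5), (10.0, 0.5)], 1.0)]))
```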
In a different scenario, consider a network design minimization problem. Suppose we wish to lay down a minimum-cost spanning tree in a given graph; however, we only have stochastic information about the edge costs. To find the precise cost X_i of any edge, we have to conduct a study that incurs a price π_i. Our disutility is the sum of the tree cost and the total price that we spend on the studies. We want to design a strategy to minimize our expected disutility. An important difference between these two scenarios is that of maximizing utility vs. minimizing disutility.

Situations like the above often arise where we wish to find a "good" solution to an optimization problem; however, we start with only some partial knowledge about the parameters of the problem. The missing information can be found only after paying a probing price, which we call the price of information. What strategy should we adopt to optimize our expected utility/disutility? In this work we design optimal/approximation algorithms for several combinatorial optimization problems in an uncertain environment where we jointly optimize the value of the solution and the price of information.

1.1 Utility/Disutility Optimization

To begin, the above maximum-weight matching problem can be formally modeled as follows.

Max-Weight Matching. Given a graph G with edges E, suppose each edge i ∈ E takes some random weight X_i independently from a known probability distribution. We can find the exact outcome X_i only after paying a probing price π_i. The goal is to adaptively probe a set of edges Probed ⊆ E and select a matching I ⊆ Probed to maximize the expected utility
\[
\mathbb{E}\Big[\sum_{i \in I} X_i \;-\; \sum_{i \in \mathrm{Probed}} \pi_i\Big],
\]
where the expectation is over the random variables X = (X_1, ..., X_n) and any internal randomness of the algorithm. We observe that we can only select an edge if it has been probed, and we might select only a subset of the probed edges. This matching problem can be used to model kidney exchanges, where testing the compatibility of donor-receiver pairs has an associated price.

To capture value functions of more general combinatorial problems in a single framework, we define the notion of semiadditive functions.

Definition 1.1 (Semiadditive function). We say a function f(I, X) : 2^V × R_{≥0}^{|V|} → R_{≥0} is semiadditive if there exists a function h : 2^V → R_{≥0} such that
\[
f(I, X) \;=\; \sum_{i \in I} X_i + h(I).
\]

For example, in the case of max-weight matching our value function f(I, X) = Σ_{i∈I} X_i is additive, i.e. h(I) = 0. We call these functions semiadditive because the second term h(I) is allowed to affect the function in a "non-additive" way; however, it does not depend on X. Here are some other examples.

• Uncapacitated Facility Location: Given a graph G = (V, E) with metric (V, d), a set CLIENTS ⊆ V, and facility opening costs X : V → R_{≥0}, we wish to open facilities at some locations I ⊆ V. The function is the sum of the facility opening costs and the connection costs to CLIENTS:
\[
f(I, X) \;=\; \sum_{i \in I} X_i + \sum_{j \in \mathrm{CLIENTS}} \min_{i \in I} d(j, i). \tag{1}
\]
Here h(I) = Σ_{j∈CLIENTS} min_{i∈I} d(j, i) only depends on I, and not on the facility opening costs X (a small code sketch of this function appears after this list).

• Prize-Collecting Steiner Tree: Given a graph G = (V, E) with edge costs c : E → R_{≥0}, a root node r ∈ V, and penalties X : V → R_{≥0}, the goal is to find a tree that connects a subset of nodes to r, while trying to minimize the cost of the tree plus the sum of the penalties of the nodes I not connected to r. Hence,
\[
f(I, X) \;=\; \sum_{i \in I} X_i + \textrm{Min-Steiner-Tree}(V \setminus I),
\]
where Min-Steiner-Tree(V \ I) denotes the minimum-cost tree connecting all nodes in V \ I to r.
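To make Definition 1.1 and Eq. (1) concrete, here is a minimal sketch of the facility location function split into its additive part and its h(I) part. The dictionary-based data layout and the toy numbers are illustrative assumptions, not from the paper.

```python
def h_facility_location(I, clients, d):
    """h(I): total connection cost of each client to its nearest open facility in I."""
    return sum(min(d[(j, i)] for i in I) for j in clients)

def f_facility_location(I, X, clients, d):
    """Semiadditive function of Eq. (1): f(I, X) = sum_{i in I} X_i + h(I)."""
    return sum(X[i] for i in I) + h_facility_location(I, clients, d)

# toy instance: facilities {a, b}, one client c, distances d(c, a) = 1 and d(c, b) = 3
d = {("c", "a"): 1.0, ("c", "b"): 3.0}
X = {"a": 5.0, "b": 2.0}
print(f_facility_location({"a"}, X, clients=["c"], d=d))       # 5 + 1 = 6.0
print(f_facility_location({"a", "b"}, X, clients=["c"], d=d))  # (5 + 2) + 1 = 8.0
```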
We can now describe an abstract utility-maximization model that captures problems such as Pandora's box, max-weight matching, and max-spanning tree in a single unifying framework.

Utility-Maximization. Suppose we are given a downward-closed (packing)¹ constraint F ⊆ 2^V and a semiadditive function val. Each element i ∈ V takes a value X_i independently from a known probability distribution. To find the outcome X_i we have to pay a known probing price π_i. The goal is to adaptively probe a set of elements Probed ⊆ V and select I ⊆ Probed that is feasible (i.e., I ∈ F) to maximize the expected utility
\[
\mathbb{E}\Big[\mathrm{val}(I, X) \;-\; \sum_{i \in \mathrm{Probed}} \pi_i\Big],
\]
where the expectation is over the random variables X and any internal randomness of the algorithm.

¹An independence family F ⊆ 2^V is called downward-closed if A ∈ F implies B ∈ F for any B ⊆ A. A set system is called upward-closed if its complement is downward-closed.

For example, in the max-weight matching problem val is an additive function and a subset of edges I is feasible if they form a matching. Similarly, when val is additive and F is a matroid, this framework captures the max-weight matroid rank function, which contains Pandora's box and max-spanning tree as special cases.

The following is our main result for the utility-maximization problem.

Theorem 1.2. For the utility-maximization problem for additive value functions and various packing constraints F, we obtain the following efficient algorithms.
• k-system²: For stochastic element values, we get a k-approximation.
• Knapsack: For stochastic item values and known item sizes, we get a 2-approximation.

Some important corollaries of Theorem 1.2 are an optimal algorithm for the max-weight matroid rank problem³ and a 2-approximation algorithm for the max-weight matching problem. Theorem 1.2 is particularly interesting because it gives approximation results for mixed-sign objectives, which are usually difficult to handle.

²An independence family F ⊆ 2^V is a k-system if for any Y ⊆ V we have max_{A∈B(Y)} |A| / min_{A∈B(Y)} |A| ≤ k, where B(Y) denotes the set of maximal independent sets of F included in Y [CCPV11]. These are more general than intersections of k matroids: e.g., a 2-system captures matching in general graphs and a k-system captures matching in a hypergraph with edges of size at most k.
³For weighted matroid rank functions, Kleinberg et al. [KWW16] independently obtain a similar result.

We also show that if val is allowed to be any monotone submodular function then one cannot obtain good approximation results: there is an Ω̃(√n) hardness even in a deterministic setting (see Section A.3).

Next, we describe a disutility-minimization model that captures problems like the min-cost spanning tree.

Disutility-Minimization. Suppose we are given an upward-closed (covering) constraint F′ ⊆ 2^V and a semiadditive function cost. Each element i ∈ V takes a value X_i independently from a known probability distribution. To find the outcome X_i we have to pay a known probing price π_i. The goal is to adaptively probe a set of elements Probed ⊆ V and select I ⊆ Probed that is feasible (i.e., I ∈ F′) to minimize the expected disutility
\[
\mathbb{E}\Big[\mathrm{cost}(I, X) \;+\; \sum_{i \in \mathrm{Probed}} \pi_i\Big],
\]
where the expectation is over the random variables X and any internal randomness of the algorithm.

For example, in the min-cost spanning tree problem, cost is an additive function and a subset of edges I is in F′ if it contains a spanning tree. Similarly, when cost is the semiadditive facility location function defined in Eq. (1) and every non-empty subset of V is feasible in F′, this captures the min-cost facility location problem.
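As a quick numerical illustration of the disutility objective (not of the paper's algorithm), the sketch below estimates by Monte Carlo the expected disutility of the naive strategy for the min-cost spanning tree instance that probes every edge and then builds a minimum spanning tree on the realized costs. The graph encoding, the samplers, and the trial count are illustrative assumptions; this baseline can of course be far from the optimal adaptive strategy.

```python
import random

def mst_cost(n, edges):
    """Kruskal's algorithm; edges is a list of (cost, u, v) over vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0.0
    for c, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                 # edge joins two components: keep it
            parent[ru] = rv
            total += c
    return total

def probe_all_disutility(n, stochastic_edges, trials=10000):
    """stochastic_edges: list of (sampler, price, u, v). Estimates the expected
    disutility E[tree cost + total probing prices] of the probe-everything baseline."""
    total_price = sum(price for _, price, _, _ in stochastic_edges)
    acc = 0.0
    for _ in range(trials):
        realized = [(sample(), u, v) for sample, _, u, v in stochastic_edges]
        acc += mst_cost(n, realized) + total_price
    return acc / trials

# toy triangle: each edge cost is uniform on [0, 1] and each probe costs 0.1
edges = [(random.random, 0.1, 0, 1), (random.random, 0.1, 1, 2), (random.random, 0.1, 0, 2)]
print(probe_all_disutility(3, edges))
```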
Remark: The disutility-minimization problem can also be modeled as a utility-maximization problem by allowing item values to be negative and working with the infeasibility constraints (if A ∈ F′ then V \ A ∈ F), but such a transformation is not approximation-factor preserving.

We now mention our results in this model (see Section 4 for formal descriptions of these problems).

Theorem 1.3. For the disutility-minimization problem for various covering constraints F′, we obtain the following efficient algorithms.
• Matroid Basis: For stochastic element costs, we get the optimal adaptive algorithm.
• Set Cover: For stochastic costs of the sets, we get a min{O(log|V|), f}-approximation, where V is the universe and f is the maximum number of sets in which an element can occur.
• Uncapacitated Facility Location: For stochastic facility opening costs in a given metric, we get a 1.861-approximation.
• Prize-Collecting Steiner Tree: For stochastic penalties in a given graph with given edge costs, we get a 3-approximation.
• Feedback Vertex Set: For stochastic vertex costs in a given graph, we get an O(log n)-approximation.

1.2 Constrained Utility-Maximization

Our techniques extend to settings where we impose restrictions on the set of elements that we can probe. In particular, we are given a downward-closed set system J and the constraints allow us to probe only a subset of elements Probed ∈ J. This is different from the model discussed in Section 1.1, where we could probe any set of elements but could get value only for a subset of elements that belongs to F. As an example, consider a generalization of the Pandora's box problem where, besides paying probing prices, we can probe at most k elements. We now formally define our problem.

Constrained Utility-Maximization. Suppose we are given downward-closed probing constraints J ⊆ 2^V and probability distributions of independent non-negative variables X_i for i ∈ V. To find X_i we have to pay a probing price π_i. The goal is to probe a set of elements Probed ∈ J to maximize the expected utility
\[
\mathbb{E}\Big[\max_{i \in \mathrm{Probed}} \{X_i\} \;-\; \sum_{i \in \mathrm{Probed}} \pi_i\Big].
\]

Remark: One can define an even more general version of this problem where we simultaneously have both downward-closed set systems F and J, and the goal is to maximize a semiadditive function val corresponding to F while probing a set feasible in J. For ease of exposition, we do not discuss it here and consider our value function to be the max function, as in the original Pandora's box problem.

Depending on the family of constraints J, we design efficient approximation algorithms for some settings of the above problem. The following is our main result for this problem (proof in Section 5.1).

Theorem 1.4. If the constraints J form an ℓ-system then the constrained utility-maximization problem has a 3(ℓ+1)-approximation algorithm.

Since the cardinality constraint (or any matroid constraint) forms a 1-system, an application of Theorem 1.4 gives a 6-approximation algorithm for the Pandora's box problem under a cardinality probing constraint.

The above constrained utility-maximization problem is powerful and can be used as a framework to study variants of Pandora's box. For example, consider the Pandora's box problem where we also allow selecting an unprobed box i and getting value E[X_i], without even paying its probing price π_i. This can be modeled using a partition matroid constraint where each box has two copies and the constraints allow us to probe at most one of them. The first copy has a deterministic value E[X_i] with zero probing price and the second copy has a random value X_i with price π_i. Using Theorem 1.4, we get a 6-approximation for this variant.
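The two-copy construction just described is purely syntactic, so the sketch below simply builds the doubled instance: each box i becomes a deterministic copy of value E[X_i] with zero price and a stochastic copy with the original price, and the partition records that at most one copy per box may be probed. The data representation is an illustrative assumption, not from the paper.

```python
def expected_value(dist):
    """Mean of a finite (value, probability) distribution."""
    return sum(v * p for v, p in dist)

def two_copy_instance(boxes):
    """boxes: list of (dist, price). Returns (elements, partition) where element
    ('det', i) has the deterministic value E[X_i] and price 0, element ('rand', i)
    keeps the original distribution and price, and each block of the partition
    (one per box) may contribute at most one probed element."""
    elements, partition = {}, []
    for i, (dist, price) in enumerate(boxes):
        det, rnd = ("det", i), ("rand", i)
        elements[det] = ([(expected_value(dist), 1.0)], 0.0)   # take-unprobed option
        elements[rnd] = (dist, price)                          # probe-as-usual option
        partition.append({det, rnd})
    return elements, partition

elems, parts = two_copy_instance([([(0.0, 0.5), (10.0, 0.5)], 1.0)])
print(elems)   # {('det', 0): ([(5.0, 1.0)], 0.0), ('rand', 0): ([(0.0, 0.5), (10.0, 0.5)], 1.0)}
print(parts)   # [{('det', 0), ('rand', 0)}]
```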
As a non-trivial application of this constrained utility-maximization framework, in Section 5.3 we discuss a set-probing utility-maximization problem where the costs are on subsets of random variables instead of individual variables. Thus, for a subset S ⊆ V, we pay a price π_S to simultaneously probe all the random variables X_i for i ∈ S. This complicates the problem because to find X_i we can probe a "small" or a "large" set containing i, but at different prices. Formally, we define the problem as follows.

Set-Probing Utility-Maximization. Given probability distributions of independent non-negative variables X_i for i ∈ V, and given a set family S = {S_1, S_2, ..., S_m}, where each S_j ⊆ V for j ∈ [m] has a probing price π_j ≥ 0. The problem is to probe some of the sets in S, with indices in Probed ⊆ [m], to maximize
\[
\mathbb{E}\Big[\max_{i \,:\, \exists j \in \mathrm{Probed}\ \mathrm{s.t.}\ S_j \ni i} \{X_i\} \;-\; \sum_{j \in \mathrm{Probed}} \pi_j\Big].
\]
Note that when we probe multiple sets containing an element i, we find the same value X_i and not a fresh sample from the distribution.

Remark: If the sets S_j are pairwise disjoint then one can solve the above problem optimally: replacing each set S_j with a new random variable X′_j = max_{i∈S_j} {X_i} having probing price π_j reduces it to Pandora's box.
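For the pairwise-disjoint case in this remark, the reduction is mechanical; the sketch below computes, for finite independent distributions, the distribution of X′_j = max_{i∈S_j} X_i so that every set becomes a single Pandora's box with price π_j. The brute-force enumeration (exponential in |S_j|) and the data layout are illustrative assumptions, adequate only for small sets.

```python
from itertools import product

def max_distribution(dists):
    """Distribution of the max of independent finite variables; each variable is a
    list of (value, probability) pairs. Returns a sorted list of (value, probability)."""
    out = {}
    for combo in product(*dists):                # one joint outcome per iteration
        value = max(v for v, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        out[value] = out.get(value, 0.0) + prob
    return sorted(out.items())

def reduce_disjoint_set_probing(variables, sets, prices):
    """variables: {i: dist}; sets: disjoint subsets S_j of the keys; prices: [pi_j].
    Returns the equivalent Pandora's box instance as a list of (dist, price)."""
    return [(max_distribution([variables[i] for i in S]), prices[j])
            for j, S in enumerate(sets)]

variables = {0: [(0.0, 0.5), (4.0, 0.5)], 1: [(1.0, 1.0)], 2: [(0.0, 0.5), (2.0, 0.5)]}
print(reduce_disjoint_set_probing(variables, [[0, 1], [2]], [0.5, 0.2]))
# [([(1.0, 0.5), (4.0, 0.5)], 0.5), ([(0.0, 0.5), (2.0, 0.5)], 0.2)]
```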
We use the constrained utility-maximization framework to show the following result.

Theorem 1.5. The set-probing utility-maximization problem has an efficient 3(ℓ+1)-approximation algorithm, where ℓ is the size of the largest set in S. Moreover, no efficient algorithm can achieve an o(ℓ/log ℓ)-approximation, unless P = NP.

1.3 Our Techniques

How do we bound the utility/disutility of the optimal adaptive strategy? The usual techniques in approximation algorithms for stochastic problems (see related work in Section 1.4) either use a linear program (LP) to bound the optimal strategy, or directly argue about the adaptivity gap of the optimal decision tree. Neither of these techniques is helpful here: the natural LPs fail to capture a mixed-sign objective and wildly overestimate the value of the optimal strategy. On the other hand, the adaptivity gap of our problems is large even for the special case of the Pandora's box problem (see an example in Section A.2).

We need two crucial ideas for both our utility-maximization and disutility-minimization results. Our first idea is to show that for semiadditive functions, one can bound the utility/disutility of the optimal strategy in the price-of-information world (hereafter, the PoI world) using a related instance in a world where there is no price for finding the parameters, i.e., π_i = 0 (hereafter, the Free-Info world). This new instance still has independent random variables; however, the distributions are modified based on the original probing prices π_i (see Definition 2.2). This proof crucially relies on the semiadditive nature of our value/cost function.

Our second idea is to show that any algorithm with "nice" properties in the Free-Info world can be used to get an algorithm with a similar expected utility/disutility in the PoI world. We call such a nice algorithm FRUGAL (and define it formally in Section 3.1). For intuition, imagine a FRUGAL algorithm to be a greedy algorithm, or an algorithm that is not "wasteful": it picks elements irrevocably. This also includes simple primal-dual algorithms that do not have the reverse-deletion step.

Theorem 1.6. If there exists a FRUGAL α-approximation Algorithm A to maximize (minimize) a semiadditive function over some packing constraints F (covering constraints F′) in the Free-Info world, then there exists an α-approximation algorithm for the corresponding utility-maximization (disutility-minimization) problem in the PoI world.

Finally, to prove our results from Section 1.1, in Section 4 we show why many classical algorithms, or their suitable modifications, are FRUGAL.

We remark that although Theorem 1.6 gives good guarantees for several combinatorial problems in the PoI world, there are some natural problems for which there are no good FRUGAL algorithms. One such important problem is to find the shortest s-t path in a given graph with stochastic edge lengths. It is an interesting open question to find some approximation guarantee for this problem. Another interesting open question is to show that finding the optimal policy for the max-weight matching in the PoI world is hard.

Our techniques for the constrained utility-maximization problem in Section 5 again use the idea of bounding this problem in the PoI world by a similar problem in the Free-Info world. This latter problem turns out to be the same as the stochastic probing problem studied in [GN13, ASW16, GNS16, GNS17]. By proving an extension of the adaptivity gap result of Gupta et al. [GNS17], we show that one can further simplify these Free-Info problems to non-adaptive utility-maximization problems (by losing a constant factor). These we can now (approximately) solve using our techniques for the utility-maximization problem.

1.4 Related Work

An influential work of Dean et al. [DGV04] considered the stochastic knapsack problem where we have stochastic knowledge about the sizes of the items. Chen et al. [CIK+09] studied stochastic matchings where we find out about an edge's existence only after probing it, and Asadpour et al. [ANS08] studied stochastic submodular maximization where the items may or may not be present. Several follow-up papers have appeared, e.g., for knapsack [BGK11, Ma14], packing integer programs [DGV05, CIK+09, BGL+12], budgeted multi-armed bandits [GM12, GM07, GKMR11, LY13, Ma14], orienteering [GM09, GKNR12, BN14], matching [Ada11, BGL+12, BCN+15, AGM15], and submodular objectives [GN13, ASW16]. Most of these results proceed by showing that the stochastic problem has a small adaptivity gap and then focus on the non-adaptive problem. In fact, Gupta et al. [GNS17, GNS16] show that the adaptivity gap for submodular functions over any packing constraints is O(1).

Most of the above works do not capture the mixed-sign objective of maximizing the value minus the prices. Some of them instead model this as a knapsack constraint on the prices. Moreover, most of them are for maximization problems, since for the minimization setting even the non-adaptive problem of probing k elements to minimize the expected minimum value has no polynomial approximation [GGM10]. This is also the reason we do not consider constrained (covering) disutility-minimization in Section 1.2. There is also a large body of work in related models where information has a price. We refer the readers to the following papers and the references therein [GK01, CFG+02, KK03, GMS07, CJK+15, AH15, CHKK15].

The Pandora's box solution can be written as a special case of the Gittins index theorem [Git74]. Dumitriu et al. [DTW03] consider a minimization variant of the Gittins index theorem when there is no discounting. Another very relevant paper is that of Kleinberg et al. [KWW16], although their results concern designing auctions. Their proof of the Pandora's box problem inspired this work.

Organization. In Section 2 we show how to bound the optimal strategy in the PoI world using a corresponding problem in the Free-Info world. In Section 3 we introduce the idea of using a FRUGAL algorithm to design a strategy with a good expected utility/disutility in the PoI world. In Section 4 we show why many classical algorithms, or their suitable modifications, are FRUGAL. Finally, in Section 5 we discuss the settings where we have probing constraints, and the application to the set-probing problem.

2 Bounding the Optimal Strategy for Utility/Disutility Optimization

In this section we bound the expected utility/disutility of the optimal adaptive strategy for a combinatorial optimization problem in the PoI world in terms of a surrogate problem in the Free-Info world. We first define the grade τ and the surrogate Y of non-negative random variables.

Definition 2.1 (Grade τ). For any non-negative random variable X_i, let τ_i^max be the solution to the equation E[(X_i − τ_i^max)^+] = π_i, and let τ_i^min be the solution to the equation E[(τ_i^min − X_i)^+] = π_i.

Definition 2.2 (Surrogate Y). For any non-negative random variable X_i, let Y_i^max = min{X_i, τ_i^max} and let Y_i^min = max{X_i, τ_i^min}.

Note that τ_i^max could be negative in the above definition.
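A small numerical sketch of Definitions 2.1 and 2.2 for finite distributions given as (value, probability) lists; the bisection routine, its bounds, and the toy numbers are illustrative assumptions. It computes τ_i^max and τ_i^min and then maps a realized outcome to its surrogates Y_i^max = min{X_i, τ_i^max} and Y_i^min = max{X_i, τ_i^min}.

```python
def tau_max(dist, price, iters=100):
    """Solve E[(X - tau)^+] = price (Definition 2.1); the result may be negative."""
    top = max(v for v, _ in dist)
    excess = lambda t: sum(p * max(v - t, 0.0) for v, p in dist)
    lo, hi = -(price + top + 1.0), top           # excess(lo) > price >= 0 = excess(hi)
    for _ in range(iters):                       # excess is decreasing in t
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if excess(mid) > price else (lo, mid)
    return (lo + hi) / 2.0

def tau_min(dist, price, iters=100):
    """Solve E[(tau - X)^+] = price (Definition 2.1)."""
    values = [v for v, _ in dist]
    shortfall = lambda t: sum(p * max(t - v, 0.0) for v, p in dist)
    lo, hi = min(values), max(values) + price + 1.0
    for _ in range(iters):                       # shortfall is increasing in t
        mid = (lo + hi) / 2.0
        lo, hi = (lo, mid) if shortfall(mid) > price else (mid, hi)
    return (lo + hi) / 2.0

def surrogates(x, dist, price):
    """Y^max and Y^min of Definition 2.2 for a realized outcome x of X."""
    return min(x, tau_max(dist, price)), max(x, tau_min(dist, price))

coin = [(0.0, 0.5), (10.0, 0.5)]       # fair coin between 0 and 10
print(round(tau_max(coin, 1.0), 3))    # 8.0, since E[(X - 8)^+] = 0.5 * 2 = 1
print(round(tau_min(coin, 1.0), 3))    # 2.0, since E[(2 - X)^+] = 0.5 * 2 = 1
print(surrogates(10.0, coin, 1.0))     # approximately (8.0, 10.0): Y^max caps the high outcome at the grade
```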
The following lemmas bound the optimal strategy in the PoI world in terms of the optimal strategy of a surrogate problem in the Free-Info world.

Lemma 2.3. The expected utility of the optimal strategy to maximize a semiadditive function val over packing constraints F in the PoI world is at most
\[
\mathbb{E}_X\big[\max_{I \in \mathcal{F}} \{\mathrm{val}(I, Y^{\max})\}\big].
\]

Lemma 2.4. The expected disutility of the optimal strategy to minimize a semiadditive function cost over covering constraints F′ in the PoI world is at least
\[
\mathbb{E}_X\big[\min_{I \in \mathcal{F}'} \{\mathrm{cost}(I, Y^{\min})\}\big].
\]

We only prove Lemma 2.3, as the proof of Lemma 2.4 is similar. The ideas in this proof are similar to those of Kleinberg et al. [KWW16, Lemma 1] to bound the optimal adaptive strategy for Pandora's box.

Proof of Lemma 2.3. Consider a fixed optimal adaptive strategy. Let A_i denote the indicator variable that element i is selected into I, and let 1_i denote the indicator variable that element i is probed by the optimal strategy. Note that these indicators are correlated and that the set of elements with non-zero A_i is feasible in F. Now, the optimal strategy has expected utility
\[
= \mathbb{E}\Big[\mathrm{val}(I, X) - \sum_{i \in \mathrm{Probed}} \pi_i\Big]
= \mathbb{E}\Big[\sum_i \big(A_i X_i - \mathbb{1}_i \pi_i\big)\Big] + \mathbb{E}[h(I)]
= \mathbb{E}\Big[\sum_i \big(A_i X_i - \mathbb{1}_i \, \mathbb{E}_{X_i}[(X_i - \tau_i^{\max})^+]\big)\Big] + \mathbb{E}[h(I)],
\]
using the definition of π_i. Since the value of X_i is independent of whether it is probed or not, we simplify to
\[
= \mathbb{E}\Big[\sum_i \big(A_i X_i - \mathbb{1}_i (X_i - \tau_i^{\max})^+\big)\Big] + \mathbb{E}[h(I)].
\]
Moreover, since we can select an element into I only after probing it, we have 1_i ≥ A_i. This implies that the expected utility of the optimal strategy is
\[
\le \mathbb{E}\Big[\sum_i \big(A_i X_i - A_i (X_i - \tau_i^{\max})^+\big)\Big] + \mathbb{E}[h(I)]
= \mathbb{E}\Big[\sum_i A_i Y_i^{\max}\Big] + \mathbb{E}[h(I)]
= \mathbb{E}[\mathrm{val}(I, Y^{\max})].
\]
Finally, since the elements in I form a feasible set, this is at most E[max_{I∈F} {val(I, Y^max)}].

3 Designing an Adaptive Strategy for Utility/Disutility Optimization

In this section we introduce the notion of a FRUGAL algorithm and prove Theorem 1.6. We need the following notation.

Definition 3.1 (Y_M). For any vector Y with indices in V and any M ⊆ V, let Y_M denote a vector of length |V| with entries Y_j for j ∈ M and a symbol ∗ otherwise.

3.1 A FRUGAL Algorithm

The notion of a FRUGAL algorithm is similar to that of a greedy algorithm, or any other algorithm that is not "wasteful": it selects elements one-by-one and irrevocably. Its definition captures "non-greedy" algorithms such as the primal-dual algorithm for set cover that does not have the reverse-deletion step.

We define a FRUGAL algorithm in the packing setting. Consider a packing problem in the Free-Info world (i.e., π_i = 0 for all i) where we want to find a feasible set I ∈ F, where F ⊆ 2^V are some downward-closed constraints, while trying to maximize a semiadditive function val(I, Y) = Σ_{i∈I} Y_i + h(I).

Definition 3.2 (FRUGAL Packing Algorithm). For a packing problem with constraints F and value function val, we say Algorithm A is FRUGAL if there exists a marginal-value function g(Y, i, y) : R^V × V × R → R_{≥0} that is increasing in y, and for which the pseudocode is given by Algorithm 1. We note that this algorithm always returns a feasible solution if we assume ∅ ∈ F.

Algorithm 1: FRUGAL Packing Algorithm A
1: Start with M = ∅ and v_i = 0 for each element i ∈ V.
2: For each element i ∉ M, compute v_i = g(Y_M, i, Y_i). Let j = argmax_{i∉M, M∪{i}∈F} {v_i}.
3: If v_j > 0 then add j into M and go to Step 2. Otherwise, return M.
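As a sanity check on Definition 3.2, here is a direct Python transcription of Algorithm 1, assuming the marginal-value function g and a membership oracle for the packing family F are supplied as callbacks; the names and the toy constraint are illustrative assumptions. The symbol ∗ for entries outside M is represented by simply omitting those keys from the partial vector Y_M.

```python
def frugal_packing(V, Y, g, feasible):
    """Algorithm 1. V: list of elements; Y: {i: Y_i}; g(Y_M, i, y): marginal value,
    increasing in y; feasible(S): membership oracle for the downward-closed family F."""
    M = []                                        # elements are added irrevocably
    while True:
        Y_M = {i: Y[i] for i in M}                # entries outside M stay "unknown"
        candidates = [i for i in V if i not in M and feasible(set(M) | {i})]
        if not candidates:
            return M
        j = max(candidates, key=lambda i: g(Y_M, i, Y[i]))
        if g(Y_M, j, Y[j]) <= 0:                  # stop once the best marginal is non-positive
            return M
        M.append(j)

# toy run: a uniform matroid (pick at most 2 elements) with the purely additive
# marginal g(Y_M, i, y) = y; only positive-weight elements get picked
Y = {"a": 3.0, "b": -1.0, "c": 2.0}
print(frugal_packing(["a", "b", "c"], Y, lambda Y_M, i, y: y, lambda S: len(S) <= 2))
# ['a', 'c']
```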
A simple example of a FRUGAL packing algorithm is the greedy algorithm to find the maximum-weight spanning tree (or to maximize any weighted matroid rank function), where g(Y_M, i, Y_i) = Y_i.

We similarly define a FRUGAL algorithm in the covering setting. Consider a covering problem in the Free-Info world where we want to find a feasible set I ∈ F′, where F′ ⊆ 2^V is some upward-closed constraint, while trying to minimize a semiadditive function cost(I, Y) = Σ_{i∈I} Y_i + h(I).

Definition 3.3 (FRUGAL Covering Algorithm). For a covering problem with constraints F′ and cost function cost, we say Algorithm A is FRUGAL if there exists a marginal-value function g(Y, i, y) : R_{≥0}^V × V × R_{≥0} → R_{≥0} that is increasing in y, and for which the pseudocode is given by Algorithm 2.

Algorithm 2: FRUGAL Covering Algorithm A
1: Start with M = ∅ and v_i = 0 for each element i ∈ V.
2: For each element i ∉ M, compute v_i = g(Y_M, i, Y_i). Let j = argmax_{i∉M} {v_i}.
3: If v_j > 0 then add j into M and go to Step 2. Otherwise, return M.

We note that for a covering problem it is unclear whether Algorithm 2 returns a feasible solution, as we do not appear to be looking at our covering constraints F′. To overcome this, we say the marginal-value function g encodes F′ if, whenever M is infeasible, there exists an element i ∉ M with v_i > 0. This means that the algorithm will return a feasible solution as long as V ∈ F′.

A simple example of a FRUGAL covering algorithm is the greedy min-cost set cover algorithm, where
\[
g(Y_M, i, Y_i) \;=\; \Big(\big|\textstyle\bigcup_{j \in M \cup \{i\}} S_j\big| - \big|\textstyle\bigcup_{j \in M} S_j\big|\Big) \big/\, Y_i.
\]
Note that here g encodes our coverage constraints.

Remark: Observe that a crucial difference between FRUGAL packing and covering algorithms is that a FRUGAL packing algorithm has to handle Y ∈ R^V (i.e., some entries of Y could be negative), while a FRUGAL covering algorithm only has to handle Y ∈ R_{≥0}^V. The intuition behind this difference is that, unlike the disutility-minimization problem, the utility-maximization problem has a mixed-sign objective.

3.2 Using a FRUGAL Algorithm to Design an Adaptive Strategy

Having defined the notion of a FRUGAL algorithm, we can now prove Theorem 1.6 (restated below).

Theorem 1.6. If there exists a FRUGAL α-approximation Algorithm A to maximize (minimize) a semiadditive function over some packing constraints F (covering constraints F′) in the Free-Info world, then there exists an α-approximation algorithm for the corresponding utility-maximization (disutility-minimization) problem in the PoI world.

We prove Theorem 1.6 only for the utility-maximization setting, as the other proof is similar. Lemma 2.3 already gives us an upper bound on the expected utility of the optimal strategy for the utility-maximization problem in terms of the expected value of a problem in the Free-Info world. This Free-Info problem can be solved using Algorithm A. The main idea in the proof of this theorem is to show that if Algorithm A is FRUGAL then we can also run a modified version of A in the PoI world and get the same expected utility.

Proof of Theorem 1.6. Let Alg(Y^max, F) ∈ F denote the set I returned by Algorithm A when it runs with element weights Y^max. Since A is an α-approximation algorithm (where α ≥ 1), we know
\[
\mathrm{val}\big(\mathrm{Alg}(Y^{\max}, \mathcal{F}),\, Y^{\max}\big) \;\ge\; \tfrac{1}{\alpha} \cdot \max_{I \in \mathcal{F}} \{\mathrm{val}(I, Y^{\max})\}. \tag{2}
\]
The following crucial lemma shows that one can design an adaptive strategy in the PoI world with the same expected utility.
