Robust Global Solutions of Bilevel Polynomial Optimization Problems with Uncertain Linear Constraints

T. D. Chuong$^{*}$ and V. Jeyakumar$^{\dagger}$

January 26, 2016

$^{*}$School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia; email: [email protected]. Research was supported by the UNSW VC's Postdoctoral Fellowship program.
$^{\dagger}$School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia; email: [email protected]. Research was partially supported by a grant from the Australian Research Council.

Abstract

This paper studies, for the first time, a bilevel polynomial program whose constraints involve uncertain linear constraints and another uncertain linear optimization problem. In the case of box data uncertainty, we present a sum of squares polynomial characterization of a global solution of its robust counterpart, where the constraints are enforced for all realizations of the uncertainties within the prescribed uncertainty sets. By characterizing a solution of the robust counterpart of the lower-level uncertain linear program under spectrahedral uncertainty using a new generalization of Farkas' lemma, we reformulate the robust bilevel program as a single level non-convex polynomial optimization problem. We then characterize a global solution of the single level polynomial program by employing Putinar's Positivstellensatz of algebraic geometry under coercivity of the polynomial objective function. Consequently, we show that the robust global optimal value of the bilevel program is the limit of a sequence of values of a Lasserre-type hierarchy of semidefinite linear programming relaxations. Numerical examples are given to show how the robust optimal value of the bilevel program can be calculated by solving semidefinite programming problems using the Matlab toolbox YALMIP.

Key words. Bilevel programming, uncertain linear constraints, robust optimization, global polynomial optimization.

1 Introduction

Bilevel optimization problems arise when two independent decision makers, ordered within a hierarchical structure, have conflicting objectives. They appear as hierarchical decision-making problems, such as risk management and economic planning problems, in engineering, governments and industries [1,6,8,9,15]. The commonly used bilevel optimization techniques (see [8,12,36,37] and other references therein) assume perfect information (that is, accurate values for the input quantities or system parameters), despite the reality that such precise knowledge is rarely available in hierarchical decision-making problems. The data of these problems are often uncertain (that is, they are not known exactly at the time of the decision) due to estimation errors, prediction errors or lack of information. Consequently, the development of optimization methodologies which are capable of generating robust optimal solutions that are immunized against data uncertainty, such as the deterministic robust optimization techniques, has become more important than ever in mathematics, commerce and engineering [3,5,17]. Yet, such a mathematical theory and the associated methods for bilevel optimization in the face of data uncertainty do not appear to be available in the literature.
In this paper we study, for the first time, the following bilevel polynomial program that finds robust global optimal solutions under bounded data uncertainty:
\[
\begin{array}{cl}
\displaystyle\min_{(x,y)\in\mathbb{R}^m\times\mathbb{R}^n} & f(x,y)\\[1mm]
\mbox{subject to} & \tilde a_i^\top x+\tilde b_i^\top y\le\tilde c_i,\ \ \forall\,(\tilde a_i,\tilde b_i,\tilde c_i)\in\widetilde U_i,\ i=1,\ldots,l,\\[1mm]
 & y\in Y(x):=\displaystyle\operatorname*{argmin}_{z\in\mathbb{R}^n}\big\{c_0^\top x+d_0^\top z \mid c_j^\top x+\hat a_j^\top z\le\hat b_j,\ \forall\,(\hat a_j,\hat b_j)\in\widehat U_j,\ j=1,\ldots,q\big\}.
\end{array}
\]
It is the robust counterpart of the bilevel polynomial program with uncertain linear constraints
\[
\begin{array}{cl}
\displaystyle\min_{(x,y)\in\mathbb{R}^m\times\mathbb{R}^n} & f(x,y)\\[1mm]
\mbox{subject to} & \tilde a_i^\top x+\tilde b_i^\top y\le\tilde c_i,\ i=1,\ldots,l,\\[1mm]
 & y\in\displaystyle\operatorname*{argmin}_{z\in\mathbb{R}^n}\big\{c_0^\top x+d_0^\top z \mid c_j^\top x+\hat a_j^\top z\le\hat b_j,\ j=1,\ldots,q\big\},
\end{array}
\]
where the constraint data $(\tilde a_i,\tilde b_i,\tilde c_i)$, $i=1,\ldots,l$, and $(\hat a_j,\hat b_j)$, $j=1,\ldots,q$, are uncertain and belong to the prescribed bounded uncertainty sets $\widetilde U_i$, $i=1,\ldots,l$, and $\widehat U_j$, $j=1,\ldots,q$, respectively. The vectors $c_0\in\mathbb{R}^m$, $d_0\in\mathbb{R}^n$ and $c_j\in\mathbb{R}^m$, $j=1,\ldots,q$, are fixed and $f(x,y)$ is a polynomial. Note that, in the robust counterpart, the uncertain linear constraints are enforced for all realizations of the uncertainties within the uncertainty sets. The robust counterpart is, therefore, a worst-case formulation in terms of deviations of the data from their nominal values.

Bilevel programs form a class of hard optimization problems even in the case where all the functions involved are linear and free of data uncertainty [2]. A general approach for studying bilevel optimization problems is to transform them into single level optimization problems [6,8,12]. The resulting single level problems are generally non-convex constrained optimization problems, and it is often difficult to find global optimal solutions of non-convex optimization problems. However, Putinar's Positivstellensatz [33] together with Lasserre-type semidefinite relaxations allows us to characterize global optimal solutions and find the global optimal value of a non-convex optimization problem involving polynomials. The reader is referred to [21–23,25–27] for related recent work on single level convex and non-convex polynomial optimization in the literature.

In this paper we make the following key contributions to bilevel optimization.

• Characterizations of lower-level robust solutions & spectrahedral uncertainty. We first present a complete dual characterization for a robust solution of the lower-level uncertain linear program in the general case of bounded spectrahedral uncertainty sets $\widehat U_j$, $j=1,2,\ldots,q$. It is achieved by way of proving a generalization of the celebrated non-homogeneous Farkas' lemma [14,16] for semi-infinite linear inequality systems, whose dual statement can be verified by solving a semidefinite linear program. A special variable transformation paves the way to formulate the dual statement in terms of linear matrix inequalities. The class of spectrahedral uncertainty sets covers a broad spectrum of convex uncertainty sets that appear in robust optimization; it includes polyhedra, balls and ellipsoids [34,35], for which dual characterizations of robust optimality of uncertain linear programs are already available in the literature [3–5].
• A sum of squares polynomial characterization of robust global optimal solutions. In the case of box data uncertainty in both the upper and lower level constraints, and when the objective polynomial is coercive, we derive a sum of squares polynomial characterization of a robust global solution of the bilevel program by first transforming the bilevel program into a single level non-convex polynomial program, using the dual characterization of solutions of the lower level program, and then employing Putinar's Positivstellensatz [33]. It is derived under a suitable Slater-type regularity condition. A numerical example is given to show that our characterization may fail without the regularity condition. In the Appendix, we show how a numerically checkable characterization of robust feasible solutions can be obtained in the case of ball data uncertainty in the constraints. Related recent work on global bilevel polynomial optimization in the absence of data uncertainty can be found in [20,23].

• Convergence of SDP relaxations to the robust global optimal value. Finally, using our sum of squares characterization of robust global optimality together with Lasserre-type semidefinite relaxations, we show how the robust global optimal value can be calculated by solving a sequence of semidefinite linear programming relaxations. We prove that the values of the Lasserre-type semidefinite relaxations converge to the global optimal value of the bilevel polynomial problem. We provide numerical examples to illustrate how the optimal value can be found using the Matlab toolbox YALMIP.

The outline of the paper is as follows. Section 2 presents a generalization of the Farkas lemma and a dual characterization of robust global optimality for the lower level program of the bilevel problem. Section 3 develops characterizations of robust solutions of uncertain bilevel problems in the case of box data uncertainty in the constraints. Section 4 provides results on finding the optimal value of the bilevel problem via semidefinite programming relaxations. The Appendix presents numerically checkable characterizations of robust feasible solutions of the bilevel program in the case of ball data uncertainty.

2 Lower Level LPs under Spectrahedral Uncertainty

In this section, we present a robust non-homogeneous Farkas lemma for an uncertain linear inequality system. Let us first recall some notation and preliminaries. The notation $\mathbb{R}^n$ signifies the Euclidean space whose norm is denoted by $\|\cdot\|$ for each $n\in\mathbb{N}:=\{1,2,\ldots\}$. An element $x\in\mathbb{R}^n$ is written as a column vector, but it is sometimes convenient to write the components of a vector in a row instead of a column. The inner product in $\mathbb{R}^n$ is defined by $\langle x,y\rangle:=x^\top y$ for all $x,y\in\mathbb{R}^n$. The topological closure of a set $\Omega\subset\mathbb{R}^n$ is denoted by ${\rm cl}\,\Omega$. The origin of any space is denoted by $0$, but we may use $0_n$ for the origin of $\mathbb{R}^n$ in situations where some confusion might be possible. As usual, ${\rm conv}\,\Omega$ denotes the convex hull of $\Omega$, while ${\rm cone}\,\Omega:=\mathbb{R}_+{\rm conv}\,\Omega$ stands for the convex conical hull of $\Omega\cup\{0_n\}$, where $\mathbb{R}_+:=[0,+\infty)\subset\mathbb{R}$. A symmetric $(n\times n)$ matrix $A$ is said to be positive semidefinite, denoted by $A\succeq 0$, whenever $x^\top Ax\ge 0$ for all $x\in\mathbb{R}^n$.

The classical version of Farkas' lemma for a semi-infinite inequality system can be found in [16, Theorem 4.3.4].

Lemma 2.1. (Non-homogeneous Farkas' lemma [16, Theorem 4.3.4]) Let $(b,r),(s_j,p_j)\in\mathbb{R}^n\times\mathbb{R}$, where $j$ varies in an arbitrary index set $J$. Suppose that the system of inequalities
\[
s_j^\top x\le p_j \quad\mbox{for all } j\in J \tag{2.1}
\]
has a solution $x\in\mathbb{R}^n$.
Then, the following two properties are equivalent:

(i) $b^\top x\le r$ for all $x$ satisfying (2.1);

(ii) $(b,r)\in{\rm cl\,cone}\big(\{(0_n,1)\}\cup\{(s_j,p_j)\mid j\in J\}\big)$.

Consider an uncertain linear inequality system,
\[
x\in\mathbb{R}^n,\quad \hat a_j^\top x\le\hat b_j,\ j=1,\ldots,q, \tag{2.2}
\]
where $(\hat a_j,\hat b_j)$, $j=1,\ldots,q$, are uncertain and they belong to the uncertainty sets $\widehat U_j$, $j=1,\ldots,q$. The uncertainty sets are given by
\[
\widehat U_j:=\Big\{\Big(a_j^0+\sum_{i=1}^{s}u_j^ia_j^i,\ b_j^0+\sum_{i=1}^{s}u_j^ib_j^i\Big)\ \Big|\ u_j:=(u_j^1,\ldots,u_j^s)\in U_j\Big\},\quad j=1,\ldots,q,
\]
where $a_j^i\in\mathbb{R}^n$, $b_j^i\in\mathbb{R}$, $i=0,1,\ldots,s$, $j=1,\ldots,q$, are fixed and $U_j$ is a spectrahedron [34,35] described by
\[
U_j:=\Big\{u_j:=(u_j^1,\ldots,u_j^s)\in\mathbb{R}^s\ \Big|\ A_j^0+\sum_{i=1}^{s}u_j^iA_j^i\succeq 0\Big\},\quad j=1,\ldots,q, \tag{2.3}
\]
and $A_j^i$, $i=0,1,\ldots,s$, $j=1,\ldots,q$, are symmetric $(p\times p)$ matrices.

Now, consider the affine mappings $a_j\colon\mathbb{R}^s\to\mathbb{R}^n$ and $b_j\colon\mathbb{R}^s\to\mathbb{R}$, $j=1,\ldots,q$, given respectively by
\[
a_j(u_j):=a_j^0+\sum_{i=1}^{s}u_j^ia_j^i,\qquad b_j(u_j):=b_j^0+\sum_{i=1}^{s}u_j^ib_j^i\quad\mbox{for }u_j:=(u_j^1,\ldots,u_j^s)\in\mathbb{R}^s, \tag{2.4}
\]
with $a_j^i\in\mathbb{R}^n$, $b_j^i\in\mathbb{R}$, $i=0,1,\ldots,s$, $j=1,\ldots,q$, fixed as above. Then, the robust counterpart of the uncertain system (2.2),
\[
x\in\mathbb{R}^n,\quad \hat a_j^\top x\le\hat b_j,\ \forall\,(\hat a_j,\hat b_j)\in\widehat U_j,\ j=1,\ldots,q,
\]
can be expressed equivalently as
\[
x\in\mathbb{R}^n,\quad a_j(u_j)^\top x\le b_j(u_j),\ \forall\,u_j\in U_j,\ j=1,\ldots,q. \tag{2.5}
\]

From now on, the sets $U_j$, $j=1,\ldots,q$, in (2.3) are assumed to be compact. Note that the spectrahedra (2.3) are closed convex sets, and they cover a broad spectrum of infinite convex sets, such as polyhedra, balls, ellipsoids and cylinders [34,35], that appear in robust optimization.

In the next theorem, by employing a variable transformation together with Lemma 2.1, we derive a generalized non-homogeneous Farkas lemma, which provides a numerically tractable certificate for nonnegativity of an affine function over the uncertain linear inequality system (2.5). This nonnegativity can be checked by solving a feasibility problem of a semidefinite linear program, and it plays an important role in characterizing robust solutions of the bilevel polynomial optimization problem. For recent work on generalized Farkas' lemmas for uncertain linear inequality systems, see [14,18,19].

Theorem 2.2. (Generalized Farkas' lemma with numerically checkable dual condition) Let $(\wp,r)\in\mathbb{R}^n\times\mathbb{R}$. Assume that the cone $C:={\rm cone}\big\{\big(a_j(u_j),b_j(u_j)\big)\mid u_j\in U_j,\ j=1,\ldots,q\big\}$ is closed. Then, the following statements are equivalent:

(i) $\big[x\in\mathbb{R}^n,\ a_j(u_j)^\top x\le b_j(u_j),\ \forall\,u_j\in U_j,\ j=1,\ldots,q\big]\ \Longrightarrow\ \wp^\top x-r\ge 0$;

(ii) $\exists\,\lambda_j^0\ge 0,\ \lambda_j^i\in\mathbb{R},\ j=1,\ldots,q,\ i=1,\ldots,s$, such that
\[
\wp+\sum_{j=1}^{q}\Big(\lambda_j^0a_j^0+\sum_{i=1}^{s}\lambda_j^ia_j^i\Big)=0,\qquad
-r-\sum_{j=1}^{q}\Big(\lambda_j^0b_j^0+\sum_{i=1}^{s}\lambda_j^ib_j^i\Big)\ge 0
\]
\[
\mbox{and}\quad \lambda_j^0A_j^0+\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0,\quad j=1,\ldots,q.
\]
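Condition (ii) is a system of linear equations, a linear inequality and linear matrix inequalities in the multipliers $\lambda_j^0,\lambda_j^i$, so it can be tested by solving a semidefinite feasibility problem. The following MATLAB/YALMIP sketch is our own illustration, not code from the paper: it assumes that YALMIP and an SDP solver (e.g. SeDuMi or SDPT3) are installed, and all problem data are placeholders for a single constraint ($q=1$, $s=1$, $U_1=[-1,1]$).

\begin{verbatim}
% A minimal sketch (not from the paper): check condition (ii) of Theorem 2.2
% for q = 1 via an SDP feasibility problem.  All data below are placeholders.
n = 2;
a0 = [0; -1];  a1 = [1; 0];          % a_1(u) = a0 + u*a1
b0 = 0;        b1 = 0;               % b_1(u) = b0 + u*b1
A0 = eye(2);   A1 = [1 0; 0 -1];     % U_1 = {u : A0 + u*A1 >= 0} = [-1,1]
p_vec = [0; 1];  r = 0;              % test whether p_vec'*x - r >= 0 on the robust set

lam0 = sdpvar(1,1);                  % multiplier lambda^0 >= 0
lam1 = sdpvar(1,1);                  % multiplier lambda^1 (free)

F = [lam0 >= 0, ...
     p_vec + lam0*a0 + lam1*a1 == zeros(n,1), ...   % first equation in (ii)
     -r - (lam0*b0 + lam1*b1) >= 0, ...             % inequality in (ii)
     lam0*A0 + lam1*A1 >= 0];                       % LMI in (ii)

diagnostics = optimize(F, [], sdpsettings('verbose', 0));  % pure feasibility problem
if diagnostics.problem == 0
    disp('Certificate found: statement (i) holds.');
else
    disp('No certificate found for the given data.');
end
\end{verbatim}

For these placeholder data the robust system reads $u\,x_1-x_2\le 0$ for all $u\in[-1,1]$, whose solution set is $\{x\mid |x_1|\le x_2\}$; the affine function $x_2$ is nonnegative there, and the solver returns the (unique) certificate $\lambda^0=1$, $\lambda^1=0$.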
Proof. [(i) $\Rightarrow$ (ii)] Assume that (i) holds. Putting
\[
\widetilde C:={\rm cone}\Big(\{(0_n,1)\}\cup\big\{\big(a_j(u_j),b_j(u_j)\big)\mid u_j\in U_j,\ j=1,\ldots,q\big\}\Big), \tag{2.6}
\]
we first prove that $\widetilde C$ is closed. Consider a sequence $x_k^*\to x^*$, where $x_k^*\in\widetilde C$, $k\in\mathbb{N}$. Then, $x_k^*:=\lambda_0^k(0_n,1)+\sum_{j=1}^{q}\lambda_j^k\big(a_j(u_j^k),b_j(u_j^k)\big)$ with $\lambda_j^k\ge 0$, $j=0,1,\ldots,q$, and $u_j^k\in U_j$, $j=1,\ldots,q$. We assert that the sequence $\lambda_0^k$, $k\in\mathbb{N}$, is bounded. Otherwise, by taking a subsequence if necessary, we may assume that $\lambda_0^k\to+\infty$ as $k\to\infty$. By setting $\tilde z_k^*:=\sum_{j=1}^{q}\frac{\lambda_j^k}{\lambda_0^k}\big(a_j(u_j^k),b_j(u_j^k)\big)$ for sufficiently large $k\in\mathbb{N}$, we see that $\tilde z_k^*\in C$ for such $k\in\mathbb{N}$ and $\tilde z_k^*=\frac{x_k^*}{\lambda_0^k}-(0_n,1)\to-(0_n,1)$ as $k\to\infty$. In addition, since $C$ is closed, it follows that $-(0_n,1)\in C$. This means that there exist $\bar\mu_j\ge 0$ and $\bar u_j\in U_j$, $j=1,\ldots,q$, such that
\[
0_n=\sum_{j=1}^{q}\bar\mu_j a_j(\bar u_j),\qquad -1=\sum_{j=1}^{q}\bar\mu_j b_j(\bar u_j). \tag{2.7}
\]
Besides, by (i), taking $x$ with $a_j(u_j)^\top x\le b_j(u_j)$ for all $u_j\in U_j$ and $j=1,\ldots,q$, it follows that
\[
\sum_{j=1}^{q}\bar\mu_j a_j(\bar u_j)^\top x\le\sum_{j=1}^{q}\bar\mu_j b_j(\bar u_j).
\]
This together with (2.7) entails that $0\le-1$, which is absurd, and hence the sequence $\lambda_0^k$, $k\in\mathbb{N}$, must be bounded. Then, we may assume without loss of generality that $\lambda_0^k\to\lambda_0\ge 0$ as $k\to\infty$. Similarly, by setting $z_k^*:=\sum_{j=1}^{q}\lambda_j^k\big(a_j(u_j^k),b_j(u_j^k)\big)$, we obtain that $z_k^*\in C$ for all $k\in\mathbb{N}$ and $z_k^*\to x^*-\lambda_0(0_n,1)\in C$ as $k\to\infty$. Therefore, we find $\mu_j\ge 0$ and $u_j\in U_j$, $j=1,\ldots,q$, such that
\[
x^*-\lambda_0(0_n,1)=\sum_{j=1}^{q}\mu_j\big(a_j(u_j),b_j(u_j)\big).
\]
This shows that $x^*\in\widetilde C$, and consequently, $\widetilde C$ is closed.

Now, invoking Lemma 2.1, we conclude that $(-\wp,-r)\in{\rm cl}\,\widetilde C=\widetilde C$. Then, there exist $\lambda_0\ge 0$, $\mu_j\ge 0$, and $u_j:=(u_j^1,\ldots,u_j^s)\in U_j$, $j=1,\ldots,q$, such that
\[
-\wp=\sum_{j=1}^{q}\mu_j\Big(a_j^0+\sum_{i=1}^{s}u_j^ia_j^i\Big),\qquad
-r=\lambda_0+\sum_{j=1}^{q}\mu_j\Big(b_j^0+\sum_{i=1}^{s}u_j^ib_j^i\Big).
\]
Using the variable transformations $\lambda_j^0:=\mu_j\ge 0$ and $\lambda_j^i:=\mu_ju_j^i\in\mathbb{R}$, $j=1,\ldots,q$, $i=1,\ldots,s$, we see that
\[
\wp+\sum_{j=1}^{q}\Big(\lambda_j^0a_j^0+\sum_{i=1}^{s}\lambda_j^ia_j^i\Big)=0,\qquad
r+\lambda_0+\sum_{j=1}^{q}\Big(\lambda_j^0b_j^0+\sum_{i=1}^{s}\lambda_j^ib_j^i\Big)=0.
\]
The latter equality means that $-r-\sum_{j=1}^{q}\big(\lambda_j^0b_j^0+\sum_{i=1}^{s}\lambda_j^ib_j^i\big)=\lambda_0\ge 0$.

Let $j\in\{1,\ldots,q\}$ be arbitrary. The relation $u_j\in U_j$ ensures that $A_j^0+\sum_{i=1}^{s}u_j^iA_j^i\succeq 0$. We will verify that
\[
\lambda_j^0A_j^0+\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0. \tag{2.8}
\]
Indeed, if $\lambda_j^0=0$, then $\lambda_j^i=0$ for all $i=1,\ldots,s$, and hence (2.8) holds trivially. If $\lambda_j^0\ne 0$, then
\[
\lambda_j^0A_j^0+\sum_{i=1}^{s}\lambda_j^iA_j^i=\lambda_j^0\Big(A_j^0+\sum_{i=1}^{s}\frac{\lambda_j^i}{\lambda_j^0}A_j^i\Big)=\lambda_j^0\Big(A_j^0+\sum_{i=1}^{s}u_j^iA_j^i\Big)\succeq 0,
\]
showing that (2.8) holds, too. Consequently, we obtain (ii).

[(ii) $\Rightarrow$ (i)] Assume that (ii) holds. It means that there exist $\lambda_j^0\ge 0$, $\lambda_j^i\in\mathbb{R}$, $j=1,\ldots,q$, $i=1,\ldots,s$, such that $\wp+\sum_{j=1}^{q}(\lambda_j^0a_j^0+\sum_{i=1}^{s}\lambda_j^ia_j^i)=0$, $-r-\sum_{j=1}^{q}(\lambda_j^0b_j^0+\sum_{i=1}^{s}\lambda_j^ib_j^i)\ge 0$ and $\lambda_j^0A_j^0+\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0$, $j=1,\ldots,q$. By letting $\lambda_0:=-r-\sum_{j=1}^{q}(\lambda_j^0b_j^0+\sum_{i=1}^{s}\lambda_j^ib_j^i)$, we obtain that $\lambda_0\ge 0$ and that
\[
-\wp=\sum_{j=1}^{q}\Big(\lambda_j^0a_j^0+\sum_{i=1}^{s}\lambda_j^ia_j^i\Big),\qquad
-r=\lambda_0+\sum_{j=1}^{q}\Big(\lambda_j^0b_j^0+\sum_{i=1}^{s}\lambda_j^ib_j^i\Big), \tag{2.9}
\]
and
\[
\lambda_j^0A_j^0+\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0,\quad j=1,\ldots,q. \tag{2.10}
\]

Let $j\in\{1,\ldots,q\}$ be arbitrary. We claim by (2.10) that if $\lambda_j^0=0$, then $\lambda_j^i=0$ for all $i=1,\ldots,s$. Suppose, on the contrary, that $\lambda_j^0=0$ but there exists $i_0\in\{1,\ldots,s\}$ with $\lambda_j^{i_0}\ne 0$. In this case, (2.10) becomes $\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0$. Let $\bar u_j:=(\bar u_j^1,\ldots,\bar u_j^s)\in U_j$. It follows by definition that $A_j^0+\sum_{i=1}^{s}\bar u_j^iA_j^i\succeq 0$ and
\[
A_j^0+\sum_{i=1}^{s}(\bar u_j^i+t\lambda_j^i)A_j^i=A_j^0+\sum_{i=1}^{s}\bar u_j^iA_j^i+t\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0\quad\mbox{for all }t>0.
\]
This guarantees that $\bar u_j+t(\lambda_j^1,\ldots,\lambda_j^s)\in U_j$ for all $t>0$, which contradicts the fact that $U_j$ is a compact set. So, our claim must be true.

Next, take $\hat u_j:=(\hat u_j^1,\ldots,\hat u_j^s)\in U_j$ and define $\tilde u_j:=(\tilde u_j^1,\ldots,\tilde u_j^s)$ with
\[
\tilde u_j^i:=\begin{cases}\hat u_j^i & \mbox{if }\lambda_j^0=0,\\[1mm] \dfrac{\lambda_j^i}{\lambda_j^0} & \mbox{if }\lambda_j^0\ne 0.\end{cases}
\]
Then, it can be checked that
\[
A_j^0+\sum_{i=1}^{s}\tilde u_j^iA_j^i=\begin{cases} A_j^0+\displaystyle\sum_{i=1}^{s}\hat u_j^iA_j^i & \mbox{if }\lambda_j^0=0,\\[2mm] \dfrac{1}{\lambda_j^0}\Big(\lambda_j^0A_j^0+\displaystyle\sum_{i=1}^{s}\lambda_j^iA_j^i\Big) & \mbox{if }\lambda_j^0\ne 0,\end{cases}
\]
which shows that $A_j^0+\sum_{i=1}^{s}\tilde u_j^iA_j^i\succeq 0$, and so, $\tilde u_j\in U_j$.
We now deduce from (2.9) that
\[
(-\wp,-r)=\lambda_0(0_n,1)+\sum_{j=1}^{q}\lambda_j^0\Big(a_j^0+\sum_{i=1}^{s}\tilde u_j^ia_j^i,\ b_j^0+\sum_{i=1}^{s}\tilde u_j^ib_j^i\Big)
=\lambda_0(0_n,1)+\sum_{j=1}^{q}\lambda_j^0\big(a_j(\tilde u_j),b_j(\tilde u_j)\big),
\]
which shows that
\[
(-\wp,-r)\in\widetilde C, \tag{2.11}
\]
where $\widetilde C$ is defined as in (2.6).

To prove (i), let $x\in\mathbb{R}^n$ be such that $a_j(u_j)^\top x\le b_j(u_j)$ for all $u_j\in U_j$, $j=1,\ldots,q$. Invoking Lemma 2.1 again, we conclude by (2.11) that $-\wp^\top x\le-r$ or, equivalently, $\wp^\top x\ge r$, which completes the proof of the theorem. $\Box$

In the setting of $b_j:=0$, $j=1,\ldots,q$, and $r:=0$, the above result collapses into the so-called semi-infinite Farkas lemma given in [24, Theorem 2.1].

The following example shows that Theorem 2.2 may fail without the assumption that the cone $C$ is closed. Here, we consider the case of $q:=1$ for the purpose of simplicity.

Example 2.3. (The importance of the closed cone regularity) Let $U:=\big\{u:=(u^1,u^2)\in\mathbb{R}^2\mid A^0+\sum_{i=1}^{2}u^iA^i\succeq 0\big\}$, where
\[
A^0:=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix},\quad
A^1:=\begin{pmatrix}0&0&1\\0&0&0\\1&0&0\end{pmatrix},\quad
A^2:=\begin{pmatrix}0&0&0\\0&0&1\\0&1&0\end{pmatrix}.
\]
Let $a\colon\mathbb{R}^2\to\mathbb{R}^2$ and $b\colon\mathbb{R}^2\to\mathbb{R}$ be affine mappings defined respectively by $a(u):=a^0+\sum_{i=1}^{2}u^ia^i$ and $b(u):=b^0+\sum_{i=1}^{2}u^ib^i$ for $u:=(u^1,u^2)\in\mathbb{R}^2$, with $a^0:=(0,1)$, $a^1:=(1,0)$, $a^2:=(0,1)\in\mathbb{R}^2$ and $b^0=b^1=b^2:=0\in\mathbb{R}$.

In this setting, it is easy to see that $U=\{u:=(u^1,u^2)\in\mathbb{R}^2\mid (u^1)^2+(u^2)^2\le 1\}$. Let $x:=(x_1,x_2)\in\mathbb{R}^2$ be such that $a(u)^\top x\le b(u)$ for all $u:=(u^1,u^2)\in U$. It means that $x$ satisfies
\[
u^1x_1+(u^2+1)x_2\le 0,\quad\forall\,u:=(u^1,u^2)\in U. \tag{2.12}
\]

Now, choose $\bar x:=(0,-1)$, $\wp:=(1,0)$ and $r:=0$. We see that $\bar x$ satisfies (2.12) and $\wp^\top\bar x\ge r$, and condition (i) of Theorem 2.2 holds. (Indeed, (2.12) is equivalent to $\|x\|\le -x_2$, so its solution set is $\{0\}\times(-\infty,0]$, on which $\wp^\top x-r=x_1=0\ge 0$.) However, condition (ii) of Theorem 2.2 fails. Indeed, assume on the contrary that there exist $\lambda^0\ge 0$ and $\lambda^1,\lambda^2\in\mathbb{R}$ such that
\[
\wp+\lambda^0a^0+\lambda^1a^1+\lambda^2a^2=(0,0),\quad -r-(\lambda^0b^0+\lambda^1b^1+\lambda^2b^2)\ge 0,\quad \lambda^0A^0+\lambda^1A^1+\lambda^2A^2\succeq 0.
\]
This reduces to the expressions
\[
\lambda^1=-1,\quad \lambda^2=-\lambda^0,\quad (\lambda^0)^2\ge(\lambda^1)^2+(\lambda^2)^2,
\]
which is absurd. Consequently, the conclusion of Theorem 2.2 fails to hold.

The reason is that the cone $C:={\rm cone}\big\{(u^1,u^2+1,0)\mid (u^1)^2+(u^2)^2\le 1\big\}$ is not closed. To see this, just take the sequence $z_k:=(1,\frac{1}{k},0)=k\big(\frac{1}{k},\frac{1}{k^2},0\big)\in C$ for $k\in\mathbb{N}$. It is clear that $z_k\to z_0:=(1,0,0)$ as $k\to\infty$, and $z_0\notin C$.
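The failure of condition (ii) in Example 2.3 can also be observed numerically. The sketch below is our own illustration (again assuming YALMIP together with an SDP solver, not code from the paper); it encodes the data of Example 2.3 and asks the solver for the multipliers of condition (ii), which it reports to be infeasible.

\begin{verbatim}
% A minimal sketch (not from the paper): the dual system (ii) of Theorem 2.2
% is infeasible for the data of Example 2.3, since the cone C is not closed.
A0 = eye(3);
A1 = [0 0 1; 0 0 0; 1 0 0];
A2 = [0 0 0; 0 0 1; 0 1 0];              % U = {u : A0 + u1*A1 + u2*A2 >= 0} = unit disk
a0 = [0; 1]; a1 = [1; 0]; a2 = [0; 1];   % a(u) = (u1, u2 + 1)
b0 = 0; b1 = 0; b2 = 0;                  % b(u) = 0
p_vec = [1; 0]; r = 0;

lam0 = sdpvar(1,1); lam1 = sdpvar(1,1); lam2 = sdpvar(1,1);
F = [lam0 >= 0, ...
     p_vec + lam0*a0 + lam1*a1 + lam2*a2 == zeros(2,1), ...
     -r - (lam0*b0 + lam1*b1 + lam2*b2) >= 0, ...
     lam0*A0 + lam1*A1 + lam2*A2 >= 0];

diagnostics = optimize(F, [], sdpsettings('verbose', 0));
% Expected outcome: the solver reports infeasibility, in line with Example 2.3.
if diagnostics.problem ~= 0
    disp('No multipliers found: condition (ii) fails, as shown in Example 2.3.');
end
\end{verbatim}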
We now provide some sufficient criteria which guarantee that the cone $C$ in Theorem 2.2 is closed.

Proposition 2.4. (Sufficiency for the closed cone regularity condition) Assume that one of the following conditions holds:

(i) $U_j$, $j=1,\ldots,q$, are bounded polyhedra (i.e., polytopes);

(ii) $U_j$, $j=1,\ldots,q$, are compact and the Slater condition holds, i.e., there is $\hat x\in\mathbb{R}^n$ such that
\[
a_j(u_j)^\top\hat x<b_j(u_j),\quad\forall\,u_j\in U_j,\ j=1,\ldots,q. \tag{2.13}
\]

Then, the cone $C:={\rm cone}\big\{\big(a_j(u_j),b_j(u_j)\big)\mid u_j\in U_j,\ j=1,\ldots,q\big\}$ is closed.

Proof. For each $j\in\{1,\ldots,q\}$, define an affine map $F_j\colon\mathbb{R}^s\to\mathbb{R}^n\times\mathbb{R}$ by
\[
F_j(u_j):=\big[a_j(u_j),b_j(u_j)\big].
\]
Then, $F_j(U_j)=\bigcup_{u_j\in U_j}F_j(u_j)=\bigcup_{u_j\in U_j}\big[a_j(u_j),b_j(u_j)\big]$, and so it follows that
\[
C={\rm cone}\Big(\bigcup_{j=1}^{q}F_j(U_j)\Big).
\]

(i) Since $U_j$ is a polytope for $j\in\{1,\ldots,q\}$, the set $F_j(U_j)$ is a polytope as well (see e.g. [7, Proposition 1.29]), and so $F_j(U_j)$ is the convex hull of some finite set (see e.g. [7, Theorem 1.26]). It means that for each $j\in\{1,\ldots,q\}$ there exist $k_j\in\mathbb{N}$ and $(a_j^i,b_j^i)\in\mathbb{R}^n\times\mathbb{R}$, $i=1,\ldots,k_j$, such that $F_j(U_j)={\rm conv}\{(a_j^i,b_j^i)\mid i=1,\ldots,k_j\}$. It follows that ${\rm conv}\Big(\bigcup_{j=1}^{q}F_j(U_j)\Big)={\rm conv}\{(a_j^i,b_j^i)\mid i=1,\ldots,k_j,\ j=1,\ldots,q\}$ is a polytope. So,
\[
C={\rm cone}\Big(\bigcup_{j=1}^{q}F_j(U_j)\Big)={\rm cone}\Big[{\rm conv}\Big(\bigcup_{j=1}^{q}F_j(U_j)\Big)\Big]
\]
is a polyhedral cone, and hence is closed (see e.g. [32, Proposition 3.9]).

(ii) Since $U_j$ is compact and $F_j$ is affine (and thus continuous) for $j\in\{1,\ldots,q\}$, the set $F_j(U_j)$ is a compact set. It entails that $\bigcup_{j=1}^{q}F_j(U_j)$ is a compact set as well. Note further that $F_j(U_j)$ is a convex set for each $j\in\{1,\ldots,q\}$ due to the convexity of $U_j$ (see e.g. [32, Proposition 1.23]). Assume now that the Slater condition (2.13) holds. We claim that
\[
0_{n+1}\notin{\rm conv}\Big(\bigcup_{j=1}^{q}F_j(U_j)\Big). \tag{2.14}
\]
If this is not the case, then there exist $\bar u_j\in U_j$, $\bar\mu_j\ge 0$, $j=1,\ldots,q$, with $\sum_{j=1}^{q}\bar\mu_j=1$ such that
\[
0_{n+1}=\sum_{j=1}^{q}\bar\mu_j\big[a_j(\bar u_j),b_j(\bar u_j)\big]. \tag{2.15}
\]
Moreover, we get by (2.13) that
\[
\sum_{j=1}^{q}\bar\mu_j a_j(\bar u_j)^\top\hat x<\sum_{j=1}^{q}\bar\mu_j b_j(\bar u_j).
\]
This together with (2.15) gives a contradiction, and so (2.14) holds. According to [16, Proposition 1.4.7], we conclude that the cone $C={\rm cone}\Big(\bigcup_{j=1}^{q}F_j(U_j)\Big)$ is closed. $\Box$

As an application of Theorem 2.2, we obtain a complete dual characterization of robust optimality of the uncertain lower level linear program of the bilevel polynomial problem. Let $x\in\mathbb{R}^m$ and consider the uncertain lower level linear program (LNP), given by
\[
\min_{z\in\mathbb{R}^n}\big\{c_0^\top x+d_0^\top z\ \big|\ c_j^\top x+a_j(u_j)^\top z\le b_j(u_j),\ j=1,\ldots,q\big\}, \tag{LNP}
\]
where $u_j\in U_j$, $j=1,\ldots,q$, with the parameter sets $U_j:=\{(u_j^1,\ldots,u_j^s)\in\mathbb{R}^s\mid A_j^0+\sum_{i=1}^{s}u_j^iA_j^i\succeq 0\}$, $j=1,\ldots,q$, as in (2.3), the affine mappings $a_j\colon\mathbb{R}^s\to\mathbb{R}^n$, $b_j\colon\mathbb{R}^s\to\mathbb{R}$, $j=1,\ldots,q$, given respectively by $a_j(u_j):=a_j^0+\sum_{i=1}^{s}u_j^ia_j^i$ and $b_j(u_j):=b_j^0+\sum_{i=1}^{s}u_j^ib_j^i$ as in (2.4), and $c_0\in\mathbb{R}^m$, $d_0\in\mathbb{R}^n$, $c_j\in\mathbb{R}^m$, $j=1,\ldots,q$, fixed.

Following the robust optimization approach [3,18,19,25], the robust counterpart of the problem (LNP) is defined by
\[
\min_{z\in\mathbb{R}^n}\big\{c_0^\top x+d_0^\top z\ \big|\ c_j^\top x+a_j(u_j)^\top z\le b_j(u_j),\ \forall\,u_j\in U_j,\ j=1,\ldots,q\big\}. \tag{RLNP}
\]
As usual, we say that $y\in\mathbb{R}^n$ is a robust solution of the problem (LNP) (see e.g. [3]) if it is an optimal solution of the problem (RLNP), i.e.,
\[
y\in Y(x):=\operatorname*{argmin}_{z\in\mathbb{R}^n}\big\{c_0^\top x+d_0^\top z\ \big|\ c_j^\top x+a_j(u_j)^\top z\le b_j(u_j),\ \forall\,u_j\in U_j,\ j=1,\ldots,q\big\}.
\]

The next result establishes a characterization for robust solutions of the uncertain linear optimization problem (LNP).

Theorem 2.5. (Characterization for robust solutions of problem (LNP)) Let $x\in\mathbb{R}^m$, and let the cone $C(x):={\rm cone}\big\{\big(a_j(u_j),b_j(u_j)-c_j^\top x\big)\mid u_j\in U_j,\ j=1,\ldots,q\big\}$ be closed. Then, $y\in Y(x)$ if and only if
\[
\mbox{(I)}\quad
\left\{
\begin{array}{l}
y\in\mathbb{R}^n,\ c_j^\top x+a_j(u_j)^\top y\le b_j(u_j),\ \forall\,u_j\in U_j,\ j=1,\ldots,q,\ \mbox{and}\\[1mm]
\exists\,\lambda_j^0\ge 0,\ \lambda_j^i\in\mathbb{R},\ j=1,\ldots,q,\ i=1,\ldots,s,\ \mbox{such that}\\[1mm]
d_0+\displaystyle\sum_{j=1}^{q}\Big(\lambda_j^0a_j^0+\sum_{i=1}^{s}\lambda_j^ia_j^i\Big)=0,\quad
-d_0^\top y-\displaystyle\sum_{j=1}^{q}\Big(\lambda_j^0b_j^0-\lambda_j^0c_j^\top x+\sum_{i=1}^{s}\lambda_j^ib_j^i\Big)\ge 0\\[2mm]
\mbox{and}\quad \lambda_j^0A_j^0+\displaystyle\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0,\ j=1,\ldots,q.
\end{array}
\right.
\]
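Both parts of condition (I) can be verified numerically: the robust feasibility of $y$ amounts to maximizing an affine function of $u_j$ over each spectrahedron $U_j$ (a semidefinite program), and the existence of the multipliers is again an SDP feasibility problem, as in Theorem 2.2. The following MATLAB/YALMIP sketch is our own illustration, not code from the paper; it assumes YALMIP with an SDP solver, uses placeholder data with $q=1$, $s=1$, $m=1$, and takes the candidate pair $(x,y)$ as given.

\begin{verbatim}
% A minimal sketch (not from the paper): test condition (I) of Theorem 2.5
% for q = 1 with placeholder data.
a10 = [0; -1]; a11 = [1; 0];   b10 = 0; b11 = 0;     % a_1(u), b_1(u)
A10 = eye(2);  A11 = [1 0; 0 -1];                    % U_1 = [-1,1]
c1 = 0;  d0 = [0; 1];                                % lower-level data (m = 1)
x  = 0;  y  = [0; 0];                                % candidate pair to be tested

% (a) Robust feasibility of y: maximize c1*x + a_1(u)'*y - b_1(u) over u in U_1.
u = sdpvar(1,1);
optimize([A10 + u*A11 >= 0], -(c1*x + (a10 + u*a11)'*y - (b10 + u*b11)), ...
         sdpsettings('verbose', 0));
robust_feasible = value(c1*x + (a10 + u*a11)'*y - (b10 + u*b11)) <= 1e-8;

% (b) Multipliers of (I): an SDP feasibility problem in (lambda^0, lambda^1).
lam0 = sdpvar(1,1); lam1 = sdpvar(1,1);
F = [lam0 >= 0, ...
     d0 + lam0*a10 + lam1*a11 == zeros(2,1), ...
     -d0'*y - (lam0*b10 - lam0*c1*x + lam1*b11) >= 0, ...
     lam0*A10 + lam1*A11 >= 0];
diagnostics = optimize(F, [], sdpsettings('verbose', 0));

if robust_feasible && diagnostics.problem == 0
    disp('Condition (I) holds: y is a robust solution of (LNP) for this x.');
end
\end{verbatim}

For these placeholder data the lower-level robust feasible set is $\{z\mid |z_1|\le z_2\}$ and $y=(0,0)$ minimizes $d_0^\top z=z_2$ over it, so the script reports that condition (I) holds.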
Proof. Let $y\in Y(x)$. This means that
\[
y\in\mathbb{R}^n,\quad c_j^\top x+a_j(u_j)^\top y\le b_j(u_j),\ \forall\,u_j\in U_j,\ j=1,\ldots,q,
\]
and
\[
d_0^\top y\le d_0^\top z\ \ \mbox{for all }z\in\mathbb{R}^n\mbox{ satisfying }c_j^\top x+a_j(u_j)^\top z\le b_j(u_j),\ \forall\,u_j\in U_j,\ j=1,\ldots,q. \tag{2.16}
\]
Letting $r:=d_0^\top y$, (2.16) amounts to the assertion that, for each $z\in\mathbb{R}^n$,
\[
\Big(a_j^0+\sum_{i=1}^{s}u_j^ia_j^i\Big)^\top z\le(b_j^0-c_j^\top x)+\sum_{i=1}^{s}u_j^ib_j^i,\ \forall\,u_j:=(u_j^1,\ldots,u_j^s)\in U_j,\ j=1,\ldots,q\ \Longrightarrow\ d_0^\top z\ge r.
\]
Since the cone $C(x)$ is closed, applying Theorem 2.2, we find $\lambda_j^0\ge 0$, $\lambda_j^i\in\mathbb{R}$, $j=1,\ldots,q$, $i=1,\ldots,s$, such that
\[
d_0+\sum_{j=1}^{q}\Big(\lambda_j^0a_j^0+\sum_{i=1}^{s}\lambda_j^ia_j^i\Big)=0,\qquad
-r-\sum_{j=1}^{q}\Big(\lambda_j^0b_j^0-\lambda_j^0c_j^\top x+\sum_{i=1}^{s}\lambda_j^ib_j^i\Big)\ge 0
\]
\[
\mbox{and}\quad \lambda_j^0A_j^0+\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0,\quad j=1,\ldots,q.
\]
So, we obtain (I).

Conversely, assume that (I) holds. Then, there exists $y\in\mathbb{R}^n$ such that
\[
c_j^\top x+a_j(u_j)^\top y\le b_j(u_j),\ \forall\,u_j\in U_j,\ j=1,\ldots,q.
\]
This means that $y\in\mathbb{R}^n$ is a feasible point of problem (RLNP). Let $z\in\mathbb{R}^n$ be such that $c_j^\top x+a_j(u_j)^\top z\le b_j(u_j)$ for all $u_j\in U_j$, $j=1,\ldots,q$, or equivalently,
\[
\Big(a_j^0+\sum_{i=1}^{s}u_j^ia_j^i\Big)^\top z\le(b_j^0-c_j^\top x)+\sum_{i=1}^{s}u_j^ib_j^i,\quad\forall\,u_j:=(u_j^1,\ldots,u_j^s)\in U_j,\ j=1,\ldots,q.
\]
By (I), there exist $\lambda_j^0\ge 0$, $\lambda_j^i\in\mathbb{R}$, $j=1,\ldots,q$, $i=1,\ldots,s$, such that
\[
d_0+\sum_{j=1}^{q}\Big(\lambda_j^0a_j^0+\sum_{i=1}^{s}\lambda_j^ia_j^i\Big)=0,\qquad
-d_0^\top y-\sum_{j=1}^{q}\Big(\lambda_j^0b_j^0-\lambda_j^0c_j^\top x+\sum_{i=1}^{s}\lambda_j^ib_j^i\Big)\ge 0
\]
\[
\mbox{and}\quad \lambda_j^0A_j^0+\sum_{i=1}^{s}\lambda_j^iA_j^i\succeq 0,\quad j=1,\ldots,q.
\]