Optimal Schedules for Monitoring Anytime Algorithms — PDF, 44 pages, 2017, 0.36 MB, English

Optimal Schedules for Monitoring Anytime Algorithms

Lev Finkelstein and Shaul Markovitch
Computer Science Department
Technion, Haifa 32000 Israel
lev,[email protected]

Abstract

Monitoring anytime algorithms can significantly improve their performance. This work deals with the problem of off-line construction of monitoring schedules. We study a model where queries are submitted to the monitored process in order to detect satisfaction of a given goal predicate. The queries consume time from the monitored process, thus delaying the time of satisfying the goal condition. We present a formal model for this class of problems and provide a theoretical analysis of the class of optimal schedules. We then introduce an algorithm for constructing optimal monitoring schedules and prove its correctness. We continue with distribution-based analysis for common distributions, accompanied by experimental results. We also provide a theoretical comparison of our methodology with existing monitoring techniques.

1 Introduction

[Figure 1 shows stations A and B on either side of a hill, with receiver-transmitter robot C climbing toward point D near the top.]
Figure 1: An example of the test scheduling problem: At which points should the robot stop in order to test communication?

Assume that two stations, A and B, attempt to communicate using laser-based transmission. The two stations do not have visual contact and thus cannot establish direct communication. A decides to send a receiver-transmitter robot C up the hill, as illustrated in Figure 1. The robot can initiate communication with the two stations starting from point D, which has visual contact with both stations. The robot must stop in order to establish communication. If it stops at a point lower than D, it will not be able to communicate with B. However, since measurements of factors such as the robot's speed and position cannot be ascertained with precision, the time required for the robot to arrive at D can be evaluated only approximately. Therefore we preprogram the robot to stop at various points and test for communicability.
Each test requires a constant time τ. Our goal is to generate a test schedule that minimizes the total time required to establish communication. One possibility is to program the robot to stop after a time long enough to guarantee with high probability that the robot has passed point D. The problem with this approach is that, on average, the robot will waste a lot of time traveling beyond D. An alternative approach is to program the robot to stop very often and perform its test. While this approach allows the robot to detect the reception area at an earlier time, the total time required to establish communication is still large due to the overhead of the tests. It seems that the correct approach lies somewhere between these two extremes. But is it possible to compute a test schedule that guarantees, on average, a minimal total time?

Another example of the test scheduling problem is a PROLOG interpreter. Assume that the interpreter processes a complex query and offers solutions to the user during the execution. The user visually examines each solution and responds either with a semicolon to continue the process or with a period to stop it. The time that the system spends waiting for the human response adds to the total execution time. Assume that we can estimate the number of solutions to the query and the time required to generate all of them¹. Assume also that the interpreter is extended to allow presentation of more than one solution at a time and that there is a specific solution that the user is looking for. What policy provides the minimal expected total time for processing the query?

A third example is taken from the field of computational learning [16]. Assume that the goal of a concept learner is to PAC-learn a concept, i.e., with probability 1 − δ to infer a hypothesis with a misclassification probability of less than ε. Assume that we know how to compute the minimal number of required examples based on ε and δ.
Assume also that the learner is allowed to ask a weak form of equivalence queries [16], i.e., to ask the teacher whether our current hypothesis is correct². Assuming that the cost of a query is constant, which policy would minimize the total learning time?

What do the above three examples have in common?

• An agent executes a process with the purpose of satisfying a given goal predicate.
• If the goal predicate is satisfied at time t*, then it is also satisfied at any time t > t*.
• The process can be queried at any time whether or not the goal predicate has been satisfied. During the query execution, the process is halted.
• The goal of the agent is to minimize the total time spent on the process, including the time spent for the queries, until the goal predicate is known to be satisfied.

¹ Ledeniov and Markovitch [17, 18] used similar information to increase the efficiency of a PROLOG interpreter by subgoal reordering. Such information can be learned by proving training queries. A learner can infer, for example, that the average number of solutions to a query of the class parent(var,const) is about 2.
² The regular equivalence queries require that a negative reply be accompanied by a counterexample.

The goal of this research is to develop algorithms that design an optimal query policy based on the statistical characteristics of the process. We begin by defining a formal framework for query-scheduling algorithms. The framework assumes a given statistical profile which describes the probability of the goal condition to be satisfied as a function of time. This profile is similar to probabilistic performance profiles [9], restricted to Boolean quality values. We then describe a sequence of intuitive query-scheduling algorithms and analyze their strengths and weaknesses. We continue with a general algorithm for an off-line calculation of an optimal query schedule and prove its optimality.
We follow with distribution-based analysis that specializes the algorithm for uniform, exponential and normal distributions. This analysis is accompanied by solutions of the three example problems given above, including a formal analysis and an experimental evaluation using simulated data.

The idea of monitoring has received little attention within the AI research community, despite the fact that monitoring the state of an algorithm can significantly affect its performance. Monitoring is a subtopic of metalevel reasoning [22, 23] and has been studied primarily in the context of anytime algorithms [5, 10] and contract algorithms [24, 27]. The potential benefit of monitoring is to save computational resources of the monitored process. Monitoring itself, however, also carries a cost. This brings up the interesting question of when and how monitoring should be performed to optimize the tradeoff between its costs and benefits. Monitoring decisions can therefore be viewed as a kind of type II rationality [7], and the difference between the performance with and without monitoring corresponds to the concept of intrinsic utility [22].

Dean and Boddy [5, 2] have worked on a more complicated setup with a sequence of anytime algorithms. They assumed that no run-time monitoring is taking place and concentrated on the problem of finding a fixed resource allocation for each algorithm before it starts. They call this type of monitoring deliberation scheduling. The main input used in their works are performance profiles [25, 2] that measure the tradeoff between solution quality and computation time.

Horvitz [12] studied on-line monitoring extensively in the context of various application domains such as reformulation of belief networks [13, 3], automated theorem proving [14], and others.
In the proposed models, the process stops when the expected benefit of halting is higher than the expected benefit of continuing computation. The domains described in these works have a higher degree of uncertainty than the model proposed here, allowing only myopic analysis of the tradeoffs involved. In the Protos system [11] Horvitz has extended the myopic horizon of EVC analysis by using a lookahead to a fixed depth. This scheme avoids some of the errors caused by myopic analysis.

Russell and Wefald [22, 21] describe a model of rational heuristic search. They propose an anytime algorithm for evaluating the expected utility of node expansion. The algorithm includes a stopping criterion enhanced by a monitoring procedure which tests the stopping criterion every fixed number of node expansions. This is an instance of the class of problems introduced above. A detailed analysis of this approach is given in Section 8.

The latest works of Zilberstein and Hansen [27, 8, 9] provide a theoretical framework for a wide range of monitoring problems, using a model with multiple-value quality and a high degree of uncertainty. Section 8 analyzes their work and compares it with our approach.

The rest of the paper is organized as follows. Section 2 describes intuitive approaches to the monitoring problem. Section 3 formulates the general framework used in this work. Sections 4 and 5 describe algorithms for generating optimal schedules. Section 6 contains distribution-specific analysis and offers solutions to the three problems above, along with experimental evaluation on simulated data. Section 7 shows results for a problem using real data. Section 8 discusses related work and Section 9 presents our conclusions.

2 Intuitive approaches

For many problems like those described above, a human can produce a common-sense strategy.
Assume that we are facing such a scheduling problem with a query cost of τ, and that we have an upper bound T on the time by which the goal is reached. In this section we present several intuitive strategies and show their weaknesses. The output of all the proposed methods will be a sequence of time points at which queries should be submitted.

There are two possible methods for representing a schedule. One is to specify the internal run time of the process (which does not include the query processing time). In that case the point of view expressed would be that of the process. The other method is to specify the total elapsed time, thus expressing the point of view of an external observer. A schedule represented by the first method as ⟨t_1, t_2, …, t_n⟩ is equivalent to ⟨t_1, t_2 + τ, …, t_n + (n−1)τ⟩ in the second method. In this paper we adopt the first method for representing schedules. Note, however, that regardless of the representation chosen, our goal here is to minimize the total elapsed time.

2.1 The query-at-the-end strategy

The simplest and therefore most common strategy is to query once when the allocated time T is exhausted. Such a strategy always requires a total time T + τ.

• Input: The maximum allowed time T.
• Output: ⟨T⟩.

Figure 2: The query-at-the-end algorithm.

This approach is problematic when the expected time for satisfying the goal predicate is much less than T. For example, most of the classification learning algorithms accept a set of examples and process them all to get a classifier³. Since learning time is often greater than testing time, and since the required quality may be achieved with a much smaller set of examples, the query-at-the-end algorithm may produce sub-optimal behavior.

2.2 The query-every-Δt strategy

The problem with the former approach was the possible late detection of the goal condition.
An alternative approach is to submit a query every Δt time units, where Δt can be as small as desirable. When Δt = T, we get the query-at-the-end strategy. The other extreme is to query after each atomic operation of the algorithm. Such a policy will solve the problem of late detection of the goal condition. However, if the query cost τ is high, then a small Δt will be detrimental, since the benefit of an early detection of the goal criterion will be outweighed by the added costs of the queries.

³ A notable exception is that of the windowing-based strategies such as those proposed by Quinlan [20] and by Fuernkranz [6]. There, a hypothesis is generated based on a portion of the examples. The learning is continued only if the classifier is not of the desired quality.

• Input: The maximum allowed time T, between-queries interval Δt.
• Output: ⟨Δt, 2Δt, …, ⌈T/Δt⌉Δt⟩.

Figure 3: The query-every-Δt algorithm.

This approach is used, for example, by PROLOG interpreters, which ask for user confirmation after each solution is found. A less extreme approach is taken by Internet search engines, which usually return results to the user in chunks of 10 or 25.

2.3 The query-best-n-times strategies

Since querying at the end carries the danger of late detection and querying after each atomic operation carries the risk of high cumulative query cost, it seems reasonable to use the former strategy with an optimal number of queries. If we are given a distribution function F(t) over the time when the goal predicate is satisfied, we can find the number of queries N that minimizes the expected total time. The algorithm implementing this approach, which we call QBN_t, is shown in Figure 4. Such a strategy, however, will be especially ineffective

• Input: Maximal allowed time T.
• Algorithm:
  1. Denote 𝒯_n = ⟨T/n, 2T/n, …, (n−1)T/n, T⟩.
  2. Perform global minimization over n of the expected elapsed time of the processᵃ.
  3. Set N to be the optimal value of n.
  4. Return ⟨T/N, 2T/N, …, T⟩.

ᵃ More formally, we minimize E(𝒯_n). E will be defined in Equation (3) in the following section.

Figure 4: The QBN_t strategy.

when the probability of the goal predicate being satisfied is not uniformly distributed over [0, T]. In such cases a schedule with non-equal intervals can yield much better results than the optimal equal-step schedule. If, for example, this distribution is skewed towards T, then it is reasonable to query more often when approaching T.

This case can be handled by another strategy which equalizes the probability that the goal predicate will be satisfied between each two subsequent queries, i.e., F(t_i) − F(t_{i−1}) = F(T)/n. The strategy, which we call QBN_F, is described in Figure 5. One problem with the above approaches is their inability to handle tasks that are not limited in time. Another problem is that the schedules produced using these methods are not optimal.

• Input: Maximal allowed time T.
• Algorithm:
  1. Denote 𝒯_n = ⟨F⁻¹(F(T)/n), F⁻¹(2F(T)/n), …, F⁻¹((n−1)F(T)/n), T⟩.
  2. Perform global minimization over n of the expected elapsed time of the process.
  3. Set N to be the optimal value of n.
  4. Return ⟨F⁻¹(F(T)/N), F⁻¹(2F(T)/N), …, T⟩.

Figure 5: The QBN_F strategy.

In the following sections we propose a methodology for constructing optimal schedules which can also handle time-unlimited tasks.

3 A framework for off-line query scheduling

In this section we formalize the intuitive description of the query-scheduling problem given in the introduction. Let 𝒮 be a set of states. Let t be a time variable with non-negative real values. Let 𝒜 be a random process such that each realization (trajectory) A(t) of 𝒜 represents a mapping from ℝ⁺ to 𝒮.
Let G : 𝒮 → {0, 1} be a goal predicate, where 0 corresponds to False and 1 corresponds to True. We say that 𝒜 is monotonic over G if and only if for each trajectory A(t) of 𝒜 the function Ĝ_A(t) = G(A(t)) is a non-decreasing function. Under the above assumptions, Ĝ_A(t) is a step function with at most one discontinuity point. Ĝ_A(t) describes the behavior of the goal predicate as a function of time for a particular realization of the random process.

This scheme resembles the one used in anytime algorithms. The goal predicate can be viewed as a special case of the quality measurement used in anytime algorithms, and the requirement for its non-decreasing value is a standard requirement of these algorithms. The trajectories of 𝒜 correspond to conditional performance profiles [28, 27]. However, the nature of the problem requires that we use a cost function u(t) instead of the utility function commonly used in the anytime algorithms literature. We assume that u is a monotonic non-decreasing function.

Let 𝒜 be monotonic over G. The definitions above show that the behavior of G for each trajectory A(t) of 𝒜 can be described by a single point t̂_{A,G}, the first point after which the goal predicate is true, i.e., t̂_{A,G} = inf_t { t | Ĝ_A(t) = 1 }. If Ĝ_A(t) is always 0, we say that t̂_{A,G} is not defined. Therefore, we can define a random variable ζ = ζ_{A,G}, which for each trajectory A(t) of 𝒜 with t̂_{A,G} defined corresponds to t̂_{A,G}.

The behavior of ζ can be described by its distribution function F(t). At the points where F(t) is differentiable, we use the probability density f(t) = F′(t).

It is important to note that in practice not every trajectory of 𝒜 leads to goal predicate satisfaction even after infinitely large time. That means that the set of trajectories where t̂_{A,G} is undefined is not necessarily of measure zero. That is why we define the probability of
That is why we de(cid:12)ne the probability of A;G b 6 success p as the probability of A(t) with t de(cid:12)ned 4. A;G For some problems, a time limit T on the running time of the process is given. We call such problems time-limited. Otherwise wbe call the problems time-unlimited and de(cid:12)ne T to be . 1 De(cid:12)nition 1 A query is a procedure that, when applied at time t, performs the following actions: 1. Suspends process A. 2. Computes the goal predicate at t. 3. If G (t) = 0 and t< T, the query resumes the algorithm. Otherwise it is stopped. A The timde during which the algorithm has been suspended is denoted by (cid:28), and the cost of additional resources required for a single query application is denoted by C. We assume that C is expressed in the same units as u(t). In the current model we assume both (cid:28) and C to be non-negative constants. De(cid:12)nition 2 Wede(cid:12)neaschedule asanon-decreasingsequenceoftimepoints t ;t ;:::;t 1 2 n T h i (or t ;t ;:::;t ;::: for the in(cid:12)nite case). 1 2 k h i A schedule is used in the following way: At each time point t in the schedule a query is i T applied to the process starting from t . If the goal predicate is satis(cid:12)ed or t T, the process 1 i (cid:21) stops. Otherwise the process resumes. The whole procedure stops either when the process is stopped by the query or (in the case of (cid:12)nite schedules) t is passed. n Our framework assumes that satisfying the goal predicate is useful only if it is detected by a query. Therefore we require that at least one element of a schedule for the time-limited case will not be less than T. This implies t T. In addition, from the de(cid:12)nition of query n (cid:21) 1 given above, tn(cid:0)1 < T (otherwise the process would always stop after tn(cid:0)1). 
The above observations lead to the following constraints over schedules for time-limited problems:

    t_0 = 0 ≤ t_1 ≤ t_2 ≤ … ≤ t_{n−1} < T ≤ t_n < ∞.   (1)

Definition 3 We define the stopping time of schedule 𝒯 with respect to process realization A as the first point t* ∈ 𝒯 such that either Ĝ_A(t*) = 1 or t* ≥ T. If no t* ∈ 𝒯 satisfies this condition, we say that t* = ∞.

From the above definition it follows that the cost u_A(𝒯) of schedule 𝒯 for process realization A with a stopping point t* = t_k is

    u_A(𝒯) = u(t_k + kτ) + kC.   (2)

Note that u is not necessarily linear and therefore the above expression cannot be replaced by u(t_k) + k(u(τ) + C).

Let 𝒯 = ⟨t_1, t_2, …, t_n⟩ be a finite schedule⁵. Let us denote by t_0 the start time of the process, i.e., t_0 = 0. Let F be a distribution function over ζ and p be the probability of success. The probability of the goal predicate being satisfied in the time segment from t_{i−1} to t_i is equal to p(F(t_i) − F(t_{i−1})). The cost associated with this event is u(t_i + iτ) + iC. The probability of the goal predicate being satisfied after t_n is 1 − pF(t_n), and the associated cost is u(t_n + nτ) + nC. Therefore, the expected cost of schedule 𝒯 with respect to F and p is

    E_u(𝒯) = E_u(t_1, …, t_n)
           = p [ Σ_{i=1}^{n} (u(t_i + iτ) + iC)(F(t_i) − F(t_{i−1})) ] + (1 − pF(t_n))(u(t_n + nτ) + nC).   (3)

In the future we denote E_u(𝒯) by E(𝒯).

⁴ Another way to express the possibility that the process will not stop at all is to use profiles that approach 1 − p when t → ∞. We prefer to use p explicitly because, in order for F to be a distribution function, it must satisfy lim_{t→∞} F(t) = 1.
⁵ The case of infinite schedules will be analyzed later.
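Equation (3) translates directly into code. A minimal sketch (our naming, not the paper's; F and u are supplied as Python callables, and F(0) is taken as the value at t_0 = 0):

```python
def expected_cost(schedule, F, p, tau, C, u=lambda t: t):
    """Expected cost of a finite schedule <t_1,...,t_n> per Equation (3):
    E_u = p * sum_i (u(t_i + i*tau) + i*C) * (F(t_i) - F(t_{i-1}))
          + (1 - p*F(t_n)) * (u(t_n + n*tau) + n*C)."""
    n = len(schedule)
    total = 0.0
    prev_F = F(0.0)  # F(t_0), with t_0 = 0
    for i, t in enumerate(schedule, start=1):
        total += (u(t + i * tau) + i * C) * (F(t) - prev_F)
        prev_F = F(t)
    total *= p
    total += (1 - p * F(schedule[-1])) * (u(schedule[-1] + n * tau) + n * C)
    return total
```

For a uniform F on [0, 1] with p = 1, τ = 0.1, C = 0 and the schedule ⟨0.5, 1.0⟩, this gives 0.6·0.5 + 1.2·0.5 = 0.9.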
Sometimes it will be more convenient to use an alternative formulation of (3), which can be obtained by a simple regrouping of terms:

    E_u(t_1, …, t_n) = u(t_n + nτ) + nC − p Σ_{i=1}^{n−1} (u(t_{i+1} + (i+1)τ) − u(t_i + iτ) + C) F(t_i).   (4)

Our goal is to find a schedule with minimal expected cost. That means that we must choose a number n and a time schedule 𝒯 = ⟨t_1, …, t_n⟩ such that E(𝒯) will be minimal. Thus, we must minimize (3) under the constraints given in (1).

Definition 4 A schedule 𝒯_n is optimal with respect to n if it minimizes the value of E(𝒯) under the following constraints:

    t_0 = 0 ≤ t_1 ≤ t_2 ≤ … ≤ t_{m−1} < T ≤ t_m < ∞

with m ≤ n. We call the corresponding value of E the optimal expected value for n, and denote it by E_opt^n.

The global optimal expected value, E_opt, is defined as inf_n {E_opt^n}. If there exists n such that the schedule 𝒯_n realizes E_opt, i.e., E(𝒯_n) = E_opt, we call 𝒯_n a global optimal schedule and denote it by 𝒯_opt.

A schedule 𝒯 is defined to be ε-optimal if E(𝒯) − E_opt < ε.

If F is differentiable, we can rewrite (3) in another form:

    E_u(t_1, …, t_n) = p Σ_{i=1}^{n} ∫_{t_{i−1}}^{t_i} (u(t_i + iτ) + iC) f(t) dt + (1 − pF(t_n))(u(t_n + nτ) + nC).   (5)

The form above is a specific case of the equation

    E_u(t_1, …, t_n) = p Σ_{i=1}^{n} ∫_{t_{i−1}}^{t_i} u(t, t_i, i, τ, C) f(t) dt + (1 − pF(t_n)) u(t_n, t_n, n, τ, C),   (6)

corresponding to the case when u can depend on t itself, for example, when the penalty is set for missing the exact moment when the goal predicate holds.

A lower limit on the expected schedule cost (which determines an upper limit on the possible savings) is obtained from (5) by setting τ = 0 and C = 0.
    E(𝒯) ≥ p ∫_{t_0}^{t_n} u(t) f(t) dt + (1 − pF(t_n)) u(t_n).   (7)

This case represents a pure off-line control, where queries use no resources. In the following section we present an algorithm for finding an ε-optimal schedule for time-limited problems. Section 5 describes a similar algorithm for time-unlimited problems.

4 An optimal scheduling algorithm for time-limited problems

In this section we present an algorithm for finding an ε-optimal schedule. We start by proving necessary conditions for schedule optimality and continue with a theorem about sufficient conditions for the existence of a global optimal schedule. We then specify a method for finding the first element of a globally optimal schedule and a recursive formula to construct the rest of the sequence. We present an algorithm that combines the recursive formula with a standard single-variable optimization method and prove that this algorithm is guaranteed to find an ε-optimal schedule.

4.1 Properties of optimal schedules

In the analysis below we assume that F and u have first derivatives and u is a monotonic increasing function. In the Appendix we will show how to weaken these assumptions. In addition, we assume that either τ or C is not zero⁶. We also assume that the probability of success, p, is positive. If it is zero, then there is no sense in querying the process at all. Our last assumption is that F(t) is strictly smaller than F(T) for each t < T. Otherwise, there exists t′ < T such that F is constant over the segment [t′, T], and there is no sense in querying after t′, which means that condition (1) is too strong.

4.1.1 Necessary conditions for schedule optimality

Before we proceed to our main theorem, we prove three properties of optimal schedules.

Lemma 1 Let 𝒯 = ⟨t_1, …, t_n⟩ be an optimal schedule.
Then the following conditions hold:

    t_i ≠ t_{i+1} for i = 0, …, n−1,   (8)
    F(t_i) ≠ F(t_{i+1}) for i = 0, …, n−1,   (9)
    t_n = T.   (10)

Intuitively, the lemma means that if a goal predicate cannot be satisfied between two time points, then there is no need to query at both points.

Proof: We first want to show how eliminating a single point from a schedule affects the expected cost. Let 𝒯′ = ⟨t_1, t_2, …, t_{i−1}, t_{i+1}, …, t_n⟩ be a schedule obtained from 𝒯 by eliminating t_i. By (3) we can see that the difference between the expected costs of these schedules can be written as

    E(𝒯) − E(𝒯′) = p · [ (u(t_i + iτ) + iC)(F(t_i) − F(t_{i−1}))
        + (u(t_{i+1} + (i+1)τ) + (i+1)C)(F(t_{i+1}) − F(t_i))
        − (u(t_{i+1} + iτ) + iC)(F(t_{i+1}) − F(t_{i−1}))
        + Σ_{j=i+2}^{n} (u(t_j + jτ) − u(t_j + (j−1)τ) + C)(F(t_j) − F(t_{j−1})) ].   (11)

From the assumptions about F(t) given in the beginning of this subsection and the condition t_{n−1} < T ≤ t_n of (1), it immediately follows that F(t_{n−1}) < F(t_n). This proves (9) for the case of i = n−1.

⁶ Otherwise no global optimal schedule exists (since any given schedule can be improved by adding new queries).

Assume now that there exists 1 ≤ i ≤ n−1 such that F(t_{i−1}) = F(t_i)⁷. Let us choose the largest i satisfying this condition and let 𝒯′ be 𝒯 with t_i eliminated. Using the fact that F(t_{i−1}) = F(t_i), we obtain from (11) that

    E(𝒯) − E(𝒯′) = p · [ (u(t_{i+1} + (i+1)τ) − u(t_{i+1} + iτ) + C)(F(t_{i+1}) − F(t_i))
        + Σ_{j=i+2}^{n} (u(t_j + jτ) − u(t_j + (j−1)τ) + C)(F(t_j) − F(t_{j−1})) ]
      = p Σ_{j=i+1}^{n} (u(t_j + jτ) − u(t_j + (j−1)τ) + C)(F(t_j) − F(t_{j−1})).   (12)

We see that u is an increasing function, either C or τ is positive, and F(t_n) > F(t_{n−1});
In other words, eliminating t improves the schedule, which i T (cid:0) T contradicts the optimality of . This ends the proof of (9). (8) follows immediately from (9). T Let us now show that t = T. Indeed, by (1) we know that t T. By (3) we see that n n (cid:21) the part of E( ) a(cid:11)ected by t can be written as: n T p(u(tn +n(cid:28))+nC)(F(tn) F(tn(cid:0)1))+(1 pF(tn))(u(tn +n(cid:28))+nC)= (cid:0) (cid:0) (u(tn +n(cid:28))+nC)(1 pF(tn(cid:0)1)): (13) (cid:0) Due to the fact that u(t) is an increasing function, we immediately obtain that if t >T then n substituting T for t will decrease E( ). This proves the last part of the lemma. 2 n T Corollary 1 The following equation follows immediately from (7) and (10). T E(t) p u(t)f(t)dt+(1 pF(T))u(T): (14) (cid:21) Zt0 (cid:0) The following theorem provides a set of tight constraints over optimal schedules. Theorem 1 (The main theorem) Let = t ;:::;t be an optimal schedule with respect 1 n T h i to n. Then for each i = 1;:::;n 1 the following equation holds: (cid:0) u(ti+1 +(i+1)(cid:28)) u(ti+i(cid:28))+C F(ti) F(ti(cid:0)1) (cid:0) = (cid:0) : (15) u0(t +i(cid:28)) F0(t ) i i Proof: Since is optimal for n, it minimizes Eu(t ;:::;t ) over the polyhedral de(cid:12)ned 1 n T by (1) with borders speci(cid:12)ed by the equations ti(cid:0)1 = ti. According to (10), the optimization is performed over n 1 variables t1;:::;tn(cid:0)1. By (8) ti(cid:0)1 = ti. Therefore, based on the (cid:0) 6 di(cid:11)erentiability of Eu(t ;:::;t ) 8, the following equations hold in the points of minimum: 1 n dE = 0 fori = 1;:::;n 1: (16) dt (cid:0) i By the di(cid:11)erentiation of (3), we obtain 0 0 pu(ti +i(cid:28))(F(ti) F(ti(cid:0)1)) p(u(ti+1 +(i+1)(cid:28))+(i+1)C)F (ti)+ (cid:0) (cid:0) 0 p(u(t +i(cid:28))+iC)F (t )= 0: (17) i i 7In order to use (11) as is, we shift the value of i by1. 8E is di(cid:11)erentiable dueto the di(cid:11)erentiability of F andu. 10
