M/M/∞ QUEUES IN SEMI-MARKOVIAN RANDOM ENVIRONMENT

B. D'AURIA

Abstract. In this paper we investigate an M/M/∞ queue whose parameters depend on an external random environment that we assume to be a semi-Markovian process with finite state space. For this model we derive a recursive formula that allows us to compute all the factorial moments of the number of customers in the system in steady state. The technique is based on the calculation of the raw moments of the measure of a two-dimensional random set. Finally, the case in which the random environment has only two states is analyzed in more depth. We obtain an explicit formula for these factorial moments when at least one of the two states has an exponentially distributed sojourn time.

1991 Mathematics Subject Classification. 60K25, 60K37, 60D05.
Key words and phrases. M/M/∞ queues, random environment, factorial moments.
Part of this research took place while the author was still a post-doc at EURANDOM, Eindhoven, The Netherlands.

1. Introduction

The M/M/∞ queue is one of the simplest models in queueing theory. This is due to the combination of a memoryless arrival process and an infinite set of servers, which allows customers to behave independently of each other. This ceases to be true as soon as some correlation between customers is introduced. In this paper we do so by introducing an independent random environment that modulates the system parameters, i.e. the arrival rate and the server speeds. Queues with variable service and arrival speeds arise naturally in practice and therefore many classical works can be found. Most of the results deal with the single server queue; see for example Takine (2005), Ozawa (2004), Sengupta (1990) and references therein. Neuts (1981) analyzed the M/M/1 queue as well as the M/M/C queue in random environment by using the matrix-geometric approach, while Takine and Sengupta (1997) looked at the infinite server queue when only the arrival process was subject to a Markovian modulation. The infinite server queue in random environment has since been studied by Keilson and Servi (1993), Baykal-Gursoy and Xiao (2004) and D'Auria (2007) in the special case when the random environment is Markovian and has only two states.

In O'Cinneide and Purdue (1986) the authors looked at the case in which the environment is given by a finite state Markov process, and for this case they showed how to compute the factorial moments of the number of customers in the system in steady state. Here we extend their analysis to the case of a semi-Markovian random environment. This extension is interesting as it makes the model more attractive for application purposes. Indeed, despite its simplicity, the M/M/∞ system is often used to model pure delay systems, such as highways, satellite links or long communication cables, or to approximate the behavior of multi-server systems. When these kinds of systems are subject to external influences, such as daytime-changing rates, it is helpful to look at extended models, such as the one proposed in this work, in order to analyze or predict their behavior.

The methodology we use follows the technique developed in D'Auria (2007). It consists of representing the stationary and isolated M/G/∞ system as a Poisson process on $\mathbb{R}^2$ and of computing the number of customers in the system by measuring a deterministic set according to the point-process measure (see also Resnick (1987, §3.3) and D'Auria and Resnick (2006)).
In this context the random environment can be expressed as a random modulation of the set, and in the special case of exponentially distributed service times its measure can be derived by solving a system of stochastic equations, see relation (4.5) below. We use this relation to compute the factorial moments of the number of customers in the system in steady state.

2. Model description

To start, we define the random environment $\{\Gamma(u),\, u \in \mathbb{R}\}$ as a semi-Markov chain with values in the finite state space $E = \{1,\dots,K\}$. We assume that the sojourn time in state $k \in E$, denoted by $T_k$, is an independent positive random variable whose distribution function has Laplace transform $\tau_k(s) := E[e^{-sT_k}]$. In the following we show that this Laplace transform is the only information we need to compute the moments.

When the sojourn time in state $k \in E$ expires, the environment jumps to state $j \in E$ with probability $p_{kj}$. Denoting by $P := \{p_{kj}\}_{k,j\in E}$ the routing matrix, which we assume irreducible and, with no loss of generality, with $p_{kk} = 0$, we can define the reverse routing matrix

(2.1)   $Q := \Pi^{-1} P^\dagger \Pi,$

where $P^\dagger$ denotes the transpose of the matrix $P$, $\Pi := \mathrm{diag}(\vec\pi)$ and $\vec\pi$ is the stationary distribution of the Markov chain generated by $P$ (see Brémaud, 1999, §6.1).

We assume that when the environment is in state $k \in E$ customers arrive according to a Poisson process with rate $\lambda_k \ge 0$. Each of them brings an independent request of service, $\sigma$, that is exponentially distributed with rate $\mu$. All servers work at constant speed $\beta_k = \mu_k/\mu \le 1$. To avoid trivial cases we assume that $\mu, \beta, \lambda > 0$, where $\lambda := \max_{k\in E} \lambda_k$ and $\beta := \max_{k\in E} \beta_k$.

By the results in D'Auria (2007) the system is stable and we are allowed to study its stationary regime. We then look at the system at time 0 and count the number of customers still in the system. We order them according to their arrival times $\{u_h\}_{h\in\mathbb{Z}}$, with $u_h < u_{h+1}$ and $u_{-1} < 0 \le u_0$, and we denote by $G(\sigma) := 1 - e^{-\mu\sigma}$, $\sigma > 0$, the common exponential distribution function of the $\{\sigma_h\}_{h\in\mathbb{Z}}$.

The $h$-th customer, $h < 0$, is still in the system at time 0 if and only if its service time $\sigma_h$ is larger than the work done by the server it has occupied during the time interval $[u_h, 0)$. We denote this quantity by $F_\Gamma(u_h)$ and, as the subscript shows, it is a random quantity that depends on the random environment $\Gamma$. Its value is given by

(2.2)   $F_\Gamma(u) := \int_u^0 \beta_{\Gamma(t)}\, dt, \qquad u \le 0.$

Denoting by $N$ the number of customers in the stationary regime, we have

(2.3)   $N = \sum_{h<0} \mathbf{1}\{\sigma_h > F_\Gamma(u_h)\},$

where $\mathbf{1}\{\cdot\}$ is the indicator function of the set $\{\cdot\}$. It is helpful to rewrite the countable collection of indicator functions appearing in expression (2.3) in the following equivalent way,

$\mathbf{1}\{\sigma_h > F_\Gamma(u_h)\} = \delta_{(u_h,\sigma_h)}(A_\Gamma),$

where $\delta_{(u,\sigma)}$ denotes a Dirac delta measure with center $(u,\sigma) \in \mathbb{R}\times\mathbb{R}^+$ and the set $A_\Gamma \subset \mathbb{R}^-\times\mathbb{R}^+$ is given by

(2.4)   $A_\Gamma := \{(u,\sigma) \in \mathbb{R}^-\times\mathbb{R}^+ : \sigma > F_\Gamma(u)\}.$

This alternative formulation decouples, in the computation of the quantity $N$, the sequence $\{(u_h,\sigma_h)\}_{h\in\mathbb{Z}}$ from the function $F_\Gamma(u)$, both of which depend on the realization of the environment $\Gamma$. Indeed we can express the stationary number of customers in the system as

(2.5)   $N = \sum_{h<0} \delta_{(u_h,\sigma_h)}(A_\Gamma) = N_\Gamma(A_\Gamma),$

where

(2.6)   $N_\Gamma := \sum_{h\in\mathbb{Z}} \delta_{(u_h,\sigma_h)}$

is a point process which locates one Dirac delta measure at each arrival point $\{(u_h,\sigma_h)\}_{h\in\mathbb{Z}}$. For the theoretical background and the definition of point processes see Daley and Vere-Jones (1988, §7.1) or Resnick (1987, §3.1).
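The construction in (2.2)–(2.5) can be made concrete by simulation. The following sketch, which is not part of the original paper, builds one realization of a two-state semi-Markov environment on a finite window of the past, draws the arrival points $(u_h,\sigma_h)$, and counts $N$ through the indicator in (2.3); all parameter values, the sojourn-time laws and the truncation horizon are illustrative assumptions. Starting the environment with a fresh sojourn at time 0 corresponds to the Palm version $\Gamma_0$ used in Section 4.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-state environment (assumed values, not from the paper):
# arrival rates lambda_k, server speeds beta_k, common service rate mu.
lam  = np.array([3.0, 0.5])
beta = np.array([1.0, 0.4])
mu   = 1.0

def sample_sojourn(k):
    """Sojourn time in state k: exponential in state 0, uniform in state 1
    (any positive law is allowed for a semi-Markov environment)."""
    return rng.exponential(1.0) if k == 0 else rng.uniform(0.5, 2.5)

def sample_N(horizon=200.0):
    """One realization of N = N_Gamma(A_Gamma), cf. (2.3)-(2.5); the past is
    truncated at -horizon, which is harmless for these parameters."""
    N, t, k, work = 0, 0.0, int(rng.integers(2)), 0.0   # work = F_Gamma(-t)
    while t < horizon:
        T = sample_sojourn(k)                  # environment in state k on (-(t+T), -t]
        n_arr = rng.poisson(lam[k] * T)        # Poisson arrivals in that interval
        ages  = t + rng.uniform(0.0, T, size=n_arr)    # |u_h| of these arrivals
        F     = work + beta[k] * (ages - t)            # F_Gamma(u_h), eq. (2.2)
        sigma = rng.exponential(1.0 / mu, size=n_arr)  # service requests sigma_h
        N    += int(np.sum(sigma > F))                 # indicator in (2.3)
        work += beta[k] * T
        t    += T
        k     = 1 - k                          # two states and p_kk = 0: alternate
    return N

samples = [sample_N() for _ in range(2000)]
print("estimated E[N]:", np.mean(samples), " Var[N]:", np.var(samples))
```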
The subscript $\Gamma$ indicates that $N_\Gamma$ depends on the random environment $\Gamma$ through the sequence of arrival times $\{u_h\}_{h\in\mathbb{Z}}$. Indeed, given a realization $\gamma$ of the process $\Gamma$, the sequence $(\{u_h\}_{h\in\mathbb{Z}} \mid \Gamma = \gamma)$ belongs to an inhomogeneous Poisson process with intensity rate $\lambda_{\gamma(u)}$, $u \in \mathbb{R}$. By Proposition 3.8 in Resnick (1987) it follows that $N_\Gamma \mid \Gamma = \gamma$ is still a Poisson process, now on $\mathbb{R}\times\mathbb{R}^+$, with intensity measure

$\lambda_\gamma(A) := E[N_\Gamma(A) \mid \Gamma = \gamma] = \int_A \lambda_{\gamma(u)}\, du\, G(d\sigma), \qquad A \subset \mathbb{R}\times\mathbb{R}^+.$

Finally $N_\Gamma$ is a doubly stochastic Poisson process or, more briefly, a Cox process (see Daley and Vere-Jones, 1988, §8.5), i.e. a Poisson process whose intensity measure is itself random and given by

(2.7)   $\lambda_\Gamma(A) := E[N_\Gamma(A) \mid \Gamma] = \int_A \lambda_{\Gamma(u)}\, du\, G(d\sigma), \qquad A \subset \mathbb{R}\times\mathbb{R}^+.$

It is well known that the fidi distributions of a Cox process are of mixed Poisson type (see Daley and Vere-Jones, 1988, Corollary 8.5.II), or equivalently that for any set $A \subset \mathbb{R}\times\mathbb{R}^+$

(2.8)   $N_\Gamma(A) \sim \mathrm{Po}(|A|_\Gamma)$

is a Poisson random variable whose parameter is itself random with value $|A|_\Gamma = E[N_\Gamma(A) \mid \Gamma]$. The quantity $|A|_\Gamma$ can be interpreted geometrically as the measure of the set $A$ according to the measure $\lambda_\Gamma(\cdot)$, i.e. $|A|_\Gamma = \lambda_\Gamma(A)$.

From relations (2.5) and (2.8) we finally get that

(2.9)   $N \sim \mathrm{Po}(|A_\Gamma|_\Gamma),$

a mixed Poisson random variable with random parameter $|A_\Gamma|_\Gamma = \lambda_\Gamma(A_\Gamma)$.

[Figure 1. Example of a realization.]

Figure 1 shows an example of a realization in which the random environment has $K = 5$ states: the dots are the centers of the Dirac deltas of the point process $N_\Gamma$, while the piecewise linear function $F_\Gamma(u)$ is the lower boundary of the set of integration $A_\Gamma$. The customers present in the system at time 0 are then the ones whose dots fall in the set $A_\Gamma$; in the example shown, $N = 2$.

Example 2.1. The easiest case is when the environment process is constant, $K = 1$, so that the system reduces to a classical M/M/∞ queue. In this case the set $A_\Gamma$ is deterministic, given by $\{(u,\sigma) : u < 0,\ \sigma > \beta|u|\}$. From equation (2.7) we get

$|A_\Gamma|_\Gamma = \int_{-\infty}^0 \int_{\sigma > \beta|u|} \lambda\, G(d\sigma)\, du = \frac{\lambda}{\beta} \int_0^\infty (1 - G(u))\, du = \frac{\lambda}{\beta}\, E[\sigma] = \frac{\lambda}{\beta\mu},$

and we recover the known result that $N \sim \mathrm{Po}(\lambda/(\beta\mu))$, i.e. the stationary number of customers in the system is Poisson distributed.

3. Computing the factorial moments

Before computing the factorial moments of the random variable $N$, it is worthwhile to review some basic results about the different kinds of moments and their relations with the various generating functions. A good reference for the following relations, especially in connection with point processes, is Daley and Vere-Jones (1988), Chapter 5.

Given a random variable $X$, we denote by $\psi_X(s) := E[e^{sX}]$ its moment generating function and by $\phi_X(z) := E[z^X]$ its probability generating function. The factorial moment of order $i$ of $X$, $f_X^{(i)}$, is defined as

$f_X^{(i)} := E\big[X^{\underline{i}}\big] = \sum_{n=0}^\infty n^{\underline{i}}\, p_n,$

where $p_n = \Pr\{X = n\}$ and $n^{\underline{i}} := n(n-1)\cdots(n-i+1)$ is the falling factorial. It can be computed directly as the $i$-th derivative of the probability generating function, i.e. $f_X^{(i)} = \lim_{z\to 1} \phi_X^{(i)}(z)$. Knowing the factorial moments of $X$ it is then easy to compute its moments, in the sequel called raw moments to distinguish them from the factorial ones. Indeed, by taking expectations on both sides of the following known identity (Abramowitz and Stegun, 1964),

$X^n = \sum_{i=0}^n S_n^{(i)} X^{\underline{i}},$

where $S_n^{(i)}$ is a Stirling number of the second kind, we obtain the following relation between the $n$-th moment of $X$, $m_X^{(n)} := E[X^n]$ with $m_X^{(0)} := 1$, and the factorial moments of order $i \le n$,

(3.1)   $m_X^{(n)} = \sum_{i=0}^n S_n^{(i)} f_X^{(i)}.$
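As a small illustration of relation (3.1), which is not part of the original paper, the sketch below generates the Stirling numbers of the second kind through the standard recurrence $S_n^{(i)} = i\,S_{n-1}^{(i)} + S_{n-1}^{(i-1)}$ and converts a list of factorial moments into raw moments. The check uses the well-known fact that a Poisson random variable with mean $\theta$ has factorial moments $\theta^i$; the value of $\theta$ is an arbitrary choice.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, i):
    """Stirling number of the second kind S_n^{(i)} via the usual recurrence."""
    if n == i:
        return 1
    if i == 0 or i > n:
        return 0
    return i * stirling2(n - 1, i) + stirling2(n - 1, i - 1)

def raw_from_factorial(f):
    """Relation (3.1): raw moments m^(0),...,m^(n) from factorial moments
    f[0],...,f[n], with the convention f[0] = m^(0) = 1."""
    return [sum(stirling2(n, i) * f[i] for i in range(n + 1)) for n in range(len(f))]

theta = 2.5                                    # Poisson(theta): f^(i) = theta**i
print(raw_from_factorial([theta**i for i in range(4)]))
# -> [1.0, 2.5, 8.75, 36.875], i.e. 1, theta, theta + theta^2, theta + 3*theta^2 + theta^3
```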
The reverse relation is obtained by using the Stirling numbers of the first kind, $s_i^{(n)}$ (see Abramowitz and Stegun, 1964), which satisfy the known relation

$X^{\underline{i}} = \sum_{n=0}^i s_i^{(n)} X^n,$

so that, taking expectations on both sides, we finally get

(3.2)   $f_X^{(i)} = \sum_{n=0}^i s_i^{(n)} m_X^{(n)}.$

It is interesting to notice that relation (3.1) follows directly from the facts that $\psi_X(s) = \phi_X(e^s)$ and that $m_X^{(n)} = \lim_{s\to 0} \psi_X^{(n)}(s)$. Indeed,

$\lim_{s\to 0} \psi_X^{(n)}(s) = \lim_{s\to 0} \frac{d^n}{ds^n}\, \phi_X(e^s) = \sum_{i=0}^n S_n^{(i)} \phi_X^{(i)}(1),$

where in the last equality we used Faà di Bruno's formula for the $n$-th derivative of a composition of functions (see Abramowitz and Stegun, 1964) and the fact that $\lim_{s\to 0} \frac{d^n}{ds^n}\, e^s = 1$.

A random variable $X$ is called mixed Poisson when there exists a non-negative random variable $Y$ such that $X \stackrel{d}{=} \mathrm{Po}(Y)$, or equivalently $(X \mid Y = y) \stackrel{d}{=} \mathrm{Po}(y)$, where the operator $\stackrel{d}{=}$ denotes equality in distribution. If $X$ is a mixed Poisson random variable we have

$\phi_X(z) = \psi_Y(z-1),$

so that, taking derivatives of order $n$, we get

$\lim_{z\to 1} \phi_X^{(n)}(z) = \lim_{z\to 1} \psi_Y^{(n)}(z-1) = \lim_{s\to 0} \psi_Y^{(n)}(s),$

or, in other words, the factorial moments of $X$ are exactly the raw moments of $Y$,

$f_X^{(n)} = m_Y^{(n)},$

and the latter are often easier to compute.

This is exactly what happens in our case where, as shown by relation (2.9), $N$ is mixed Poisson, and that is why we are interested in its factorial moments rather than directly in its raw moments. Indeed the following relation holds,

(3.3)   $f_N^{(n)} = m_{|A_\Gamma|_\Gamma}^{(n)},$

and our task reduces to the computation of the raw moments of the measure of the random set $A_\Gamma$.

4. Computing the raw moments of $|A_\Gamma|_\Gamma$

In this section we compute the raw moments of the measure of the set $A_\Gamma$, defined in (2.4), when measured by the random intensity measure $\lambda_\Gamma$, defined in (2.7). We use a fixed-point technique, and to this aim we look at a modified environment process, $\Gamma_0$, which is the Palm version of the process $\Gamma$, i.e. we assume that at time 0 it has a transition. We denote by $k \in E$ the last state it has assumed before 0, i.e. $k := \Gamma_0(0^-)$, and by $T_k$ its corresponding sojourn time. While, as depicted in Figure 1, for the process $\Gamma$ the sojourn time in the last state before 0 would be given by a residual sojourn time, for the process $\Gamma_0$ it is distributed as any other sojourn time corresponding to the same state. We define $A_{0k} := (A_{\Gamma_0} \mid \Gamma_0(0^-) = k)$, $k \in E$, the set $A_{\Gamma_0}$ conditioned on the event that the last state occupied by the environment before 0 is the state $k$, and we call $|A_{0k}|$ its measure, i.e. $|A_{0k}| := (\lambda_{\Gamma_0}(A_{\Gamma_0}) \mid \Gamma_0(0^-) = k)$.

[Figure 2. Decomposition of $A_{03}$ as $C_{03} \cup T_{T_3,\beta_3 T_3} A_{05}$.]

Figure 2 shows an example of the set $A_{0k}$ when $k = 3$, together with its decomposition into the set $C_{0k}$ and the set $T_{T_3,\beta_3 T_3} A_{05}$. To this end we have defined by $C_{0k}$ the restriction of the set $A_{0k}$ up to the last transition of the process $\Gamma_0$ before time 0, i.e.

(4.1)   $C_{0k} := A_{0k} \cap \{(x,y) \in \mathbb{R}^-\times\mathbb{R}^+ : |x| < T_k\}, \qquad |C_{0k}| := (\lambda_{\Gamma_0}(C_{0k}) \mid \Gamma_0(0^-) = k),$

and by $T_{s,t}A$ the $(-s,t)$-translated version of the set $A$, i.e.

(4.2)   $T_{s,t}A := \{(x,y) \in \mathbb{R}^2 : (x+s,\, y-t) \in A\}.$

We denote by $j \in E$ the state of the environment before the last transition before time 0, i.e. $j := \Gamma_0(-T_k^-)$, so that, $-T_k$ being a regeneration point for the process $\Gamma_0$, the sets $C_{0k}$ and $T_{T_k,\beta_k T_k} A_{0j}$ are independent conditionally on the values of the states before and after the transition, i.e. $j$ and $k$.
The quantity $\beta_k T_k = F_{\Gamma_0}(-T_k)$ is the exact amount of work that the non-empty servers have done during the time interval $[-T_k, 0)$ while in state $k$.

By noticing that the set $T_{T_k,\beta_k T_k} A_{0j}$ has measure equal in distribution to $|T_{0,\beta_k T_k} A_{0j}| := (\lambda_{\Gamma_0}(T_{0,\beta_k T_k} A_{0j}) \mid \Gamma_0(0^-) = j)$, we can write down the following set of stochastic equations,

(4.3)   $|A_{0k}| \stackrel{d}{=} |C_{0k}| + \sum_{j=1}^K \mathbf{1}\{k \leftarrow j\}\, |T_{0,\beta_k T_k} A_{0j}|,$

where the indicator function $\mathbf{1}\{k \leftarrow j\}$ selects the backward state transition of the environment from the state $k$ to the state $j$; according to definition (2.1), this happens with probability $q_{kj}$.

Thanks to the fact that along the vertical axis the measure function is given by $G$, which is exponential, the following result holds.

Lemma 4.1. Given the transformation $T_{s,t}$, defined in (4.2), we have

(4.4)   $|T_{0,t} A_\Gamma|_\Gamma = e^{-\mu t}\, |A_\Gamma|_\Gamma,$

for any random set $A_\Gamma : \Gamma \to \mathcal{B}(\mathbb{R}^-\times\mathbb{R}^+)$.

Proof. By using definition (2.7) we have

$\lambda_\Gamma(T_{0,t}A_\Gamma) = \int_{T_{0,t}A_\Gamma} \lambda_{\Gamma(u)}\, du\, \mu e^{-\mu\sigma}\, d\sigma = e^{-\mu t} \int_{T_{0,t}A_\Gamma} \lambda_{\Gamma(u)}\, du\, \mu e^{-\mu(\sigma-t)}\, d\sigma = e^{-\mu t}\, \lambda_\Gamma(A_\Gamma).$ □

By using Lemma 4.1, equation (4.3) simplifies to

(4.5)   $|A_{0k}| \stackrel{d}{=} |C_{0k}| + e^{-\mu\beta_k T_k} \sum_{j=1}^K \mathbf{1}\{k \leftarrow j\}\, |A_{0j}|,$

which is the starting point for the proof of the following main result.

Theorem 4.2. Let us define $\vec m_0^{(n)} \in \mathbb{R}^K$ as the column vector whose $k$-th coordinate is the $n$-th moment of the random variable $|A_{0k}|$, i.e. $m_{0k}^{(n)} := m_{|A_{0k}|}^{(n)}$; then the following relation holds,

(4.6)   $\sum_{i=0}^n (-1)^i \binom{n}{i} R^{n-i} B_n\, \vec m_0^{(i)} = 0,$

where $R := \mathrm{diag}(\rho_k)$, $\rho_k := \lambda_k/\mu_k$, and the matrix $B_n := \mathrm{diag}\big(\tau_k^{-1}(n\mu_k)\big) - Q$. The matrix $B_n$, $n > 0$, is invertible and therefore it is possible to express the $n$-th moment vector $\vec m_0^{(n)}$ in terms of the previous ones, $\vec m_0^{(i)}$, $i = 0,\dots,n-1$, in the following way,

(4.7)   $\vec m_0^{(n)} = \sum_{i=0}^{n-1} (-1)^{n-1-i} \binom{n}{i} B_n^{-1} R^{n-i} B_n\, \vec m_0^{(i)}.$

Proof. We first compute the value of the variable $|C_{0k}|$:

$|C_{0k}| = \lambda_k \int_0^{T_k} e^{-\mu_k x}\, dx = \rho_k (1 - e^{-\mu_k T_k}).$

Substituting this value in equation (4.5) gives

(4.8)   $|A_{0k}| \stackrel{d}{=} \rho_k (1 - e^{-\mu_k T_k}) + e^{-\mu_k T_k} \sum_{j=1}^K \mathbf{1}\{k \leftarrow j\}\, |A_{0j}|,$

which can be rewritten as

(4.9)   $|A_{0k}| - \rho_k \stackrel{d}{=} \sum_{j=1}^K \mathbf{1}\{k \leftarrow j\}\, (|A_{0j}| - \rho_k)\, e^{-\mu_k T_k}.$

We denote by $\psi_{0k}(s) := E\big[e^{s|A_{0k}|}\big]$ the moment generating function of $|A_{0k}|$, so that, multiplying both members of equation (4.9) by $s$, applying the exponential function and then taking expectations, we obtain

$\psi_{0k}(s)\, e^{-s\rho_k} = E\Big[\sum_{j=1}^K q_{kj}\, e^{s(|A_{0j}| - \rho_k)e^{-\mu_k T_k}}\Big] = E\Big[\sum_{j=1}^K q_{kj}\, \psi_{0j}(s e^{-\mu_k T_k})\, e^{-s\rho_k e^{-\mu_k T_k}}\Big].$

The last expression can be written in matrix form as

(4.10)   $e^{-sR}\, \vec\psi_0(s) = E\big[e^{-sRT} (Q\vec\psi_0)(sT)\big],$

where $T := \mathrm{diag}(e^{-\mu_k T_k})$ and where, by the notation $\vec v(W)$ with $W$ a diagonal matrix, we denote the vector whose $k$-th component is $v_k(w_{kk})$. We then use the following matrix formulas for derivatives,

(4.11)   $D^{(n)}\big[e^{-sW}\vec v(s)\big] = \sum_{i=0}^n (-1)^{n-i} \binom{n}{i} e^{-sW} W^{n-i} D^{(i)}[\vec v(s)],$

and

(4.12)   $D^{(n)}[\vec v(sW)] = W^n\, \vec v^{(n)}(sW),$

to compute the $n$-th derivative of both sides of equation (4.10), so that

$\sum_{i=0}^n (-1)^{n-i} \binom{n}{i} e^{-sR} R^{n-i}\, \vec\psi_0^{(i)}(s) = E\Big[\sum_{i=0}^n (-1)^{n-i} \binom{n}{i} e^{-sRT} R^{n-i} T^{n-i} D^{(i)}[Q\vec\psi_0(sT)]\Big] = E\Big[\sum_{i=0}^n (-1)^{n-i} \binom{n}{i} e^{-sRT} R^{n-i} T^n (Q\vec\psi_0^{(i)})(sT)\Big].$

Remembering that $\vec m_0^{(n)} = \lim_{s\to 0} \vec\psi_0^{(n)}(s)$ and taking the limit of the last expression as $s \to 0$, we get

(4.13)   $\sum_{i=0}^n (-1)^{n-i} \binom{n}{i} R^{n-i}\, \vec m_0^{(i)} = E\Big[\sum_{i=0}^n (-1)^{n-i} \binom{n}{i} R^{n-i} T^n Q\, \vec m_0^{(i)}\Big].$

Multiplying both sides on the left by $(-1)^{-n} E[T^n]^{-1}$, the last expression can be rearranged into

(4.14)   $\sum_{i=0}^n (-1)^{-i} \binom{n}{i} R^{n-i}\, \big[E[T^n]^{-1} - Q\big]\, \vec m_0^{(i)} = 0,$

which gives the result, since $E[T^n]^{-1} = \mathrm{diag}\big(\tau_k^{-1}(n\mu_k)\big)$. The invertibility of the matrix $B_n$ for $n > 0$ follows from Lemma A.2. □
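Theorem 4.2 translates directly into a numerical procedure: starting from $\vec m_0^{(0)} = (1,\dots,1)^\dagger$, each moment vector follows from the previous ones through (4.7), and the only model ingredients are the Laplace transforms $\tau_k$, the reverse routing matrix $Q$ and the rates. The sketch below, which is not part of the paper, does this for an assumed two-state example in which state 0 has an exponential sojourn and state 1 a deterministic one; all numerical values are illustrative. Theorem 4.4 below then relates these Palm moments to their stationary-environment counterparts $\vec m^{(n)}$, which enter the factorial moments of $N$ through (3.3).

```python
import numpy as np
from math import comb

# Assumed two-state semi-Markov environment (illustrative values only):
# state 0: exponential sojourn with rate alpha, tau_0(s) = alpha/(alpha+s)
# state 1: deterministic sojourn of length d,  tau_1(s) = exp(-s*d)
alpha, d = 1.5, 2.0
tau = [lambda s, a=alpha: a / (a + s),
       lambda s, d=d: np.exp(-s * d)]

lam  = np.array([3.0, 0.5])                  # arrival rates lambda_k
mu   = 1.0                                   # service rate
beta = np.array([1.0, 0.4])                  # server speeds beta_k
mu_k = beta * mu                             # mu_k = beta_k * mu

P  = np.array([[0.0, 1.0], [1.0, 0.0]])      # routing matrix (p_kk = 0, irreducible)
pi = np.array([0.5, 0.5])                    # its stationary distribution
Q  = np.diag(1.0 / pi) @ P.T @ np.diag(pi)   # reverse routing matrix, eq. (2.1)
R  = np.diag(lam / mu_k)                     # R = diag(rho_k), rho_k = lambda_k/mu_k

def palm_moments(n_max):
    """Moment vectors m_0^(0), ..., m_0^(n_max) of |A_0k| via recursion (4.7)."""
    K = len(lam)
    m = [np.ones(K)]                         # m_0^(0) = 1 componentwise
    for n in range(1, n_max + 1):
        B_n = np.diag([1.0 / tau[k](n * mu_k[k]) for k in range(K)]) - Q
        B_inv = np.linalg.inv(B_n)
        acc = np.zeros(K)
        for i in range(n):
            acc += ((-1.0) ** (n - 1 - i) * comb(n, i)
                    * B_inv @ np.linalg.matrix_power(R, n - i) @ B_n @ m[i])
        m.append(acc)
    return m

for n, v in enumerate(palm_moments(3)):
    print(f"m_0^({n}) =", v)
```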
It is remarkable that it is possible to express equation (4.6) in terms of the forward transition chain $P$. The result is contained in the following corollary, whose proof follows from simple matrix computations.

Corollary 4.3. A result similar to equation (4.6) is valid for the row vector $\vec m_0^{\dagger(n)} := \big(\vec m_0^{(n)}\big)^\dagger$, involving the matrix $P$ instead of the matrix $Q$, i.e.

(4.15)   $\sum_{i=0}^n (-1)^i \binom{n}{i}\, \vec m_0^{\dagger(i)}\, \Pi\, B'_n\, R^{n-i} = 0,$

where $B'_n := \mathrm{diag}\big(\tau_k^{-1}(n\mu_k)\big) - P$. The matrix $B'_n$ is non-singular when $n > 0$.

Given the raw moments of the $|A_{0k}|$, we can successively compute the moments of the measures of the sets $A_k := (A_\Gamma \mid \Gamma(0) = k)$, $k \in E$. Following the previous definitions we set $m_k^{(n)} := m_{|A_k|}^{(n)}$. Similarly to equation (4.3) we have

(4.16)   $|A_k| \stackrel{d}{=} |C^*_k| + \sum_{j=1}^K \mathbf{1}\{k \leftarrow j\}\, |T_{0,\beta_k T^*_k} A_{0j}|,$

with $|C^*_k| = \rho_k (1 - e^{-\mu_k T^*_k})$. Here $T^*_k$ denotes a residual sojourn time of the environment in state $k \in E$. We denote by $\tau^*_k(s)$ the Laplace transform of the distribution function of $T^*_k$; it is related to that of $T_k$, $\tau_k(s)$, by the relation $\tau^*_k(s) = \bar\tau_k (1 - \tau_k(s))/s$, with $\bar\tau_k := E[T_k]^{-1}$. For the vector of raw moments $\vec m^{(n)}$ the following theorem holds.

Theorem 4.4. The vector $\vec m^{(n)}$ satisfies the following relation with the vectors $\vec m_0^{(i)}$:

(4.17)   $\sum_{i=0}^n (-1)^i \binom{n}{i} R^{n-i}\, \big[\vec m^{(i)} - E_n\, \vec m_0^{(i)}\big] = 0.$
