Further Properties of Wireless Channel Capacity

Fengyou Sun and Yuming Jiang

Abstract—Future wireless communication calls for exploration of more efficient use of wireless channel capacity to meet the increasing demand on higher data rate and less latency. However, while the ergodic capacity and instantaneous capacity of a wireless channel have been extensively studied, they are in many cases not sufficient for assessing whether data transmission over the channel meets the quality of service (QoS) requirements. To address this limitation, we advocate a set of wireless channel capacity concepts, namely "cumulative capacity", "maximum cumulative capacity", "minimum cumulative capacity", and "range of cumulative capacity", and for each, study its properties by taking into consideration the impact of the underlying dependence structure of the corresponding stochastic process. Specifically, their cumulative distribution functions (CDFs) are investigated extensively, where copula is adopted to express the dependence structures. Results considering both generic and specific dependence structures are derived. In particular, in addition to i.i.d., a specially investigated dependence structure is comonotonicity, i.e., the time series of wireless channel capacity are increasing functions of a common random variable. Appealingly, copula can serve as a unifying technique for obtaining results under various dependence assumptions, e.g. i.i.d. and Markov dependence, which are widely seen in stochastic network calculus. Moreover, some other characterizations of cumulative capacity are also studied, including moment generating function, Mellin transform, and stochastic service curve. With these properties, we believe QoS assessment of data transmission over the channel can be further performed, e.g. by applying analytical techniques and results of the stochastic network calculus theory.
I. INTRODUCTION

In future wireless communication, there will be a continuing wireless data explosion and an increasing demand on higher data rate and less latency. It has been depicted that the amount of IP data handled by wireless networks will exceed 500 exabytes by 2020, the aggregate data rate and edge rate will increase respectively by 1000x and 100x from 4G to 5G, and the round-trip latency needs to be less than 1 ms in 5G [1]. Evidently, it becomes more and more crucial to explore the ultimate capacity that a wireless channel can provide and to guarantee pluralistic quality of service (QoS) for seamless user experience.

Information theory provides a framework for studying the performance limits in communication, and the most basic measure of performance is channel capacity, i.e., the maximum rate of communication for which arbitrarily small error probability can be achieved [2]. Due to the time-variant nature of a wireless fading channel, its capacity over time is generally a stochastic process. To date, wireless channel capacity has mostly been analyzed for its average rate in the asymptotic regime, i.e., ergodic capacity, or at one time instant/short time slot, i.e., instantaneous capacity. For instance, the first and second order statistical properties of instantaneous capacity have been extensively investigated, e.g. in [3], [4]. However, such properties of wireless channel capacity are ordinarily not sufficient for assessing whether data transmission over the channel meets its QoS requirements. This calls for studying other properties of wireless channel capacity, which can be more easily used for QoS analysis. To meet this need constitutes the objective of this paper.

Specifically, we advocate in this paper a set of (new) concepts for wireless channel capacity and study their properties.
These concepts include "cumulative capacity", "maximum cumulative capacity", "minimum cumulative capacity", and "range of cumulative capacity". They respectively refer to the cumulated capacity over a time period, the maximum and the minimum of such capacity within this period, and the gap between the maximum and the minimum.

Among these (new) concepts, the wireless channel cumulative capacity of a period is essentially the amount of data transmission service that the wireless channel provides (if there is data for transmission) [5] or is capable of providing (if there is no data for transmission) [6] in this period. For the former, the concept is closely related to the (cumulative) service process concept that has been widely used in the stochastic network calculus literature, e.g. in [5]–[14]. In particular, in these works, when characterizing the cumulative service process using server models of stochastic network calculus and/or applying the cumulative service process concept to QoS analysis, some special assumptions on the dependence structure of the process are often considered, such as independence [6]–[8] and the Markov property [10], [12], [13].

In addition, we introduce "maximum cumulative capacity", "minimum cumulative capacity" and "range of cumulative capacity", which are new but, we believe, also crucial concepts for analyzing QoS performance of wireless channels. This is motivated by the fact that, even with the CDF (i.e. full characteristics) or its bounds of the cumulative capacity known, it may still be difficult to perform QoS analysis of the channel. (One can easily observe this difficulty by assuming fluid traffic input and trying to find backlog bounds from queueing analysis of the channel. See e.g. [6].) As a special case of these concepts, forward-looking and backward-looking variations of them are also defined, which turn out to be useful in different application scenarios.
For the investigation, unlike most existing work in the stochastic network calculus literature, the present paper mainly focuses directly on the cumulative distribution functions (CDFs) of the corresponding processes of these (new) concepts. For their other characterizations, e.g. moment generating function [7], Mellin transform [15], and stochastic service curve [6], a number of results are also reported for cumulative capacity to exemplify how such properties may be analyzed, but this is not the focus. An underlying reason is that a random variable is fully characterized by its CDF. To unify the investigation for each concept, we introduce copula as a technique to account for the various dependence structures implied by different possible properties of the process. For instance, besides a process with i.i.d. increments, a Markov process has also been proved to have a dependence structure expressible by copulas [16], [17]. In addition to such dependence assumptions, for which many results in the stochastic network calculus are available, we use comonotonicity as a dependence structure when there is a strong time dependence in the channel, i.e., when the time series of instantaneous wireless channel capacities can be represented as increasing functions of a common random variable. Moreover, generic results under arbitrary dependence structures are also obtained when only the marginal distribution functions are assumed to be known.

To remark, the idea of taking advantage of specific dependence structures in analysis can be found in the stochastic network calculus literature, e.g., independent increments [6]–[8] and the Markov property [10], [12], [13]. However, such diverse dependence structures are investigated separately, without a unified technique. In addition, the literature investigation mainly focuses on the stochastic service curve characterization of the cumulative service process and on applying it to QoS performance analysis. Little has directly focused on the probability distribution function characteristics of the cumulative capacity.
Moreover, to the best of our knowledge, there is no previous work focusing on the probability distribution function characteristics of the maximum, the minimum and the range of cumulative capacity of a wireless channel. In [18], the concept of copula is brought into stochastic network calculus, where it is applied to consider the dependence in the superposition property of arrivals. Different from [18], our focus is the cumulative capacity processes that are related to the cumulative service process, not the arrival process. In addition, we use copula to feature the dependence structures in the considered cumulative capacity processes.

The contributions of this work^1 are several-fold. (1) Several concepts for studying wireless channel capacity are introduced, namely "cumulative capacity", "maximum cumulative capacity", "minimum cumulative capacity" and "range of cumulative capacity". For the latter three concepts, we originally introduce their forward-looking and backward-looking versions and highlight the fundamental difference between a forward-looking concept and its backward-looking counterpart. (2) A copula technique is introduced to unify the analysis under different dependence conditions. (3) The probability distribution function characteristics of the processes corresponding to the introduced concepts are investigated, and various exact solutions or bounds on the probability distribution functions are derived. (4) Other characteristics, such as moment generating function, Mellin transform and stochastic service curve, of the cumulative capacity process are investigated to exemplify how such characteristics of the processes corresponding to the introduced concepts can be studied. Based on these properties, QoS assessment of data transmission over a wireless channel can be further performed by, e.g., exploiting the stochastic network calculus theory (see e.g. [6]).

The remainder of this paper is structured as follows.
The fundamental concepts of wireless channel capacity, including instantaneous capacity, cumulative capacity, maximum cumulative capacity, minimum cumulative capacity and the range of cumulative capacity, are first introduced in Sec. II. Also in Sec. II, preliminaries for later analysis, including those on copula, non-Granger causality and change of measure, are described. In Sec. III, copula is elaborated as a unifying technique for the analysis of cumulative capacity processes under different dependence structures. The probability distribution function characteristics, particularly the CDF, for cumulative capacity and maximum/minimum cumulative capacity are analyzed respectively in Sec. IV and Sec. V. Other characterizations of cumulative capacity are elaborated in Sec. VI. Finally, the paper is concluded and future work is discussed in Sec. VII.

II. FUNDAMENTAL CONCEPTS AND PRELIMINARIES

A. Fundamental Concepts

1) Instantaneous Capacity: Consider a wireless channel. We assume discrete time t = 1, 2, ..., and that the instantaneous capacity [19] or mutual information [20] C(t) of the channel at time t can be expressed as a function of the instantaneous SNR γ_t at this time [3]:

C(t) = log_2(g(γ_t)).  (1)

For single input single output (SISO) channels, if CSI is only known at the receiver, the instantaneous capacity or the mutual information of the channel, assuming flat fading, can be expressed as

C(t) = log_2(1 + γ|h(t)|²),  (2)

where h(t) is a stochastic process describing the fading behavior, |h(t)| denotes the envelope of h(t), γ = P/(N_0 W) denotes the average received SNR per complex degree of freedom, P is the average transmission power per complex symbol, N_0/2 is the power spectral density of AWGN, and W is the channel bandwidth. In the literature, the PDF or CDF of the instantaneous capacity is available for various types of channels, e.g. the Rayleigh channel [21], Rice channel [22], Nakagami-m channel [23], Suzuki channel [24], and more [3].
Specifically, the CDF of the Rayleigh channel instantaneous capacity is expressed as [21]

F_{C(t)}(r) = 1 − e^{−(2^r − 1)/γ}.  (3)

^1 This is still an on-going work, subject to significant revisions. The contributions are summarized in this on-line version and will be significantly extended when more results are added.

2) Cumulative Capacity: We define the cumulative capacity through period (s, t] as

S(s, t) ≡ Σ_{i=s+1}^{t} C(i),  (4)

where C(i) is the instantaneous capacity at time i.

3) Maximum Cumulative Capacity: We define the maximum cumulative capacity in period (0, t] as

\bar{S}(0, t) ≡ sup_{1 ≤ j ≤ k ≤ t} S(j, k) = sup_{1 ≤ j ≤ k ≤ t} ( Σ_{i=j}^{k} C(i) ),  (5)

where C(i) is the instantaneous capacity at time i. Fixing j = 1 in (5), we obtain a forward-looking version of the maximum cumulative capacity, i.e.,

\overrightarrow{\bar{S}}(0, t) ≡ sup_{1 ≤ k ≤ t} S(0, k) = sup_{1 ≤ k ≤ t} ( Σ_{i=1}^{k} C(i) ),  (6)

while fixing k = t in (5), we obtain a backward-looking version of the maximum cumulative capacity, i.e.,

\overleftarrow{\bar{S}}(0, t) ≡ sup_{1 ≤ j ≤ t} S(j, t) = sup_{1 ≤ j ≤ t} ( Σ_{i=j}^{t} C(i) ).  (7)

4) Minimum Cumulative Capacity: We define the minimum cumulative capacity in period (0, t] as

\underline{S}(0, t) ≡ inf_{1 ≤ j ≤ k ≤ t} S(j, k) = inf_{1 ≤ j ≤ k ≤ t} ( Σ_{i=j}^{k} C(i) ),  (8)

where C(i) is the instantaneous capacity at time i. Similarly, fixing j = 1 in (8), we obtain a forward-looking version of the minimum cumulative capacity, i.e.,

\overrightarrow{\underline{S}}(0, t) ≡ inf_{1 ≤ k ≤ t} S(0, k) = inf_{1 ≤ k ≤ t} ( Σ_{i=1}^{k} C(i) ),  (9)

and fixing k = t in (8), we obtain a backward-looking version of the minimum cumulative capacity, i.e.,

\overleftarrow{\underline{S}}(0, t) ≡ inf_{1 ≤ j ≤ t} S(j, t) = inf_{1 ≤ j ≤ t} ( Σ_{i=j}^{t} C(i) ).  (10)

5) Range of Cumulative Capacity: We define the range of cumulative capacity in period (0, t] as

R(0, t) ≡ \bar{S}(0, t) − \underline{S}(0, t).  (11)

The range can also have variations based on the selection of forward-looking and backward-looking expressions of the maximum and minimum cumulative capacity, e.g., forward-looking \overrightarrow{R}(0, t) ≡ \overrightarrow{\bar{S}}(0, t) − \overrightarrow{\underline{S}}(0, t) and backward-looking \overleftarrow{R}(0, t) ≡ \overleftarrow{\bar{S}}(0, t) − \overleftarrow{\underline{S}}(0, t).

Remark 1.
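Definitions (3) and (4) can be simulated directly. The following is a minimal numerical sketch, assuming i.i.d. Rayleigh fading so that |h(t)|² is exponentially distributed with unit mean; the function names and the SNR value are illustrative choices, not part of the paper.

```python
import numpy as np

def instantaneous_capacity_rayleigh(t_max, snr, rng):
    # C(t) = log2(1 + snr * |h(t)|^2), with |h(t)|^2 ~ Exp(1) (Rayleigh fading)
    g = rng.exponential(scale=1.0, size=t_max)
    return np.log2(1.0 + snr * g)

def cumulative_capacity(c, s, t):
    # S(s, t) = sum_{i=s+1}^{t} C(i); array c stores C(1), ..., C(t_max)
    return float(np.sum(c[s:t]))

rng = np.random.default_rng(0)
snr = 10.0
c = instantaneous_capacity_rayleigh(1000, snr, rng)
S = cumulative_capacity(c, 0, 1000)

# Empirical check of the Rayleigh CDF (3): F_{C(t)}(r) = 1 - exp(-(2^r - 1)/snr)
r = 2.0
emp = float(np.mean(c <= r))
ana = 1.0 - np.exp(-(2.0 ** r - 1.0) / snr)
```

The empirical fraction `emp` should match the closed-form value `ana` up to Monte Carlo error.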
\bar{S}(0,t) is essentially a partial sum process in probability theory. When studying the maxima of partial sums in probability theory, the forward-looking versions of \bar{S}(0,t) and R(0,t) are typically the focus [25], [26]. This is probably due to the fact that the studied probability problems are often forward-looking in nature, e.g., at time 0, making a decision based on possible happenings in the future time t. However, for QoS analysis of communication networks, backward-looking is more important, e.g. when deciding delay at time t, we have to look backwards to see how much service the flow has experienced. As directly seen from their definitions, the forward-looking maximum cumulative capacity is fundamentally different from its backward-looking counterpart. As a consequence, results in probability theory for the process of maxima of partial sums should be used with care when they are applied to queueing analysis. In general, under a time reversibility assumption, the results for the forward-looking definitions may be extended for application to the backward-looking definitions.

The range describes the gap between the maximum and the minimum, and hence the smaller the range, the closer the two are. In particular, when the range is small or approaching 0, the maximum becomes the minimum, and hence both should also equal the cumulative capacity. The range may be used as a measure to characterize the tightness between an upper bound and a lower bound on the cumulative capacity. The idea of using a gap between analytical bounds themselves, or between the bounds and the exact results, to characterize the tightness or accuracy of the obtained bounds has been used, e.g., in information theory to investigate/characterize the information capacity of a channel [27]. In the context of stochastic network calculus, such an idea was exploited in [28] to study the accuracy of the obtained bounds.
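The forward/backward distinction in Remark 1 can be made concrete on a toy sample path. A minimal sketch of definitions (6), (7), (9), (10); the three-step path values are arbitrary:

```python
import numpy as np

def forward_extrema(c):
    # Forward-looking: scan the prefix sums S(0,1), ..., S(0,t)
    prefix = np.cumsum(c)
    return float(prefix.max()), float(prefix.min())

def backward_extrema(c):
    # Backward-looking: scan the suffix sums S(t-1,t), ..., S(0,t)
    suffix = np.cumsum(c[::-1])
    return float(suffix.max()), float(suffix.min())

c = np.array([3.0, 1.0, 2.0])           # toy path of C(1), C(2), C(3)
fwd_max, fwd_min = forward_extrema(c)   # (6.0, 3.0)
bwd_max, bwd_min = backward_extrema(c)  # (6.0, 2.0)
```

Since instantaneous capacities are nonnegative, both maxima coincide with S(0,t) = 6; the minima, however, differ (C(1) = 3 versus C(3) = 2), illustrating that a forward-looking quantity and its backward-looking counterpart are genuinely different random variables.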
B. Preliminaries

1) Copula: Copula is a well-known concept for dependence modeling, decoupling the joint distribution function into the dependence structure and the marginal behavior.

Definition 1. [29], [30] A d-dimensional copula is a distribution function on [0,1]^d with standard uniform marginal distributions.

It is equivalent to say that a copula is any function C : [0,1]^d → [0,1] which has the following three properties [29], [30]:
(1) C(u_1, ..., u_d) is increasing in each component u_i.
(2) C(1, ..., 1, u_i, 1, ..., 1) = u_i for all i ∈ {1, ..., d}, u_i ∈ [0,1].
(3) For all (a_1, ..., a_d), (b_1, ..., b_d) ∈ [0,1]^d with a_i ≤ b_i, Σ_{i_1=1}^{2} ... Σ_{i_d=1}^{2} (−1)^{i_1+...+i_d} C(u_{1,i_1}, ..., u_{d,i_d}) ≥ 0, where u_{j,1} = a_j and u_{j,2} = b_j for all j ∈ {1, ..., d}.

The significance of copulas in studying joint distribution functions is summarized by Sklar's theorem, which shows that all joint distribution functions contain copulas, and that copulas may be used in conjunction with marginal distribution functions to construct joint distribution functions.

Theorem 1. [29] Let F be a joint distribution function with marginals F_1, ..., F_d. Then there exists a copula C : [0,1]^d → [0,1] such that, for all x_1, ..., x_d in \bar{R} = [−∞, ∞],

F(x_1, ..., x_d) = C(F_1(x_1), ..., F_d(x_d)).  (12)

If the marginals are continuous, then C is unique; otherwise C is uniquely determined on Ran F_1 × Ran F_2 × ... × Ran F_d, where Ran F_i = F_i(\bar{R}) denotes the range of F_i. Conversely, if C is a copula and F_1, ..., F_d are univariate distribution functions, then the function F is a joint distribution function with marginals F_1, ..., F_d.

2) Non-Granger Causality: Non-Granger causality is a concept initially introduced in econometrics; it refers to a multivariate dynamic system in which each variable is determined by its own lagged values, and no further information is provided by the lagged values of the other variables. This concept has a direct copula expression.

Assumption 1.
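Sklar's theorem (12) can be illustrated numerically: couple two fixed marginals with an explicit copula to obtain a joint CDF. The marginals (exponential with rates 1 and 2) and the two copulas below are illustrative choices; the comonotonicity copula min(u, v) is the one appearing later in Theorem 9.

```python
import math

def F1(x):  # Exp(1) marginal CDF
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def F2(x):  # Exp(2) marginal CDF
    return 1.0 - math.exp(-2.0 * x) if x > 0 else 0.0

def C_indep(u, v):  # independence copula: C(u,v) = u*v
    return u * v

def C_comon(u, v):  # comonotonicity copula: C(u,v) = min(u,v)
    return min(u, v)

def joint_cdf(copula, x, y):
    # Sklar's theorem (12): F(x,y) = C(F1(x), F2(y))
    return copula(F1(x), F2(y))
```

Swapping the copula while keeping the same marginals changes the joint law but not the marginal behavior, which is exactly the decoupling the section describes.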
[31] The copula function

C^{t_j}( F_{X_1}^{t_1}, F_{X_1}^{t_2}, ..., F_{X_1}^{t_j}, ..., F_{X_m}^{t_1}, F_{X_m}^{t_2}, ..., F_{X_m}^{t_j} )  (13)

admits the hierarchical representation

C^{t_j}( G_{X_1}^{t_j}( F_{X_1}^{t_1}, F_{X_1}^{t_2}, ..., F_{X_1}^{t_j} ), ..., G_{X_m}^{t_j}( F_{X_m}^{t_1}, F_{X_m}^{t_2}, ..., F_{X_m}^{t_j} ) ),  (14)

where G_{X_i}^{t_j}(u_i^1, u_i^2, ..., u_i^j), i = 1, 2, ..., m, are copula functions.

Denote the running minimum and maximum of X_i up to time t_j respectively as

m_i(t_j) = min{ X_i(t_k); t_1 ≤ t_k ≤ t_j },  (15)
M_i(t_j) = max{ X_i(t_k); t_1 ≤ t_k ≤ t_j },  (16)

and define

F_{m_i}^{t_j}(B_i) = P(m_i(t_j) > B_i),  (17)
F_{M_i}^{t_j}(B_i) = P(M_i(t_j) ≤ B_i).  (18)

Proposition 1. [31] Assume

P(X_1(t_n) ≤ B_1, ..., X_k(t_n) ≤ B_k) = C^{t_n}( F_{X_1}^{t_n}(B_1), ..., F_{X_k}^{t_n}(B_k) ),  (19)

and that the copula function allows the hierarchical representation under Assumption 1. Then the copula function representing the dependence structure among the running maxima (minima) at time t_n is the same copula function (survival copula function) representing the dependence among the levels at the same time, namely,

P(M_1(t_n) ≤ B_1, ..., M_k(t_n) ≤ B_k) = C^{t_n}( F_{M_1}^{t_n}(B_1), ..., F_{M_k}^{t_n}(B_k) ),  (20)
P(m_1(t_n) > B_1, ..., m_k(t_n) > B_k) = \hat{C}^{t_n}( F_{m_1}^{t_n}(B_1), ..., F_{m_k}^{t_n}(B_k) ),  (21)

where \hat{C}^{t_n} denotes the survival copula.

3) Change of Measure: Consider a stochastic process {Z_t} with a Polish state space E and sample paths in the Skorokhod space D = D([0,∞), E), equipped with the natural filtration {F_t}_{t≥0} and the Borel σ-field F. For two processes represented by probability measures P, \tilde{P} on (D, F), it is of interest to look for a likelihood ratio process {L_t} such that

\tilde{P}(A) = E[L_t; A], A ∈ F_t,  (22)

i.e., the restriction of \tilde{P} to (D, F_t) is absolutely continuous w.r.t. the restriction of P to (D, F_t) [32], [33].

Proposition 2. [32] Let {F_t}_{t≥0} be the natural filtration on D, F the Borel σ-field and P a given probability measure on (D, F).
(i) If {L_t}_{t≥0} is a nonnegative martingale w.r.t.
({F_t}, P) such that E[L_t] = 1, then there exists a unique probability measure \tilde{P} on F such that (22) holds.
(ii) Conversely, if for some probability measure \tilde{P} and some {F_t}-adapted process {L_t}_{t≥0} (22) holds, then {L_t} is a nonnegative martingale w.r.t. ({F_t}, P) such that E[L_t] = 1.

Theorem 2. [32] Let {L_t}, \tilde{P} be as in Proposition 2(i). If τ is a stopping time and G ⊆ {τ < ∞}, G ∈ F_τ, then

P(G) = \tilde{E}[1/L_τ; G].  (23)

More generally, if the waiting time process W ≥ 0 is F_τ-measurable, then E[W; τ < ∞] = \tilde{E}[W/L_τ; τ < ∞].

Corollary 1. [32] Let {L_t}, \tilde{P} be as in Proposition 2(i), and let τ be a stopping time with P(τ < ∞) = 1. Then a necessary and sufficient condition that E[L_τ] = 1 is that \tilde{P}(τ < ∞) = 1.

Definition 2. [32] Assume that {Z_t} is Markov w.r.t. the natural filtration {F_t} on D, and define {L_t} to be a multiplicative functional if L_t is adapted to {F_t} and

L_{t+s} = L_t · (L_s ∘ θ_t)  (24)

P_x-a.s. for all x, s, t, where θ_t is the shift operator. The precise meaning of this is the following: being F_t-measurable, L_t has the form L_t = φ_t({Z_u}_{0≤u≤t}) for some mapping φ_t : D[0,t] → [0,∞), and then L_s ∘ θ_t = φ_s({Z_{t+u}}_{0≤u≤s}).

Theorem 3. [32] Let {Z_t} be Markov w.r.t. the natural filtration {F_t} on D, let {L_t} be a nonnegative martingale with E_x[L_t] = 1 for all x, t, and let \tilde{P}_x be the probability measure given by \tilde{P}_x(A) = E_x[L_t; A]. Then the family {\tilde{P}_x}_{x∈E} defines a time-homogeneous Markov process if and only if {L_t} is a multiplicative functional. A multiplicative functional {L_t} with E_x[L_t] = 1 for all x, t is a martingale.

A Markov additive process is defined as a bivariate Markov process {X_t} = {(J_t, S_t)}, where {J_t} is a Markov process with state space E and the increments of {S_t} are governed by {J_t} in the sense that

E[f(S_{t+s} − S_t) g(J_{t+s}) | F_t] = E_{J_t,0}[f(S_s) g(J_s)].  (25)
In discrete time, a Markov additive process is specified by the measure-valued matrix (kernel) F(dx) whose (i,j)th element is the defective probability distribution

F_{ij}(dx) = P_{i,0}(J_1 = j, Y_1 ∈ dx),  (26)

where Y_n = S_n − S_{n−1}. An alternative description is in terms of the transition matrix P = (p_{ij})_{i,j∈E} (here p_{ij} = P_i(J_1 = j)) and the probability measures

H_{ij}(dx) = P(Y_1 ∈ dx | J_0 = i, J_1 = j) = F_{ij}(dx)/p_{ij}.  (27)

We denote by \hat{F}[θ] the E × E matrix with (i,j)th element \hat{F}^{(ij)}[θ] := ∫ e^{θx} F^{(ij)}(dx). By Perron–Frobenius theory, the matrix \hat{F}[θ] has a positive real eigenvalue with maximal absolute value e^{κ(θ)} and corresponding right eigenvector h^{(θ)} = (h_i^{(θ)})_{i∈E}, i.e., \hat{F}[θ] h^{(θ)} = e^{κ(θ)} h^{(θ)}. The exponential change of measure corresponding to θ is then given by

\tilde{P} = e^{−κ(θ)} Δ_{h^{(θ)}}^{−1} \hat{F}[θ] Δ_{h^{(θ)}},  (28)
\tilde{H}_{ij}(dx) = (e^{θx} / \hat{H}_{ij}[θ]) H_{ij}(dx),  (29)

where Δ_{h^{(θ)}} is the diagonal matrix with the h_i^{(θ)} on the diagonal; in particular, \tilde{p}_{ij} = e^{−κ(θ)} p_{ij} h_j^{(θ)} / h_i^{(θ)}, and \hat{H}_{ij}[θ] is the normalizing constant. The likelihood ratio is

L_n = ( h^{(θ)}(J_0) / h^{(θ)}(J_n) ) e^{−θ S_n + n κ(θ)},  (30)

which is a mean-one martingale [32].

III. COPULA: A UNIFYING TECHNIQUE FOR ANALYSIS

A. Bounds of Dependence Structures

Consider a random vector X = (X_1, ..., X_n), where the marginal distribution functions F_i with X_i ∼ F_i are known, but the dependence between the components is unspecified. Denote the sharp lower and upper Fréchet bounds over all dependence structures as M_n(t) := sup{ P(Σ_{i=1}^n X_i ≤ t); X_i ∼ F_i, 1 ≤ i ≤ n } and m_n(t) := inf{ P(Σ_{i=1}^n X_i < t); X_i ∼ F_i, 1 ≤ i ≤ n }, and denote M_n^+(t) := 1 − m_n(t), m_n^+(t) := 1 − M_n(t). The sharp Fréchet bounds for the case n = 2 were derived in [34], [35], and have been extended to n ≥ 3, namely the standard bounds [36]–[38].

1) Standard Bounds:

Theorem 4. [36]–[38] Let X_i ∼ F_i, 1 ≤ i ≤ d.
Then, for any s ∈ R, we have

max{ sup_{u∈U(s)} Σ_{i=1}^d F_i(u_i) − (d−1), 0 } ≤ P( Σ_{i=1}^d X_i ≤ s ) ≤ min{ inf_{u∈U(s)} Σ_{i=1}^d F_i(u_i), 1 },  (31)

where U(s) = { u = (u_1, ..., u_d) ∈ R^d : Σ_{i=1}^d u_i = s }.

Remark 2. For the fast computation of standard bounds, a numerical method is described in [39], while an analytical method is described in [40].

2) Dual Bounds: The standard bounds are not sharp for n ≥ 3, and an improvement can be obtained based on duality theorems. Specifically, M_n^+(t) and m_n^+(t) have the following dual counterparts [37], [38]:

M_n^+(s) = inf{ Σ_{i=1}^n ∫ g_i dF_i ; g_i bounded, 1 ≤ i ≤ n, with Σ_{i=1}^n g_i(x_i) ≥ 1_{[s,+∞)}( Σ_{i=1}^n x_i ) },  (32)

m_n^+(s) = sup{ Σ_{i=1}^n ∫ f_i dF_i ; f_i bounded, 1 ≤ i ≤ n, with Σ_{i=1}^n f_i(x_i) ≤ 1_{[s,+∞)}( Σ_{i=1}^n x_i ) }.  (33)

While the dual representations are difficult to evaluate in general, they allow one to establish good bounds by choosing admissible piecewise linear dual functions in the dual problem.

Theorem 5. [37], [38] Let X_i ∼ F_i and \bar{F}_i = 1 − F_i be the survival function of F_i. Then, for any s ∈ R, we have

M_n^+(s) ≤ D(s) = inf_{u∈U(s)} min{ ( Σ_{i=1}^n ∫_{u_i}^{s−Σ_{j≠i}u_j} \bar{F}_i(t) dt ) / ( s − Σ_{i=1}^n u_i ), 1 },  (34)

m_n^+(s) ≥ d(s) = sup_{u∈\bar{U}(s)} max{ ( Σ_{i=1}^n ∫_{u_i}^{s−Σ_{j≠i}u_j} \bar{F}_i(t) dt ) / ( s − Σ_{i=1}^n u_i ) − n + 1, 0 },  (35)

where U(s) = { u ∈ R^n ; Σ_{i=1}^n u_i < s } and \bar{U}(s) = { u ∈ R^n ; Σ_{i=1}^n u_i > s }.

For u with Σ_{i=1}^n u_i = s, the piecewise linear dual admissible choices become piecewise constant and thus yield a standard bound. As a consequence, the dual bounds improve the corresponding standard bounds [37], [38]. However, the calculation of the dual bounds requires solving an n-dimensional optimization problem, which typically will be possible only for small values of n. For the homogeneous case, F_i = F, 1 ≤ i ≤ n, a simplified expression can be obtained with a one-dimensional problem that can be solved in any dimension.

Theorem 6. [37] Let F_1 = ... = F_n =: F be distribution functions on R_+.
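For intuition, the standard bounds (31) can be evaluated by brute force in a small case. Below is a sketch for d = 2 with Exp(1) marginals; the grid search is purely illustrative and is not the fast numerical or analytical method of [39], [40].

```python
import math

def F(x):  # Exp(1) marginal CDF
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def standard_bounds(s, steps=20000):
    # Bounds (31) on P(X1 + X2 <= s): search u1 on a grid, with u2 = s - u1
    d = 2
    best_sup, best_inf = -math.inf, math.inf
    for k in range(steps + 1):
        u1 = -5.0 + k * (s + 10.0) / steps
        total = F(u1) + F(s - u1)
        best_sup = max(best_sup, total)
        best_inf = min(best_inf, total)
    lower = max(best_sup - (d - 1), 0.0)
    upper = min(best_inf, 1.0)
    return lower, upper

lower, upper = standard_bounds(2.0)
# Under independence, P(X1 + X2 <= 2) = 1 - 3*exp(-2), which lies inside the bounds.
```

For s = 2 the supremum is attained at u1 = u2 = 1 by symmetry, giving lower bound 2(1 − e⁻¹) − 1, while the infimum is approached at the boundary u1 = 0, giving upper bound F(2) = 1 − e⁻².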
Then for any s ≥ 0 it holds that

M_n^+(s) ≤ D(s) = inf_{u<s/n} min{ n ∫_u^{s−(n−1)u} \bar{F}(t) dt / (s − nu), 1 },  (36)

m_n^+(s) ≥ d(s) = sup_{u>s/n} max{ n ∫_u^{s−(n−1)u} \bar{F}(t) dt / (s − nu) − n + 1, 0 }.  (37)

Theorem 7. [41] Let F_1 = ... = F_n =: F be distribution functions on R_+. Then for any s ≥ 0 it holds that

m_n(s) ≥ 1 − n inf_{r∈[0,s/n)} ∫_r^{s−(n−1)r} \bar{F}(x) dx / (s − nr).  (38)

The infimum in (38) can be easily calculated numerically by finding the zero-derivative points of its argument in the specified interval. Note that

lim_{r→s/n} { 1 − n ∫_r^{s−(n−1)r} \bar{F}(x) dx / (s − nr) } = n F(s/n) − n + 1,  (39)

which means that (38) is greater than or equal to the standard lower bound. Moreover, the dual bounds in (38) have been proved to be sharp under some general distributional assumptions [42]:

(A1) Attainment condition: there exists some a < s/n such that

D(s) = inf_{t<s/n} n ∫_t^{s−(n−1)t} \bar{F}(x) dx / (s − nt) = n ∫_a^b \bar{F}(x) dx / (b − a),  (40)

where b = s − (n−1)a and a_* = F^{−1}(1 − D(s)) ≤ a.

(A2) Mixability condition: the conditional distribution of (X_1 | X_1 ≥ a_*) is n-mixable on (a, b).

(A3) Ordering condition: for all y ≥ b it holds that

(n−1)(F(y) − F(b)) ≤ F(a) − F( (s−y)/(n−1) ).  (41)

Theorem 8. [42] Under the attainment condition (A1), the mixability condition (A2) and the ordering condition (A3), the dual bound is sharp, that is,

M_n^+(s) = D(s) = inf_{t<s/n} n ∫_t^{s−(n−1)t} \bar{F}(x) dx / (s − nt) = n ∫_a^b \bar{F}(x) dx / (b − a).  (42)

The first order conditions for the optimization in the attainment condition (A1) at t = a imply that

n ∫_a^b \bar{F}(x) dx / (b − a) = \bar{F}(a) + (n−1)\bar{F}(b),  (43)

where b = s − (n−1)a and a_* = F^{−1}(1 − D(s)) ≤ a, and this provides a clue to calculate the basic point a and, hence, the dual bound D(s). Having calculated a, one can easily check the second order condition

f(a) − (n−1)² f(b) ≥ 0,  (44)

which is necessary to guarantee that a is a point of minimum in (A1).
At this point, the sharpness of the dual bound D(s) can be obtained from a different set of assumptions [42]:
1) Continuous distribution functions F having a positive and decreasing density f on (a_*, ∞) satisfy assumptions (A2) and (A3).
2) Continuous distribution functions F having a concave density f on the interval (a, b) satisfy the mixability assumption (A2). In order to obtain sharpness of the dual bound D(s) for these distributions, conditions (A1) and (A3) have to be checked numerically.

B. Example Dependence Structures

1) Comonotonicity:

Definition 3. [43] The set A ⊆ R^n is said to be comonotonic if for any x, y ∈ A either x ≤ y or y ≤ x holds, where x ≤ y denotes the componentwise order, i.e., x_i ≤ y_i for all i = 1, 2, ..., n.

Definition 4. [43] A random vector X = (X_1, ..., X_n) is said to be comonotonic if it has a comonotonic support.

From the definition, we can conclude that comonotonicity is a very strong positive dependence structure. Indeed, if x and y are elements of the (comonotonic) support of X, i.e., x and y are possible outcomes of X, then they must be ordered componentwise. This explains why the term comonotonic (common monotonic) is used [43].

Theorem 9. [43] A random vector X = (X_1, X_2, ..., X_n) is comonotonic if and only if one of the following equivalent conditions holds:
(1) X has a comonotonic support.
(2) For all x = (x_1, x_2, ..., x_n), we have

F_X(x) = min{ F_{X_1}(x_1), F_{X_2}(x_2), ..., F_{X_n}(x_n) }.  (45)

(3) For U ∼ Uniform(0,1), we have

X =_d ( F_{X_1}^{−1}(U), F_{X_2}^{−1}(U), ..., F_{X_n}^{−1}(U) ).  (46)

(4) There exist a random variable Z and non-decreasing functions f_i (i = 1, 2, ..., n), such that

X =_d ( f_1(Z), f_2(Z), ..., f_n(Z) ).  (47)

2) Independence:

Theorem 10. [29], [30] Random variables with continuous distributions are independent if and only if their dependence structure is given by

C(u_1, ..., u_d) = Π_{i=1}^{d} u_i.  (48)
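Characterization (3) of Theorem 9 gives a direct recipe for sampling a comonotonic vector: push a single uniform variable through each quantile function. A minimal sketch with two exponential marginals (the rates are arbitrary illustrative choices), empirically checking the min-copula form (45):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=200000)      # one common U ~ Uniform(0,1), as in (46)
x1 = -np.log(1.0 - u)             # F_{X1}^{-1}(U) for Exp(1)
x2 = -np.log(1.0 - u) / 2.0       # F_{X2}^{-1}(U) for Exp(2)

# Empirical check of (45): F_X(a,b) = min(F_{X1}(a), F_{X2}(b))
a, b = 1.0, 0.3
emp = float(np.mean((x1 <= a) & (x2 <= b)))
ana = min(1.0 - np.exp(-a), 1.0 - np.exp(-2.0 * b))
```

Here x2 is a deterministic increasing function of x1, which is exactly the extreme positive dependence that comonotonicity formalizes.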
3) Markovian: The Markov property is a pure dependence property that can be formulated exclusively in terms of copulas; as a consequence, starting with a Markov process, a multitude of other Markov processes can be constructed by just modifying the marginal distributions [16], [17].

Definition 5. [16], [44], [45] Assume two bivariate copulas A(u,t) and B(t,v): the product operator ∗ is defined as

A ∗ B(u,v) ≡ ∫_0^1 (∂A(u,t)/∂t)(∂B(t,v)/∂t) dt.  (49)

Assume an m-dimensional copula A and an n-dimensional copula B: the star operator ⋆ is defined as

A ⋆ B(u_1, u_2, ..., u_{m+n−1}) ≡ ∫_0^{u_m} (∂A(u_1, ..., u_{m−1}, t)/∂t)(∂B(t, u_{m+1}, ..., u_{m+n−1})/∂t) dt.  (50)

Theorem 11. [16], [44], [45] A real-valued stochastic process X_t is a Markov process of first order if and only if for all positive integers n and for all t_1, ..., t_n satisfying t_k < t_{k+1}, k = 1, ..., n−1,

C_{t_1,...,t_n} = C_{t_1,t_2} ⋆ C_{t_2,t_3} ⋆ ... ⋆ C_{t_{n−1},t_n},  (51)

where C_{t_1,...,t_n} is the copula of X_{t_1}, ..., X_{t_n} and C_{t_k,t_{k+1}} is the copula of X_{t_k} and X_{t_{k+1}}.

A strictly stationary Markov process is characterized by assuming C_{X_{i−1},X_i} ≡ C, ∀i, and F_{X_i} ≡ F, ∀i. This kind of process can be constructed for any given C and F, with D_1 C(u,v) = ∂C(u,v)/∂u, by setting

F_{X_i}(t) = ∫_0^1 D_1 C(w, F(t + F^{−1}(w))) dw ≡ G(t),  (52)

and

C_{X_{i−1},X_i}(u,v) = ∫_0^u D_1 C(w, F(G^{−1}(v) + F^{−1}(w))) dw.  (53)

Definition 6. [17] Assume that, for x ∈ [0,1]^k, the distribution C(x, ·) is absolutely continuous with respect to the measure generated by some copula A on [0,1]^l. We denote by C_{,A}(x,y) (a version of) the Radon–Nikodym derivative,

C(x, dy) = C_{,A}(x,y) A(dy).  (54)

The subscript ",A" indicates that we take the derivative with respect to the second set of arguments (y). Accordingly, we define the derivative C_{B,}(x,y) of C with respect to a k-dimensional copula B by

C(dx, y) = C_{B,}(x,y) B(dx),  (55)

provided that, for given y, the measure generated by C(·, y) is absolutely continuous with respect to the measure generated by B. C_{,A}(x,y) and C_{B,}(x,y) are called the derivatives of the copula C(x, ·) resp.
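The product operator ∗ in (49) can be checked numerically in the simplest case: for the independence copula Π(u,t) = ut, one expects Π ∗ Π = Π, i.e., independent steps compose to independence, consistent with Theorem 10. A sketch using midpoint quadrature and central differences (the quadrature resolution and evaluation point are illustrative choices):

```python
def pi_copula(u, t):
    # Independence copula from (48), bivariate case
    return u * t

def darsow_product(A, B, u, v, n=2000, h=1e-5):
    # (A * B)(u,v) = int_0^1 dA(u,t)/dt * dB(t,v)/dt dt  -- eq. (49)
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n                        # midpoint rule on [0,1]
        dA = (A(u, t + h) - A(u, t - h)) / (2 * h)
        dB = (B(t + h, v) - B(t - h, v)) / (2 * h)
        total += dA * dB / n
    return total

val = darsow_product(pi_copula, pi_copula, 0.3, 0.7)  # expect 0.3 * 0.7 = 0.21
```

For Π, the partial derivatives are the constants u and v, so the integral collapses to uv and the quadrature is essentially exact.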
C(·, y) with respect to the copula A resp. B.

Definition 7. [17] Let A be a (k+m)-dimensional copula, B an (m+l)-dimensional copula and C an m-dimensional copula such that the derivatives A_{,C} and B_{C,} are well-defined. The operator ⋆^{C(·)} is defined by

(A ⋆^{C(·)} B)(x, y) = ∫_0^z A_{,C}(x, r) · B_{C,}(r, y) C(dr),  (56)

provided that the integral exists for all x, y, z.

Theorem 12. [17] The n-dimensional process X_t is a Markov process if and only if for all t_1 < t_2 < ... < t_p the copula C_{t_1,...,t_p} of (X_{t_1}, ..., X_{t_p}) satisfies

C_{t_1,...,t_p} = C_{t_1,t_2} ⋆^{C_{t_2}(·)} C_{t_2,t_3} ⋆^{C_{t_3}(·)} ... ⋆^{C_{t_{p−1}}(·)} C_{t_{p−1},t_p}.  (57)

Given a Markov family C_{st}, s,t ∈ T, s < t, we can define finite-dimensional copulas C_{t_1,...,t_p} and combine these with an arbitrarily specified flow F_t(x) = (F_{X_t^1}(x_1), ..., F_{X_t^n}(x_n)) of marginal one-dimensional distributions F_{X_t^i} to obtain finite-dimensional distributions

P(X_{t_1} < x_1, ..., X_{t_p} < x_p) = C_{t_1,...,t_p}( F_{t_1}(x_1), ..., F_{t_p}(x_p) ).  (58)

By applying Kolmogorov's construction theorem for stochastic processes, there exists a Markov process X with the given copulas and marginal distributions.

Remark 3. The copula representation of Markov processes of order k has been extended in [46], [47], and the construction of Markov processes through increment aggregation is elaborated in [45]. Moreover, the results for the one-dimensional case in [16] are extended to the general multivariate setting in [17]. In addition, the relationship between the Markov property and martingales is investigated in [44]–[46].

Theorem 13. [44], [45] Let X = {X_i}_{i≥0} be a Markov process and set Y_i = X_i − X_{i−1}. X is a martingale if and only if:
(1) F_{Y_i} has finite mean for every i;
(2) for i ≥ 1, ∫_0^1 F_{Y_i}^{−1}(v) dD_1 C_{X_{i−1},Y_i}(u,v) = 0, ∀u ∈ [0,1].

Proposition 3. [44], [45] Any process for which the distribution of increments is symmetric and the copula between the increments and the levels is symmetric (around the first coordinate) is a martingale.
By definition, (X,Z) is a martingale with respect to F^{X,Z} iff, \forall i \ge 0,

E[X_{i+1} - X_i \mid X_i, Z_i] = 0, \quad (59)
E[Z_{i+1} - Z_i \mid X_i, Z_i] = 0. \quad (60)

Let \Delta X_i = X_{i+1} - X_i, \Delta Z_i = Z_{i+1} - Z_i, and let A_{i,i+1}(u,v,w,\lambda) be the copula function of the random vector (X_i, Z_i, \Delta X_i, \Delta Z_i), with F_{X_i}, F_{Z_i}, F_{\Delta X_i}, F_{\Delta Z_i} the corresponding marginal CDFs. Set a_{i,i+1}(u,v,w,1) the density of the copula A_{i,i+1}(u,v,w,1) and a_{i,i+1}(u,v,1,w) the density of the copula A_{i,i+1}(u,v,1,w).

Theorem 14. [45] The bivariate Markov process (X,Z) is a martingale with respect to the filtration F^{X,Z} iff:
(1) F_{\Delta X_i} and F_{\Delta Z_i} have finite mean for every i;
(2) for every i, \forall u,v \in [0,1],

\int_0^1 F_{\Delta X_i}^{-1}(w) a_{i,i+1}(u,v,w,1) \, dw = 0, \quad (61)
\int_0^1 F_{\Delta Z_i}^{-1}(w) a_{i,i+1}(u,v,1,w) \, dw = 0. \quad (62)

IV. ANALYSIS OF CUMULATIVE CAPACITY

A. General Results

1) Exact Expression: The CDF of the cumulative capacity can be derived from the joint distribution of the channel gain magnitudes, i.e.,

F_{S(s,t)}(x) = \int_{S(s,t) = \sum_{i=s+1}^{t} \log_2(1 + \gamma |h_i|^2) \le x} f_H(h_{s+1}, h_{s+2}, \ldots, h_t) \, dH. \quad (63)

For example, the single-integral-form PDF and CDF of the multivariate generalized Rician distribution are expressed as [48]

f_H(h_1, h_2, \ldots, h_N) = \int_{t=0}^{\infty} \frac{t^{\frac{m}{2}-1}}{S^{m-1}} \exp(-(t + S^2)) I_{m-1}(2S\sqrt{t}) \prod_{k=1}^{N} \frac{1}{(\lambda_k^2 \sigma_k^2 t)^{\frac{m}{2}-1}} \frac{1}{\Omega_k^2} h_k^m \exp\left(-\frac{h_k^2 + \lambda_k^2 \sigma_k^2 t}{2 \Omega_k^2}\right) I_{m-1}\left(\frac{h_k \sqrt{\sigma_k^2 \lambda_k^2 t}}{\Omega_k^2}\right) dt, \quad (64)

F_H(h_1, h_2, \ldots, h_N) = \int_{t=0}^{\infty} \frac{t^{\frac{m}{2}-1}}{S^{m-1}} \exp(-(t + S^2)) I_{m-1}(2S\sqrt{t}) \prod_{k=1}^{N} \left[1 - Q_m\left(\frac{\sqrt{\sigma_k^2 \lambda_k^2 t}}{\Omega_k}, \frac{h_k}{\Omega_k}\right)\right] dt, \quad (65)

where \Omega_k^2 = \sigma_k^2 \frac{1 - \lambda_k^2}{2} and S^2 = \sum_{l=1}^{m} (m_{1l}^2 + m_{2l}^2).

According to the transform of random vectors elaborated in Theorem 15, the CDF can also be expressed as

F_{S(s,t)}(x) = \int_{S(s,t) = \sum_{i=s+1}^{t} c_i \le x} f_C(c_{s+1}, c_{s+2}, \ldots, c_t) \, dC, \quad (66)

where f_C(y) = f_H(C^{-1}(y)) |J_0(y)| and C(y) = \log_2(1 + \gamma |H(y)|^2).

Theorem 15. [49] Let g : R^n \to R^n be one-to-one and assume that h = g^{-1} is continuous. Assume that on an open set \mathcal{V} \subseteq R^n, h is continuously differentiable with Jacobian J(y).
Define J_0 : R^n \to R^n by

J_0(y) = \begin{cases} J(y), & y \in \mathcal{V} \\ 0, & y \in \mathcal{V}^c, \end{cases} \quad (67)

where \mathcal{V}^c is the set complement (in R^n) of \mathcal{V}. Suppose the random vector X has pdf f_X(x) (with respect to Lebesgue measure) with no mass in h(\mathcal{V}^c), i.e., P\{X \in h(\mathcal{V}^c)\} = \int_{h(\mathcal{V}^c)} f_X(x) \, dx = 0. Then the pdf of Y = g(X) is given by

f_Y(y) = f_X(g^{-1}(y)) |J_0(y)| = \begin{cases} f_X(g^{-1}(y)) |J(y)|, & y \in \mathcal{V} \\ 0, & y \in \mathcal{V}^c. \end{cases} \quad (68)

In addition, the exact formula can also be expressed with copulas. For a d-dimensional random vector X, denote by \mu_C the measure on [0,1]^d implied by the copula C, that is, \mu_C(B) := P[U \in B] for any Borel measurable B \subseteq [0,1]^d and U \sim C. Define Z = \sum_{i=1}^{d} \omega_i X_i, let A_Z be the convex set A_Z = \{x \in R^d : \sum_{i=1}^{d} \omega_i x_i \le z\}, and let A_Z^{\ast} be the linear boundary connected to A_Z, i.e., A_Z^{\ast} = \{x \in R^d : \sum_{i=1}^{d} \omega_i x_i = z\}.

Proposition 4. [50]

F_Z(z) = \mu_C(B_Z) = \int_{B_Z} c \, d\lambda, \quad (69)

where c is the density of C, B_Z = \varphi(A_Z) \subseteq [0,1]^d, and \varphi : R^d \to [0,1]^d, (x_1, \ldots, x_d) \mapsto (F_1(x_1), \ldots, F_d(x_d)).

Proposition 5. [50] Assume that f_i > 0 on R for i = 1, \ldots, d; then
(1) the set B_z^{\ast} = \varphi(A_z^{\ast}) is given by B_z^{\ast} = \{u \in [0,1]^d : (u_1, \ldots, u_{d-1}) \in (0,1)^{d-1}, u_d = \tau_z(u_1, \ldots, u_{d-1})\}, where \tau_z : (0,1)^{d-1} \to (0,1), u \mapsto F_d\left(\frac{z}{\omega_d} - \sum_{i=1}^{d-1} \frac{\omega_i}{\omega_d} F_i^{-1}(u_i)\right);
(2) the set B_z is given as B_z = \{u \in [0,1]^d : (u_1, \ldots, u_{d-1}) \in (0,1)^{d-1}, 0 < u_d \le \tau_z(u_1, \ldots, u_{d-1})\};
(3) for z_1 < z_2, B_{z_1} \subsetneq B_{z_2};
(4) \tau_z(u) is a strictly decreasing function in each component of u;
(5) B_z is path-connected and B_z^{\epsilon} := B_z \cap [\epsilon, 1-\epsilon]^d is star-shaped for \epsilon > 0 with center (\epsilon, \ldots, \epsilon).

Remark 4. Algorithms for the numerical calculation of F_Z(z) are elaborated in [50]–[52].

2) Bounds: In the following, instead of trying to find an accurate representation of F_{S(s,t)}, we turn to investigating bounds on it.
With Theorem 4, it is easily verified that the CDF of the channel cumulative capacity satisfies the following inequalities:

F_{S(s,t)}^{l}(r) \le F_{S(s,t)}(r) \le F_{S(s,t)}^{u}(r), \quad (70)

where

F_{S(s,t)}^{u}(r) \equiv \inf_{\sum_{i=s+1}^{t} r_i = r} \left[ \sum_{i=s+1}^{t} F_{C(i)}(r_i) \right] \wedge 1, \quad (71)

F_{S(s,t)}^{l}(r) \equiv \sup_{\sum_{i=s+1}^{t} r_i = r} \left[ \sum_{i=s+1}^{t} F_{C(i)}(r_i) - (t - s - 1) \right]^{+}. \quad (72)

The improved results with dual bounds can also be obtained following Theorem 5 to Theorem 8.

B. Special Cases

1) Comonotonicity: The copula of a distribution function contains all the dependence information. Specifically, for a comonotonic dependence structure, i.e., when the comonotonic random variables are increasing functions of a common random variable [39], the joint distribution function of the instantaneous capacity is expressed as [41], [43]

F(C(s+1), \ldots, C(t)) = \min\{F(C(s+1)), \ldots, F(C(t))\}, \quad (73)

and the CDF of the cumulative capacity is expressed as [41], [43]

m_{t-s}(x) = \int_{\sum_{i=s+1}^{t} C(i) < x} d\min\{F(C(s+1)), \ldots, F(C(t))\}. \quad (74)

In the special case that all marginal distribution functions are identical, F_{C(i)} \sim F_C, comonotonicity of the C(i) is equivalent to saying that C(s+1) = C(s+2) = \ldots = C(t) holds almost surely [43]; in other words, the capacity depends only on the initial state and remains constant afterwards, i.e.,

F_{S(s,t)}(x) = F_C\left(\frac{x}{t-s}\right). \quad (75)
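A quick Monte Carlo sanity check of (75) can be sketched as follows. This is an illustration under assumed Rayleigh fading with a unit-mean channel gain and an illustrative SNR \gamma and horizon t - s (none of these values come from the paper): all per-slot capacities are built from one common uniform variable, so they are comonotonic, and the empirical CDF of the cumulative capacity is compared against the closed-form F_C(x/(t-s)).

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n_slots, n_samp = 10.0, 8, 100_000  # illustrative SNR, horizon t - s, sample size

# Comonotonic Rayleigh fading: every slot's capacity is the same increasing
# function of one common uniform U, so C(s+1) = ... = C(t) almost surely.
u = rng.random(n_samp)
g = -np.log1p(-u)               # |h|^2 ~ Exp(1) by the inverse-transform method
c = np.log2(1 + gamma * g)      # per-slot instantaneous capacity
s_cum = n_slots * c             # cumulative capacity over t - s slots

x = 20.0
emp = np.mean(s_cum <= x)                             # empirical F_{S(s,t)}(x)
marg = 1 - np.exp(-(2 ** (x / n_slots) - 1) / gamma)  # F_C(x / (t - s)), closed form
print(emp, marg)  # the two values should nearly coincide, as (75) predicts
```

The closed form uses F_C(y) = 1 - \exp(-(2^y - 1)/\gamma) for C = \log_2(1 + \gamma G) with G \sim \mathrm{Exp}(1); with a different fading model only that marginal would change, while the scaling x/(t-s) in (75) stays the same.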
