TECHNICAL RESEARCH REPORT

A Parallel Virtual Queue Structure for Active Queue Management

by Jia-Shiang Jou, John S. Baras

CSHCN TR 2003-4 (ISR TR 2003-9)

The Center for Satellite and Hybrid Communication Networks is a NASA-sponsored Commercial Space Center also supported by the Department of Defense (DOD), industry, the State of Maryland, the University of Maryland and the Institute for Systems Research. This document is a technical report in the CSHCN series originating at the University of Maryland.

Web site http://www.isr.umd.edu/CSHCN/
A Parallel Virtual Queue Structure for Active Queue Management

Jia-Shiang Jou and John S. Baras
Department of Electrical and Computer Engineering and the Institute for Systems Research
University of Maryland, College Park, MD 20742 USA
{jsjou, baras}@isr.umd.edu

Research partially supported by DARPA through SPAWAR, San Diego, under contract No. N66001-00-C-8063.

Abstract—The Adaptive RED scheme proposed by Feng et al. [5] is shown to have small packet delay and queue length variation for long-life TCP traffic such as FTP connections with a large file size. However, a great portion of Internet traffic consists of short-life web and UDP traffic. Most web traffic has a small file size, and its TCP session operates mainly in the slow start phase with a small congestion window. Since the file size is small, dropping short-life TCP (and UDP) packets is not very effective in alleviating the congestion level at a bottleneck router. From the viewpoint of TCP, one or several packet losses in the slow start phase lead to extra retransmission delay and may even cause a TCP timeout. This delay severely degrades the performance of delivering short messages such as web pages: web browsers experience a long waiting time even on a high speed network. We first show that Adaptive RED is vulnerable to this short-life TCP traffic, and we propose a parallel virtual queue structure as a new active queue management (AQM) scheme. The idea is to separate the long-life traffic and the short-life (including UDP) traffic into two different virtual queues. The first queue runs the drop-tail policy and serves the short-life TCP and UDP packets. In order to achieve a small mean delay, the service rate of this drop-tail queue is determined dynamically by its virtual queue length. The remaining long-life traffic is directed to an Adaptive RED virtual queue. Even though its available bandwidth is shared with the drop-tail queue, the simulation results show that the queue length variation of the RED queue stays in a desired region. Note that both virtual queues share the same physical buffer memory: packets in the drop-tail queue are not dropped unless the shared buffer overflows. This parallel virtual queue structure not only keeps the benefits of RED, such as high utilization and small delay, but also greatly reduces the packet loss rate at the router.

Index Terms—Adaptive RED, AQM, Scheduling.

I. INTRODUCTION

The RED (Random Early Detection) scheme has been attracting a lot of attention for active queue management in network routers. In order to have a small queuing delay, it is desirable to keep the mean queue length short, so the total buffer size of a drop-tail queue cannot be large. However, Internet traffic is quite bursty, and routers with small buffers encounter a high packet loss rate. Moreover, the TCP protocol dramatically reduces its flow rate in the congestion avoidance phase when packets are lost. After a buffer overflow event in a drop-tail queue, all connections sense packet loss and slow down their transfer rates simultaneously. To prevent this global synchronization phenomenon and the resulting low link utilization, many active queue management schemes such as RED have been proposed. The basic idea of RED is to drop packets randomly in order to prevent buffer overflow. The dropping probability is a non-decreasing function of the queue length, so that more packets are dropped when the queue length grows. A connection with a high flow rate has a higher chance of having more of its packets dropped, and therefore reduces its rate rapidly. This scheme controls the flow rates of TCP connections and keeps the queue length in a desired region. However, simulation results [3] have demonstrated that the performance of RED is quite sensitive to its parameter settings.
Based on the original idea of RED, several modifications have been proposed, such as Stabilized RED [8], Random Early Marking (REM) [1], Blue [4] and Adaptive RED [5][6]. The Adaptive RED scheme dynamically updates the maximal dropping probability according to the exponentially weighted moving average (EWMA) of the queue length, which makes it more robust with respect to the congestion level. Motivated by [5][6], we consider UDP and short-life TCP connections such as web traffic. Dropping these kinds of packets is not only useless for controlling the flow rate, but also wastes network resources and causes unnecessary delay and retransmissions. In addition, the performance of Adaptive RED is severely degraded by these short but bursty connections. We first demonstrate the vulnerability of Adaptive RED in this situation and then propose a parallel virtual queue structure for active queue management.

II. VULNERABILITY TO WEB-MICE

In this section, we describe a scenario containing short-life TCP (WEB), UDP (CBR) and long-life TCP (FTP) traffic. The purpose is to demonstrate that the performance of the Adaptive RED scheme is severely degraded by the short-life web traffic. The network in our experiment has a simple dumbbell topology with bottleneck link bandwidth C=3.0Mbps. One side of the bottleneck consists of 800 web clients. Each client sends a web request and then pauses for an Exponentially distributed think time with mean 50s after the end of each session. The other side contains 800 web servers, running the HTTP 1.1 protocol and drawing file sizes from a Pareto distribution with parameters (K=2.3Kbyte, α=1.3), i.e. with mean 10Kbytes. The round-trip propagation delay of the HTTP connections is uniformly distributed between (16, 240)ms.
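For concreteness, the workload model above can be reproduced outside the simulator. The following Python sketch (ours, not part of the paper's ns2 scripts; all identifiers and the sample count are our own choices) samples the stated distributions and checks the quoted 10Kbyte mean, which for a Pareto law equals αK/(α−1) = 1.3·2.3/0.3 ≈ 9.97Kbytes:

import random

K_KB, ALPHA = 2.3, 1.3          # Pareto scale (Kbytes) and shape from the text
MEAN_THINK_S = 50.0             # mean Exponential think time (sec)

def file_size_kbytes():
    # Inverse-CDF sampling: K / U^(1/alpha) is Pareto(K, alpha) distributed.
    return K_KB / random.random() ** (1.0 / ALPHA)

def think_time_s():
    return random.expovariate(1.0 / MEAN_THINK_S)

# The analytical mean alpha*K/(alpha - 1) = 1.3*2.3/0.3 is about 9.97 Kbytes,
# i.e. the "mean 10Kbytes" quoted above. With alpha = 1.3 the variance is
# infinite, so the sample mean below converges slowly.
sizes = [file_size_kbytes() for _ in range(500_000)]
print(sum(sizes) / len(sizes))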
Note that the mean rate of the aggregate web traffic is around 1.2Mbps. There is also one CBR source, which periodically generates a 1Kbyte UDP packet every 50ms. Besides these short web connections and the UDP traffic, 10 persistent FTP connections share the bottleneck link, with a round-trip propagation delay of 64ms. Figure 1 shows that Adaptive RED works well with those FTP connections before the web traffic comes in. At time t=100s, the CBR source and the web servers begin to share the bandwidth. Since the aggregate web traffic is very bursty, the queue length of Adaptive RED deviates dramatically from the desired region.

Fig. 1. Queue length of Adaptive RED: 10 FTP starting at t=0, and 800 Webs and 1 CBR starting at t=100s.

Since most web pages contain one or several very small files, those TCP connections operate almost entirely in their slow start phase during the session lifetime. According to the TCP protocol, the congestion window is just beginning to grow from its initial value, so these connections have a low transmission speed. Dropping packets in the slow start phase cannot efficiently alleviate the congestion level at the bottleneck router. In other words, any blind random dropping/marking policy such as RED is unable to control the congestion level effectively without considering the short-life TCP (and UDP) traffic. Furthermore, losing one or two packets in the slow start phase not only causes a very low throughput and extra delay, but also leads to a high probability of connection timeout. We wish to avoid this phenomenon, especially for short and time-constrained messages. In addition, the Adaptive RED scheme relies on the average queue length to determine the dropping probability and control the TCP flow rate. The extra queue length perturbation contributed by the bursty web traffic makes Adaptive RED increase and decrease its dropping probability rapidly. This over-reaction causes great queue length variation and poor performance in packet delay and loss.

One packet loss in the slow start phase not only costs an extra round-trip time to retransmit the dropped packet, but also immediately forces TCP to leave its slow start phase with a very small slow start threshold (ssthresh). When the sender senses a packet loss, ssthresh is reduced to min(cwnd, rcv_window)/2, and the new cwnd is also decreased, depending on the TCP version. (For TCP Reno, the new window size is cwnd = ssthresh and TCP enters the Fast Recovery phase. For TCP Tahoe, cwnd = MSS, the maximum segment size, and TCP begins a new slow start phase.) Since the congestion window is just beginning to grow from MSS in the first slow start phase, one packet loss during the initial several round-trip times leads TCP into the congestion avoidance phase with a very small ssthresh and cwnd. In the congestion avoidance phase, TCP increases cwnd slowly (by about one MSS per round-trip time) from the current ssthresh. Thus, losing one packet in the slow start phase causes TCP to take a long time to complete a short message. Moreover, since the web traffic is small but bursty, these web connections usually experience a higher packet loss rate (see the web packet loss rates of the RED and drop-tail queues in Table III). Note that the default initial value of ssthresh is 64KB and the packet size is 1KB in this paper. At a high packet loss rate Pd=0.04 contributed by RED, the probability of losing one or more packets in the slow start phase is 1−(1−Pd)^64 = 0.9267 (assuming packets are dropped independently). Therefore, most TCP connections suffer at least one packet loss in their first slow start phase.
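A quick check of this arithmetic, together with the ssthresh/cwnd update just described, is sketched below (our illustration; the function and variable names are ours, not the paper's):

Pd, PKTS = 0.04, 64    # drop rate and packets in the first slow start (64KB / 1KB packets)

print(round(1 - (1 - Pd) ** PKTS, 4))    # 0.9267: P(at least one loss)

def on_loss(cwnd_kb, rcv_window_kb, tahoe=True, mss_kb=1.0):
    # ssthresh/cwnd reduction described in the text above.
    ssthresh = min(cwnd_kb, rcv_window_kb) / 2.0
    cwnd = mss_kb if tahoe else ssthresh    # Tahoe restarts from MSS; Reno from ssthresh
    return ssthresh, cwnd

print(on_loss(8.0, 64.0))    # (4.0, 1.0): cf. the worked example in the next paragraph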
For example, suppose the 15th packet is dropped by RED: the ssthresh decreases from 64KB to 4KB, and the new congestion window cwnd is decreased from 8KB to 1KB (Tahoe). The situation is even worse if a packet is dropped earlier, in the first 3 round-trip times. The congestion window at that moment is so small that the sender may not have enough outstanding data packets to trigger the receiver into generating three duplicate acknowledgements. If packets cannot be recovered by this fast recovery scheme, TCP has to depend on the protocol timer to recover the loss; the timeout value of this timer is usually rather large, so the delivery delay can be increased dramatically by a timeout event. Moreover, the probability of losing two or more packets of the same congestion window in the slow start phase still cannot be ignored. These events also lead to a high probability of TCP timeout and connection reset.
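To put a number on "cannot be ignored": under the same independent-drop assumption, the number of losses within one window is binomial. The sketch below (our illustration, not from the paper) evaluates the two-or-more-losses probability for window sizes reached in the early slow start rounds:

Pd = 0.04

def p_two_or_more(w, p=Pd):
    # P(X >= 2) for X ~ Binomial(w, p): 1 - P(X = 0) - P(X = 1).
    return 1 - (1 - p) ** w - w * p * (1 - p) ** (w - 1)

for w in (4, 8, 16):    # congestion windows (in packets) of successive rounds
    print(w, round(p_two_or_more(w), 4))
# ~0.009 for w=4, ~0.038 for w=8, ~0.133 for w=16: small but not negligible,
# and such events typically force a retransmission timeout.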
However, one may DELIVERYDELAYOFSMALLFILE:MEANANDSTANDARDDEVIATION roughly estimate the elapsed time by using the following approach: Pd 0.00 0.02 0.04 0.08 30KB 0.88(.0006) 1.60(1.88) 2.74(4.27) 5.88(7.79) • Whenapacketfromanewsource-destinationpair,which 90KB 1.18(.0008) 2.79(2.39) 4.81(3.91) 9.24(6.30) isnotseenbytherouterinthepastT secarrives,wetreat 150KB 1.34(0.0008) 3.51(1.90) 6.51(4.60) 13.38(8.87) it as a new TCP connection and identify this connection as a short-life connection. • Send the new connection packets to the drop-tail queue. deal with the short-life TCP and UDP traffic. Since dropping • Set a counter for the number of packets of this connec- thesepacketscannoteffectivelyalleviatecongestionlevel,but tion. severely increases delivery delay, it would be better to keep • If the cumulative packets number is greater than a them in the queue unless the total buffer (shared with the threshold N, then we believe that the file size is large other queue) is overflown. Hence, the queuing policy of the enoughandthisTCPconnectionalreadyhasleftitsslow first virtual queue is chosen to be drop-tail to minimize the start phase. We redirect the subsequent packets of this packet loss rate. In order to have a short delivery delay for connection to the Adaptive RED queue. web browsers and UDP connections, the service rate C1(t) • Remove the counter if there is no packet arrival in the is changed dynamically according to its virtual queue length last T sec. q1(t). Preliminary simulation results show that this approach suc- The second virtual queue is running the Adaptive RED cessfullypreventsthesmallwebtrafficfromenteringtheRED policy with the long-life TCP connections such as FTP with queue. The probability of false alarm is less than 0.02 in our a large file. Even if the available bandwidth of the second scenario. Since the web traffic has a small file size and a queue is determined by C2(t)=C-C1(t), the Adaptive RED shortsessiontime,thereisnoharmiftheshort-lifeconnection scheme still works well in keeping its virtual queue length packets were misdirected to the RED queue after time T. q2(t) inadesiredregion.Notethattheavailablebandwidthat Figure 3 shows the RED+Tail parallel queue structure in the RED queue is suppressed and controlled by the drop-tail therouter.Let C1(t) andC2(t) denotetheserviceratesofthe queue.Whenthereisaheavyworkloadatthedrop-tailqueue, drop-tail queue and the Adaptive RED queue respectively. In C2(t) decreases quickly. FTP receivers experience a slower order to allocate bandwidth dynamically to both queues and packet arrival rate and send acknowledgementpackets (ACK) assign a desired region of queue length in the Adaptive RED backslowly.Withoutincreasingthedroppingprobabilityatthe queue,wedefinethemaximalthresholdmaxthi andminimal RED queue, the slower ACK arrival rate from the receivers threshold minthi for i = 1,2. The service rates C1(t) and causes FTP senders to reduce their flow rate automatically C2(t) are given by the following algorithm: without shrinking their control window size. On the other • if q1 =0, then C1(t)=0. hand, when the congestion level is alleviated, the RED queue • if 0≤q1 <minth1, then C1(t):=C1min. risecsetiivlelslamrgoereinbanthdewiFdTthP. Sseinrvceer,thtehecotnhgroeustgiohnpuwtionfdoFwTPsizies •• iCf2m(ti)n:t=h1C≤−qC1,1t(hte),n C1(t):=min(Cmaqx1th1, C1max). quickly recovered by a faster arrival rate of ACK packets where C is the link bandwidth. The parameter q1 denotes from the receiver. 
Figure 3 shows the RED+Tail parallel queue structure at the router. Let C1(t) and C2(t) denote the service rates of the drop-tail queue and the Adaptive RED queue, respectively. In order to allocate bandwidth dynamically to both queues and to assign a desired region of queue length in the Adaptive RED queue, we define the maximal and minimal thresholds maxth_i and minth_i for i = 1, 2. The service rates C1(t) and C2(t) are given by the following algorithm (sketched in code below):

• if q1 = 0, then C1(t) = 0.
• if 0 ≤ q1 < minth1, then C1(t) := C1min.
• if minth1 ≤ q1, then C1(t) := min(C·q1/maxth1, C1max).
• C2(t) := C − C1(t),

where C is the link bandwidth and q1 denotes the queue length of the drop-tail queue. The constant C1max preserves a minimum available bandwidth C − C1max for the RED queue, to prevent the FTP connections from timing out.

Fig. 3. Parallel Virtual Queue Structure for Active Queue Management.
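The same rule in runnable form (a sketch under the definitions above; the constants C1min and C1max are placeholders of our choosing, while the thresholds follow Table II):

C = 3.0                       # link bandwidth (Mbps), as in Section II
C1_MIN, C1_MAX = 0.3, 2.0     # floor/ceiling for the drop-tail rate (placeholders)
MINTH1, MAXTH1 = 2.0, 30.0    # drop-tail queue thresholds in KB, cf. Table II

def service_rates(q1):
    """Return (C1, C2) given the drop-tail virtual queue length q1 (KB)."""
    if q1 == 0:
        c1 = 0.0
    elif q1 < MINTH1:
        c1 = C1_MIN
    else:
        c1 = min(C * q1 / MAXTH1, C1_MAX)   # grows with the backlog q1
    # C1_MAX < C preserves at least C - C1_MAX for the RED queue.
    return c1, C - c1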
IV. SIMULATION AND COMPARISON

In this section, we compare the Adaptive RED and RED+Tail schemes with typical TCP performance metrics. For the parameters of Adaptive RED, we employ the settings suggested by Floyd [6] (the α and β of the AIMD algorithm). We implemented the RED+Tail scheme in the ns2 simulator; the network topology and scenario were already described in Section II. Table II lists the parameter settings of the RED+Tail parallel queues and of the Adaptive RED queue. Note that the virtual queues of the RED+Tail scheme share the total physical buffer, i.e. packets in the drop-tail virtual queue are not dropped unless the physical memory is full. The Adaptive RED queue is set in "gentle" mode, meaning that the dropping probability between (maxth2, 2·maxth2) is linear in (Pmax, 1).

TABLE II
EXPERIMENT SETTINGS

  Queue i      Buffer size  minth_i  maxth_i  α     β
  i=1          160KB        2KB      30KB     -     -
  i=2          shared       20KB     80KB     0.01  0.9
  Adapt. RED   160KB        20KB     80KB     0.01  0.9

The performance of a connection is evaluated by its packet loss rate, delay and throughput. We focus on packet loss rate and delay for the web (short-TCP) and CBR (UDP) connections, and place more weight on throughput for FTP (long-TCP). We replace the Adaptive RED queue with the RED+Tail scheme at the router and redo the experiment of Section II. The random seed of the simulator is fixed, so the web request and file size processes have the same sample paths in both experiments. Figure 4 shows the queue length of the RED+Tail scheme and demonstrates that the virtual queue length q2 is quite stable and stays in the desired region even after the web and CBR traffic begin to share the bandwidth at time t=100s. When a web burst arrives at the drop-tail queue, the queue length q1 increases rapidly and most of the bandwidth is allocated to serve the drop-tail queue for a short time. Meanwhile, the available bandwidth at the RED queue is reduced, so the FTP clients experience a long packet interarrival time. As a result, the FTP servers receive acknowledgement packets slowly and their data rates are reduced automatically, without any increase of the dropping probability in the RED queue. Since the TCP average throughput is proportional to 1/(RTT·√Pd), the actual dropping probability at the RED queue is in fact reduced, from 4.15% to 2.75%, by the longer queuing delay (0.184s). This prevents the over-reaction of RED seen in the original Adaptive RED case and keeps the mean queue length q2 in the desired region.

Fig. 4. Queue length of RED+Tail virtual queues: 10 FTP starting at t=0 go to virtual queue 2, and 800 Webs + 1 CBR starting at t=100s go to virtual queue 1.

Figure 5 shows the packet loss rates of the FTP, web and CBR connections under Adaptive RED and RED+Tail. It is obvious that RED+Tail provides a great improvement in packet loss for the web and CBR connections: the web loss rate is reduced from 4.51% to 1.28%, and the CBR loss rate from 3.95% to 0.30%.

Fig. 5. Packet loss rate of Adaptive RED and RED+Tail: 10 FTP starting at t=0, and 800 Webs + 1 CBR starting at t=100s.

Figure 6 compares the packet delay. The mean queuing delay of web and CBR packets in the RED+Tail scheme is shortened at the cost of a larger FTP packet delay. The web and CBR packet delay depends on how much bandwidth is allocated to the drop-tail queue. We can satisfy a mean delay requirement for the web and CBR connections by properly adjusting the parameter maxth1. For example, maxth1 of the RED+Tail scheme is set to 30Kbytes, so that the expected mean delay at the drop-tail queue is about 80ms. However, the service rate C1 reaches its maximum C1max when q1 > minth1, so the actual mean delay is larger than expected: our simulation results show that the mean delay of the web and CBR traffic is around 110ms.

Fig. 6. Packet delay of Adaptive RED and RED+Tail: 10 FTP starting at t=0, and 800 Webs + 1 CBR starting at t=100s.

Figures 7 and 8 show the mean receive rate and the throughput of the FTP, web and CBR traffic. Both schemes achieve 100% utilization of the link bandwidth. Due to the unfair bandwidth allocation in the RED+Tail scheme, FTP has a slightly smaller throughput. However, the saved bandwidth allows web bursts to pass through the bottleneck link faster. Table III lists the performance metrics of RED+Tail, Adaptive RED and the traditional drop-tail scheme.

Fig. 7. Mean receive rate of Adaptive RED and RED+Tail: 10 FTP starting at t=0, and 800 Webs + 1 CBR starting at t=100s.

Fig. 8. Throughput of Adaptive RED and RED+Tail: 10 FTP starting at t=0, and 800 Webs + 1 CBR starting at t=100s.

TABLE III
PERFORMANCE METRICS

  Policy          Loss %  Delay (sec.)  Rate (KB/s)
  RED+Tail: FTP   2.747   0.184         209.465
  RED+Tail: WEB   1.278   0.114         144.455
  RED+Tail: CBR   0.300   0.109         19.867
  AdaptRED: FTP   4.149   0.143         217.531
  AdaptRED: WEB   4.514   0.143         137.124
  AdaptRED: CBR   3.950   0.141         19.140
  DropTail: FTP   1.916   0.349         215.243
  DropTail: WEB   4.234   0.340         138.983
  DropTail: CBR   1.550   0.342         19.601

Table IV compares the small web file delivery times under the different schemes. Since the RED+Tail policy has a small packet loss rate, its delivery time is almost equal to the loss-free case of Table I. The Adaptive RED queue, on the other hand, has a loss rate of 4.5%, and its delivery time is three times longer. Note that the drop-tail queue has a web packet loss rate (4.2%) similar to Adaptive RED's; however, the file delivery time of the drop-tail scheme is about 2.5 times longer than Adaptive RED's, mainly due to the long queuing delay (0.34s) of the drop-tail policy.

TABLE IV
SMALL FILE DELIVERY TIME: MEAN AND STANDARD DEVIATION (SEC.)

  File size       30KB         90KB          150KB
  RED+Tail: WEB   0.88 (0.13)  1.15 (0.24)   1.64 (0.85)
  AdaptRED: WEB   2.42 (1.21)  3.87 (1.41)   7.86 (4.44)
  DropTail: WEB   4.00 (1.69)  10.91 (4.01)  17.15 (4.15)
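For reference, the "gentle" drop curve used for the RED virtual queue above can be written as the following piecewise-linear function (our sketch; Pmax is adapted at run time by Adaptive RED, so the value below is only a placeholder, while the thresholds follow Table II):

MINTH2, MAXTH2 = 20.0, 80.0   # RED queue thresholds in KB, from Table II
PMAX = 0.1                    # adapted by Adaptive RED at run time; placeholder here

def drop_probability(avg_q_kb):
    """Gentle-mode RED drop probability as a function of the average queue."""
    if avg_q_kb < MINTH2:
        return 0.0
    if avg_q_kb < MAXTH2:
        return PMAX * (avg_q_kb - MINTH2) / (MAXTH2 - MINTH2)
    if avg_q_kb < 2 * MAXTH2:                       # the "gentle" region
        return PMAX + (1 - PMAX) * (avg_q_kb - MAXTH2) / MAXTH2
    return 1.0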
V. DYNAMIC THRESHOLDS FOR ADAPTIVE RED

Adaptive RED relies on the dropping probability to control the flow rates of TCP connections and to keep the average queue length in a desired region. However, for applications which have a large file to transfer, the goodput is more important than the packet delay, and the packet loss rate is a key factor of connection goodput. Since the minimal and maximal thresholds of the Adaptive RED scheme are fixed, the dropping probability of RED must reach a high value to restrict the flow rates. This high dropping probability causes a great portion of retransmissions and a low goodput.

In order to control the flow rate while keeping a low packet loss rate, we propose a dynamic threshold algorithm for the Adaptive RED scheme which guarantees the average packet loss rate in the RED queue (a sketch follows below):

• if P̄d > PU, then minth2 := minth2(1+γ), maxth2 := minth2 + D.
• if P̄d < PL, then minth2 := minth2(1−γ), maxth2 := minth2 + D,

where P̄d is the average dropping probability obtained by an EWMA algorithm, (PU, PL) delimit the desired region of the dropping probability, D := maxth2 − minth2, and 0 < γ < 1. Note that the distance D is a constant, so these floating thresholds do not change the slope of the dropping probability function. Since the average TCP throughput is proportional to 1/(RTT·√Pd), increasing the thresholds increases the queuing delay and causes TCP to reduce its flow rate without a high packet loss rate.

Figure 9 compares the queue length behavior of the fixed and dynamic threshold schemes. There are 20 persistent FTP servers sharing a 6Mbps bottleneck link; another 20 FTP servers arrive at time 100s and leave at time 300s. Figure 9 shows that the fixed threshold scheme has a small queue length variation but a large dropping probability (0.05). In comparison, the dynamic threshold scheme has a much lower average dropping probability (0.014 with PU=0.02, PL=0.01), but a higher packet delay. Note that both schemes achieve 100% link utilization, so each FTP connection has the same throughput. However, with a much lower packet loss rate, the dynamic threshold scheme achieves a higher goodput. This dynamic threshold scheme allows us to trade off packet loss against queuing delay in an Adaptive RED queue.

Fig. 9. Average queue length of fixed and dynamic thresholds: 20 FTP starting at t=0, and 20 starting at t=100s, C=6Mbps.

Fig. 10. Dropping probability of fixed and dynamic thresholds: 20 FTP starting at t=0, and 20 starting at t=100s, C=6Mbps.
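A sketch of this update rule (ours; γ is not fixed in the paper, so the value below is a placeholder, while PU, PL are the values quoted above and D follows Table II):

PU, PL = 0.02, 0.01    # desired region of the dropping probability (from the text)
GAMMA = 0.1            # 0 < gamma < 1; the paper does not fix a value (placeholder)
D = 60.0               # constant distance maxth2 - minth2 in KB (80 - 20, Table II)

def update_thresholds(minth2, pd_avg):
    """Float (minth2, maxth2) to keep the EWMA drop probability in (PL, PU)."""
    if pd_avg > PU:          # dropping too much: raise the thresholds, so the
        minth2 *= 1.0 + GAMMA    # queuing delay grows and TCP slows by itself
    elif pd_avg < PL:        # loss is low: lower them to cut queuing delay
        minth2 *= 1.0 - GAMMA
    return minth2, minth2 + D    # maxth2 tracks minth2 at constant distance D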
VI. CONCLUSIONS

In this paper, we first demonstrated the vulnerability of the Adaptive RED scheme to bursty traffic, and then proposed a parallel virtual queue structure to eliminate unnecessary packet loss. A simple detection algorithm is employed to separate the short-life and long-life TCP connections into different virtual queues. The packet loss rate and the mean delay can be greatly reduced by dynamic bandwidth allocation and active queue management with this parallel queue structure, which combines the advantages of the drop-tail and Adaptive RED policies. The simulation results in this study show that the scheme achieves a shorter mean delay for real-time applications, keeps a high throughput for the best-effort connections, and greatly reduces the packet loss rate in both queues. The parallel queue structure also provides more degrees of freedom for controlling the router, by considering different bandwidth allocation policies and dynamic thresholds for Adaptive RED. Here, the bandwidth allocation policy is a simple function of the current virtual queue length. However, it is well known that web traffic is strongly correlated and has a long range dependency property; based on observations of the "recent past" traffic, the future bandwidth demand of the web traffic has been shown to be predictable. In future work, we will consider optimal bandwidth allocation policies based on the prediction of the congestion level.

REFERENCES

[1] S. Athuraliya, S. Low, V. Li, and Q. Yin. REM: Active queue management. IEEE Network, 15(3):48-53, 2001.
[2] T. Bu and D. Towsley. Fixed point approximations for TCP behavior in an AQM network. ACM SIGMETRICS Performance Evaluation Review, 29(1):216-225, 2001.
[3] M. Christiansen, K. Jeffay, D. Ott, and F. D. Smith. Tuning RED for web traffic. In Proceedings of ACM SIGCOMM, pages 139-150, 2000.
[4] W. Feng, D. Kandlur, D. Saha, and K. G. Shin. BLUE: A new class of active queue management algorithms. Technical Report UM CSE-TR-387-99, University of Michigan, 1999.
[5] W. Feng, D. Kandlur, D. Saha, and K. G. Shin. A self-configuring RED gateway. In Proceedings of INFOCOM '99, volume 3, pages 1312-1328, 1999.
[6] S. Floyd, R. Gummadi, and S. Shenker. Adaptive RED: An algorithm for increasing the robustness of RED, 2001.
[7] T. Ott, J. Kemperman, and M. Mathis. The stationary behavior of ideal TCP congestion avoidance. ftp://ftp.bellcore.com/pub/tjo/tcpwindow.ps.
[8] T. J. Ott, T. V. Lakshman, and L. H. Wong. SRED: Stabilized RED. In Proceedings of INFOCOM, volume 3, pages 1346-1355, 1999.
