
Improving Fairness in Memory Scheduling (PDF)

53 Pages · 2014 · 0.6 MB · English

Preview: Improving Fairness in Memory Scheduling

Improving Fairness in Memory Scheduling Using a Team of Learning Automata
Aditya Kajwe and Madhu Mutyam
Department of Computer Science & Engineering, Indian Institute of Technology Madras
June 14, 2014

Outline
1 Introduction
2 Related Work
3 Our Learning Automata-based Algorithm
4 Experiments
5 Conclusion

Introduction
DRAM scheduling: the order in which memory access requests from the CPU are processed at the DRAM.
- Impacts main memory fairness, throughput & power consumption.
Metrics for evaluating a scheduling algorithm: harmonic speedup, execution time, sum-of-IPCs, maximum slowdown, weighted speedup.
- harmonic speedup = N / Σ_i (IPC_i^alone / IPC_i^shared)
- Provides a good balance between fairness and system performance.
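The harmonic speedup metric above is the harmonic mean of the per-application speedups (IPC_i^shared / IPC_i^alone). A minimal sketch of how it could be computed, with hypothetical function and argument names:

```python
def harmonic_speedup(ipc_alone, ipc_shared):
    """Harmonic mean of per-application speedups.

    speedup_i = IPC_i^shared / IPC_i^alone, so the harmonic mean of the
    N speedups reduces to N / sum_i(IPC_i^alone / IPC_i^shared).
    """
    assert len(ipc_alone) == len(ipc_shared)
    n = len(ipc_alone)
    return n / sum(alone / shared for alone, shared in zip(ipc_alone, ipc_shared))
```

Because it is a harmonic mean, one badly slowed-down application drags the metric down sharply, which is why it balances fairness against raw throughput.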
Related Work
- ATLAS [2]: prioritizes threads that have attained the least service.
- PAR-BS [5]: processes DRAM requests in batches and uses the SJF principle within a batch.
- MORSE [4]: extends Ipek et al.'s learning technique [1] to target arbitrary figures of merit.
- MISE [6]: estimates the slowdown of each application and redistributes bandwidth accordingly.
- Thread Cluster Memory Scheduling (TCM) [3]: divides threads into a latency-sensitive and a bandwidth-sensitive cluster, prioritizes the latency-sensitive cluster, and periodically shuffles priorities within the bandwidth-sensitive cluster.
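To make the least-attained-service idea behind ATLAS concrete, here is a toy sketch (not the published algorithm; all names are hypothetical) of picking the next request from the thread that has received the least service so far:

```python
def pick_request(pending, attained_service):
    """Pick the next DRAM request, least-attained-service first.

    pending: dict mapping thread_id -> list of outstanding requests.
    attained_service: dict mapping thread_id -> service received so far.
    Returns (thread_id, request) for the least-serviced thread with work.
    """
    candidates = [tid for tid, reqs in pending.items() if reqs]
    tid = min(candidates, key=lambda t: attained_service[t])
    return tid, pending[tid][0]
```

A real scheduler would also weigh row-buffer hits, bank readiness, and starvation thresholds; this only illustrates the ranking principle.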

Description:
Aditya Kajwe and Madhu Mutyam (IITM). Improving Fairness .. Thread cluster memory scheduling: Exploiting differences in memory access behavior.
