Data-Intensive Workload Consolidation on Hadoop Distributed File System

Authors: Moraveji, Reza; Taheri, Javid; HosseinyFarahabady, MohammadReza; Babaii Rizvandi, Nikzad; Zomaya, Albert


Date: 2012-08-06


Type: Conference Material


Conference: IEEE Grid


Location: China


Pages: 95-103


Abstract: Workload consolidation, sharing physical resources among multiple workloads, is a promising technique for saving cost and energy in cluster computing systems. This paper highlights several challenges of workload consolidation for Hadoop, one of the current state-of-the-art data-intensive cluster computing systems. Through a systematic, step-by-step procedure, we investigate the challenges of efficient server consolidation in Hadoop environments. To this end, we first examine the interrelationship between last-level cache (LLC) contention and throughput degradation for consolidated workloads on a single physical server running the Hadoop distributed file system (HDFS). We then study the general case of consolidation on multiple physical servers so that their throughput never falls below a desired/predefined utilization level. We use our empirical results to model consolidation as a classic two-dimensional bin packing problem and then design a computationally efficient greedy algorithm that minimizes throughput degradation across multiple servers. Results are very promising and show that our greedy approach achieves near-optimal solutions in all experimented cases.
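The record does not describe the paper's actual consolidation model or greedy rules, but the abstract's framing of consolidation as a two-dimensional bin packing problem solved by a greedy heuristic can be illustrated with a minimal sketch. The workload names, the first-fit-decreasing placement rule, and the use of normalized CPU and LLC-pressure demands below are assumptions for illustration only, not the authors' algorithm.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Workload:
    name: str
    cpu: float   # normalized CPU demand (0..1), assumed dimension 1
    llc: float   # normalized LLC pressure (0..1), assumed dimension 2

@dataclass
class Server:
    cpu_cap: float = 1.0
    llc_cap: float = 1.0
    assigned: List[Workload] = field(default_factory=list)

    def fits(self, w: Workload) -> bool:
        # A workload fits only if both resource dimensions stay within capacity.
        used_cpu = sum(x.cpu for x in self.assigned)
        used_llc = sum(x.llc for x in self.assigned)
        return used_cpu + w.cpu <= self.cpu_cap and used_llc + w.llc <= self.llc_cap

def consolidate(workloads: List[Workload]) -> List[Server]:
    """Greedy first-fit-decreasing over the larger of the two demands (illustrative)."""
    servers: List[Server] = []
    for w in sorted(workloads, key=lambda x: max(x.cpu, x.llc), reverse=True):
        target = next((s for s in servers if s.fits(w)), None)
        if target is None:
            target = Server()        # open a new "bin" (physical server)
            servers.append(target)
        target.assigned.append(w)
    return servers

if __name__ == "__main__":
    demo = [Workload("sort", 0.5, 0.6), Workload("grep", 0.3, 0.2),
            Workload("wordcount", 0.4, 0.5), Workload("join", 0.2, 0.4)]
    for i, s in enumerate(consolidate(demo)):
        print(f"server {i}: {[w.name for w in s.assigned]}")

In this sketch, each server acts as a two-dimensional bin whose capacity caps would correspond to the utilization level below which throughput must not fall; the paper's actual demand estimation and placement criteria are not given in this record.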


Keywords: Workload Consolidation; Hadoop; Throughput Degradation; Last Level Cache; Bin Packing


URL: http://grid2012.meepo.org/


Identifier: nicta:6003


Citation: Moraveji, Reza; Taheri, Javid; HosseinyFarahabady, MohammadReza; Babaii Rizvandi, Nikzad; Zomaya, Albert. Data-Intensive Workload Consolidation on Hadoop Distributed File System. [Conference Material]. 2012-08-06. http://hdl.handle.net/102.100.100/99853?index=1


