Ensemble Prediction of Job Resources to Improve System Performance for Slurm-Based HPC Systems

M. Tanash, H. Yang, D. Andresen, W. Hsu - Practice and Experience in Advanced Research Computing, 2021 - dl.acm.org
In this paper, we present a novel methodology for predicting job resources (memory and time) for jobs submitted to HPC systems. Our methodology is based on historical job data (sacct accounting data) provided by the Slurm workload manager, using supervised machine learning. This machine learning (ML) prediction model is effective and useful for both HPC administrators and HPC users. Moreover, our ML model increases the efficiency and utilization of HPC systems, and thus reduces power consumption as well. Our model applies several supervised discriminative machine learning models from the scikit-learn machine learning library and LightGBM to historical data from Slurm.
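As a rough illustration of this setup (not the authors' released code), the sketch below trains one regressor per target on historical Slurm accounting records. The CSV file and the column names such as req_cpus and used_mem_mb are assumed placeholders, not the paper's actual feature set.

# Minimal sketch: predict per-job memory and run time from Slurm accounting history.
# All file and column names are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from lightgbm import LGBMRegressor

# Historical job records exported from Slurm accounting (e.g. via sacct)
jobs = pd.read_csv("slurm_job_history.csv")  # hypothetical export

features = jobs[["req_cpus", "req_mem_mb", "req_time_min", "partition_id", "user_id"]]
target_mem = jobs["used_mem_mb"]    # peak memory actually used
target_time = jobs["elapsed_min"]   # wall-clock time actually used

X_tr, X_te, y_mem_tr, y_mem_te, y_time_tr, y_time_te = train_test_split(
    features, target_mem, target_time, test_size=0.2, random_state=42)

# One model per target; LightGBM and scikit-learn estimators share the fit/predict API.
mem_model = LGBMRegressor(n_estimators=200)
time_model = RandomForestRegressor(n_estimators=200)
mem_model.fit(X_tr, y_mem_tr)
time_model.fit(X_tr, y_time_tr)

print("Predicted memory (MB):", mem_model.predict(X_te[:1]))
print("Predicted time (min): ", time_model.predict(X_te[:1]))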
Our model helps HPC users determine the required amount of resources for their submitted jobs and makes it easier for them to use HPC resources efficiently. This work provides the second step towards implementing our general open-source tool for HPC service providers. For this work, our machine learning model has been implemented and tested at two HPC providers: an XSEDE service provider, the University of Colorado Boulder (RMACC Summit), and Kansas State University (Beocat).
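One way such predictions could be surfaced to users is by translating them into Slurm batch directives. The helper below is a hypothetical sketch rather than part of the published tool; the safety margins and the function name are assumptions.

# Hypothetical helper: turn predicted usage into padded Slurm request directives.
def suggest_sbatch_directives(pred_mem_mb: float, pred_time_min: float,
                              mem_margin: float = 1.2, time_margin: float = 1.3) -> str:
    """Pad the predictions with a margin so under-prediction does not kill the job."""
    mem_mb = int(pred_mem_mb * mem_margin)
    minutes = int(pred_time_min * time_margin)
    hours, minutes = divmod(minutes, 60)
    return (f"#SBATCH --mem={mem_mb}M\n"
            f"#SBATCH --time={hours:02d}:{minutes:02d}:00")

print(suggest_sbatch_directives(pred_mem_mb=3100, pred_time_min=95))
# #SBATCH --mem=3720M
# #SBATCH --time=02:03:00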
We used more than two hundred thousand jobs (one hundred thousand from RMACC Summit and one hundred thousand from Beocat) to build and assess our ML model's performance. In particular, we measured the improvement in running time, turnaround time, and average waiting time for the submitted jobs, and we measured the utilization of the HPC clusters.
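For reference, the scheduling metrics named above (waiting time = start minus submit, turnaround time = end minus submit) can be computed directly from sacct timestamps. The sketch below assumes a hypothetical CSV export of the Submit, Start, and End fields.

# Compute average waiting and turnaround time (in hours) from sacct-style timestamps.
import pandas as pd

jobs = pd.read_csv("sacct_export.csv", parse_dates=["Submit", "Start", "End"])  # hypothetical export

jobs["waiting_h"] = (jobs["Start"] - jobs["Submit"]).dt.total_seconds() / 3600
jobs["turnaround_h"] = (jobs["End"] - jobs["Submit"]).dt.total_seconds() / 3600

print("Average waiting time (h):   ", jobs["waiting_h"].mean())
print("Average turnaround time (h):", jobs["turnaround_h"].mean())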
Our model achieved up to 86% accuracy in predicting the amount of time and the amount of memory for both the RMACC Summit and Beocat HPC resources. Our results show that our model helps dramatically reduce the average waiting time of submitted jobs (from 380 hours to 4 hours on RMACC Summit and from 662 hours to 28 hours on Beocat), reduces the turnaround time (from 403 hours to 6 hours on RMACC Summit and from 673 hours to 35 hours on Beocat), and achieves up to 100% utilization for both HPC resources.