AIBench training: Balanced industry-standard AI training benchmarking

F. Tang, W. Gao, J. Zhan, C. Lan, X. Wen, et al. - 2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2021 - ieeexplore.ieee.org

Early-stage evaluations of a new AI architecture or system need affordable AI benchmarks. Using only a few AI component benchmarks such as MLPerf in the other stages may lead to misleading conclusions. Moreover, the learning dynamics are not well understood, and the benchmarks' shelf life is short. This paper proposes a balanced benchmarking methodology. We use real-world benchmarks to cover, to the greatest possible extent, the factor space that affects learning dynamics. After an exhaustive survey of Internet-service AI domains, we identify and implement nineteen representative AI tasks with state-of-the-art models. For repeatable performance ranking (the RPR subset) and workload characterization (the WC subset), we keep both subsets to a minimum for affordability. We contribute by far the most comprehensive AI training benchmark suite. The evaluations show: (1) AIBench Training (v1.1) outperforms MLPerf Training (v0.7) in the diversity and representativeness of model complexity, computational cost, convergence rate, computation and memory-access patterns, and hotspot functions; (2) compared with the full AIBench benchmarks, the RPR subset reduces benchmarking cost by 64% while maintaining the primary workload characteristics; (3) the performance ranking shows that a single-purpose AI accelerator such as the TPU, with the optimized TensorFlow framework, performs better than GPUs, while lacking the latter's general support for various AI models. The specification, source code, and performance numbers are available from the AIBench homepage: https://www.benchcouncil.org/aibench-training/index.html.
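
The abstract reports that the RPR subset preserves the full suite's primary workload characteristics at a fraction of the benchmarking cost. As a rough illustration only, and not the paper's actual selection procedure, the sketch below clusters workloads by characterization metrics and picks one representative per cluster; the feature values, the number of clusters, and the metric names are all assumptions, and in practice the features would come from measured characterization data (e.g., computation and memory-access patterns) rather than random numbers.

```python
# Illustrative sketch (not the paper's algorithm): derive a small "representative"
# benchmark subset from workload-characterization feature vectors.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical characterization matrix: 19 workloads x 4 metrics
# (e.g., FLOPs, memory-access intensity, epochs to converge, hotspot share).
rng = np.random.default_rng(0)
features = rng.random((19, 4))

# Normalize metrics so no single one dominates the distance measure.
scaled = StandardScaler().fit_transform(features)

# Cluster workloads by similarity of their characteristics.
k = 5  # target subset size (hypothetical)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scaled)

# Pick the workload closest to each cluster centroid as its representative.
subset = []
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(scaled[members] - km.cluster_centers_[c], axis=1)
    subset.append(int(members[np.argmin(dists)]))

print("Representative workload indices:", sorted(subset))
```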