AutoFL: Enabling heterogeneity-aware energy efficient federated learning

YG Kim, CJ Wu - MICRO-54: 54th Annual IEEE/ACM International …, 2021 - dl.acm.org
Federated learning enables a cluster of decentralized mobile devices at the edge to
collaboratively train a shared machine learning model, while keeping all the raw training …
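For context, the federated averaging pattern this snippet alludes to can be sketched as below. This is a generic, hypothetical FedAvg illustration (all function names are invented for this sketch), not AutoFL's actual algorithm:

```python
import numpy as np

# Illustrative FedAvg round: each client trains locally on private data,
# and only model weights (never raw data) are sent back for averaging.
# Hypothetical sketch; not AutoFL's implementation.

def local_update(weights, data, labels, lr=0.1):
    """One gradient-descent step on a linear model, run on-device."""
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_round(global_weights, clients):
    """Average locally trained weights; raw client data stays local."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in clients]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(5)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
```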

Ensemble neural networks for the development of storm surge flood modeling: A comprehensive review

SK Nezhad, M Barooni, D Velioglu Sogut… - Journal of Marine …, 2023 - mdpi.com
This review paper focuses on the use of ensemble neural networks (ENN) in the
development of storm surge flood models. Storm surges are a major concern in coastal …
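The core ensemble-neural-network idea the review covers is averaging independently trained models to reduce prediction variance. A minimal generic sketch (a toy regression task, not a storm surge model) using scikit-learn:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Generic ensemble-averaging sketch on synthetic data: train several
# independently initialized networks and average their predictions.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (300, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)

members = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=seed).fit(X, y)
           for seed in range(5)]

def ensemble_predict(X_new):
    """The ensemble output is the mean of the member predictions."""
    return np.mean([m.predict(X_new) for m in members], axis=0)

y_hat = ensemble_predict(X[:10])
```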

Hyperparameter sensitivity in deep outlier detection: Analysis and a scalable hyper-ensemble solution

X Ding, L Zhao, L Akoglu - Advances in Neural Information …, 2022 - proceedings.neurips.cc
The outlier detection (OD) literature offers numerous algorithms, as OD applies to diverse
domains. However, given a new detection task, it is unclear how to choose an algorithm to …
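The hyper-ensemble idea in the title can be illustrated generically: rather than committing to one hyperparameter setting, average anomaly scores over many settings. A sketch of that general idea (using IsolationForest as a stand-in detector; not the paper's method):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hyper-ensemble sketch: score data under many hyperparameter configs
# and average, sidestepping the choice of a single setting.

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(6, 1, (5, 3))])

configs = [{"n_estimators": n, "max_samples": s}
           for n in (50, 100, 200) for s in (0.5, 0.8, 1.0)]

# Lower score_samples means more anomalous; average over the ensemble.
scores = np.mean([IsolationForest(random_state=0, **cfg).fit(X).score_samples(X)
                  for cfg in configs], axis=0)
outliers = np.argsort(scores)[:5]  # indices of the 5 most anomalous points
```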

Retiarii: A deep learning exploratory-training framework

Q Zhang, Z Han, F Yang, Y Zhang, Z Liu… - … USENIX Symposium on …, 2020 - usenix.org
Traditional deep learning frameworks such as TensorFlow and PyTorch support training on
a single deep neural network (DNN) model, which involves computing the weights iteratively …

Nautilus: An optimized system for deep transfer learning over evolving training datasets

S Nakandala, A Kumar - … of the 2022 International Conference on …, 2022 - dl.acm.org
Deep learning (DL) has revolutionized unstructured data analytics. But in most cases, DL
needs massive labeled datasets and large compute clusters, which hinders its adoption …

Earthquake nowcasting with deep learning

GC Fox, JB Rundle, A Donnellan, B Feng - Geohazards, 2022 - mdpi.com
We review previous approaches to nowcasting earthquakes and introduce new deep learning
approaches using three distinct models built on recurrent neural networks and …

Understanding and optimizing packed neural network training for hyper-parameter tuning

R Liu, S Krishnan, AJ Elmore, MJ Franklin - … of the Fifth Workshop on Data …, 2021 - dl.acm.org
As neural networks are increasingly employed in machine learning practice, how to
efficiently share limited training resources among a diverse set of model training tasks …

Efficient supernet training using path parallelism

Y Xu, L Cheng, X Cai, X Ma, W Chen… - … Symposium on High …, 2023 - ieeexplore.ieee.org
Compared to conventional neural networks, training a supernet for Neural Architecture
Search (NAS) is very time consuming. Although current works have demonstrated that …

ESEN: Efficient GPU sharing of Ensemble Neural Networks

J Wang, Y Shi, Z Chen, M Wen - Neurocomputing, 2024 - Elsevier
Ensemble neural networks are widely applied in cloud-based inference services due to their
remarkable performance, while the growing demand for low-latency services leads …

Centimani: Enabling Fast AI Accelerator Selection for DNN Training with a Novel Performance Predictor

Z Xie, M Emani, X Yu, D Tao, X He, P Su… - 2024 USENIX Annual …, 2024 - zhen-xie.com
For an extended period, graphics processing units (GPUs) have stood as the exclusive
choice for training deep neural network (DNN) models. Over time, to serve the growing …