C Agarwal, D D'souza… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
In machine learning, a question of great interest is understanding what examples are challenging for a model to classify. Identifying atypical examples ensures the safe …
S Li, ECH Ngai, T Voigt - IEEE Transactions on Big Data, 2023 - ieeexplore.ieee.org
Byzantine-robust federated learning aims at mitigating Byzantine failures during the federated training process, where malicious participants (known as Byzantine clients) may …
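The core idea behind many Byzantine-robust schemes is to replace plain averaging of client updates with a robust aggregation rule. As a minimal illustration (not this paper's specific method), a coordinate-wise median tolerates a minority of arbitrarily corrupted updates:

```python
import numpy as np

def coordinate_median(updates):
    """Aggregate client updates with a coordinate-wise median,
    a standard Byzantine-robust alternative to plain averaging."""
    return np.median(np.stack(updates), axis=0)

# Honest clients send updates near the true gradient [1, 1];
# one Byzantine client sends an arbitrary large vector.
honest = [np.array([1.0, 1.1]),
          np.array([0.9, 1.0]),
          np.array([1.1, 0.9])]
byzantine = [np.array([100.0, -100.0])]

agg = coordinate_median(honest + byzantine)
# The aggregate stays close to [1, 1] despite the outlier,
# whereas a plain mean would be dragged toward the attack.
```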
Dataset distillation extracts a small set of synthetic training samples from a large dataset with the goal of achieving competitive performance on test data when trained on this sample. In …
Z Wang, Y Mao - arXiv preprint arXiv:2110.03128, 2021 - arxiv.org
This paper follows up on recent work of Neu et al. (2021) and presents new information-theoretic upper bounds for the generalization error of machine learning models …
As machine learning models are increasingly employed to assist human decision-makers, it becomes critical to communicate the uncertainty associated with these model predictions …
Supervised learning in deep neural networks is commonly performed using error backpropagation. However, the sequential propagation of errors during the backward pass …
H Qin, S Rajbhandari, O Ruwase… - Advances in Neural …, 2021 - proceedings.neurips.cc
Large scale training requires massive parallelism to finish the training within a reasonable amount of time. To support massive parallelism, large batch training is the key enabler but …
V Szolnoky, V Andersson, B Kulcsár… - Advances in Neural …, 2022 - proceedings.neurips.cc
Most complex machine learning and modelling techniques are prone to over-fitting and may subsequently generalise poorly to future data. Artificial neural networks are no different in …
Adaptive gradient methods, e.g., Adam, have achieved tremendous success in machine learning. Scaling the learning rate element-wise by a certain form of second …
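The element-wise second-moment scaling the snippet refers to can be sketched with a minimal NumPy implementation of the standard Adam update (a generic sketch, not this paper's contribution):

```python
import numpy as np

def adam_step(param, grad, m, v, t,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: the step is scaled element-wise by the
    square root of a bias-corrected second-moment estimate."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (running mean)
    v = beta2 * v + (1 - beta2) * grad**2    # second moment (running uncentered variance)
    m_hat = m / (1 - beta1**t)               # bias correction for zero init
    v_hat = v / (1 - beta2**t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Usage: minimize f(x) = x^2 starting from x = 5.
x = np.array([5.0])
m, v = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 2001):
    grad = 2.0 * x
    x, m, v = adam_step(x, grad, m, v, t, lr=0.1)
# x is driven close to the minimizer at 0.
```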