The shape of learning curves: a review

T Viering, M Loog - IEEE Transactions on Pattern Analysis and …, 2022 - ieeexplore.ieee.org
Learning curves provide insight into the dependence of a learner's generalization
performance on the training set size. This important tool can be used for model selection, to …

Efficient Hyperparameter Optimization with Adaptive Fidelity Identification

J Jiang, Z Wen, A Mansoor… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Hyperparameter Optimization and Neural Architecture Search are powerful tools for attaining
state-of-the-art machine learning models, with Bayesian Optimization (BO) standing …

Fast and informative model selection using learning curve cross-validation

F Mohr, JN van Rijn - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Common cross-validation (CV) methods like k-fold cross-validation or Monte Carlo cross-
validation estimate the predictive performance of a learner by repeatedly training it on a …

Automated machine learning: past, present and future

M Baratchi, C Wang, S Limmer, JN van Rijn… - Artificial Intelligence …, 2024 - Springer
Automated machine learning (AutoML) is a young research area aiming at making high-
performance machine learning techniques accessible to a broad set of users. This is …

Delegated classification

E Saig, I Talgam-Cohen… - Advances in Neural …, 2024 - proceedings.neurips.cc
When machine learning is outsourced to a rational agent, conflicts of interest might arise and
severely impact predictive performance. In this work, we propose a theoretical framework for …

Masif: Meta-learned algorithm selection using implicit fidelity information

T Ruhkopf, A Mohan, D Deng, A Tornede… - … on Machine Learning …, 2022 - openreview.net
Selecting a well-performing algorithm for a given task or dataset can be time-consuming and
tedious, but is crucial for the successful day-to-day business of developing new AI & ML …

The unreasonable effectiveness of early discarding after one epoch in neural network hyperparameter optimization

R Egele, F Mohr, T Viering, P Balaprakash - Neurocomputing, 2024 - Elsevier
To reach high performance with deep learning, hyperparameter optimization (HPO) is
essential. This process is usually time-consuming due to costly evaluations of neural …

Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization?

R Egele, I Guyon, Y Sun, P Balaprakash - arXiv preprint arXiv:2307.15422, 2023 - arxiv.org
Hyperparameter optimization (HPO) is crucial for fine-tuning machine learning models but
can be computationally expensive. To reduce costs, Multi-fidelity HPO (MF-HPO) leverages …

Also for k-means: more data does not imply better performance

M Loog, JH Krijthe, M Bicego - Machine Learning, 2023 - Springer
Arguably, a desirable feature of a learner is that its performance gets better with an
increasing amount of training data, at least in expectation. This issue has received renewed …

A survey of learning curves with bad behavior: or how more data need not lead to better performance

M Loog, T Viering - arXiv preprint arXiv:2211.14061, 2022 - arxiv.org
Plotting a learner's generalization performance against the training set size results in a so-
called learning curve. This tool, providing insight into the behavior of the learner, is also …