This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three different categories …
The remarkable progress in deep learning in recent years is largely driven by improvements in scale, where bigger models are trained on larger datasets for longer schedules. To predict …
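The snippet above concerns predicting performance from scale. As a point of reference only (this is a standard parametric scaling-law fit from the literature, not necessarily the form used in the truncated abstract), the loss of a model with N parameters trained on D tokens is often modeled as

\[
L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\]

where E, A, B, \alpha, \beta are constants fitted to empirical training runs; extrapolating such a fit is what lets performance at larger scales be predicted before those runs are carried out.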
When two different parties use the same learning rule on their own data, how can we test whether the distributions of the two outcomes are similar? In this paper, we study the …
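To make the question above concrete, here is a minimal Python sketch of one generic way to compare two outcome distributions: a permutation two-sample test on scalar summaries of the two learned models' outputs. The function name, the mean-difference statistic, and the scalar-summary assumption are illustrative choices, not the test studied in the truncated abstract.

import numpy as np

def permutation_two_sample_test(a, b, num_permutations=10_000, seed=0):
    # a, b: scalar summaries (e.g., per-example predictions) from the two parties' models.
    # Returns a p-value for the null hypothesis that both samples share one distribution.
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    exceed = 0
    for _ in range(num_permutations):
        rng.shuffle(pooled)                       # relabel the pooled sample at random
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        exceed += diff >= observed
    return (exceed + 1) / (num_permutations + 1)  # add-one smoothing avoids p = 0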
In this work, we aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting. Previous work had established the …
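As background for the snippet above, one common formalization of realizable regression (stated here with the absolute loss chosen purely for illustration; the abstract is truncated, so the paper's exact loss and guarantees may differ) requires that the data distribution \mathcal{D} over X \times Y is perfectly fit by some hypothesis in the class,

\[
\exists\, h^{*} \in \mathcal{H}: \quad \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\,|h^{*}(x) - y|\,\big] = 0,
\]

and the learner must output \hat{h} with \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\,|\hat{h}(x) - y|\,\big] \le \varepsilon from finitely many samples (PAC), or suffer low cumulative loss against a sequence of examples (online).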
We present new upper and lower bounds on the number of learner mistakes in the 'transductive' online learning setting of Ben-David, Kushilevitz and Mansour (1997). This …
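For context (a sketch of the transductive model as it is commonly stated, not a claim about the paper's exact formulation): the adversary first reveals the whole instance sequence x_1, \dots, x_n; then, for t = 1, \dots, n, the learner predicts \hat{y}_t and the true label y_t = h(x_t), for some target h \in \mathcal{H}, is revealed. The quantity being bounded is the worst-case mistake count

\[
M(\mathcal{H}, n) \;=\; \min_{\text{learner}} \; \max_{x_1,\dots,x_n} \; \max_{h \in \mathcal{H}} \; \big|\{\, t \le n : \hat{y}_t \neq h(x_t) \,\}\big|.
\]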
In this work we initiate the study of regression in the universal rates framework of Bousquet et al. Unlike the traditional uniform learning setting, we are interested in obtaining learning …
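The contrast drawn in the snippet above can be made precise as follows (a paraphrase of the standard universal-rates definition, included only as background): uniform learning asks for a single rate valid for every distribution simultaneously, whereas a class \mathcal{H} is universally learnable at rate R(n) if there is a learner \hat{h}_n such that

\[
\forall\, \mathcal{D} \;\; \exists\, C_{\mathcal{D}}, c_{\mathcal{D}} > 0: \quad \mathbb{E}\big[\mathrm{er}_{\mathcal{D}}(\hat{h}_n)\big] \;\le\; C_{\mathcal{D}}\, R(c_{\mathcal{D}}\, n) \quad \text{for all } n,
\]

so the shape of the rate is fixed but the constants may depend on the distribution.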
A. Mey and M. Loog, IEEE Transactions on Pattern Analysis and …, 2022
Semi-supervised learning is the learning setting in which we have both labeled and unlabeled data at our disposal. This survey covers theoretical results for this setting and …
Statistical learning theory is often associated with the principle of Occam's razor, which recommends a simplicity preference in inductive inference. This paper distills the core …
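A textbook instance of the simplicity preference mentioned above (given purely as an illustration; the truncated abstract may distill a different or more general statement) is the finite-class realizable bound: with probability at least 1 - \delta over m i.i.d. examples, every h \in \mathcal{H} consistent with the sample satisfies

\[
\mathrm{err}_{\mathcal{D}}(h) \;\le\; \frac{\ln|\mathcal{H}| + \ln(1/\delta)}{m},
\]

so smaller (simpler) hypothesis classes come with tighter generalization guarantees.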
When machine learning is outsourced to a rational agent, conflicts of interest might arise and severely impact predictive performance. In this work, we propose a theoretical framework for …