Label ranking algorithms: A survey

S Vembu, T Gärtner - Preference learning, 2010 - Springer
Label ranking is a complex prediction task where the goal is to map instances to a total order
over a finite set of predefined labels. An interesting aspect of this problem is that it subsumes …
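The entry above defines label ranking as mapping instances to a total order over a fixed label set. One common reduction, sketched below, scores each label and sorts; the linear scorer `W` is purely illustrative, not a method from the survey:

```python
import numpy as np

def rank_labels(x, W):
    """Score every label with a per-label linear model and return
    the labels sorted into a predicted total order, best first.
    W is a (num_labels, num_features) weight matrix -- an
    illustrative stand-in for any learned per-label scorer."""
    scores = W @ x
    return np.argsort(-scores)

# toy instance with 2 features, ranked over 3 labels
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x = np.array([0.2, 0.9])
order = rank_labels(x, W)  # a permutation of the label indices
```

Scoring-then-sorting is only one way to produce a total order; the survey also covers approaches that learn pairwise label preferences directly.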

Sampled softmax with random Fourier features

AS Rawat, J Chen, FXX Yu… - Advances in Neural …, 2019 - proceedings.neurips.cc
The computational cost of training with softmax cross entropy loss grows linearly with the
number of classes. For the settings where a large number of classes are involved, a …
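The snippet notes that full softmax cross entropy scales linearly with the number of classes; sampled softmax sidesteps this by contrasting the target against a small random subset. A minimal sketch under uniform sampling (where the usual `-log q(c)` proposal correction is constant and cancels), not the paper's Random-Fourier-feature sampler:

```python
import numpy as np

def sampled_softmax_loss(logit_fn, target, num_classes, num_samples, rng):
    """Cross entropy over {target} plus a uniform sample of negative
    classes, instead of all num_classes logits. Under uniform
    sampling the -log q(c) correction is identical for every class,
    so it cancels inside the softmax and is omitted here."""
    negatives = rng.choice(num_classes, size=num_samples, replace=False)
    negatives = negatives[negatives != target]        # drop accidental hits
    classes = np.concatenate(([target], negatives))
    logits = np.array([logit_fn(c) for c in classes])
    # the target sits at index 0 of the sampled logits
    return -logits[0] + np.log(np.exp(logits).sum())

rng = np.random.default_rng(0)
# with uniform logits the loss reduces to log(#sampled classes)
loss = sampled_softmax_loss(lambda c: 0.0, 0, 1000, 20, rng)
```

The paper's contribution is a better sampling distribution q; the sketch keeps q uniform only to stay self-contained.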

Bilinear exponential family of MDPs: frequentist regret bound with tractable exploration & planning

R Ouhamma, D Basu, O Maillard - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
We study the problem of episodic reinforcement learning in continuous state-action spaces
with unknown rewards and transitions. Specifically, we consider the setting where the …

Kernel exponential family estimation via doubly dual embedding

B Dai, H Dai, A Gretton, L Song… - The 22nd …, 2019 - proceedings.mlr.press
We investigate penalized maximum log-likelihood estimation for exponential family
distributions whose natural parameter resides in a reproducing kernel Hilbert space. Key to …
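The abstract concerns exponential family densities whose natural parameter lives in an RKHS, i.e. unnormalized log-density ⟨θ, k(x, ·)⟩. A toy sketch with θ expanded over a few RBF centers (centers and coefficients are illustrative, not the paper's dual-embedding estimator):

```python
import numpy as np

def unnormalized_log_density(x, centers, alpha, bandwidth=1.0):
    """log p(x) + log Z = <theta, k(x, .)>, with the RKHS natural
    parameter theta represented finitely as sum_i alpha_i k(c_i, .)
    over RBF kernel centers c_i."""
    sq_dists = ((x - centers) ** 2).sum(axis=1)
    k = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    return float(alpha @ k)

centers = np.array([[0.0], [1.0]])   # two 1-D kernel centers
alpha = np.array([1.0, 1.0])         # illustrative coefficients
val = unnormalized_log_density(np.array([0.0]), centers, alpha)
```

The hard part the paper tackles is the intractable log-partition function log Z, which this sketch deliberately leaves out.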

Improving dual-encoder training through dynamic indexes for negative mining

N Monath, M Zaheer, K Allen… - … Conference on Artificial …, 2023 - proceedings.mlr.press
Dual encoder models are ubiquitous in modern classification and retrieval. Crucial for
training such dual encoders is an accurate estimation of gradients from the partition function …

A fresh take on stale embeddings: improving dense retriever training with corrector networks

N Monath, W Grathwohl, M Boratko, R Fergus… - arXiv preprint arXiv …, 2024 - arxiv.org
In dense retrieval, deep encoders provide embeddings for both inputs and targets, and the
softmax function is used to parameterize a distribution over a large number of candidate …

EMC: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence

CY Yau, HT Wai, P Raman, S Sarkar… - arXiv preprint arXiv …, 2024 - arxiv.org
A key challenge in contrastive learning is to generate negative samples from a large sample
set to contrast with positive samples, so as to learn better encodings of the data. These negative …
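The snippet is about how negatives are drawn for contrastive learning. A minimal InfoNCE-style loss makes their role concrete; the fixed toy negatives below stand in for the MCMC-drawn samples the paper proposes:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss: pull the positive toward the anchor
    and push the sampled negatives away. `negatives` is an (n, d)
    array; how it is drawn (uniformly, from an index, via MCMC) is
    exactly the design question the work above studies."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = np.exp([cos(anchor, n) / tau for n in negatives]).sum()
    return float(-np.log(pos / (pos + neg)))

anchor = np.array([1.0, 0.0])
negatives = np.array([[0.0, 1.0], [0.0, -1.0]])  # toy negatives
loss = info_nce(anchor, anchor.copy(), negatives)  # small, since positive == anchor
```

Better negatives make the denominator a sharper approximation of the full partition function, which is the convergence question the paper analyzes.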

On structured output training: hard cases and an efficient alternative

T Gärtner, S Vembu - Machine Learning, 2009 - Springer
We consider a class of structured prediction problems for which the assumptions made by
state-of-the-art algorithms fail. To deal with exponentially sized output sets, these algorithms …

Statistical computational learning

A Cornuejols, F Koriche, R Nock - A Guided Tour of Artificial Intelligence …, 2020 - Springer
Statistical computational learning is the branch of Machine Learning that defines and
analyzes the performance of learning algorithms using two metrics: sample complexity and …

Active search in intensionally specified structured spaces

D Oglic, R Garnett, T Gärtner - Proceedings of the AAAI Conference on …, 2017 - ojs.aaai.org
We consider an active search problem in intensionally specified structured spaces. The
ultimate goal in this setting is to discover structures from structurally different partitions of a …