Robust preference optimization through reward model distillation

A Fisch, J Eisenstein, V Zayats, A Agarwal… - arXiv preprint arXiv …, 2024 - arxiv.org
Language model (LM) post-training (or alignment) involves maximizing a reward function
that is derived from preference annotations. Direct Preference Optimization (DPO) is a …
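The abstract is truncated in this listing. For context, a minimal sketch of the standard DPO objective from the general preference-optimization literature (not quoted from this paper; π_θ is the policy being trained, π_ref the reference policy, β a temperature, σ the logistic function, and (x, y_w, y_l) a prompt with preferred and dispreferred responses):

\[
\mathcal{L}_{\mathrm{DPO}}(\theta) \;=\; -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\!\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
\]

Minimizing this loss trains the policy to assign a higher implicit reward (the β-scaled log-ratio against the reference) to the preferred response than to the dispreferred one, without fitting an explicit reward model.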