Token-level Direct Preference Optimization

Y Zeng, G Liu, W Ma, N Yang, H Zhang… - arXiv preprint arXiv:2404.11999, 2024 - arxiv.org
Fine-tuning pre-trained Large Language Models (LLMs) is essential to align them with human values and intentions. This process often utilizes methods like pairwise comparisons and KL divergence against a reference LLM, focusing on the evaluation of full answers generated by the models. However, the generation of these responses occurs at the token level, in a sequential, auto-regressive fashion. In this paper, we introduce Token-level Direct Preference Optimization (TDPO), a novel approach to align LLMs with human preferences by optimizing the policy at the token level. Unlike previous methods, which face challenges in controlling divergence efficiently, TDPO incorporates forward KL divergence constraints for each token, improving alignment and diversity. By applying the Bradley-Terry model to a token-based reward system, TDPO improves the regulation of KL divergence while remaining simple, requiring no explicit reward modeling. Experimental results across various text tasks demonstrate TDPO's superior performance in balancing alignment with generation diversity. Notably, fine-tuning with TDPO strikes a better balance than DPO on controlled sentiment generation and single-turn dialogue datasets, and significantly improves the quality of generated responses compared to both DPO and PPO-based RLHF methods. Our code is open-sourced at https://github.com/Vance0124/Token-level-Direct-Preference-Optimization.
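
To make the objective described in the abstract concrete: under the Bradley-Terry model, the probability of preferring response y_w over y_l given a reward r is sigma(r(x, y_w) - r(x, y_l)), and DPO replaces the explicit reward with the log-ratio of policy to reference probabilities. The sketch below is a minimal PyTorch illustration of a token-level DPO-style loss that augments the standard DPO logit difference with a sequence-summed forward KL term, KL(pi_ref || pi_theta), computed at each token position. The function name, the alpha/beta hyperparameters, the stop-gradient placement, and the omission of padding masks are illustrative assumptions based only on the abstract, not the paper's exact formulation; the authors' open-source repository linked above is the authoritative reference.

```python
import torch
import torch.nn.functional as F

def tdpo_style_loss(policy_logits_w, ref_logits_w, tokens_w,
                    policy_logits_l, ref_logits_l, tokens_l,
                    beta=0.1, alpha=0.5):
    """Illustrative token-level preference loss (not the paper's exact objective):
    DPO implicit-reward difference plus a per-token forward-KL margin.

    logits: (batch, seq_len, vocab); tokens: (batch, seq_len).
    Padding masks are omitted for brevity.
    """
    def per_sequence_terms(logits, ref_logits, tokens):
        logp = F.log_softmax(logits, dim=-1)
        ref_logp = F.log_softmax(ref_logits, dim=-1)
        # log-prob of the observed token at each position, summed over the sequence
        tok_logp = logp.gather(-1, tokens.unsqueeze(-1)).squeeze(-1).sum(-1)
        ref_tok_logp = ref_logp.gather(-1, tokens.unsqueeze(-1)).squeeze(-1).sum(-1)
        # forward KL at each position, KL(pi_ref || pi_theta), summed over the sequence
        seq_kl = (ref_logp.exp() * (ref_logp - logp)).sum(-1).sum(-1)
        return tok_logp, ref_tok_logp, seq_kl

    lp_w, ref_lp_w, kl_w = per_sequence_terms(policy_logits_w, ref_logits_w, tokens_w)
    lp_l, ref_lp_l, kl_l = per_sequence_terms(policy_logits_l, ref_logits_l, tokens_l)

    # standard DPO implicit-reward difference between preferred and dispreferred answers
    reward_diff = beta * ((lp_w - ref_lp_w) - (lp_l - ref_lp_l))
    # sequential-KL margin; gradient is blocked through the preferred-response term
    kl_margin = alpha * (kl_l - kl_w.detach())
    return -F.logsigmoid(reward_diff - kl_margin).mean()
```

In practice one would also mask padding positions and accumulate the per-token terms only up to each response's actual length; the per-token KL term is what distinguishes this family of objectives from vanilla DPO, which constrains only the sequence-level log-ratio.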