Mitigating Hallucinations and Off-target Machine Translation with Source-Contrastive and Language-Contrastive Decoding

R. Sennrich, J. Vamvas, A. Mohammadshahi. arXiv preprint arXiv:2309.07098, 2023. arxiv.org
Hallucinations and off-target translation remain unsolved problems in machine translation, especially for low-resource languages and massively multilingual models. In this paper, we introduce methods to mitigate both failure cases with a modified decoding objective, without requiring retraining or external models. In source-contrastive decoding, we search for a translation that is probable given the correct input, but improbable given a random input segment, hypothesising that hallucinations will be similarly probable given either. In language-contrastive decoding, we search for a translation that is probable, but improbable given the wrong language indicator token. In experiments on M2M-100 (418M) and SMaLL-100, we find that these methods effectively suppress hallucinations and off-target translations, improving chrF2 by 1.7 and 1.4 points on average across 57 tested translation directions. In a proof of concept on English--German, we also show that we can suppress off-target translations with the Llama 2 chat models, demonstrating the applicability of the method to machine translation with LLMs. We release our source code at https://github.com/ZurichNLP/ContraDecode.
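The source-contrastive objective described above can be illustrated with a toy reranking sketch: each candidate translation is scored by its log-probability given the true source, minus a weighted log-probability given a random source segment. The function names, the example scores, and the weight `lam` are hypothetical illustrations, not values from the paper.

```python
def contrastive_score(logprob_given_source: float,
                      logprob_given_random: float,
                      lam: float = 0.5) -> float:
    """Sketch of a source-contrastive decoding score.

    Reward translations that are probable given the correct input but
    improbable given a random input segment. `lam` is a hypothetical
    interpolation weight, not taken from the paper.
    """
    return logprob_given_source - lam * logprob_given_random

# Toy reranking of two candidates. A hallucination is, by hypothesis,
# similarly probable under either source, so the contrastive penalty
# pushes it below a faithful translation.
faithful = contrastive_score(-2.0, -9.0)       # probable only given the true source
hallucination = contrastive_score(-2.5, -2.6)  # similarly probable under both sources

candidates = [("faithful", faithful), ("hallucination", hallucination)]
best = max(candidates, key=lambda pair: pair[1])
print(best[0])  # → faithful
```

Language-contrastive decoding follows the same pattern, with the second term conditioned on the wrong language indicator token instead of a random source segment.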