In-context examples selection for machine translation

S Agrawal, C Zhou, M Lewis, L Zettlemoyer… - arXiv preprint arXiv …, 2022 - arxiv.org
Large-scale generative models show an impressive ability to perform a wide range of
Natural Language Processing (NLP) tasks using in-context learning, where a few examples …
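The demonstration-selection idea this line of work studies can be sketched roughly as below: retrieve the parallel pairs most similar to the input and format them as a few-shot translation prompt. The embedding function, the pool format, and the English-to-French template are illustrative assumptions, not the method of any specific paper listed here.

```python
import numpy as np

def select_in_context_examples(source, pool, embed, k=3):
    """Pick the k parallel pairs whose source side is most similar
    to the input, by cosine similarity under a given embedding.

    source: input sentence to translate
    pool:   list of (src, tgt) parallel example pairs
    embed:  function mapping a sentence to a numpy vector (assumed)
    """
    q = embed(source)
    sims = []
    for s, _ in pool:
        v = embed(s)
        sims.append(float(np.dot(q, v) /
                          (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)))
    top = np.argsort(sims)[::-1][:k]
    return [pool[i] for i in top]

def build_prompt(source, examples):
    """Format the retrieved pairs as a few-shot translation prompt."""
    lines = [f"English: {s}\nFrench: {t}" for s, t in examples]
    lines.append(f"English: {source}\nFrench:")
    return "\n\n".join(lines)
```

The prompt ends at "French:" so a generative model completes it with the translation; any similarity function (BM25, sentence embeddings) can stand in for `embed`.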

A survey on retrieval-augmented text generation

H Li, Y Su, D Cai, Y Wang, L Liu - arXiv preprint arXiv:2202.01110, 2022 - arxiv.org
Recently, retrieval-augmented text generation has attracted increasing attention from the
computational linguistics community. Compared with conventional generation models …

Domain adaptation and multi-domain adaptation for neural machine translation: A survey

D Saunders - Journal of Artificial Intelligence Research, 2022 - jair.org
The development of deep learning techniques has allowed Neural Machine Translation
(NMT) models to become extremely powerful, given sufficient training data and training time …

Dictionary-based phrase-level prompting of large language models for machine translation

M Ghazvininejad, H Gonen, L Zettlemoyer - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) demonstrate remarkable machine translation (MT) abilities
via prompting, even though they were not explicitly trained for this task. However, even given …

Efficient cluster-based k-nearest-neighbor machine translation

D Wang, K Fan, B Chen, D Xiong - arXiv preprint arXiv:2204.06175, 2022 - arxiv.org
k-Nearest-Neighbor Machine Translation (kNN-MT) has been recently proposed as a non-
parametric solution for domain adaptation in neural machine translation (NMT). It aims to …

Improving few-shot performance of language models via nearest neighbor calibration

F Nie, M Chen, Z Zhang, X Cheng - arXiv preprint arXiv:2212.02216, 2022 - arxiv.org
Pre-trained language models (PLMs) have exhibited remarkable few-shot learning
capabilities when provided a few examples in a natural language prompt as demonstrations …

TIM: Teaching large language models to translate with comparison

J Zeng, F Meng, Y Yin, J Zhou - arXiv preprint arXiv:2307.04408, 2023 - arxiv.org
Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in
various tasks with instruction tuning. However, these models can sometimes struggle with …

Chunk-based nearest neighbor machine translation

PH Martins, Z Marinho, AFT Martins - arXiv preprint arXiv:2205.12230, 2022 - arxiv.org
Semi-parametric models, which augment generation with retrieval, have led to impressive
results in language modeling and machine translation, due to their ability to retrieve fine …

Towards robust k-nearest-neighbor machine translation

H Jiang, Z Lu, F Meng, C Zhou, J Zhou… - arXiv preprint arXiv …, 2022 - arxiv.org
k-Nearest-Neighbor Machine Translation (kNN-MT) has become an important research
direction in NMT in recent years. Its main idea is to retrieve useful key-value pairs from an …
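The retrieve-and-interpolate step shared by the kNN-MT papers above can be sketched as follows: look up the k datastore keys nearest the decoder's hidden state, turn their distances into a distribution over the paired target tokens, and mix it with the base model's distribution. The distance temperature and interpolation weight are illustrative assumptions, not values from any specific paper listed here.

```python
import numpy as np

def knn_mt_distribution(query, keys, values, model_probs,
                        k=4, temperature=10.0, lam=0.5):
    """Sketch of kNN-MT's next-token distribution.

    query:       decoder hidden state at the current step, shape (d,)
    keys:        datastore keys (hidden states from parallel data), shape (N, d)
    values:      target-token ids paired with each key, shape (N,)
    model_probs: base NMT model's next-token distribution, shape (V,)
    """
    # L2 distances from the query to every datastore key
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]

    # Softmax over negative distances gives the retrieval distribution
    logits = -dists[nearest] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Scatter the retrieval mass onto the vocabulary
    knn_probs = np.zeros_like(model_probs)
    for w, idx in zip(weights, nearest):
        knn_probs[values[idx]] += w

    # Interpolate the non-parametric and parametric distributions
    return lam * knn_probs + (1 - lam) * model_probs
```

In practice the exhaustive distance computation is replaced by an approximate index (e.g. FAISS); the cluster-based and chunk-based variants above restructure exactly this lookup to make it cheaper.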

Simple and scalable nearest neighbor machine translation

Y Dai, Z Zhang, Q Liu, Q Cui, W Li, Y Du… - arXiv preprint arXiv …, 2023 - arxiv.org
$k$NN-MT is a straightforward yet powerful approach for fast domain adaptation, which
directly plugs pre-trained neural machine translation (NMT) models with domain-specific …