Error analysis prompting enables human-like translation evaluation in large language models: A case study on ChatGPT

Q Lu, B Qiu, L Ding, L Xie, D Tao - 2023 - preprints.org
Generative large language models (LLMs), e.g., ChatGPT, have demonstrated remarkable
proficiency across several NLP tasks such as machine translation, question answering, text …

Findings of the IWSLT 2022 Evaluation Campaign.

A Anastasopoulos, L Barrault, L Bentivogli… - Proceedings of the 19th …, 2022 - cris.fbk.eu
The evaluation campaign of the 19th International Conference on Spoken Language
Translation featured eight shared tasks: (i) Simultaneous speech translation, (ii) Offline …

Improving neural machine translation by bidirectional training

L Ding, D Wu, D Tao - arXiv preprint arXiv:2109.07780, 2021 - arxiv.org
We present a simple and effective pretraining strategy, bidirectional training (BiT), for neural
machine translation. Specifically, we bidirectionally update the model parameters at the …

Toward efficient language model pretraining and downstream adaptation via self-evolution: A case study on SuperGLUE

Q Zhong, L Ding, Y Zhan, Y Qiao, Y Wen… - arXiv preprint arXiv …, 2022 - arxiv.org
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the
SuperGLUE leaderboard. SuperGLUE is more challenging than the widely used general …

Dynamic contrastive distillation for image-text retrieval

J Rao, L Ding, S Qi, M Fang, Y Liu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
The recent advancement in vision-and-language pretraining (VLP) has significantly
improved the performance of cross-modal image-text retrieval (ITR) systems. However, the …

Divide, conquer, and combine: Mixture of semantic-independent experts for zero-shot dialogue state tracking

Q Wang, L Ding, Y Cao, Y Zhan, Z Lin, S Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Zero-shot transfer learning for Dialogue State Tracking (DST) helps to handle a variety of
task-oriented dialogue domains without the cost of collecting in-domain data. Existing works …

Improving neural machine translation by denoising training

L Ding, K Peng, D Tao - arXiv preprint arXiv:2201.07365, 2022 - arxiv.org
We present a simple and effective pretraining strategy, Denoising Training (DoT), for
neural machine translation. Specifically, we update the model parameters with source- and …

Bag of tricks for effective language model pretraining and downstream adaptation: A case study on GLUE

Q Zhong, L Ding, K Peng, J Liu, B Du, L Shen… - arXiv preprint arXiv …, 2023 - arxiv.org
This technical report briefly describes our JDExplore d-team's submission Vega v1 on the
General Language Understanding Evaluation (GLUE) leaderboard, where GLUE is a …

Improving simultaneous machine translation with monolingual data

H Deng, L Ding, X Liu, M Zhang, D Tao… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Simultaneous machine translation (SiMT) is usually done via sequence-level knowledge
distillation (Seq-KD) from a full-sentence neural machine translation (NMT) model. However …

Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining?

F Wang, L Ding, J Rao, Y Liu, L Shen… - arXiv preprint arXiv …, 2023 - arxiv.org
The multimedia community has shown a significant interest in perceiving and representing
the physical world with multimodal pretrained neural network models, and among them, the …