| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| Slotrefine: A fast non-autoregressive model for joint intent detection and slot filling | D Wu, L Ding, F Lu, J Xie | EMNLP 2020 | 85 | 2020 |
| Localness matters: The evolved cross-attention for non-autoregressive translation | L Ding, D Wu, L Wang, D Tao, Z Tu | COLING 2020 | 72* | 2020 |
| Improving Neural Machine Translation by Bidirectional Training | L Ding, D Wu, D Tao | EMNLP 2021 | 56 | 2021 |
| The USYD-JD Speech Translation System for IWSLT 2021 | L Ding, D Wu, D Tao | IWSLT 2021 @ACL (winning system) | 39* | 2021 |
| SLUA: A Super Lightweight Unsupervised Word Alignment Model via Cross-Lingual Contrastive Learning | D Wu, L Ding, S Yang, D Tao | arXiv 2021 | 16* | 2021 |
| Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding | D Wu, Y Chen, L Ding, D Tao | arXiv 2021 | 14 | 2021 |
| Original or translated? On the use of parallel data for translation quality estimation | B Qiu, L Ding, D Wu, L Shang, Y Zhan, D Tao | arXiv preprint arXiv:2212.10257 | 7 | 2022 |
| Meta-task prompting elicits embedding from large language models | Y Lei, D Wu, T Zhou, T Shen, Y Cao, C Tao, A Yates | arXiv preprint arXiv:2402.18458 | 5 | 2024 |
| Flow Matching for Conditional Text Generation in a Few Sampling Steps | V Hu, D Wu, Y Asano, P Mettes, B Fernando, B Ommer, C Snoek | Proceedings of the 18th Conference of the European Chapter of the … | 2 | 2024 |
| UvA-MT's Participation in the WMT 2023 General Translation Shared Task | D Wu, S Tan, D Stap, A Araabi, C Monz | Proceedings of the Eighth Conference on Machine Translation (WMT), Singapore … | 2 | 2023 |
| How Far can 100 Samples Go? Unlocking Zero-Shot Translation with Tiny Multi-Parallel Data | D Wu, S Tan, Y Meng, D Stap, C Monz | Findings of the Association for Computational Linguistics: ACL 2024, 15092-15108 | | 2024 |
| How to Learn in a Noisy World? Self-Correcting the Real-World Data Noise on Machine Translation | Y Meng, D Wu, C Monz | arXiv preprint arXiv:2407.02208 | | 2024 |
| Neuron Specialization: Leveraging intrinsic task modularity for multilingual machine translation | S Tan, D Wu, C Monz | arXiv preprint arXiv:2404.11201 | | 2024 |
| How Far Can 100 Samples Go? Unlocking Overall Zero-Shot Multilingual Translation via Tiny Multi-Parallel Data | D Wu, S Tan, Y Meng, D Stap, C Monz | arXiv preprint arXiv:2401.12413 | | 2024 |
| Beyond Shared Vocabulary: Increasing Representational Word Similarities across Languages for Multilingual Machine Translation | D Wu, C Monz | EMNLP 2023 | | 2023 |

(* denotes a merged citation count across versions of the same paper.)