The evaluation campaign of the 19th International Conference on Spoken Language Translation featured eight shared tasks: (i) Simultaneous speech translation, (ii) Offline …
L Ding, D Wu, D Tao - arXiv preprint arXiv:2109.07780, 2021 - arxiv.org
We present a simple and effective pretraining strategy, bidirectional training (BiT), for neural machine translation. Specifically, we bidirectionally update the model parameters at the …
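A minimal sketch of the data-level view of bidirectional training follows. The pairing scheme is an illustrative assumption rather than the paper's exact recipe: the parallel corpus is pooled with its reversed pairs, the model is pretrained on the union, and then finetuned on the original direction.

```python
def make_bidirectional(pairs):
    """Given (source, target) sentence pairs, return a doubled
    corpus containing both translation directions."""
    both = []
    for src, tgt in pairs:
        both.append((src, tgt))  # original direction: src -> tgt
        both.append((tgt, src))  # reversed direction: tgt -> src
    return both

corpus = [("ein Haus", "a house"), ("ein Hund", "a dog")]
# Pretrain on the 2x corpus covering both directions, then finetune
# on `corpus` alone for the direction of interest.
pretrain_corpus = make_bidirectional(corpus)
```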
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard. SuperGLUE is more challenging than the widely used general …
J Rao, L Ding, S Qi, M Fang, Y Liu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
The recent advancement in vision-and-language pretraining (VLP) has significantly improved the performance of cross-modal image-text retrieval (ITR) systems. However, the …
Zero-shot transfer learning for Dialogue State Tracking (DST) helps to handle a variety of task-oriented dialogue domains without the cost of collecting in-domain data. Existing works …
L Ding, K Peng, D Tao - arXiv preprint arXiv:2201.07365, 2022 - arxiv.org
We present a simple and effective pretraining strategy, Denoising Training (DoT), for neural machine translation. Specifically, we update the model parameters with source- and …
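Below is a hedged sketch of one common denoising perturbation, random token masking; the specific noise functions and how DoT applies them to the source and target streams are assumptions for illustration, not the paper's recipe.

```python
import random

def mask_tokens(tokens, mask_token="<mask>", p=0.15, seed=None):
    """Replace each token with a mask symbol with probability p."""
    rng = random.Random(seed)
    return [mask_token if rng.random() < p else t for t in tokens]

src = "the quick brown fox".split()
noisy_src = mask_tokens(src, seed=0)
# A denoising objective trains the model to reconstruct `src` from
# `noisy_src` (and analogously on the target side) before standard
# translation training.
```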
This technical report briefly describes our JDExplore d-team's submission Vega v1 on the General Language Understanding Evaluation (GLUE) leaderboard, where GLUE is a …
Simultaneous machine translation (SiMT) is usually done via sequence-level knowledge distillation (Seq-KD) from a full-sentence neural machine translation (NMT) model. However …
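For reference, a minimal sketch of vanilla sequence-level knowledge distillation (Seq-KD) as the snippet describes it: the student is trained on the full-sentence teacher's beam-search outputs instead of the reference targets. The `teacher.translate` interface is hypothetical, standing in for any full-sentence NMT model.

```python
def build_distilled_corpus(teacher, sources, beam_size=5):
    """Re-label each source sentence with the teacher's 1-best
    beam-search translation."""
    distilled = []
    for src in sources:
        hypo = teacher.translate(src, beam=beam_size)  # teacher 1-best
        distilled.append((src, hypo))
    return distilled

# The simultaneous (SiMT) student is then trained on `distilled`
# rather than the original (source, reference) pairs.
```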
The multimedia community has shown a significant interest in perceiving and representing the physical world with multimodal pretrained neural network models, and among them, the …