TinyBERT: Distilling BERT for Natural Language Understanding X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang, Q Liu arXiv preprint arXiv:1909.10351, 2019 | 1715 | 2019 |
ERNIE: Enhanced Language Representation with Informative Entities Z Zhang, X Han, Z Liu, X Jiang, M Sun, Q Liu arXiv preprint arXiv:1905.07129, 2019 | 1589 | 2019 |
FILIP: Fine-grained Interactive Language-Image Pre-Training L Yao, R Huang, L Hou, G Lu, M Niu, H Xu, X Liang, Z Li, X Jiang, C Xu arXiv preprint arXiv:2111.07783, 2021 | 452 | 2021 |
DynaBERT: Dynamic BERT with Adaptive Width and Depth L Hou, Z Huang, L Shang, X Jiang, X Chen, Q Liu Advances in Neural Information Processing Systems 33, 2020 | 273 | 2020 |
Paraphrase Generation with Deep Reinforcement Learning Z Li, X Jiang, L Shang, H Li arXiv preprint arXiv:1711.00279, 2017 | 259 | 2017 |
Neural Generative Question Answering J Yin, X Jiang, Z Lu, L Shang, H Li, X Li arXiv preprint arXiv:1512.01337, 2015 | 254 | 2015 |
A Ranking Approach to Keyphrase Extraction X Jiang, Y Hu, H Li Proceedings of the 32nd International ACM SIGIR Conference on Research and …, 2009 | 207 | 2009 |
BinaryBERT: Pushing the Limit of BERT Quantization H Bai, W Zhang, L Hou, L Shang, J Jin, X Jiang, Q Liu, M Lyu, I King arXiv preprint arXiv:2012.15701, 2020 | 194 | 2020 |
PanGu-α: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation W Zeng, X Ren, T Su, H Wang, Y Liao, Z Wang, X Jiang, ZZ Yang, K Wang, ... arXiv preprint arXiv:2104.12369, 2021 | 191 | 2021 |
Affective Neural Response Generation N Asghar, P Poupart, J Hoey, X Jiang, L Mou European Conference on Information Retrieval, 154-166, 2018 | 188 | 2018 |
TernaryBERT: Distillation-aware Ultra-low Bit BERT W Zhang, L Hou, Y Yin, L Shang, X Chen, X Jiang, Q Liu arXiv preprint arXiv:2009.12812, 2020 | 180 | 2020 |
Aligning Large Language Models with Human: A Survey Y Wang, W Zhong, L Li, F Mi, X Zeng, W Huang, L Shang, X Jiang, Q Liu arXiv preprint arXiv:2307.12966, 2023 | 158 | 2023 |
Integrating Graph Contextualized Knowledge into Pre-trained Language Models B He, D Zhou, J Xiao, Q Liu, NJ Yuan, T Xu arXiv preprint arXiv:1912.00147, 2019 | 149 | 2019 |
On Position Embeddings in BERT B Wang, L Shang, C Lioma, X Jiang, H Yang, Q Liu, JG Simonsen International Conference on Learning Representations, 2020 | 133 | 2020 |
NEZHA: Neural Contextualized Representation for Chinese Language Understanding J Wei, X Ren, X Li, W Huang, Y Liao, Y Wang, J Lin, X Jiang, X Chen, ... arXiv preprint arXiv:1909.00204, 2019 | 121 | 2019 |
SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation X Wang, Y Wang, F Mi, P Zhou, Y Wan, X Liu, L Li, H Wu, J Liu, ... | 101 | 2021 |
Decomposable Neural Paraphrase Generation Z Li, X Jiang, L Shang, Q Liu arXiv preprint arXiv:1906.09741, 2019 | 98 | 2019 |
Generate & Rank: A Multi-task Framework for Math Word Problems J Shen, Y Yin, L Li, L Shang, X Jiang, M Zhang, Q Liu arXiv preprint arXiv:2109.03034, 2021 | 95 | 2021 |
Comparison of MISR Aerosol Optical Thickness with AERONET Measurements in Beijing Metropolitan Area X Jiang, Y Liu, B Yu, M Jiang Remote Sensing of Environment 107 (1-2), 45-53, 2007 | 88 | 2007 |
Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation W Dai, L Hou, L Shang, X Jiang, Q Liu, P Fung arXiv preprint arXiv:2203.06386, 2022 | 78 | 2022 |