ERNIE: Enhanced representation through knowledge integration Y Sun, S Wang, Y Li, S Feng, X Chen, H Zhang, X Tian, D Zhu, H Tian, ... arXiv preprint arXiv:1904.09223, 2019 | 1065 | 2019 |
ERNIE 2.0: A continual pre-training framework for language understanding Y Sun, S Wang, Y Li, S Feng, H Tian, H Wu, H Wang Proceedings of the AAAI conference on artificial intelligence 34 (05), 8968-8975, 2020 | 841 | 2020 |
ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation Y Sun, S Wang, S Feng, S Ding, C Pang, J Shang, J Liu, X Chen, Y Zhao, ... arXiv preprint arXiv:2107.02137, 2021 | 372 | 2021 |
ERNIE-ViL: Knowledge enhanced vision-language representations through scene graphs F Yu, J Tang, W Yin, Y Sun, H Tian, H Wu, H Wang Proceedings of the AAAI conference on artificial intelligence 35 (4), 3208-3216, 2021 | 364 | 2021 |
Multi-view response selection for human-computer conversation X Zhou, D Dong, H Wu, S Zhao, D Yu, H Tian, X Liu, R Yan Proceedings of the 2016 conference on empirical methods in natural language …, 2016 | 271 | 2016 |
SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis H Tian, C Gao, X Xiao, H Liu, B He, H Wu, H Wang, F Wu arXiv preprint arXiv:2005.05635, 2020 | 248 | 2020 |
ERNIE-GEN: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation D Xiao, H Zhang, Y Li, Y Sun, H Tian, H Wu, H Wang arXiv preprint arXiv:2001.11314, 2020 | 138 | 2020 |
ERNIE-M: Enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora X Ouyang, S Wang, C Pang, Y Sun, H Tian, H Wu, H Wang arXiv preprint arXiv:2012.15674, 2020 | 94 | 2020 |
ERNIE-ViLG 2.0: Improving text-to-image diffusion model with knowledge-enhanced mixture-of-denoising-experts Z Feng, Z Zhang, X Yu, Y Fang, L Li, X Chen, Y Lu, J Liu, W Yin, S Feng, ... Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023 | 93 | 2023 |
Delving into the devils of bird's-eye-view perception: A review, evaluation and recipe H Li, C Sima, J Dai, W Wang, L Lu, H Wang, J Zeng, Z Li, J Yang, H Deng, ... IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023 | 87 | 2023 |
ERNIE 3.0 Titan: Exploring larger-scale knowledge enhanced pre-training for language understanding and generation S Wang, Y Sun, Y Xiang, Z Wu, S Ding, W Gong, S Feng, J Shang, Y Zhao, ... arXiv preprint arXiv:2112.12731, 2021 | 71 | 2021 |
Investigating the factual knowledge boundary of large language models with retrieval augmentation R Ren, Y Wang, Y Qu, WX Zhao, J Liu, H Tian, H Wu, JR Wen, H Wang arXiv preprint arXiv:2307.11019, 2023 | 59 | 2023 |
ERNIE-Doc: A retrospective long-document modeling transformer S Ding, J Shang, S Wang, Y Sun, H Tian, H Wu, H Wang arXiv preprint arXiv:2012.15688, 2020 | 59 | 2020 |
Ghost in the Minecraft: Generally capable agents for open-world environments via large language models … X Zhu, Y Chen, H Tian, C Tao, W Su, C Yang, G Huang, B Li, L Lu, X Wang, ... arXiv preprint arXiv:2305.17144, 2023 | 56 | 2023 |
ERNIE-Layout: Layout knowledge enhanced pre-training for visually-rich document understanding Q Peng, Y Pan, W Wang, B Luo, Z Zhang, Z Huang, T Hu, W Yin, Y Chen, ... arXiv preprint arXiv:2210.06155, 2022 | 55 | 2022 |
How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites Z Chen, W Wang, H Tian, S Ye, Z Gao, E Cui, W Tong, K Hu, J Luo, Z Ma, ... arXiv preprint arXiv:2404.16821, 2024 | 51 | 2024 |
ERNIE-Search: Bridging cross-encoder with dual-encoder via self on-the-fly distillation for dense passage retrieval Y Lu, Y Liu, J Liu, Y Shi, Z Huang, S Feng, Y Sun, H Tian, H Wu, S Wang, D Yin, ... arXiv preprint arXiv:2205.09153, 2022 | 49 | 2022 |
Artificial intelligence for prosthetics: Challenge solutions Ł Kidziński, C Ong, SP Mohanty, J Hicks, S Carroll, B Zhou, H Zeng, ... The NeurIPS'18 Competition: From Machine Learning to Intelligent …, 2020 | 47 | 2020 |
ERNIE-Gram: Pre-training with explicitly n-gram masked language modeling for natural language understanding D Xiao, YK Li, H Zhang, Y Sun, H Tian, H Wu, H Wang arXiv preprint arXiv:2010.12148, 2020 | 45 | 2020 |
Policy learning for domain selection in an extensible multi-domain spoken dialogue system Z Wang, H Chen, G Wang, H Tian, H Wu, H Wang Proceedings of the 2014 Conference on Empirical Methods in Natural Language …, 2014 | 45 | 2014 |