Graph neural networks: A review of methods and applications. J Zhou, G Cui, S Hu, Z Zhang, C Yang, Z Liu, L Wang, C Li, M Sun. AI Open 1, 57-81, 2020. Cited by 5579.
ERNIE: Enhanced Language Representation with Informative Entities. Z Zhang, X Han, Z Liu, X Jiang, M Sun, Q Liu. ACL 2019. Cited by 1566.
Pre-trained models: Past, present and future. X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu, Y Yao, A Zhang, ... AI Open 2, 225-250, 2021. Cited by 634.
KEPLER: A unified model for knowledge embedding and pre-trained language representation. X Wang, T Gao, Z Zhu, Z Zhang, Z Liu, J Li, J Tang. TACL, 2019. Cited by 602.
CPT: Colorful prompt tuning for pre-trained vision-language models. Y Yao, A Zhang, Z Zhang, Z Liu, TS Chua, M Sun. arXiv preprint arXiv:2109.11797, 2021. Cited by 206.
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger. F Qi, M Li, Y Chen, Z Zhang, Z Liu, Y Wang, M Sun. arXiv preprint arXiv:2105.12400, 2021. Cited by 153.
A unified framework for community detection and network representation learning. C Tu, X Zeng, H Wang, Z Zhang, Z Liu, M Sun, B Zhang, L Lin. IEEE Transactions on Knowledge and Data Engineering 31 (6), 1051-1065, 2018. Cited by 118.
CPM: A large-scale generative Chinese pre-trained language model. Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, ... AI Open 2, 93-99, 2021. Cited by 107.
TransNet: Translation-based network representation learning for social relation extraction. C Tu, Z Zhang, Z Liu, M Sun. IJCAI, 2864-2870, 2017. Cited by 89.
CPM-2: Large-scale cost-effective pre-trained language models. Z Zhang, Y Gu, X Han, S Chen, C Xiao, Z Sun, Y Yao, F Qi, J Guan, P Ke, ... AI Open 2, 216-224, 2021. Cited by 80.
MoEfication: Transformer feed-forward layers are mixtures of experts. Z Zhang, Y Lin, Z Liu, P Li, M Sun, J Zhou. Findings of ACL 2022. Cited by 79*.
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks. Z Zhang, G Xiao, Y Li, T Lv, F Qi, Z Liu, Y Wang, X Jiang, M Sun. arXiv preprint arXiv:2101.06969, 2021. Cited by 72.
Train No Evil: Selective Masking for Task-guided Pre-training. Y Gu, Z Zhang, X Wang, Z Liu, M Sun. arXiv preprint arXiv:2004.09733, 2020. Cited by 56.
CokeBERT: Contextual knowledge selection and embedding towards enhanced pre-trained language models. Y Su, X Han, Z Zhang, Y Lin, P Li, Z Liu, J Zhou, M Sun. AI Open 2, 127-134, 2021. Cited by 55.
Open Chinese language pre-trained model zoo. H Zhong, Z Zhang, Z Liu, M Sun. Technical report, 2019. Cited by 54.
Finding Skill Neurons in Pre-trained Transformer-based Language Models. X Wang, K Wen, Z Zhang, L Hou, Z Liu, J Li. EMNLP 2022. Cited by 50.
Knowledge Inheritance for Pre-trained Language Models. Y Qin, Y Lin, J Yi, J Zhang, X Han, Z Zhang, Y Su, Z Liu, P Li, M Sun, ... arXiv preprint arXiv:2105.13880, 2021. Cited by 46.
A unified understanding of deep NLP models for text classification. Z Li, X Wang, W Yang, J Wu, Z Zhang, Z Liu, M Sun, H Zhang, S Liu. IEEE Transactions on Visualization and Computer Graphics 28 (12), 4980-4994, 2022. Cited by 35.
Prompt tuning for discriminative pre-trained language models. Y Yao, B Dong, A Zhang, Z Zhang, R Xie, Z Liu, L Lin, M Sun, J Wang. Findings of ACL 2022. Cited by 25.
Know what you don't need: Single-Shot Meta-Pruning for attention heads. Z Zhang, F Qi, Z Liu, Q Liu, M Sun. AI Open 2, 36-42, 2021. Cited by 25.