Parameter-efficient fine-tuning of large-scale pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, et al. Nature Machine Intelligence 5 (3), 220-235, 2023. Cited by 273.
OpenPrompt: An open-source framework for prompt-learning. N Ding, S Hu, W Zhao, Y Chen, Z Liu, HT Zheng, M Sun. arXiv preprint arXiv:2111.01998, 2021. Cited by 229.
Few-NERD: A few-shot named entity recognition dataset. N Ding, G Xu, Y Chen, X Wang, X Han, P Xie, HT Zheng, Z Liu. arXiv preprint arXiv:2105.07464, 2021. Cited by 182.
Delta tuning: A comprehensive study of parameter-efficient methods for pre-trained language models. N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, et al. arXiv preprint arXiv:2203.06904, 2022. Cited by 181.
Enhancing chat language models by scaling high-quality instructional conversations. N Ding, Y Chen, B Xu, Y Qin, Z Zheng, S Hu, Z Liu, M Sun, B Zhou. arXiv preprint arXiv:2305.14233, 2023. Cited by 141.
Prompt-learning for fine-grained entity typing. N Ding, Y Chen, X Han, G Xu, P Xie, HT Zheng, Z Liu, J Li, HG Kim. arXiv preprint arXiv:2108.10604, 2021. Cited by 125.
MAVEN-ERE: A unified large-scale dataset for event coreference, temporal, causal, and subevent relation extraction. X Wang, Y Chen, N Ding, H Peng, Z Wang, Y Lin, X Han, L Hou, J Li, Z Liu, et al. arXiv preprint arXiv:2211.07342, 2022. Cited by 26.
Sparse low-rank adaptation of pre-trained language models. N Ding, X Lv, Q Wang, Y Chen, B Zhou, Z Liu, M Sun. arXiv preprint arXiv:2311.11696, 2023. Cited by 20.
Few-shot classification with hypersphere modeling of prototypes. N Ding, Y Chen, G Cui, X Wang, HT Zheng, Z Liu, P Xie. arXiv preprint arXiv:2211.05319, 2022. Cited by 6.
Exploring lottery prompts for pre-trained language models. Y Chen, N Ding, X Wang, S Hu, HT Zheng, Z Liu, P Xie. arXiv preprint arXiv:2305.19500, 2023. Cited by 3.
Empowering private tutoring by chaining large language models. Y Chen, N Ding, HT Zheng, Z Liu, M Sun, B Zhou. arXiv preprint arXiv:2309.08112, 2023. Cited by 2.