Weilin Zhao
Verified email at mails.tsinghua.edu.cn - Homepage
Title · Cited by · Year
PTR: Prompt tuning with rules for text classification
X Han, W Zhao, N Ding, Z Liu, M Sun
AI Open 3, 182-192, 2022
Cited by 407 · Year 2022
Parameter-efficient fine-tuning of large-scale pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
Nature Machine Intelligence 5 (3), 220-235, 2023
Cited by 273 · Year 2023
OpenPrompt: An open-source framework for prompt-learning
N Ding, S Hu, W Zhao, Y Chen, Z Liu, HT Zheng, M Sun
arXiv preprint arXiv:2111.01998, 2021
Cited by 229 · Year 2021
Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
arXiv preprint arXiv:2203.06904, 2022
Cited by 181 · Year 2022
Tool learning with foundation models
Y Qin, S Hu, Y Lin, W Chen, N Ding, G Cui, Z Zeng, Y Huang, C Xiao, ...
arXiv preprint arXiv:2304.08354, 2023
Cited by 175 · Year 2023
Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models
B Zhu, Y Qin, G Cui, Y Chen, W Zhao, C Fu, Y Deng, Z Liu, J Wang, W Wu, ...
Advances in Neural Information Processing Systems 35, 1086-1099, 2022
Cited by 13 · Year 2022
OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models
S Hu, N Ding, W Zhao, X Lv, Z Zhang, Z Liu, M Sun
arXiv preprint arXiv:2307.03084, 2023
Cited by 8 · Year 2023
MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies
S Hu, Y Tu, X Han, C He, G Cui, X Long, Z Zheng, Y Fang, Y Huang, ...
arXiv preprint arXiv:2404.06395, 2024
Cited by 7 · Year 2024
BMCook: A task-agnostic compression toolkit for big models
Z Zhang, B Gong, Y Chen, X Han, G Zeng, W Zhao, Y Chen, Z Liu, M Sun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 5 · Year 2022
BMInf: An Efficient Toolkit for Big Model Inference and Tuning
X Han, G Zeng, W Zhao, Z Liu, Z Zhang, J Zhou, J Zhang, J Chao, M Sun
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
Cited by 5 · Year 2022
CPET: Effective Parameter-Efficient Tuning for Compressed Large Language Models
W Zhao, Y Huang, X Han, Z Liu, Z Zhang, M Sun
arXiv preprint arXiv:2307.07705, 2023
Cited by 3 · Year 2023
Unlock Predictable Scaling from Emergent Abilities
S Hu, X Liu, X Han, X Zhang, C He, W Zhao, Y Lin, N Ding, Z Ou, G Zeng, ...
arXiv preprint arXiv:2310.03262, 2023
Cited by 2 · Year 2023
BurstAttention: An Efficient Distributed Attention Framework for Extremely Long Sequences
S Ao, W Zhao, X Han, C Yang, Z Liu, C Shi, M Sun, S Wang, T Su
arXiv preprint arXiv:2403.09347, 2024
Cited by 1 · Year 2024
Ouroboros: Speculative Decoding with Large Model Enhanced Drafting
W Zhao, Y Huang, X Han, C Xiao, Z Liu, M Sun
arXiv preprint arXiv:2402.13720, 2024
Cited by 1 · Year 2024
H3T: efficient integration of memory optimization and parallelism for high-throughput transformer training
Y Wang, X Han, W Zhao, G Zeng, Z Liu, M Sun
Proceedings of the 37th International Conference on Neural Information …, 2023
Year 2023
Predicting Emergent Abilities with Infinite Resolution Evaluation
S Hu, X Liu, X Han, X Zhang, C He, W Zhao, Y Lin, N Ding, Z Ou, G Zeng, ...
arXiv preprint arXiv:2310.03262, 2023
Year 2023
Optimal RoPE Extension via Bayesian Optimization for Training-Free Length Generalization
X Zhang, S Hu, W Zhao, H Wang, X Han, C He, Z Liu, M Sun
Available at SSRN 4765732