Zonghan Yang
Verified email at mails.tsinghua.edu.cn - Homepage
Title · Cited by · Year
Parameter-efficient fine-tuning of large-scale pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
Nature Machine Intelligence 5 (3), 220-235, 2023
481* · 2023
Neural machine translation: A review of methods, resources, and tools
Z Tan, S Wang, Z Yang, G Chen, X Huang, M Sun, Y Liu
AI Open 1, 5-21, 2020
125 · 2020
Reducing word omission errors in neural machine translation: A contrastive learning approach
Z Yang, Y Cheng, Y Liu, M Sun
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
68 · 2019
Chinese poetry generation with a working memory model
X Yi, M Sun, R Li, Z Yang
Proceedings of the Twenty-Seventh International Joint Conference on …, 2018
57 · 2018
Unified detoxifying and debiasing in language generation via inference-time adaptive optimization
Z Yang, X Yi, P Li, Y Liu, X Xie
ICLR 2023, 2022
26 · 2022
On Robust Prefix-Tuning for Text Classification
Z Yang, Y Liu
International Conference on Learning Representations, 2022
21 · 2022
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models
Z Yang, T Pang, Y Liu
Advances in Neural Information Processing Systems, 2022
11 · 2022
Interpolation between residual and non-residual networks
Z Yang, Y Liu, C Bao, Z Shi
International Conference on Machine Learning, 10736-10745, 2020
11 · 2020
Alternated training with synthetic and authentic data for neural machine translation
R Jiao, Z Yang, M Sun, Y Liu
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 …, 2021
7 · 2021
Scaffolding coordinates to promote vision-language coordination in large multi-modal models
X Lei, Z Yang, X Chen, P Li, Y Liu
arXiv preprint arXiv:2402.12058, 2024
6 · 2024
OneBit: Towards Extremely Low-bit Large Language Models
Y Xu, X Han, Z Yang, S Wang, Q Zhu, Z Liu, W Liu, W Che
arXiv preprint arXiv:2402.11295, 2024
5 · 2024
Improving Adversarial Robustness of DEQs with Explicit Regulations Along the Neural Dynamics
Z Yang, P Li, T Pang, Y Liu
ICML 2023, 2023
3* · 2023
Restricted orthogonal gradient projection for continual learning
Z Yang, Z Yang, Y Liu, P Li, Y Liu
AI Open 4, 98-110, 2023
3 · 2023
ReAct Meets ActRe: When Language Agents Enjoy Training Data Autonomy
Z Yang, P Li, M Yan, J Zhang, F Huang, Y Liu
arXiv preprint arXiv:2403.14589, 2024
2* · 2024
Towards Unified Alignment Between Agents, Humans, and Environment
Z Yang, A Liu, Z Liu, K Liu, F Xiong, Y Wang, Z Yang, Q Hu, X Chen, ...
arXiv preprint arXiv:2402.07744, 2024
2 · 2024
Exploring the Impact of Model Scaling on Parameter-Efficient Tuning
Y Su, CM Chan, J Cheng, Y Qin, Y Lin, S Hu, Z Yang, N Ding, X Sun, ...
The 2023 Conference on Empirical Methods in Natural Language Processing, 2023
2* · 2023
PANDA: Preference Adaptation for Enhancing Domain-Specific Abilities of LLMs
A Liu, Z Yang, Z Zhang, Q Hu, P Li, M Yan, J Zhang, F Huang, Y Liu
arXiv preprint arXiv:2402.12835, 2024
1 · 2024
Adversarial Robust Memory-Based Continual Learner
X Mi, F Tang, Z Yang, D Wang, J Cao, P Li, Y Liu
arXiv preprint arXiv:2311.17608, 2023
1 · 2023
Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models
Q Zhou, Z Yang, P Li, Y Liu
ACL 2023, 2023
2023
Articles 1–19