Sheng Wang
PhD Candidate at The University of Hong Kong
Verified email at connect.hku.hk - Homepage
Title · Cited by · Year
A Cognitive Stimulation Therapy Dialogue System with Multi-Source Knowledge Fusion for Elders with Cognitive Impairment
J Jiang, S Wang, Q Li, L Kong, C Wu
Annual Meeting of the Association for Computational Linguistics (ACL 2023), 2023
Cited by 5* · 2023
PRoLoRA: Partial Rotation Empowers More Parameter-Efficient LoRA
S Wang, B Xue, J Ye, J Jiang, L Chen, L Kong, C Wu
Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024
Cited by 3 · 2024
LoRA Meets Dropout under a Unified Framework
S Wang, L Chen, J Jiang, B Xue, L Kong, C Wu
arXiv preprint arXiv:2403.00812, 2024
Cited by 3 · 2024
A comprehensive study of multilingual confidence estimation on large language models
B Xue, H Wang, R Wang, S Wang, Z Wang, Y Du, B Liang, KF Wong
arXiv preprint arXiv:2402.13606, 2024
Cited by 3 · 2024
MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards
S Wang, L Chen, P Chen, J Dong, B Xue, J Jiang, L Kong, C Wu
arXiv preprint arXiv:2410.00938, 2024
Cited by 2 · 2024
Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration
Q Li, J Gao, S Wang, R Pi, X Zhao, C Wu, X Jiang, Z Li, L Kong
arXiv preprint arXiv:2410.16736, 2024
2024
ProReason: Multi-Modal Proactive Reasoning with Decoupled Eyesight and Wisdom
J Zhou, S Wang, J Dong, L Li, J Gao, L Kong, C Wu
arXiv preprint arXiv:2410.14138, 2024
2024
QSpec: Speculative Decoding with Complementary Quantization Schemes
J Zhao, W Lu, S Wang, L Kong, C Wu
arXiv preprint arXiv:2410.11305, 2024
2024
How Well Do LLMs Handle Cantonese? Benchmarking Cantonese Capabilities of Large Language Models
J Jiang, P Chen, L Chen, S Wang, Q Bao, L Kong, Y Li, C Wu
arXiv preprint arXiv:2408.16756, 2024
2024
Data Augmentation of Multi-turn Psychological Dialogue via Knowledge-driven Progressive Thought Prompting
J Jiang, L Chen, S Wang, L Kong, Y Li, C Wu
arXiv preprint arXiv:2406.16567, 2024
2024