KaShun SHUM
Title · Cited by · Year
Raft: Reward ranked finetuning for generative foundation model alignment
H Dong, W Xiong, D Goyal, R Pan, S Diao, J Zhang, K Shum, T Zhang
Transactions on Machine Learning Research (TMLR), 2023
Cited by 201 · 2023
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data
KS Shum, S Diao, T Zhang
(EMNLP 2023) In Findings of the Association for Computational Linguistics …, 2023
Cited by 69 · 2023
Lmflow: An extensible toolkit for finetuning and inference of large foundation models
S Diao, R Pan, H Dong, KS Shum, J Zhang, W Xiong, T Zhang
(NAACL 2024) In Proceedings of the 2024 Conference of the North American …, 2024
Cited by 41 · 2024
TILGAN: transformer-based implicit latent GAN for diverse and coherent text generation
S Diao, X Shen, K Shum, Y Song, T Zhang
(ACL 2021) Findings of the Association for Computational Linguistics: ACL …, 2021
Cited by 22 · 2021
Ragtruth: A hallucination corpus for developing trustworthy retrieval-augmented language models
Y Wu, J Zhu, S Xu, K Shum, C Niu, R Zhong, J Song, T Zhang
(ACL 2024) Proceedings of the 62nd Annual Meeting of the Association for …, 2024
Cited by 18 · 2024
Plum: Prompt learning using metaheuristic
R Pan, S Xing, S Diao, X Liu, K Shum, J Zhang, T Zhang
(ACL 2024) Findings of the Association for Computational Linguistics ACL 2024, 2024
Cited by 5 · 2024
FIRST: Teach A Reliable Large Language Model Through Efficient Trustworthy Distillation
KS Shum, M Xu, J Zhang, Z Chen, S Diao, H Dong, J Zhang, MO Raza
arXiv preprint arXiv:2408.12168, 2024
2024