RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment. H Dong, W Xiong, D Goyal, R Pan, S Diao, J Zhang, K Shum, T Zhang. Transactions on Machine Learning Research (TMLR), 2023.
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data. KS Shum, S Diao, T Zhang. Findings of the Association for Computational Linguistics: EMNLP 2023, 2023.
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. S Diao, R Pan, H Dong, KS Shum, J Zhang, W Xiong, T Zhang. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), 2024.
TILGAN: Transformer-Based Implicit Latent GAN for Diverse and Coherent Text Generation. S Diao, X Shen, K Shum, Y Song, T Zhang. Findings of the Association for Computational Linguistics: ACL 2021, 2021.
RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models. Y Wu, J Zhu, S Xu, K Shum, C Niu, R Zhong, J Song, T Zhang. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), 2024.
Plum: Prompt Learning using Metaheuristic. R Pan, S Xing, S Diao, X Liu, K Shum, J Zhang, T Zhang. Findings of the Association for Computational Linguistics: ACL 2024, 2024.
FIRST: Teach A Reliable Large Language Model Through Efficient Trustworthy Distillation. KS Shum, M Xu, J Zhang, Z Chen, S Diao, H Dong, J Zhang, MO Raza. arXiv preprint arXiv:2408.12168, 2024.