Can LLMs express their uncertainty? An empirical evaluation of confidence elicitation in LLMs M Xiong, Z Hu, X Lu, Y Li, J Fu, J He, B Hooi arXiv preprint arXiv:2306.13063, 2023 | 109 | 2023 |
An exploratory study of reactions to bot comments on GitHub JC Farah, B Spaenlehauer, X Lu, S Ingram, D Gillet Proceedings of the Fourth International Workshop on Bots in Software …, 2022 | 11 | 2022 |
WASA: Watermark-based source attribution for large language model-generated data J Wang, X Lu, Z Zhao, Z Dai, CS Foo, SK Ng, BKH Low arXiv preprint arXiv:2310.00646, 2023 | 7 | 2023 |
TRACE: TRansformer-based Attribution using Contrastive Embeddings in LLMs C Wang, X Lu, SK Ng, BKH Low arXiv preprint arXiv:2407.04981, 2024 | | 2024 |
On Newton's Method to Unlearn Neural Networks N Bui, X Lu, SK Ng, BKH Low arXiv preprint arXiv:2406.14507, 2024 | | 2024 |