Full parameter fine-tuning for large language models with limited resources. K Lv, Y Yang, T Liu, Q Gao, Q Guo, X Qiu. arXiv preprint arXiv:2306.09782, 2023. Cited by 76.
Alignment for honesty. Y Yang, E Chern, X Qiu, G Neubig, P Liu. arXiv preprint arXiv:2312.07000, 2023. Cited by 38.
An AMR-based link prediction approach for document-level event argument extraction. Y Yang, Q Guo, X Hu, Y Zhang, X Qiu, Z Zhang. arXiv preprint arXiv:2305.19162, 2023. Cited by 21.
Plan, verify and switch: Integrated reasoning with diverse x-of-thoughts. T Liu, Q Guo, Y Yang, X Hu, Y Zhang, X Qiu, Z Zhang. arXiv preprint arXiv:2310.14628, 2023. Cited by 18.
Uncertain local-to-global networks for document-level event factuality identification. P Cao, Y Chen, Y Yang, K Liu, J Zhao. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021. Cited by 16.
DORE: Document ordered relation extraction based on generative framework. Q Guo, Y Yang, H Yan, X Qiu, Z Zhang. arXiv preprint arXiv:2210.16064, 2022. Cited by 7.
OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI. Z Huang, Z Wang, S Xia, X Li, H Zou, R Xu, RZ Fan, L Ye, E Chern, Y Ye, et al. arXiv preprint arXiv:2406.12753, 2024. Cited by 6.
Collie: Collaborative training of large language models in an efficient way. K Lv, S Zhang, T Gu, S Xing, J Hong, K Chen, X Liu, Y Yang, H Guo, T Liu, et al. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023. Cited by 6.
Weak-to-strong reasoning. Y Yang, Y Ma, P Liu. arXiv preprint arXiv:2407.13647, 2024. Cited by 4.
BeHonest: Benchmarking Honesty of Large Language Models. S Chern, Z Hu, Y Yang, E Chern, Y Guo, J Jin, B Wang, P Liu. arXiv preprint arXiv:2406.13261, 2024. Cited by 2.