Language models as knowledge embeddings. X Wang, Q He, J Liang, Y Xiao. arXiv preprint arXiv:2206.12617, 2022. Cited by 40.
BBT-Fin: Comprehensive construction of Chinese financial domain pre-trained language model, corpus and benchmark. D Lu, H Wu, J Liang, Y Xu, Q He, Y Geng, M Han, Y Xin, Y Xiao. arXiv preprint arXiv:2302.09432, 2023. Cited by 31.
Xiezhi: An ever-updating benchmark for holistic domain knowledge evaluation. Z Gu, X Zhu, H Ye, L Zhang, J Wang, Y Zhu, S Jiang, Z Xiong, Z Li, W Wu, ... Proceedings of the AAAI Conference on Artificial Intelligence 38 (16), 18099 …, 2024. Cited by 28.
KnowledGPT: Enhancing large language models with retrieval and storage access on knowledge bases. X Wang, Q Yang, Y Qiu, J Liang, Q He, Z Gu, Y Xiao, W Wang. arXiv preprint arXiv:2308.11761, 2023. Cited by 23.
Can Large Language Models Understand Real-World Complex Instructions? Q He, J Zeng, W Huang, L Chen, J Xiao, Q He, X Zhou, J Liang, Y Xiao. Proceedings of the AAAI Conference on Artificial Intelligence 38 (16), 18188 …, 2024. Cited by 21.
Can pre-trained language models interpret similes as smart as human? Q He, S Cheng, Z Li, R Xie, Y Xiao. arXiv preprint arXiv:2203.08452, 2022. Cited by 13.
From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models. Q He, J Zeng, Q He, J Liang, Y Xiao. arXiv preprint arXiv:2404.15846, 2024. Cited by 2.
Small language model can self-correct. H Han, J Liang, J Shi, Q He, Y Xiao. Proceedings of the AAAI Conference on Artificial Intelligence 38 (16), 18162 …, 2024. Cited by 2.
MAPS-KB: A million-scale probabilistic simile knowledge base. Q He, X Wang, J Liang, Y Xiao. Proceedings of the AAAI Conference on Artificial Intelligence 37 (5), 6398-6406, 2023. Cited by 2.
A context-enhanced generate-then-evaluate framework for Chinese abbreviation prediction. H Tong, C Xie, J Liang, Q He, Z Yue, J Liu, Y Xiao, W Wang. Proceedings of the 31st ACM International Conference on Information …, 2022. Cited by 2.
Enhancing quantitative reasoning skills of large language models through dimension perception. Y Huang, Q He, J Liang, S Jiang, Y Xiao, Y Chen. 2024 IEEE 40th International Conference on Data Engineering (ICDE), 789-802, 2024. Cited by 1.
Reason from Fallacy: Enhancing Large Language Models' Logical Reasoning through Logical Fallacy Understanding. Y Li, D Wang, J Liang, G Jiang, Q He, Y Xiao, D Yang. arXiv preprint arXiv:2404.04293, 2024. Cited by 1.
Laying the Foundation First? Investigating the Generalization from Atomic Skills to Complex Reasoning Tasks. Y Huang, Q He, Y Xu, J Liang, Y Xiao. arXiv preprint arXiv:2403.09479, 2024. Cited by 1.
Light Up the Shadows: Enhance Long-Tailed Entity Grounding with Concept-Guided Vision-Language Models. Y Zhang, Q He, X Wang, S Yuan, J Liang, Y Xiao. arXiv preprint arXiv:2406.10902, 2024.
Is There a One-Model-Fits-All Approach to Information Extraction? Revisiting Task Definition Biases. W Huang, Q He, Z Li, J Liang, Y Xiao. arXiv preprint arXiv:2403.16396, 2024.
HAUSER: Towards Holistic and Automatic Evaluation of Simile Generation. Q He, Y Zhang, J Liang, Y Huang, Y Xiao, Y Chen. arXiv preprint arXiv:2306.07554, 2023.
Domain Mastery Benchmark: An Ever-Updating Benchmark for Evaluating Holistic Domain Knowledge of Large Language Model--A Preliminary Release. Z Gu, X Zhu, H Ye, L Zhang, Z Xiong, Z Li, Q He, S Jiang, H Feng, Y Xiao. arXiv preprint arXiv:2304.11679, 2023.