Meta-Reasoning: Semantics-Symbol Deconstruction for Large Language Models

Y Wang, Z Zhang, P Zhang, B Yang, R Wang - arXiv preprint arXiv …, 2023 - arxiv.org
Neural-symbolic methods have demonstrated their effectiveness in enhancing the reasoning abilities
of large language models (LLMs). However, existing methods mainly rely on syntactically …

SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature

D Wadden, K Shi, J Morrison, A Naik, S Singh… - arXiv preprint arXiv …, 2024 - arxiv.org
We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset
of 137K instruction-following demonstrations for 54 tasks covering five essential scientific …

Focus On This, Not That! Steering LLMs With Adaptive Feature Specification

TA Lamb, A Davies, A Paren, PHS Torr… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite the success of Instruction Tuning (IT) in training large language models (LLMs) to
perform arbitrary user-specified tasks, these models often still leverage spurious or biased …

EXCGEC: A Benchmark of Edit-wise Explainable Chinese Grammatical Error Correction

J Ye, S Qin, Y Li, X Cheng, L Qin, HT Zheng… - arXiv preprint arXiv …, 2024 - arxiv.org
Existing studies explore the explainability of Grammatical Error Correction (GEC) only in limited
scenarios, ignoring the interaction between corrections and explanations. To bridge …

SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution

C Xie, B Li, C Gao, H Du, W Lam, D Zou… - arXiv preprint arXiv …, 2025 - arxiv.org
Large Language Models (LLMs) have demonstrated remarkable proficiency across a variety
of complex tasks. One significant application of LLMs is in tackling software engineering …