T Wang, S Li, W Lu - arXiv preprint arXiv:2407.18248, 2024 - arxiv.org
Effective training of language models (LMs) for mathematical reasoning tasks demands high-quality supervised fine-tuning data. Besides obtaining annotations from human experts, a …
J Qin, Z Huang, Y Zeng, Q Zhang… - IEEE/ACM Transactions …, 2024 - ieeexplore.ieee.org
Though quite challenging, training a deep neural network for automatically solving Math Word Problems (MWPs) has increasingly attracted attention due to its significance in …
Effective pre-training of large language models (LLMs) has been challenging due to the immense resource demands and the complexity of the technical processes involved. This …
X Yang, J Lin, Z Wang, C Zhai - arXiv preprint arXiv:2411.16454, 2024 - arxiv.org
Large language models (LLMs) are known to struggle with complicated reasoning tasks such as math word problems (MWPs). In this paper, we present how analogy from similarly …
Quantitative and numerical comprehension in language is an important task in many fields, such as education and finance, but remains challenging for language models. While …
In recent years, language models have emerged as a technology adopted in a wide variety of applications, nowadays extending far beyond traditional natural language processing tasks …