Empowering multi-step reasoning across languages via program-aided language models

L Ranaldi, G Pucci, B Haddow… - Proceedings of the 2024 …, 2024 - aclanthology.org
In-context learning methods are popular inference strategies where Large Language
Models (LLMs) are elicited to solve a task using provided demonstrations without parameter …

How far does the sequence of compositions impact Multilingual Pre-Training?

L Ranaldi, G Pucci, FM Zanzotto - 2024 - ceur-ws.org
An efficient strategy for conducting pre-training of language models is the concatenation of
contiguous sequences of text of fixed length through causal masking that estimates the …