We report on the systems that the Math Information Retrieval group at Masaryk University (mirmu) and a team of Faculty of Informatics students (msm) prepared for task 1 (find …
L Mikula, M Štefánik, M Petrovič, P Sojka - arXiv preprint arXiv:2305.06841, 2023 - arxiv.org
While Large Language Models (LLMs) dominate the majority of language understanding tasks, previous work shows that some of these results are supported by modelling spurious …
Despite the rapid recent progress in creating accurate and compact in-context learners, most recent work focuses on in-context learning (ICL) for tasks in English. However, the ability to …
M Štefánik, M Kadlčík, P Sojka - arXiv preprint arXiv:2403.09703, 2024 - arxiv.org
Many recent language models (LMs) are capable of in-context learning (ICL), manifested in the LMs' ability to perform a new task solely from a natural-language instruction. Previous work …
Domain adaptation allows generative language models to address specific flaws caused by the domain shift of their application. However, the traditional adaptation by further training on …