Multi-Task Domain Adaptation for Language Grounding with 3D Objects

P Sun, Y Song, X Pan, P Dong, X Yang, Q Wang… - … on Computer Vision, 2024 - Springer
Existing work on object-level language grounding with 3D objects mostly focuses on
improving performance by utilizing off-the-shelf pre-trained models to capture features …

Diverse Semantics Representation is King.

M Geletka, V Kalivoda, M Štefánik, M Toma… - CLEF (Working Notes …, 2022 - dei.unipd.it
We report on the systems that the Math Information Retrieval group at Masaryk University
(mirmu) and the team of Faculty of Informatics students (msm) prepared for task 1 (find …

Think twice: Measuring the efficiency of eliminating prediction shortcuts of question answering models

L Mikula, M Štefánik, M Petrovič, P Sojka - arXiv preprint arXiv:2305.06841, 2023 - arxiv.org
While Large Language Models (LLMs) dominate a majority of language understanding
tasks, previous work shows that some of these results are supported by modelling spurious …

Resources and Few-shot Learners for In-context Learning in Slavic Languages

M Štefánik, M Kadlčík, P Gramacki, P Sojka - arXiv preprint arXiv …, 2023 - arxiv.org
Despite the rapid recent progress in creating accurate and compact in-context learners, most
recent work focuses on in-context learning (ICL) for tasks in English. However, the ability to …

Concept-aware Data Construction Improves In-context Learning of Language Models

M Štefánik, M Kadlčík, P Sojka - arXiv preprint arXiv:2403.09703, 2024 - arxiv.org
Many recent language models (LMs) are capable of in-context learning (ICL), manifested in
the LMs' ability to perform a new task solely from natural-language instruction. Previous work …

Soft Alignment Objectives for Robust Adaptation of Language Generation

M Štefánik, M Kadlčík, P Sojka - arXiv preprint arXiv:2211.16550, 2022 - arxiv.org
Domain adaptation allows generative language models to address specific flaws caused by
the domain shift of their application. However, the traditional adaptation by further training on …