Socratic Reasoning Improves Positive Text Rewriting

A Goel, N Daheim, I Gurevych - arXiv preprint arXiv:2403.03029, 2024 - arxiv.org
Reframing a negative into a positive thought is at the crux of several cognitive approaches to
mental health and psychotherapy that could be made more accessible by large language …

Identifying Task Groupings for Multi-Task Learning Using Pointwise V-Usable Information

Y Li, T Miller, S Bethard, G Savova - arXiv preprint arXiv:2410.12774, 2024 - arxiv.org
The success of multi-task learning can depend heavily on which tasks are grouped together.
Naively grouping all tasks or a random set of tasks can result in negative transfer, with the …
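The grouping criterion named in this title builds on pointwise V-usable information (PVI). The snippet itself does not give the formula, but in the standard formulation of Ethayarajh et al. (2022) PVI contrasts a model fine-tuned on the real inputs with one fine-tuned on null inputs. A minimal sketch of that background definition, assuming per-example log-probabilities of the gold label are already available (the function name is hypothetical, not from the paper):

    import math

    def pointwise_v_usable_information(logprob_with_input: float,
                                       logprob_null_input: float) -> float:
        """PVI(x -> y) = log2 p_full(y | x) - log2 p_null(y | null).

        Both arguments are natural-log probabilities of the gold label y:
        one from a model fine-tuned with the real input x, one from a model
        fine-tuned on empty/null inputs (Ethayarajh et al., 2022).
        """
        to_bits = 1.0 / math.log(2)
        return (logprob_with_input - logprob_null_input) * to_bits

    # Example: the input makes the gold label far more likely than the
    # null model does, so the instance carries usable information.
    print(pointwise_v_usable_information(math.log(0.9), math.log(0.3)))  # ~1.58 bits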

Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit

G Sarti, N Feldhus, J Qi, M Nissim… - Joint of the 2nd World …, 2024 - research.rug.nl
Inseq is a recent toolkit providing an intuitive and optimized interface to conduct feature
attribution analyses of generative language models. In this work, we present the latest …
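For context, a typical Inseq workflow wraps a Hugging Face generative model and runs a chosen attribution method over its output. A minimal sketch, assuming the library's documented load/attribute interface; the model name and attribution method here are placeholder choices, not taken from the snippet:

    import inseq

    # Load a generative model together with an attribution method.
    model = inseq.load_model("gpt2", "saliency")

    # Attribute the model's own generation for a given prompt.
    out = model.attribute("The capital of France is")

    # Render the attribution scores (HTML in notebooks, text in terminals).
    out.show()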

Hierarchical Demonstration Order Optimization for Many-shot In-Context Learning

Y He, W Zheng, S Wang, Z Zheng, Y Dong, Y Zhu, J Li - openreview.net
In-Context Learning (ICL) is a technique where large language models (LLMs) leverage
multiple demonstrations (i.e., examples) to perform tasks. With the recent expansion of LLM …
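This entry concerns optimizing the order of many demonstrations in a prompt. As background, a minimal sketch of how demonstration order changes the prompt an LLM sees; the sorting key below is a simple placeholder, not the paper's hierarchical optimization method:

    from typing import Callable, List, Tuple

    def build_icl_prompt(demos: List[Tuple[str, str]],
                         query: str,
                         order_key: Callable[[Tuple[str, str]], float]) -> str:
        """Assemble an in-context learning prompt from ordered demonstrations.

        `order_key` stands in for whatever ordering strategy is used; the
        paper's hierarchical optimization would replace this simple sort.
        """
        ordered = sorted(demos, key=order_key)
        blocks = [f"Input: {x}\nOutput: {y}" for x, y in ordered]
        blocks.append(f"Input: {query}\nOutput:")
        return "\n\n".join(blocks)

    demos = [("great movie", "positive"),
             ("terrible plot", "negative"),
             ("just okay", "neutral")]
    # Placeholder ordering: shortest demonstrations first.
    print(build_icl_prompt(demos, "loved the soundtrack", order_key=lambda d: len(d[0])))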