In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been …
S Chen, H Sheen, T Wang, Z Yang - arXiv preprint arXiv:2409.10559, 2024 - arxiv.org
In-context learning (ICL) is a cornerstone of large language model (LLM) functionality, yet its theoretical foundations remain elusive due to the complexity of transformer architectures. In …
The in-context learning (ICL) capability of pre-trained models based on the transformer architecture has received growing interest in recent years. While theoretical understanding …
In this paper, we study the multi-task structured bandit problem, where the goal is to learn a near-optimal algorithm that minimizes cumulative regret. The tasks share a common structure and …
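The snippet above does not define cumulative regret, but in the bandit literature it standardly denotes the total shortfall relative to the best fixed action; a minimal statement, assuming the usual finite-horizon setting with optimal mean reward $\mu^*$ and chosen actions $a_t$:

```latex
% Cumulative regret over horizon T: the gap between always playing
% the optimal arm (mean reward \mu^*) and the arms a_t actually chosen.
R_T \;=\; \sum_{t=1}^{T} \bigl( \mu^* - \mu_{a_t} \bigr)
```

A "near-optimal algorithm" in this sense is one whose expected $R_T$ grows sublinearly in $T$, e.g. $O(\sqrt{T})$.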
I Zisman, A Nikulin, A Polubarov, N Lyubaykin… - arXiv preprint arXiv …, 2024 - arxiv.org
In-context learning allows models like transformers to adapt to new tasks from a few examples without updating their weights, a desirable trait for reinforcement learning (RL) …
W Chen, S Paternain - arXiv preprint arXiv:2410.19982, 2024 - arxiv.org
Pretrained foundation models have exhibited extraordinary in-context learning performance, allowing zero-shot generalization to new tasks not encountered during pretraining. In the …
A Berkes - arXiv preprint arXiv:2411.19746, 2024 - arxiv.org
Building operations consume approximately 40% of global energy, with Heating, Ventilation, and Air Conditioning (HVAC) systems responsible for up to 50% of this consumption. As …
Recent studies have demonstrated that Transformers can perform in-context reinforcement learning (RL) by imitating a source RL algorithm. This enables them to adapt to new tasks in …
Recently, neural network models have become one of the most promising directions in the field of automatic feature extraction from digital signals …