Towards continual reinforcement learning: A review and perspectives

K Khetarpal, M Riemer, I Rish, D Precup - Journal of Artificial Intelligence …, 2022 - jair.org
In this article, we aim to provide a literature review of different formulations and approaches
to continual reinforcement learning (RL), also known as lifelong or non-stationary RL. We …

A survey on hyperdimensional computing aka vector symbolic architectures, Part II: Applications, cognitive models, and challenges

D Kleyko, D Rachkovskij, E Osipov, A Rahimi - ACM Computing Surveys, 2023 - dl.acm.org
This is Part II of the two-part comprehensive survey devoted to a computing framework most
commonly known under the names Hyperdimensional Computing and Vector Symbolic …

Patching open-vocabulary models by interpolating weights

G Ilharco, M Wortsman, SY Gadre… - Advances in …, 2022 - proceedings.neurips.cc
Open-vocabulary models like CLIP achieve high accuracy across many image classification
tasks. However, there are still settings where their zero-shot performance is far from optimal …

Supermasks in superposition

M Wortsman, V Ramanujan, R Liu… - Advances in …, 2020 - proceedings.neurips.cc
We present the Supermasks in Superposition (SupSup) model, capable of sequentially
learning thousands of tasks without catastrophic forgetting. Our approach uses a randomly …

Orthogonal convolutional neural networks

J Wang, Y Chen, R Chakraborty… - Proceedings of the …, 2020 - openaccess.thecvf.com
Deep convolutional neural networks are hindered by training instability and feature
redundancy, limiting further performance improvement. A promising solution is to impose …

Learning theories for artificial intelligence promoting learning processes

D Gibson, V Kovanovic, D Ifenthaler… - British Journal of …, 2023 - Wiley Online Library
This paper discusses a three-level model that synthesizes and unifies existing learning
theories to model the roles of artificial intelligence (AI) in promoting learning processes. The …

Side-tuning: a baseline for network adaptation via additive side networks

JO Zhang, A Sax, A Zamir, L Guibas, J Malik - Computer Vision–ECCV …, 2020 - Springer
When training a neural network for a desired task, one may prefer to adapt a pre-trained
network rather than starting from randomly initialized weights. Adaptation can be useful in …

Vector symbolic architectures as a computing framework for emerging hardware

D Kleyko, M Davies, EP Frady, P Kanerva… - Proceedings of the …, 2022 - ieeexplore.ieee.org
This article reviews recent progress in the development of the computing framework vector
symbolic architectures (VSA) (also known as hyperdimensional computing). This framework …

Continual learning via neural pruning

S Golkar, M Kagan, K Cho - arXiv preprint arXiv:1903.04476, 2019 - arxiv.org
We introduce Continual Learning via Neural Pruning (CLNP), a new method aimed at
lifelong learning in fixed capacity models based on neuronal model sparsification. In this …

Online continual learning on class incremental blurry task configuration with anytime inference

H Koh, D Kim, JW Ha, J Choi - arXiv preprint arXiv:2110.10031, 2021 - arxiv.org
Despite rapid advances in continual learning, a large body of research is devoted to
improving performance in the existing setups. While a handful of works do propose new …