The dormant neuron phenomenon in deep reinforcement learning

G Sokar, R Agarwal, PS Castro, et al. - International Conference on Machine Learning, 2023 - proceedings.mlr.press
In this work we identify the dormant neuron phenomenon in deep reinforcement learning,
where an agent's network suffers from an increasing number of inactive neurons, thereby …
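
To make the phenomenon concrete: the paper scores a neuron by its batch-averaged absolute activation, normalized by the layer-wide average, and calls it τ-dormant when that score falls at or below a small threshold τ. A minimal PyTorch sketch of that score follows; the default threshold and tensor shapes here are illustrative assumptions, not the authors' code.

```python
import torch

def dormant_fraction(activations: torch.Tensor, tau: float = 0.025) -> float:
    """activations: [batch, num_neurons] post-activation outputs of one layer.
    tau: small dormancy threshold (the value here is illustrative)."""
    score = activations.abs().mean(dim=0)        # per-neuron mean |h_i(x)| over the batch
    score = score / (score.mean() + 1e-9)        # normalize by the layer-wide average
    return (score <= tau).float().mean().item()  # fraction of tau-dormant neurons

# Hypothetical example: a ReLU layer in which a block of neurons has gone
# inactive reports a correspondingly large dormant fraction.
acts = torch.relu(torch.randn(256, 512))         # random ReLU features
acts[:, :200] = 0.0                              # suppose 200 neurons output zero
print(f"dormant fraction: {dormant_fraction(acts):.2f}")  # ~0.39
```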

Mixtures of experts unlock parameter scaling for deep RL

J Obando-Ceron, G Sokar, T Willi, C Lyle, et al. - arXiv preprint, 2024 - arxiv.org
The recent rapid progress in (self) supervised learning models is in large part predicted by
empirical scaling laws: a model's performance scales proportionally to its size. Analogous …
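
The paper scales value-based agents by replacing a dense layer with a mixture of experts, with soft expert routing (Soft MoE) proving especially effective. Below is a hedged sketch of a generic Soft MoE layer, where per-slot logits define dispatch weights (each slot is a convex mix of tokens) and combine weights (each token is a convex mix of slot outputs); expert count, width, and placement are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, slots_per_expert: int = 1):
        super().__init__()
        self.slots_per_expert = slots_per_expert
        self.num_slots = num_experts * slots_per_expert
        self.phi = nn.Parameter(torch.randn(dim, self.num_slots) * dim ** -0.5)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_tokens, dim]; logits: [num_tokens, num_slots]
        logits = x @ self.phi
        dispatch = logits.softmax(dim=0)        # over tokens: each slot mixes tokens
        combine = logits.softmax(dim=1)         # over slots: each token mixes slots
        slots = dispatch.t() @ x                # [num_slots, dim]
        outs = []
        for i, expert in enumerate(self.experts):
            s = slots[i * self.slots_per_expert:(i + 1) * self.slots_per_expert]
            outs.append(expert(s))              # each expert processes its slots
        return combine @ torch.cat(outs, dim=0)  # back to [num_tokens, dim]

tokens = torch.randn(16, 64)                    # e.g. flattened conv features
print(SoftMoE(64)(tokens).shape)                # torch.Size([16, 64])
```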

Automatic noise filtering with dynamic sparse training in deep reinforcement learning

B Grooten, G Sokar, S Dohare, E Mocanu, et al. - arXiv preprint, 2023 - arxiv.org
Tomorrow's robots will need to distinguish useful information from noise when performing
different tasks. A household robot, for instance, may continuously receive a plethora of …

Fantastic weights and how to find them: Where to prune in dynamic sparse training

A Nowak, B Grooten, DC Mocanu, et al. - Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
Abstract Dynamic Sparse Training (DST) is a rapidly evolving area of research that seeks to
optimize the sparse initialization of a neural network by adapting its topology during training …
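
Most DST methods alternate a prune step with a grow step at fixed intervals while keeping overall sparsity constant. The sketch below shows one generic mask update of this kind, magnitude pruning with gradient-based regrowth in the style of RigL; this is a common baseline criterion of the sort the paper compares, and the update fraction and shapes are illustrative assumptions.

```python
import torch

def dst_update(weight: torch.Tensor, grad: torch.Tensor,
               mask: torch.Tensor, update_frac: float = 0.3) -> torch.Tensor:
    """One prune/grow step over a binary mask; sparsity level is preserved."""
    k = int(update_frac * int(mask.sum()))
    # Prune: drop the k active weights with the smallest magnitude.
    active = (weight.abs() * mask).flatten()
    active[mask.flatten() == 0] = float("inf")   # never "drop" inactive weights
    drop = torch.topk(active, k, largest=False).indices
    new_mask = mask.flatten().clone()
    new_mask[drop] = 0.0
    # Grow: activate the k inactive weights with the largest gradient magnitude.
    candidate = grad.abs().flatten() * (1 - new_mask)  # zero at active positions
    grow = torch.topk(candidate, k).indices
    new_mask[grow] = 1.0
    return new_mask.view_as(mask)

w, g = torch.randn(64, 64), torch.randn(64, 64)
m = (torch.rand(64, 64) < 0.1).float()           # ~90% sparse mask
print(int(m.sum()), int(dst_update(w, g, m).sum()))  # same active-weight count
```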

Policy Correction and State-Conditioned Action Evaluation for Few-Shot Lifelong Deep Reinforcement Learning

M Xu, X Chen, J Wang - IEEE Transactions on Neural Networks and Learning Systems, 2024 - ieeexplore.ieee.org
Lifelong deep reinforcement learning (DRL) approaches are commonly employed to adapt
continuously to new tasks without forgetting previously acquired knowledge. While current …

A Novel Topology Adaptation Strategy for Dynamic Sparse Training in Deep Reinforcement Learning

M Xu, X Chen, J Wang - IEEE Transactions on Neural Networks and Learning Systems, 2024 - ieeexplore.ieee.org
Deep reinforcement learning (DRL) has been widely adopted in various applications, yet it
faces practical limitations due to high storage and computational demands. Dynamic sparse …

Neuroplastic Expansion in Deep Reinforcement Learning

J Liu, J Obando-Ceron, A Courville, L Pan - arXiv preprint, 2024 - arxiv.org
The loss of plasticity in learning agents, analogous to the solidification of neural pathways in
biological brains, significantly impedes learning and adaptation in reinforcement learning …

Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training

P Hu, S Li, Z Li, L Pan, L Huang - arXiv preprint arXiv:2409.19391, 2024 - arxiv.org
Deep Multi-agent Reinforcement Learning (MARL) relies on neural networks with numerous
parameters in multi-agent scenarios, often incurring substantial computational overhead …

Navigating Extremes: Dynamic Sparsity in Large Output Spaces

N Ullah, E Schultheis, M Lasby, Y Ioannou, et al. - arXiv preprint, 2024 - arxiv.org
In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-
training pruning for generating efficient models. In principle, DST allows for a more memory …

One is More: Diverse Perspectives within a Single Network for Efficient DRL

Y Tan, L Pan, L Huang - arXiv preprint arXiv:2310.14009, 2023 - arxiv.org
Deep reinforcement learning has achieved remarkable performance in various domains by
leveraging deep neural networks for approximating value functions and policies. However …