The recent rapid progress in (self-)supervised learning models is in large part predicted by empirical scaling laws: a model's performance scales proportionally to its size. Analogous …
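As background, the empirical scaling laws this snippet refers to are usually expressed as power laws in model size. A schematic form follows; the symbols and the additive-constant parameterization are assumptions for exposition, not the snippet's own notation:

```latex
% Schematic power-law scaling of loss with model size N.
% L_\infty (irreducible loss), c, and \alpha > 0 are fitted empirically;
% the exact parameterization varies across studies.
L(N) \approx L_\infty + c \, N^{-\alpha}
```

Under such a law, adding parameters yields predictable gains, which is the regularity the snippet attributes to (self-)supervised models.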
Tomorrow's robots will need to distinguish useful information from noise when performing different tasks. A household robot, for instance, may continuously receive a plethora of …
Dynamic Sparse Training (DST) is a rapidly evolving area of research that seeks to optimize the sparse initialization of a neural network by adapting its topology during training …
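The topology adaptation this snippet describes can be made concrete. Below is a minimal sketch of one prune-and-grow step in the style of SET (magnitude-based drop, random regrowth); the function name, the fixed 0.3 update fraction, and the numpy formulation are illustrative assumptions rather than any specific paper's implementation.

```python
import numpy as np

def prune_and_grow(weights, mask, prune_frac=0.3, rng=None):
    """One SET-style topology update: drop the lowest-magnitude active
    connections, then regrow the same number at random inactive positions.
    Illustrative only; actual DST methods differ in drop/grow criteria,
    schedules, and per-layer budgets."""
    rng = rng or np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_update = int(prune_frac * active.size)
    # Prune: zero out the n_update weakest active connections.
    drop = active[np.argsort(np.abs(weights.flat[active]))[:n_update]]
    mask.flat[drop] = False
    weights.flat[drop] = 0.0
    # Grow: activate n_update random inactive positions (zero-initialized).
    grow = rng.choice(np.flatnonzero(~mask), size=n_update, replace=False)
    mask.flat[grow] = True
    return weights, mask

# Sparse initialization at ~90% sparsity; the number of active
# connections stays constant across topology updates.
rng = np.random.default_rng(0)
mask = rng.random((64, 64)) < 0.1
weights = rng.standard_normal((64, 64)) * mask
weights, mask = prune_and_grow(weights, mask, rng=rng)
```

The defining property illustrated here is that the connection budget is fixed: training starts sparse and stays sparse, with only the positions of the active weights changing.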
M Xu, X Chen, J Wang - IEEE Transactions on Neural …, 2024 - ieeexplore.ieee.org
Lifelong deep reinforcement learning (DRL) approaches are commonly employed to adapt continuously to new tasks without forgetting previously acquired knowledge. While current …
Deep reinforcement learning (DRL) has been widely adopted in various applications, yet it faces practical limitations due to high storage and computational demands. Dynamic sparse …
The loss of plasticity in learning agents, analogous to the solidification of neural pathways in biological brains, significantly impedes learning and adaptation in reinforcement learning …
P Hu, S Li, Z Li, L Pan, L Huang - arXiv preprint arXiv:2409.19391, 2024 - arxiv.org
Deep Multi-agent Reinforcement Learning (MARL) relies on neural networks with numerous parameters, often incurring substantial computational overhead …
In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a more memory …
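The memory contrast with post-training pruning can be grounded with back-of-the-envelope arithmetic. A hedged sketch follows; the layer size, sparsity level, and one-index-per-nonzero overhead are made-up illustrative assumptions, and real footprints depend on the sparse format and optimizer state:

```python
# Illustrative peak-memory comparison: dense-then-prune vs. training sparse.
# Post-training pruning holds the dense parameterization in memory during
# training; DST maintains only the sparse budget throughout.
dense_params = 1_000_000          # parameters of the dense network
sparsity = 0.9                    # fraction of weights removed / never grown
sparse_params = int(dense_params * (1 - sparsity))

prune_peak = dense_params         # dense weights live in memory while training
dst_peak = sparse_params * 2      # weights + roughly one index per nonzero

print(f"post-training pruning, peak training footprint: {prune_peak:,} values")
print(f"DST, peak training footprint:                   {dst_peak:,} values")
```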
Y Tan, L Pan, L Huang - arXiv preprint arXiv:2310.14009, 2023 - arxiv.org
Deep reinforcement learning has achieved remarkable performance in various domains by leveraging deep neural networks for approximating value functions and policies. However …