Underpinning the past decades of work on the design, initialization, and optimization of neural networks is a seemingly innocuous assumption: that the network is trained on a …
Continual learning is a sub-field of machine learning that aims to let models learn continuously from new data, accumulating knowledge without forgetting …
Modern deep-learning systems are specialized to problem settings in which training occurs once and then never again, as opposed to continual-learning settings in which training …
W Zhang, H Liu, B Li, J Xie, Y Huang… - Advances in …, 2024 - proceedings.neurips.cc
Abstract: Training Generative Adversarial Networks (GANs) remains a challenging problem. The discriminator trains the generator by learning the distribution of real/generated data …
H Lee, H Cho, H Kim, D Kim, D Min, J Choo… - arXiv preprint arXiv …, 2024 - arxiv.org
This study investigates the loss of generalization ability in neural networks, revisiting warm-starting experiments from Ash & Adams. Our empirical analysis reveals that common …
Modern reinforcement learning has been conditioned by at least three dogmas. The first is the environment spotlight, which refers to our tendency to focus on modeling environments …
Many failures in deep continual and reinforcement learning are associated with increasing magnitudes of the weights, making them hard to change and potentially causing overfitting …
A sequential decision-making agent balances exploring to gain new knowledge about an environment with exploiting current knowledge to maximize immediate reward. For …
L Friedman, R Meir - arXiv preprint arXiv:2406.09370, 2024 - arxiv.org
In continual learning, knowledge must be preserved and re-used between tasks, maintaining good transfer to future tasks and minimizing forgetting of previously learned …