A survey on modern trainable activation functions

A Apicella, F Donnarumma, F Isgrò, R Prevete - Neural Networks, 2021 - Elsevier
In neural networks literature, there is a strong interest in identifying and defining activation
functions which can improve neural network performance. In recent years there has been a …

Deep learning in neural networks: An overview

J Schmidhuber - Neural Networks, 2015 - Elsevier
In recent years, deep artificial neural networks (including recurrent ones) have won
numerous contests in pattern recognition and machine learning. This historical survey …

Overcoming catastrophic forgetting with hard attention to the task

J Serra, D Suris, M Miron… - … conference on machine …, 2018 - proceedings.mlr.press
Catastrophic forgetting occurs when a neural network loses the information learned in a
previous task after training on subsequent tasks. This problem remains a hurdle for artificial …

Continual learning with deep generative replay

H Shin, JK Lee, J Kim, J Kim - Advances in neural …, 2017 - proceedings.neurips.cc
Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks
have been impeded by a chronic problem called catastrophic forgetting. Although simply …

Continual learning through synaptic intelligence

F Zenke, B Poole, S Ganguli - International conference on …, 2017 - proceedings.mlr.press
While deep learning has led to remarkable advances across diverse applications, it
struggles in domains where the data distribution changes over the course of learning. In …

Orthogonal gradient descent for continual learning

M Farajtabar, N Azizan, A Mott… - … Conference on Artificial …, 2020 - proceedings.mlr.press
Neural networks are achieving state-of-the-art and sometimes super-human performance on
learning tasks across a variety of domains. Whenever these problems require learning in a …

Meta networks

T Munkhdalai, H Yu - International conference on machine …, 2017 - proceedings.mlr.press
Neural networks have been successfully applied in applications with a large amount of
labeled data. However, the task of rapid generalization on new concepts with small training …

Selective experience replay for lifelong learning

D Isele, A Cosgun - Proceedings of the AAAI Conference on Artificial …, 2018 - ojs.aaai.org
Deep reinforcement learning has emerged as a powerful tool for a variety of learning tasks;
however, deep nets typically exhibit forgetting when learning multiple tasks in sequence. To …

Overcoming catastrophic forgetting by incremental moment matching

SW Lee, JH Kim, J Jun, JW Ha… - Advances in neural …, 2017 - proceedings.neurips.cc
Catastrophic forgetting is a problem in which a neural network loses the information of the first
task after training on the second task. Here, we propose a method, i.e., incremental moment …

Instance-aware semantic segmentation via multi-task network cascades

J Dai, K He, J Sun - Proceedings of the IEEE conference on …, 2016 - openaccess.thecvf.com
Semantic segmentation research has recently witnessed rapid progress, but many leading
methods are unable to identify object instances. In this paper, we present Multi-task Network …