Why should we add early exits to neural networks?

S Scardapane, M Scarpiniti, E Baccarelli… - Cognitive Computation, 2020 - Springer
Deep neural networks are generally designed as a stack of differentiable layers, in which a
prediction is obtained only after running the full stack. Recently, some contributions have …
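
To make the early-exit idea concrete, here is a minimal sketch, assuming PyTorch: auxiliary classifiers are attached after intermediate blocks, and at inference the network returns the first prediction whose softmax confidence clears a threshold. The class name `EarlyExitNet` and the batch-level exit rule are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Sketch: a stack of blocks with an auxiliary classifier
    (early exit) attached after each block."""
    def __init__(self, dim=64, num_classes=10, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)]
        )
        self.exits = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(num_blocks)]
        )

    def forward(self, x, threshold=0.9):
        # At inference, stop at the first exit whose softmax confidence
        # clears the threshold; otherwise fall through to the final exit.
        # (Per-sample exiting is more typical; the whole-batch test here
        # keeps the sketch short.)
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits = exit_head(x)
            conf = torch.softmax(logits, dim=-1).max(dim=-1).values
            if not self.training and conf.min() >= threshold:
                return logits  # early exit: skip the remaining blocks
        return logits

net = EarlyExitNet().eval()
print(net(torch.randn(4, 64)).shape)  # torch.Size([4, 10])
```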

A survey on green deep learning

J Xu, W Zhou, Z Fu, H Zhou, L Li - arXiv preprint arXiv:2111.05193, 2021 - arxiv.org
In recent years, larger and deeper models have been springing up, continuously pushing state-
of-the-art (SOTA) results across various fields like natural language processing (NLP) and …

Greedy layerwise learning can scale to imagenet

E Belilovsky, M Eickenberg… - … conference on machine …, 2019 - proceedings.mlr.press
Shallow supervised 1-hidden layer neural networks have a number of favorable properties
that make them easier to interpret, analyze, and optimize than their deep counterparts, but …
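
A hedged sketch of the greedy layer-wise procedure the paper builds on: each stage trains one hidden layer plus a throwaway auxiliary classifier on top of the frozen layers trained so far. The function `train_greedy_layerwise` and its hyperparameters are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn as nn

def train_greedy_layerwise(data_loader, widths, num_classes=10, epochs=1):
    """Each stage trains a 1-hidden-layer network (layer + auxiliary
    head) on top of frozen, previously trained layers."""
    trained = []                     # frozen feature layers built so far
    in_dim = widths[0]
    for width in widths[1:]:
        layer = nn.Sequential(nn.Linear(in_dim, width), nn.ReLU())
        head = nn.Linear(width, num_classes)   # auxiliary classifier
        opt = torch.optim.SGD(list(layer.parameters()) + list(head.parameters()), lr=0.1)
        for _ in range(epochs):
            for x, y in data_loader:
                with torch.no_grad():          # earlier layers stay frozen
                    for f in trained:
                        x = f(x)
                loss = nn.functional.cross_entropy(head(layer(x)), y)
                opt.zero_grad(); loss.backward(); opt.step()
        trained.append(layer.eval())
        in_dim = width
    return nn.Sequential(*trained)

# Toy usage with a single random batch:
loader = [(torch.randn(8, 32), torch.randint(0, 10, (8,)))]
feats = train_greedy_layerwise(loader, widths=[32, 64, 64])
```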

Revisiting locally supervised learning: an alternative to end-to-end training

Y Wang, Z Ni, S Song, L Yang, G Huang - arXiv preprint arXiv:2101.10832, 2021 - arxiv.org
Due to the need to store the intermediate activations for back-propagation, end-to-end (E2E)
training of deep networks usually suffers from a high GPU memory footprint. This paper aims …
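
A minimal sketch of the locally supervised idea, assuming PyTorch: calling `detach()` between modules cuts the backward graph, so each module's local loss retains only its own activations. The two-module split and the auxiliary head `head1` are illustrative.

```python
import torch
import torch.nn as nn

# Two modules, each with its own local loss; detach() severs the graph
# between them, so no cross-module activations are kept for backprop.
f1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
f2 = nn.Linear(64, 10)
head1 = nn.Linear(64, 10)  # local auxiliary head for module f1
opt = torch.optim.Adam(
    list(f1.parameters()) + list(f2.parameters()) + list(head1.parameters())
)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
h = f1(x)
loss1 = nn.functional.cross_entropy(head1(h), y)        # local loss for f1
loss2 = nn.functional.cross_entropy(f2(h.detach()), y)  # no gradient path into f1
(loss1 + loss2).backward()
opt.step()
```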

Deep cascade learning

ES Marquez, JS Hare… - IEEE transactions on …, 2018 - ieeexplore.ieee.org
In this paper, we propose a novel approach for efficient training of deep neural networks in a
bottom-up fashion using a layered structure. Our algorithm, which we refer to as deep …
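
A hedged sketch of cascade-style bottom-up training: after a layer is trained against an auxiliary head, its outputs on the training set are cached, and the next layer trains on the stored features, so earlier layers are never re-executed. The shapes and hyperparameters below are placeholders, not the paper's.

```python
import torch
import torch.nn as nn

features = torch.randn(100, 32)           # stand-in for a training set
labels = torch.randint(0, 10, (100,))
for width in (64, 64):
    layer = nn.Sequential(nn.Linear(features.size(1), width), nn.ReLU())
    head = nn.Linear(width, 10)            # per-stage auxiliary classifier
    opt = torch.optim.Adam(list(layer.parameters()) + list(head.parameters()))
    for _ in range(50):                    # train this stage only
        loss = nn.functional.cross_entropy(head(layer(features)), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        features = layer(features)         # cache features for the next stage
```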

Spiking neural networks and bio-inspired supervised deep learning: a survey

G Lagani, F Falchi, C Gennaro, G Amato - arXiv preprint arXiv:2307.16235, 2023 - arxiv.org
For a long time, the fields of biology and neuroscience have been a great source of inspiration
for computer scientists working towards the development of Artificial Intelligence (AI) technologies. This …

Self-organized operational neural networks with generative neurons

S Kiranyaz, J Malik, HB Abdallah, T Ince, A Iosifidis… - Neural Networks, 2021 - Elsevier
Operational Neural Networks (ONNs) have recently been proposed to address the
well-known limitations and drawbacks of conventional Convolutional Neural Networks …
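
As a sketch of the generative-neuron idea, under the assumption that the nodal operator is learned as a truncated Maclaurin series (a sum over q of conv(w_q, x^q)), a layer might look like the following; the name `GenerativeConv2d` and the choice Q=3 are illustrative.

```python
import torch
import torch.nn as nn

class GenerativeConv2d(nn.Module):
    """Sketch of a generative neuron: the nodal operator is a learned
    Q-term Maclaurin series, sum_q conv(w_q, x**q)."""
    def __init__(self, in_ch, out_ch, k=3, Q=3):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=(q == 0))
             for q in range(Q)]   # one conv per power term; bias only once
        )

    def forward(self, x):
        return sum(conv(x ** (q + 1)) for q, conv in enumerate(self.convs))

layer = GenerativeConv2d(3, 8)
print(layer(torch.randn(1, 3, 16, 16)).shape)  # torch.Size([1, 8, 16, 16])
```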

Operational neural networks

S Kiranyaz, T Ince, A Iosifidis, M Gabbouj - Neural Computing and …, 2020 - Springer
Feed-forward, fully connected artificial neural networks, or the so-called multi-layer
perceptrons, are well-known universal approximators. However, their learning performance …
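
A minimal sketch of the ONN neuron model, y = f(P(Ψ(w, x))): the usual multiply (nodal operator) and sum (pool operator) of a perceptron are swapped for configurable operators. The particular operator choices below are illustrative, not the paper's operator library.

```python
import torch
import torch.nn as nn

class OperationalNeuron(nn.Module):
    """Sketch of an ONN-style neuron: y = f(P(Psi(w, x))), generalizing
    the perceptron's multiply-and-sum with arbitrary operators."""
    def __init__(self, in_dim, nodal=torch.mul, pool=torch.sum, act=torch.tanh):
        super().__init__()
        self.w = nn.Parameter(torch.randn(in_dim) * 0.1)
        self.nodal, self.pool, self.act = nodal, pool, act

    def forward(self, x):                       # x: (batch, in_dim)
        z = self.nodal(self.w, x)               # elementwise nodal operator
        return self.act(self.pool(z, dim=-1))   # pool, then activation

# E.g. a sinusoidal nodal operator with median pooling:
neuron = OperationalNeuron(16,
                           nodal=lambda w, x: torch.sin(w * x),
                           pool=lambda z, dim: z.median(dim=dim).values)
print(neuron(torch.randn(4, 16)).shape)  # torch.Size([4])
```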

Module-wise training of neural networks via the minimizing movement scheme

S Karkar, I Ayed, E de Bézenac… - Advances in Neural …, 2024 - proceedings.neurips.cc
Greedy layer-wise or module-wise training of neural networks is compelling in constrained
and on-device settings where memory is limited, as it circumvents a number of problems of …
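
A hedged sketch of module-wise training with a movement (transport) penalty: each module is trained against a local loss while also being penalized for how far it displaces its input, ||F(h) − h||². The penalty weight `lambda_t` and the auxiliary head are assumptions made for illustration, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

F = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 10)                 # local auxiliary classifier
opt = torch.optim.Adam(list(F.parameters()) + list(head.parameters()))
lambda_t = 0.1                           # weight of the movement penalty

h, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
out = F(h)
loss = nn.functional.cross_entropy(head(out), y) \
     + lambda_t * (out - h).pow(2).sum(dim=-1).mean()  # ||F(h) - h||^2 term
loss.backward()
opt.step()
```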

Heterogeneous multilayer generalized operational perceptron

DT Tran, S Kiranyaz, M Gabbouj… - IEEE transactions on …, 2019 - ieeexplore.ieee.org
The traditional multilayer perceptron (MLP) using a McCulloch-Pitts neuron model is
inherently limited to a set of neuronal activities, i.e., a linear weighted sum followed by …
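
A sketch of a heterogeneous GOP-style layer, where each neuron replaces the fixed "linear weighted sum followed by activation" with its own (nodal, pool, activation) operator triple; the small operator library here is illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# Illustrative operator library: (nodal, pool, activation) triples.
OPERATOR_SETS = [
    (torch.mul, lambda z: z.sum(-1), torch.tanh),                      # classic perceptron
    (lambda w, x: torch.sin(w * x), lambda z: z.sum(-1), torch.tanh),  # sinusoidal nodal
    (torch.mul, lambda z: z.max(-1).values, torch.sigmoid),            # max pooling
]

class GOPLayer(nn.Module):
    """Heterogeneous layer: neuron i uses operator triple ops[i]."""
    def __init__(self, in_dim, ops):
        super().__init__()
        self.ops = ops
        self.w = nn.Parameter(torch.randn(len(ops), in_dim) * 0.1)

    def forward(self, x):  # x: (batch, in_dim) -> (batch, num_neurons)
        outs = [act(pool(nodal(w_i, x)))
                for w_i, (nodal, pool, act) in zip(self.w, self.ops)]
        return torch.stack(outs, dim=-1)

layer = GOPLayer(16, OPERATOR_SETS)
print(layer(torch.randn(4, 16)).shape)  # torch.Size([4, 3])
```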