Why should we add early exits to neural networks?

S Scardapane, M Scarpiniti, E Baccarelli… - Cognitive Computation, 2020 - Springer
Deep neural networks are generally designed as a stack of differentiable layers, in which a
prediction is obtained only after running the full stack. Recently, some contributions have …
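
For intuition only, here is a minimal sketch of the early-exit idea the snippet describes: an auxiliary classifier is attached partway through the stack so that confident inputs can skip the remaining layers. This is not the authors' implementation; the layer sizes, the batch-level confidence rule, and the threshold are all assumptions made for the example.

    import torch
    import torch.nn as nn

    class EarlyExitMLP(nn.Module):
        """Toy network with one auxiliary (early) exit and one final exit."""
        def __init__(self, in_dim=32, hidden=64, n_classes=10, threshold=0.9):
            super().__init__()
            self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.exit1 = nn.Linear(hidden, n_classes)    # early (auxiliary) exit head
            self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
            self.exit2 = nn.Linear(hidden, n_classes)    # final exit head
            self.threshold = threshold

        def forward(self, x):
            h = self.block1(x)
            p1 = torch.softmax(self.exit1(h), dim=-1)
            # Stop early only if every sample in the batch is confident enough.
            if p1.max(dim=-1).values.min() >= self.threshold:
                return p1
            return torch.softmax(self.exit2(self.block2(h)), dim=-1)

    x = torch.randn(4, 32)
    print(EarlyExitMLP()(x).shape)    # torch.Size([4, 10])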

Besov function approximation and binary classification on low-dimensional manifolds using convolutional residual networks

H Liu, M Chen, T Zhao, W Liao - International Conference on …, 2021 - proceedings.mlr.press
Most existing statistical theories of deep neural networks have sample complexities
cursed by the data dimension and therefore cannot adequately explain the empirical success of …

Approximation and non-parametric estimation of ResNet-type convolutional neural networks

K Oono, T Suzuki - International conference on machine …, 2019 - proceedings.mlr.press
Convolutional neural networks (CNNs) have been shown to achieve optimal approximation
and estimation error rates (in the minimax sense) in several function classes. However, previous …
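
The "ResNet-type" CNNs in question are built from residual blocks of the form x + F(x), where F is a small convolutional subnetwork. A minimal sketch of one such block follows; the channel count and kernel size are arbitrary choices for illustration.

    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        """One residual block: output = x + conv2(relu(conv1(x)))."""
        def __init__(self, channels=16):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, x):
            # Identity skip connection keeps input and output shapes equal.
            return x + self.conv2(torch.relu(self.conv1(x)))

    x = torch.randn(1, 16, 8, 8)
    print(ResBlock()(x).shape)    # torch.Size([1, 16, 8, 8])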

Building robust ensembles via margin boosting

D Zhang, H Zhang, A Courville… - International …, 2022 - proceedings.mlr.press
In the context of adversarial robustness, a single model does not usually have enough
power to defend against all possible adversarial attacks, and as a result, has sub-optimal …

Neural network architecture based on gradient boosting for IoT traffic prediction

M Lopez-Martin, B Carro… - Future Generation …, 2019 - Elsevier
Network traffic forecasting is an operational and management function that is critical for any
data network. It is even more important for IoT networks given the number of connected …

Nonparametric teaching for multiple learners

C Zhang, X Cao, W Liu, I Tsang… - Advances in Neural …, 2024 - proceedings.neurips.cc
We study the problem of teaching multiple learners simultaneously in the nonparametric
iterative teaching setting, where the teacher iteratively provides examples to the learners for …

Approximation with CNNs in Sobolev space: with applications to classification

G Shen, Y Jiao, Y Lin, J Huang - Advances in neural …, 2022 - proceedings.neurips.cc
We derive a novel approximation error bound with an explicit prefactor for Sobolev-regular
functions using deep convolutional neural networks (CNNs). The bound is non-asymptotic in …

Nonparametric iterative machine teaching

C Zhang, X Cao, W Liu, I Tsang… - … Conference on Machine …, 2023 - proceedings.mlr.press
In this paper, we consider the problem of Iterative Machine Teaching (IMT), where the
teacher provides examples to the learner iteratively such that the learner can achieve fast …
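
As a toy illustration of the iterative-teaching protocol (not the nonparametric, functional-gradient algorithm developed in the paper), the sketch below pairs a greedy teacher with a simple linear learner; the target w_star, the example pool, and the projection-style update are all assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    w_star = np.array([2.0, -1.0])          # target model known only to the teacher
    pool = rng.normal(size=(200, 2))        # candidate teaching examples
    w = np.zeros(2)                         # learner's current model

    for t in range(25):
        # Teacher: pick the pool example on which the learner currently errs most.
        errs = (pool @ (w - w_star)) ** 2
        x = pool[np.argmax(errs)]
        y = x @ w_star                      # label revealed by the teacher
        # Learner: projection-style update that fits the taught example exactly.
        w -= ((x @ w - y) / (x @ x)) * x

    print(np.round(w, 3))                   # approaches w_star = [2, -1]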

An interpretive constrained linear model for ResNet and MgNet

J He, J Xu, L Zhang, J Zhu - Neural Networks, 2023 - Elsevier
We propose a constrained linear data-feature-mapping model as an interpretable
mathematical model for image classification using a convolutional neural network (CNN) …

Transport analysis of infinitely deep neural network

S Sonoda, N Murata - Journal of Machine Learning Research, 2019 - jmlr.org
We investigated the feature map inside deep neural networks (DNNs) by tracking the
transport map. We are interested in the role of depth: why do DNNs perform better than …
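
One way to picture the transport view of depth: a stack of residual updates x_{l+1} = x_l + h f(x_l) is the Euler discretization of a flow dx/dt = f(x), so layers play the role of time steps that transport the data distribution. The numerical sketch below uses a made-up vector field that pushes points toward the unit circle; it illustrates the discretization only, not the paper's ridgelet/transport analysis.

    import numpy as np

    def f(x):
        # Illustrative vector field: points are attracted to the unit circle.
        norm = np.linalg.norm(x, axis=-1, keepdims=True)
        return (1.0 - norm) * x

    x = np.random.default_rng(0).normal(size=(5, 2))
    h = 0.1                                  # step size, i.e. one "layer"
    for _ in range(100):                     # 100 residual layers = Euler steps
        x = x + h * f(x)                     # x_{l+1} = x_l + h f(x_l)

    print(np.round(np.linalg.norm(x, axis=-1), 3))   # norms approach 1.0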