Continuous-in-depth neural networks

AF Queiruga, NB Erichson, D Taylor… - arXiv preprint arXiv:2008.02389, 2020 - arxiv.org
Recent work has attempted to interpret residual networks (ResNets) as one step of a forward Euler discretization of an ordinary differential equation, focusing mainly on syntactic algebraic similarities between the two systems. Discrete dynamical integrators of continuous dynamical systems, however, have a much richer structure. We first show that ResNets fail to be meaningful dynamical integrators in this richer sense. We then demonstrate that neural network models can learn to represent continuous dynamical systems, with this richer structure and properties, by embedding them into higher-order numerical integration schemes, such as Runge-Kutta schemes. Based on these insights, we introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures. ContinuousNets exhibit an invariance to the particular computational graph manifestation. That is, the continuous-in-depth model can be evaluated with different discrete time step sizes, which changes the number of layers, and with different numerical integration schemes, which changes the graph connectivity. We show that this can be used to develop an incremental-in-depth training scheme that improves model quality, while significantly decreasing training time. We also show that, once trained, the number of units in the computational graph can even be decreased, for faster inference with little-to-no accuracy drop.
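The following is a minimal sketch, not the authors' reference implementation, of the correspondence the abstract describes: a ResNet block x_{t+1} = x_t + f(x_t) is one forward Euler step of dx/dt = f(x, t), while the same learned vector field can be integrated with a higher-order Runge-Kutta scheme and an arbitrary number of steps. The particular parameterization of f below (a single tanh layer with weights W, b) is a hypothetical stand-in for a trained residual module.

```python
# Sketch of the continuous-in-depth view, assuming a toy vector field f(x, t).
# The weights W, b are hypothetical; in ContinuousNet-style models f would be
# a trained neural network module shared across integration steps.
import numpy as np

rng = np.random.default_rng(0)
dim = 4
W = rng.normal(scale=0.1, size=(dim, dim))  # hypothetical layer weights
b = np.zeros(dim)

def f(x, t):
    """Learned vector field f(x, t); here a toy time-independent tanh layer."""
    return np.tanh(W @ x + b)

def euler_step(x, t, h):
    """One forward Euler step: the 'ResNet block' view, x + h * f(x, t)."""
    return x + h * f(x, t)

def rk4_step(x, t, h):
    """One classical Runge-Kutta (RK4) step of the same vector field."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(x0, step_fn, n_steps, t0=0.0, t1=1.0):
    """Evaluate the continuous-in-depth model with a chosen scheme and step count.

    Changing n_steps changes the number of 'layers'; changing step_fn changes
    the connectivity of the unrolled computational graph.
    """
    h = (t1 - t0) / n_steps
    x, t = x0, t0
    for _ in range(n_steps):
        x = step_fn(x, t, h)
        t += h
    return x

x0 = rng.normal(size=dim)
# The same underlying continuous model, manifested as different graphs:
print(integrate(x0, euler_step, n_steps=8))  # eight "ResNet blocks"
print(integrate(x0, rk4_step, n_steps=2))    # two RK4 blocks over the same interval
```

Under this reading, the graph-manifestation invariance claimed in the abstract corresponds to the freedom to swap step_fn and n_steps at evaluation time while keeping the learned f fixed.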