Decoupled greedy learning of cnns for synchronous and asynchronous distributed learning

E Belilovsky, L Leconte, L Caccia, M Eickenberg…
arXiv preprint arXiv:2106.06401, 2021 - arxiv.org
A commonly cited inefficiency of neural network training using back-propagation is the update locking problem: each layer must wait for the signal to propagate through the full network before updating. Several alternatives that can alleviate this issue have been proposed. In this context, we consider a simple alternative based on minimal feedback, which we call Decoupled Greedy Learning (DGL). It is based on a classic greedy relaxation of the joint training objective, recently shown to be effective in the context of Convolutional Neural Networks (CNNs) on large-scale image classification. We consider an optimization of this objective that permits us to decouple the layer training, allowing layers or modules in the network to be trained in parallel, with potentially linear speedup. Using a replay buffer, we show that this approach can be extended to asynchronous settings, where modules can operate and continue to update even under large communication delays. To address bandwidth and memory issues, we propose an approach based on online vector quantization, which drastically reduces the communication bandwidth between modules and the memory required for replay buffers. We show theoretically and empirically that this approach converges, and we compare it to sequential solvers. We demonstrate the effectiveness of DGL against alternative approaches on the CIFAR-10 dataset and on the large-scale ImageNet dataset.
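The core decoupling described in the abstract can be sketched as follows: each module is paired with a small auxiliary classifier that supplies a local (greedy) loss, and the activation handed to the next module is detached so no end-to-end backward pass is required. The PyTorch code below is only an illustrative sketch under assumed module sizes, auxiliary-head shapes, and hyperparameters, not the authors' implementation; the asynchronous variant would additionally place a (possibly vector-quantized) replay buffer between modules so each can update at its own pace.

```python
import torch
import torch.nn as nn

# Illustrative sketch of decoupled greedy learning (DGL).
# Each module has its own auxiliary classifier and optimizer; the output
# passed downstream is detached, so no gradient crosses module boundaries
# and no module waits for a full-network backward pass (no update locking).
# Module widths and hyperparameters here are assumptions, not the paper's.

def make_module(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

def make_aux_head(ch, num_classes=10):
    # Small auxiliary classifier providing the local training signal.
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(ch, num_classes))

modules = [make_module(3, 64), make_module(64, 128), make_module(128, 256)]
heads = [make_aux_head(64), make_aux_head(128), make_aux_head(256)]
opts = [torch.optim.SGD(list(m.parameters()) + list(h.parameters()), lr=0.1)
        for m, h in zip(modules, heads)]
criterion = nn.CrossEntropyLoss()

def train_step(x, y):
    # Greedy, decoupled update: each module only needs its input activations
    # and labels, never a gradient from the rest of the network. In the
    # asynchronous setting, x would be drawn from a replay buffer instead.
    for module, head, opt in zip(modules, heads, opts):
        opt.zero_grad()
        out = module(x)
        loss = criterion(head(out), y)
        loss.backward()      # gradients stay local to this module + head
        opt.step()
        x = out.detach()     # block gradient flow to downstream modules
```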