Recent successes in deep learning for vision and natural language processing are attributed to larger models but come with energy consumption and scalability issues. Current …
L Cheng, Y Gu, Q Liu, L Yang, C Liu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The amalgamation of artificial intelligence with Internet of Things (AIoT) devices has seen a rapid surge in growth, largely due to the effective implementation of deep neural network …
Artificial intelligence (AI) research and market have grown rapidly in the last few years, and this trend is expected to continue with many potential advancements and innovations in this …
Forward gradients, the idea of using directional derivatives computed in forward-mode differentiation, have recently been shown to be usable for neural network training while avoiding …
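The forward-gradient idea referenced in this snippet can be sketched as follows: sample a random tangent direction, compute the directional derivative of the loss along it in a single forward pass, and scale the direction by that scalar to obtain an unbiased gradient estimate. This is a minimal illustration, assuming a toy quadratic loss and a finite-difference stand-in for forward-mode AD (a real implementation would use a forward-mode tool such as `jax.jvp`); all function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    # toy quadratic loss; its true gradient is simply `theta`
    return 0.5 * np.dot(theta, theta)

def directional_derivative(theta, v, eps=1e-6):
    # directional derivative ∇f(θ)·v, approximated by a central
    # finite difference here as a stand-in for forward-mode AD
    return (f(theta + eps * v) - f(theta - eps * v)) / (2 * eps)

def forward_gradient(theta, rng):
    # sample a random tangent v ~ N(0, I); d·v is an unbiased
    # estimate of ∇f(θ) because E[v vᵀ] = I
    v = rng.standard_normal(theta.shape)
    d = directional_derivative(theta, v)
    return d * v

theta = np.array([1.0, -2.0, 3.0])
# averaging many single-sample estimates recovers the true gradient
est = np.mean([forward_gradient(theta, rng) for _ in range(20000)], axis=0)
print(est)  # close to the true gradient [1, -2, 3]
```

Each estimate needs only one forward evaluation per direction, which is what lets such methods avoid end-to-end backpropagation, at the cost of higher gradient variance.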
Deep neural networks conventionally employ end-to-end backpropagation for their training process, which lacks biological plausibility and creates a locking problem during network …
Distributed deep learning frameworks like federated learning (FL) and its variants are enabling personalized experiences across a wide range of web clients and mobile/IoT …
C Ma, J Wu, C Si, KC Tan - arXiv preprint arXiv:2402.17318, 2024 - arxiv.org
Deep neural networks are typically trained using global error signals that backpropagate (BP) end-to-end, which is not only biologically implausible but also suffers from the update …
F Liang, Z Zhang, H Lu, V Leung, Y Guo… - arXiv preprint arXiv …, 2024 - arxiv.org
With the rapid growth in the volume of data sets, models, and devices in the domain of deep learning, there is increasing attention on large-scale distributed deep learning. In contrast to …
End-to-end (E2E) training has become the de facto standard for training modern deep networks, e.g., ConvNets and vision Transformers (ViTs). Typically, a global error signal …