Transfer-learning methods aim to improve performance in a data-scarce target domain using a model pretrained on a data-rich source domain. A cost-efficient strategy, linear probing …
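A minimal sketch of the linear-probing strategy named in this snippet, assuming a PyTorch/torchvision setup; the backbone (resnet18), target class count, and optimizer settings are illustrative assumptions, not details from the cited work:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on the data-rich source domain
# (resnet18 is an illustrative choice, not from the snippet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Linear probing: freeze every pretrained parameter ...
for param in model.parameters():
    param.requires_grad = False

# ... and train only a fresh linear head for the target domain.
num_target_classes = 10  # assumption for the example
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Only the new head is optimized; the backbone stays fixed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```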
Work on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) has recently drawn considerable attention to post-training pruning (iterative magnitude pruning) and before …
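For orientation, a rough sketch of the magnitude-pruning idea behind LTH-style post-training pruning; the one-shot global variant, sparsity level, and toy model below are assumptions for illustration (iterative magnitude pruning would repeat this prune step over several train-prune-rewind cycles):

```python
import torch
import torch.nn as nn

def global_magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the smallest-magnitude weights across all linear and
    conv layers (one-shot global magnitude pruning; IMP repeats this
    over several train-prune-rewind cycles)."""
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(int(sparsity * scores.numel()), 1)
    threshold = scores.kthvalue(k).values  # k-th smallest magnitude
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())

# Toy usage: prune a small MLP to 90% sparsity.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
global_magnitude_prune(net, sparsity=0.9)
```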
S Liu, L Yin, DC Mocanu… - … on Machine Learning, 2021 - proceedings.mlr.press
In this paper, we introduce a new perspective on training deep neural networks capable of state-of-the-art performance without the need for the expensive over-parameterization by …
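One way to read "training without expensive over-parameterization" is dynamic sparse training, where a fixed parameter budget is maintained by periodically dropping weak connections and growing new ones. The sketch below is a generic SET-style prune-and-grow step, not the specific method of the cited paper; the drop fraction and random growth criterion are assumptions:

```python
import torch

def prune_and_grow(weight: torch.Tensor, mask: torch.Tensor,
                   frac: float = 0.3) -> torch.Tensor:
    """One SET-style update: drop the weakest fraction of active weights
    by magnitude, then regrow the same number at random inactive
    positions (real methods differ in schedule and growth criterion)."""
    active = mask.nonzero(as_tuple=False)
    n_update = int(frac * active.size(0))
    if n_update == 0:
        return mask
    # Prune: deactivate the weakest active connections.
    magnitudes = weight[mask.bool()].abs()  # row-major order matches `active`
    drop = magnitudes.topk(n_update, largest=False).indices
    mask[tuple(active[drop].t())] = 0
    # Grow: reactivate random positions that are currently inactive.
    inactive = (mask == 0).nonzero(as_tuple=False)
    grow = torch.randperm(inactive.size(0))[:n_update]
    mask[tuple(inactive[grow].t())] = 1
    return mask

# Toy usage: a 2-D weight kept at ~90% sparsity across one update.
w = torch.randn(64, 64)
m = (torch.rand(64, 64) < 0.1).float()
m = prune_and_grow(w, m, frac=0.3)
w.mul_(m)  # if the mask is applied every step, regrown weights start at zero
```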
Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions with the environment. This leads to a long training time for dense neural networks to achieve …
S Liu, Z Wang - arXiv preprint arXiv:2302.02596, 2023 - arxiv.org
This article does not propose any novel algorithm or new hardware for sparsity. Instead, it aims to serve the "common good" for the increasingly prosperous Sparse Neural Network …
A fundamental task for artificial intelligence is learning. Deep neural networks have proven effective across all learning paradigms, i.e., supervised, unsupervised, and …
The recent increase in the amount of high-dimensional data brings major complications, including high computational costs and memory requirements. Feature selection, which …
K Liu, Z Atashgahi, G Sokar… - International …, 2024 - proceedings.mlr.press
Feature selection algorithms aim to select a subset of informative features from a dataset to reduce the data dimensionality, consequently saving resource consumption and improving …
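As a generic illustration of selecting features with a trained network (not the specific algorithm behind this snippet), one can rank input features by the aggregate connection strength of each input neuron in the first layer; the layer size and value of k below are assumptions:

```python
import torch
import torch.nn as nn

def top_k_features(first_layer: nn.Linear, k: int) -> torch.Tensor:
    """Score each input feature by the summed absolute weight of its
    outgoing connections, then keep the k strongest (a generic
    neuron-strength heuristic, used purely for illustration)."""
    strength = first_layer.weight.detach().abs().sum(dim=0)
    return strength.topk(k).indices

layer = nn.Linear(100, 32)             # stands in for a trained first layer
selected = top_k_features(layer, k=10)
print(selected)  # indices of the 10 highest-scoring input features
```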
This paper does not describe a novel method. Instead, it studies an incremental, yet must-know baseline given the recent progress in sparse neural network training and Generative …