H Wang, C Qin, Y Bai, Y Zhang, Y Fu - arXiv preprint arXiv:2103.06460, 2021 - arxiv.org
Neural network pruning typically removes connections or neurons from a pretrained, converged model, while a new pruning paradigm, pruning at initialization (PaI), attempts to …
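For context, a minimal sketch of one well-known PaI criterion, SNIP-style connection saliency: weights are scored by |∂L/∂w · w| from a single batch at initialization, and only the top-scoring fraction is kept. The toy model, data, and keep ratio below are illustrative assumptions, not details from this survey.

```python
import torch
import torch.nn as nn

def snip_masks(model, loss_fn, x, y, keep_ratio=0.1):
    """Score weights at initialization by |dL/dw * w| (SNIP-style) and
    keep the top `keep_ratio` fraction globally."""
    loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.dim() > 1]  # weight tensors only
    grads = torch.autograd.grad(loss, params)
    scores = [(g * p).abs() for g, p in zip(grads, params)]
    all_scores = torch.cat([s.flatten() for s in scores])
    k = max(1, int(keep_ratio * all_scores.numel()))
    threshold = torch.topk(all_scores, k).values[-1]
    return [(s >= threshold).float() for s in scores]

# Illustrative usage on a toy MLP with random data (assumed, not from the paper).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
masks = snip_masks(model, nn.CrossEntropyLoss(), x, y, keep_ratio=0.1)
```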
Recently, Vision Transformer (ViT) has continually established new milestones in the computer vision field, while the high computation and memory cost makes its …
Channel pruning has been broadly recognized as an effective technique to reduce the computation and memory cost of deep convolutional neural networks. However …
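As a concrete illustration (not this paper's specific method), a common channel-pruning criterion ranks a convolution's output channels by the L1 norm of their filters and rebuilds a thinner layer from the survivors; the keep ratio here is an assumption.

```python
import torch
import torch.nn as nn

def l1_channel_scores(conv: nn.Conv2d):
    """Score each output channel by the L1 norm of its filter,
    a standard channel-pruning criterion (Li et al., 2017)."""
    # weight shape: (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def prune_channels(conv: nn.Conv2d, keep_ratio=0.5) -> nn.Conv2d:
    """Build a thinner conv keeping the highest-scoring channels.
    A standalone sketch; in a full network, the next layer's input
    channels must also be pruned to match."""
    scores = l1_channel_scores(conv)
    k = max(1, int(keep_ratio * conv.out_channels))
    keep = torch.topk(scores, k).indices.sort().values
    new_conv = nn.Conv2d(conv.in_channels, k, conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()
    return new_conv
```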
S Bibikar, H Vikalo, Z Wang, X Chen - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices. Unfortunately, current deep networks remain not only too …
The deployment constraints in practical applications necessitate pruning large-scale deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket …
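A minimal sketch of promoting weight sparsity via global magnitude pruning, using PyTorch's built-in pruning utilities; the toy model and 90% sparsity level are assumptions for illustration.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(100, 300), nn.ReLU(), nn.Linear(300, 10))
params_to_prune = [(m, "weight") for m in model if isinstance(m, nn.Linear)]

# Zero out the 90% of weights with smallest magnitude, ranked globally
# across all listed layers rather than per layer.
prune.global_unstructured(params_to_prune,
                          pruning_method=prune.L1Unstructured,
                          amount=0.9)
```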
J Liu, P Ram, Y Yao, G Liu, Y Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process to remove the influence of specific examples from a given model …
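One simple approximate-unlearning baseline (a generic illustration, not necessarily this paper's method) fine-tunes the model by gradient ascent on the forget set so its loss on those examples increases; the learning rate and step count are assumptions.

```python
import torch

def gradient_ascent_unlearn(model, forget_loader, loss_fn, lr=1e-4, steps=10):
    """Approximate-unlearning baseline: take gradient-ascent steps on the
    forget set so the model's loss on those examples increases."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    it = iter(forget_loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(forget_loader)
            x, y = next(it)
        opt.zero_grad()
        loss = -loss_fn(model(x), y)  # negate: ascend the loss on forget data
        loss.backward()
        opt.step()
    return model
```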
The lottery ticket hypothesis (LTH) has shown that dense models contain highly sparse subnetworks (i.e., winning tickets) that can be trained in isolation to match full accuracy …
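A minimal sketch of the iterative magnitude pruning recipe behind winning tickets: train, prune a fraction of the smallest-magnitude weights, rewind the survivors to their initial values, and repeat. The train_fn hook, per-round 20% pruning rate, and round count are assumed placeholders.

```python
import copy
import torch

def find_winning_ticket(model, train_fn, rounds=3, prune_frac=0.2):
    """Iterative magnitude pruning (Frankle & Carbin, 2019): each round
    trains, prunes `prune_frac` of the remaining weights, and rewinds
    the survivors to their original initialization."""
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(p)
             for n, p in model.named_parameters() if p.dim() > 1}
    for _ in range(rounds):
        train_fn(model, masks)  # assumed: trains while keeping masked weights at zero
        for n, p in model.named_parameters():
            if n not in masks:
                continue
            alive = p.detach().abs()[masks[n].bool()]
            thresh = torch.quantile(alive, prune_frac)
            masks[n] *= (p.detach().abs() >= thresh).float()
        model.load_state_dict(init_state)  # rewind weights to initialization
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])
    return model, masks
```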
Recently, bilevel optimization (BLO) has taken center stage in several exciting developments in signal processing (SP) and machine learning (ML). Roughly …
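Roughly, BLO nests an inner problem w*(λ) = argmin_w f(λ, w) inside an outer problem min_λ F(λ, w*(λ)); a common approximation differentiates through one unrolled inner gradient step. The toy quadratic losses below are assumptions, chosen so that λ should converge to 3.

```python
import torch

# Inner: w*(lam) = argmin_w f(lam, w).  Outer: min_lam F(lam, w*(lam)).
# Toy quadratics (assumed): f(lam, w) = (w - lam)^2 and F = (w* - 3)^2,
# so the unrolled hypergradient should drive lam toward 3.
lam = torch.tensor(0.0, requires_grad=True)
outer_opt = torch.optim.SGD([lam], lr=0.1)

for _ in range(200):
    w = torch.tensor(0.0, requires_grad=True)
    inner_loss = (w - lam) ** 2
    # One unrolled inner gradient step, kept in the graph (create_graph=True)
    # so the outer gradient can flow through it.
    g, = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_star = w - 0.5 * g  # inner lr = 0.5 solves this quadratic in one step
    outer_loss = (w_star - 3.0) ** 2
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()

print(lam.item())  # ≈ 3.0
```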
Large neural networks can be pruned to a small fraction of their original size, with little loss in accuracy, by following a time-consuming "train, prune, re-train" approach. Frankle & …