A survey on graph counterfactual explanations: definitions, methods, evaluation, and research challenges

MA Prado-Romero, B Prenkaj, G Stilo… - ACM Computing …, 2024 - dl.acm.org
Graph Neural Networks (GNNs) perform well in community detection and molecule
classification. Counterfactual Explanations (CE) provide counter-examples to overcome the …
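The core object the survey studies can be stated in a few lines: a counterfactual explanation for a graph classifier f is a nearby graph, here one edge-toggle away, that receives a different label. A minimal sketch in Python; the toy classifier and brute-force search are illustrative placeholders, not a method from the survey:

    import numpy as np

    def f(adj):
        # Toy stand-in classifier: label 1 iff the graph has more than 3 edges.
        return int(adj.sum() // 2 > 3)

    def one_flip_counterfactual(adj):
        # Search all single-edge toggles for the closest graph with a new label.
        original = f(adj)
        n = adj.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                candidate = adj.copy()
                candidate[i, j] = candidate[j, i] = 1 - candidate[i, j]
                if f(candidate) != original:
                    return candidate
        return None  # no counterfactual within edit distance 1

    A = np.zeros((4, 4), dtype=int)
    for i, j in [(0, 1), (1, 2), (2, 3), (0, 3)]:
        A[i, j] = A[j, i] = 1
    print(f(A), f(one_flip_counterfactual(A)))  # 1 0: one deleted edge flips the label

Real methods replace the brute-force loop with learned or search-based perturbations, which is exactly the design space the survey organizes.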

Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks

T Hoefler, D Alistarh, T Ben-Nun, N Dryden… - Journal of Machine …, 2021 - jmlr.org
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …
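The simplest instance of the pruning this survey covers is removing the smallest-magnitude weights. A minimal global magnitude-pruning sketch (a common baseline in this literature; the 90% sparsity level is arbitrary):

    import torch

    def magnitude_prune(model, sparsity=0.9):
        # Zero the globally smallest-magnitude weights; biases are skipped.
        scores = torch.cat([p.abs().flatten() for p in model.parameters() if p.dim() > 1])
        threshold = torch.quantile(scores, sparsity)
        with torch.no_grad():
            for p in model.parameters():
                if p.dim() > 1:
                    p.mul_((p.abs() > threshold).float())

    model = torch.nn.Sequential(
        torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
    magnitude_prune(model)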

Rigging the lottery: Making all tickets winners

U Evci, T Gale, J Menick, PS Castro… - … on machine learning, 2020 - proceedings.mlr.press
Many applications require sparse neural networks due to space or inference time
restrictions. There is a large body of work on training dense networks to yield sparse …
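RigL trains sparse from the start: at intervals it drops the smallest-magnitude active weights and regrows the same number of connections where the dense gradient is largest, so sparsity stays constant. A sketch of one such update, assuming masked weights are held at zero; the update fraction is illustrative, not the paper's schedule:

    import torch

    def rigl_update(weight, grad, mask, frac=0.1):
        # Drop: smallest-magnitude active weights.
        n_update = int(frac * mask.sum().item())
        drop_scores = weight.detach().abs().masked_fill(mask == 0, float('inf'))
        drop = torch.topk(drop_scores.view(-1), n_update, largest=False).indices
        mask.view(-1)[drop] = 0.0
        # Grow: inactive connections with the largest dense-gradient magnitude.
        grow_scores = grad.abs().masked_fill(mask == 1, float('-inf'))
        grow = torch.topk(grow_scores.view(-1), n_update).indices
        mask.view(-1)[grow] = 1.0
        with torch.no_grad():
            weight.mul_(mask)  # dropped weights zeroed; grown ones start at zero
        return mask

    w = torch.nn.Parameter(torch.randn(32, 32))
    m = (torch.rand_like(w) < 0.2).float()
    w.data.mul_(m)              # enforce the initial sparse topology
    (w.sum() ** 2).backward()   # toy loss, just to populate a dense w.grad
    rigl_update(w, w.grad, m)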

Discrimination-aware channel pruning for deep neural networks

Z Zhuang, M Tan, B Zhuang, J Liu… - Advances in neural …, 2018 - proceedings.neurips.cc
Channel pruning is one of the predominant approaches for deep model compression.
Existing pruning methods either train from scratch with sparsity constraints on channels, or …
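Channel pruning removes whole output channels, so the compressed network stays dense and needs no sparse kernels. The sketch below ranks channels by filter L2 norm as a simple stand-in criterion; the paper's contribution is to select channels with auxiliary discrimination-aware losses instead:

    import torch

    def prune_channels(conv, bn, keep_ratio=0.5):
        # Rank output channels by the L2 norm of their filters; zero the rest,
        # including the matching batch-norm parameters.
        importance = conv.weight.detach().flatten(1).norm(dim=1)
        n_keep = max(1, int(keep_ratio * importance.numel()))
        mask = torch.zeros_like(importance)
        mask[torch.topk(importance, n_keep).indices] = 1.0
        with torch.no_grad():
            conv.weight.mul_(mask.view(-1, 1, 1, 1))
            if conv.bias is not None:
                conv.bias.mul_(mask)
            bn.weight.mul_(mask)
            bn.bias.mul_(mask)

    conv, bn = torch.nn.Conv2d(3, 16, 3), torch.nn.BatchNorm2d(16)
    prune_channels(conv, bn)

A real pipeline would then physically remove the zeroed channels (and the corresponding input channels of the next layer) rather than just masking them.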

Learning efficient convolutional networks through network slimming

Z Liu, J Li, Z Shen, G Huang, S Yan… - Proceedings of the …, 2017 - openaccess.thecvf.com
The deployment of deep convolutional neural networks (CNNs) in many real world
applications is largely hindered by their high computational cost. In this paper, we propose a …
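Network slimming's mechanism: train with an L1 penalty on batch-normalization scale factors (gamma), so unimportant channels are driven toward zero and can be pruned afterwards. A minimal sketch of the training-time penalty; the coefficient is illustrative:

    import torch

    def slimming_penalty(model, lam=1e-4):
        # L1 term on all BN scale factors, added to the task loss.
        return lam * sum(m.weight.abs().sum()
                         for m in model.modules()
                         if isinstance(m, torch.nn.BatchNorm2d))

    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3), torch.nn.BatchNorm2d(16), torch.nn.ReLU())
    x = torch.randn(2, 3, 8, 8)
    loss = model(x).mean() + slimming_penalty(model)  # toy task loss + sparsity term
    loss.backward()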

Learning Sparse Neural Networks through $L_0$ Regularization

C Louizos, M Welling, DP Kingma - arXiv preprint arXiv:1712.01312, 2017 - arxiv.org
We propose a practical method for $L_0$ norm regularization for neural networks: pruning
the network during training by encouraging weights to become exactly zero. Such …
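The paper's device is a "hard concrete" stochastic gate per weight: the gate can be exactly zero yet still admits reparameterized gradients, and the expected number of open gates serves as a differentiable $L_0$ penalty. A condensed sketch using the paper's stretching limits (-0.1, 1.1); the temperature beta is fixed here rather than tuned:

    import math
    import torch

    class L0Gate(torch.nn.Module):
        def __init__(self, shape, beta=0.66, limits=(-0.1, 1.1)):
            super().__init__()
            self.log_alpha = torch.nn.Parameter(torch.zeros(shape))
            self.beta, (self.lo, self.hi) = beta, limits

        def forward(self):
            # Sample a stretched, clamped binary concrete gate in [0, 1].
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
            return (s * (self.hi - self.lo) + self.lo).clamp(0, 1)

        def expected_l0(self):
            # Probability each gate is non-zero; the sum is the L0 penalty.
            return torch.sigmoid(
                self.log_alpha - self.beta * math.log(-self.lo / self.hi)).sum()

    gate = L0Gate((8,))
    w = torch.randn(8) * gate()          # gated weights, some exactly zero
    penalty = 1e-2 * gate.expected_l0()  # added to the training loss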

CF-GNNExplainer: Counterfactual explanations for graph neural networks

A Lucic, MA Ter Hoeve, G Tolomei… - International …, 2022 - proceedings.mlr.press
Given the increasing promise of graph neural networks (GNNs) in real-world applications,
several methods have been developed for explaining their predictions. Existing methods for …
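CF-GNNExplainer's recipe, roughly: learn a perturbation of the adjacency matrix that flips a node's prediction while deleting as few edges as possible. The sketch below follows that general recipe with a stand-in one-layer GNN; the paper's exact mask parameterization and loss differ:

    import torch

    class TinyGNN(torch.nn.Module):
        # Stand-in model: one-hop mean aggregation plus a linear classifier.
        def __init__(self, d_in, n_cls):
            super().__init__()
            self.lin = torch.nn.Linear(d_in, n_cls)
        def forward(self, adj, feats):
            deg = adj.sum(1, keepdim=True).clamp(min=1)
            return self.lin(adj @ feats / deg)

    def cf_explain(gnn, adj, feats, node, steps=200, lr=0.1, lam=0.05):
        orig = gnn(adj, feats)[node].argmax()
        mask_logits = torch.nn.Parameter(torch.full_like(adj, 3.0))  # start at "keep all"
        opt = torch.optim.Adam([mask_logits], lr=lr)
        for _ in range(steps):
            mask = torch.sigmoid(mask_logits)
            logits = gnn(adj * mask, feats)[node]
            # Push down the original class while penalizing deleted edges.
            loss = logits[orig] + lam * ((1 - mask) * adj).sum()
            opt.zero_grad(); loss.backward(); opt.step()
        return adj * (torch.sigmoid(mask_logits) > 0.5)

    gnn = TinyGNN(4, 2)
    adj = (torch.rand(6, 6) < 0.5).float()
    adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
    feats = torch.randn(6, 4)
    cf_adj = cf_explain(gnn, adj, feats, node=0)  # candidate counterfactual graph

A full method also verifies that the prediction actually changed and reports the edge-level difference as the explanation.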

AutoReP: Automatic ReLU replacement for fast private network inference

H Peng, S Huang, T Zhou, Y Luo… - Proceedings of the …, 2023 - openaccess.thecvf.com
The growth of the Machine-Learning-As-A-Service (MLaaS) market has highlighted clients'
data privacy and security issues. Private inference (PI) techniques using cryptographic …
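The bottleneck PI targets: HE/MPC protocols evaluate additions and multiplications cheaply but comparisons, and hence ReLU, expensively. AutoReP learns which ReLUs to replace; the sketch below instead swaps every ReLU for a fixed quadratic just to show the mechanics, using the common 0.25x^2 + 0.5x approximation (coefficients illustrative):

    import torch

    class PolyAct(torch.nn.Module):
        # Quadratic stand-in for ReLU: only adds and multiplies, so it maps
        # directly onto HE/MPC operations with no comparison protocol.
        def __init__(self, a=0.25, b=0.5):
            super().__init__()
            self.a, self.b = a, b
        def forward(self, x):
            return self.a * x * x + self.b * x

    def replace_relus(module):
        # Recursively swap every ReLU for the polynomial activation, in place.
        for name, child in module.named_children():
            if isinstance(child, torch.nn.ReLU):
                setattr(module, name, PolyAct())
            else:
                replace_relus(child)

    model = torch.nn.Sequential(
        torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
    replace_relus(model)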

Sparse training via boosting pruning plasticity with neuroregeneration

S Liu, T Chen, X Chen, Z Atashgahi… - Advances in …, 2021 - proceedings.neurips.cc
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have
drawn considerable attention to post-training pruning (iterative magnitude pruning) and before …
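The "neuroregeneration" idea couples gradual magnitude pruning with revival of pruned connections that show large gradients. A sketch of one step, using a cubic Zhu-Gupta-style sparsity schedule; fractions and schedule are illustrative, not the paper's settings:

    import torch

    def granet_step(weight, grad, mask, step, total_steps,
                    final_sparsity=0.9, regen_frac=0.05):
        # Sparsity ramps up cubically toward the final target.
        sparsity = final_sparsity * (1 - (1 - step / total_steps) ** 3)
        n_active = weight.numel() - int(sparsity * weight.numel())
        # Prune: keep only the largest-magnitude active weights.
        keep = torch.topk((weight.detach().abs() * mask).view(-1), n_active).indices
        mask.zero_().view(-1)[keep] = 1.0
        # Regenerate: revive pruned connections with the largest gradients.
        n_regen = int(regen_frac * n_active)
        revive = torch.topk((grad.abs() * (1 - mask)).view(-1), n_regen).indices
        mask.view(-1)[revive] = 1.0
        weight.data.mul_(mask)  # zero pruned weights; revived ones were already zero
        return mask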

Do we actually need dense over-parameterization? in-time over-parameterization in sparse training

S Liu, L Yin, DC Mocanu… - … on Machine Learning, 2021 - proceedings.mlr.press
In this paper, we introduce a new perspective on training deep neural networks capable of
state-of-the-art performance without the need for the expensive over-parameterization by …
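The "in-time" claim: a sparse network can, across training, visit far more parameters than it holds at any single step, and it is this cumulative exploration that substitutes for dense over-parameterization. The sketch tracks that coverage under random regrowth (a stand-in for a full dynamic sparse training method; all constants illustrative):

    import torch

    torch.manual_seed(0)
    weight = torch.randn(64, 64)
    mask = (torch.rand_like(weight) < 0.1).float()  # 90% sparse at every step
    ever_active = mask.bool()

    for step in range(50):
        n_update = int(0.3 * mask.sum().item())
        # Drop the smallest-magnitude active weights...
        scores = weight.abs().masked_fill(mask == 0, float('inf'))
        drop = torch.topk(scores.view(-1), n_update, largest=False).indices
        mask.view(-1)[drop] = 0.0
        # ...and regrow the same number at random inactive positions.
        grow = torch.multinomial((1 - mask).view(-1), n_update)
        mask.view(-1)[grow] = 1.0
        ever_active = ever_active | mask.bool()

    print(f"held at once: {mask.mean().item():.0%}, "
          f"explored over training: {ever_active.float().mean().item():.0%}")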