| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| RED: Looking for Redundancies for Data-Free Structured Compression of Deep Neural Networks | E Yvinec, A Dapogny, M Cord, K Bailly | NeurIPS 2021 (Advances in Neural Information Processing Systems 34, 20863-20873) | 23 | 2021 |
| SPIQ: Data-Free Per-Channel Static Input Quantization | E Yvinec, A Dapogny, M Cord, K Bailly | WACV 2023 | 20 | 2022 |
| RED++: Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging | E Yvinec, A Dapogny, K Bailly, M Cord | TPAMI | 18 | 2022 |
| Multi-label Transformer for Action Unit Detection | G Tallec, E Yvinec, A Dapogny, K Bailly | arXiv preprint arXiv:2203.12531 | 15 | 2022 |
| PowerQuant: Automorphism Search for Non-Uniform Quantization | E Yvinec, A Dapogny, M Cord, K Bailly | ICLR 2023 | 14 | 2023 |
| To Fold or Not to Fold: A Necessary and Sufficient Condition on Batch-Normalization Layers Folding | E Yvinec, A Dapogny, K Bailly | IJCAI 2022 | 9 | 2022 |
| SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance | E Yvinec, A Dapogny, M Cord, K Bailly | NeurIPS 2022 | 8 | 2022 |
| REx: Data-Free Residual Quantization Error Expansion | E Yvinec, A Dapogny, M Cord, K Bailly | Advances in Neural Information Processing Systems 36 | 7 | 2024 |
| NUPES: Non-Uniform Post-Training Quantization via Power Exponent Search | E Yvinec, A Dapogny, K Bailly | arXiv preprint arXiv:2308.05600 | 3 | 2023 |
| DeeSCo: Deep Heterogeneous Ensemble with Stochastic Combinatory Loss for Gaze Estimation | E Yvinec, A Dapogny, K Bailly | FG 2020, 146-152 | 3 | 2020 |
| Fighting Over-Fitting with Quantization for Learning Deep Neural Networks on Noisy Labels | G Tallec, E Yvinec, A Dapogny, K Bailly | ICIP 2023, 575-579 | 2 | 2023 |
| Designing Strong Baselines for Ternary Neural Network Quantization through Support and Mass Equalization | E Yvinec, A Dapogny, K Bailly | ICIP 2023, 540-544 | 1 | 2023 |
| Gradient-Based Post-Training Quantization: Challenging the Status Quo | E Yvinec, A Dapogny, K Bailly | arXiv preprint arXiv:2308.07662 | 1 | 2023 |
| SAFER: Layer-Level Sensitivity Assessment for Efficient and Robust Neural Network Inference | E Yvinec, A Dapogny, K Bailly, X Fischer | arXiv preprint arXiv:2308.04753 | 1 | 2023 |
| PIPE: Parallelized Inference through Ensembling of Residual Quantization Expansions | E Yvinec, A Dapogny, K Bailly | Pattern Recognition 154, 110571 | | 2024 |
| SPOT: Text Source Prediction from Originality Score Thresholding | E Yvinec, G Kasser | arXiv preprint arXiv:2405.20505 | | 2024 |
| PIPE: Parallelized Inference Through Post-Training Quantization Ensembling of Residual Expansions | E Yvinec, A Dapogny, K Bailly | arXiv preprint arXiv:2311.15806 | | 2023 |
| Archtree: On-the-Fly Tree-Structured Exploration for Latency-Aware Pruning of Deep Neural Networks | R Ouazan Reboul, E Yvinec, A Dapogny, K Bailly | arXiv preprint arXiv:2311.10549 | | 2023 |
| Efficient Neural Networks: Post Training Pruning and Quantization (PhD thesis) | E Yvinec | Sorbonne Université | | 2023 |