Eldar Kurtic
Verified email at ist.ac.at
Title
Cited by
Year
The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models
E Kurtic, D Campos, T Nguyen, E Frantar, M Kurtz, B Fineran, M Goin, ...
EMNLP 2022, 2022
101 · 2022
M-FAC: Efficient Matrix-Free Approximations of Second-Order Information
E Frantar, E Kurtic, D Alistarh
NeurIPS 2021, 2021
48 · 2021
ZipLM: Hardware-Aware Structured Pruning of Language Models
E Kurtic, E Frantar, D Alistarh
arXiv preprint arXiv:2302.04089, 2023
18 · 2023
GMP*: Well-Tuned Gradual Magnitude Pruning Can Outperform Most BERT-Pruning Methods
E Kurtic, D Alistarh
arXiv preprint arXiv:2210.06384, 2022
13 · 2022
ZipLM: Inference-Aware Structured Pruning of Language Models
E Kurtić, E Frantar, D Alistarh
Advances in Neural Information Processing Systems 36, 2024
9 · 2024
Sparse Finetuning for Inference Acceleration of Large Language Models
E Kurtic, D Kuznedelev, E Frantar, M Goin, D Alistarh
arXiv preprint arXiv:2310.06927, 2023
9 · 2023
CrAM: A Compression-Aware Minimizer
A Peste, A Vladu, E Kurtic, CH Lampert, D Alistarh
ICLR 2023, 2022
8 · 2022
SparseProp: efficient sparse backpropagation for faster training of neural networks at the edge
M Nikdan, T Pegolotti, E Iofinova, E Kurtic, D Alistarh
International Conference on Machine Learning, 26215-26227, 2023
6 · 2023
CAP: Correlation-Aware Pruning for Highly-Accurate Sparse Vision Models
D Kuznedelev, E Kurtić, E Frantar, D Alistarh
Advances in Neural Information Processing Systems 36, 2024
5 · 2024
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization
D Kuznedelev, E Kurtic, E Iofinova, E Frantar, A Peste, D Alistarh
arXiv preprint arXiv:2308.02060, 2023
4 · 2023
SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks
M Nikdan, T Pegolotti, E Iofinova, E Kurtic, D Alistarh
ICML 2023, 2023
4 · 2023
Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment
A Agarwalla, A Gupta, A Marques, S Pandit, M Goin, E Kurtic, K Leong, ...
arXiv preprint arXiv:2405.03594, 2024
2 · 2024
Error Feedback Can Accurately Compress Preconditioners
IV Modoranu, A Kalinov, E Kurtic, D Alistarh
arXiv preprint arXiv:2306.06098, 2023
2 · 2023
oViT: An Accurate Second-Order Pruning Framework for Vision Transformers
D Kuznedelev, E Kurtic, E Frantar, D Alistarh
arXiv preprint arXiv:2210.09223, 2022
2 · 2022
Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression
D Kuznedelev, S Tabesh, K Noorbakhsh, E Frantar, S Beery, E Kurtic, ...
arXiv preprint arXiv:2303.14409, 2023
1 · 2023
Implementation of algorithm for detection of single phase fault with electric arc on dsPIC30F4013 microcontroller
K Korjenić, E Kurtić, A Akšamović
2018 17th International Symposium INFOTEH-JAHORINA (INFOTEH), 1-6, 2018
1 · 2018
Panza: A Personalized Text Writing Assistant via Data Playback and Local Fine-Tuning
A Nicolicioiu, E Iofinova, E Kurtic, M Nikdan, A Panferov, I Markov, ...
arXiv preprint arXiv:2407.10994, 2024
2024
Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models
E Kurtic, A Moeini, D Alistarh
arXiv preprint arXiv:2406.12572, 2024
2024
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence
IV Modoranu, M Safaryan, G Malinovsky, E Kurtic, T Robert, P Richtarik, ...
arXiv preprint arXiv:2405.15593, 2024
2024
How to Prune Your Language Model: Recovering Accuracy on the “Sparsity May Cry” Benchmark
E Kurtic, T Hoefler, D Alistarh
Conference on Parsimony and Learning, 542-553, 2024
2024
Articles 1–20