Mengzhao Chen (陈锰钊)
Other names: Mengzhao Chen
Verified email at stu.xmu.edu.cn - Homepage
Title
Cited by
Year
Omniquant: Omnidirectionally calibrated quantization for large language models
W Shao*, M Chen*, Z Zhang, P Xu, L Zhao, Z Li, K Zhang, P Gao, Y Qiao, ...
ICLR 2024 spotlight (* equal contribution), 2023
69 · 2023
Cf-vit: A general coarse-to-fine method for vision transformer
M Chen, M Lin, K Li, Y Shen, Y Wu, F Chao, R Ji
Proceedings of the AAAI Conference on Artificial Intelligence 37 (6), 7042-7052, 2023
44 · 2023
Super vision transformer
M Lin*, M Chen*, Y Zhang, C Shen, R Ji, L Cao
International Journal of Computer Vision (* equal contribution) 131 (12 …, 2023
21 · 2023
Diffrate: Differentiable compression rate for efficient vision transformers
M Chen, W Shao, P Xu, M Lin, K Zhang, F Chao, R Ji, Y Qiao, P Luo
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
19 · 2023
Fine-grained data distribution alignment for post-training quantization
Y Zhong, M Lin, M Chen, K Li, Y Shen, F Chao, Y Wu, R Ji
European Conference on Computer Vision, 70-86, 2022
19 · 2022
Smmix: Self-motivated image mixing for vision transformers
M Chen, M Lin, Z Lin, Y Zhang, F Chao, R Ji
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
6 · 2023
OptG: Optimizing Gradient-driven Criteria in Network Sparsity
Y Zhang, M Lin, M Chen, F Chao, R Ji
arXiv preprint arXiv:2201.12826, 2022
3 · 2022
Besa: Pruning large language models with blockwise parameter-efficient sparsity allocation
P Xu, W Shao, M Chen, S Tang, K Zhang, P Gao, F An, Y Qiao, P Luo
ICLR 2024, 2024
2 · 2024
I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization
Y Zhong, J Hu, M Lin, M Chen, R Ji
arXiv preprint arXiv:2311.10126, 2023
1 · 2023
EfficientQAT: Efficient Quantization-Aware Training for Large Language Models
M Chen, W Shao, P Xu, J Wang, P Gao, K Zhang, Y Qiao, P Luo
arXiv preprint arXiv:2407.11062, 2024
2024
Adapting LLaMA Decoder to Vision Transformer
J Wang, W Shao, M Chen, C Wu, Y Liu, K Zhang, S Zhang, K Chen, P Luo
arXiv preprint arXiv:2404.06773, 2024
2024
Articles 1–11