OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models. W Shao*, M Chen*, Z Zhang, P Xu, L Zhao, Z Li, K Zhang, P Gao, Y Qiao, et al. ICLR 2024, spotlight (* equal contribution), 2023. Cited by 69.
CF-ViT: A General Coarse-to-Fine Method for Vision Transformer. M Chen, M Lin, K Li, Y Shen, Y Wu, F Chao, R Ji. Proceedings of the AAAI Conference on Artificial Intelligence 37 (6), 7042-7052, 2023. Cited by 44.
Super Vision Transformer. M Lin*, M Chen*, Y Zhang, C Shen, R Ji, L Cao. International Journal of Computer Vision 131 (12, …), 2023 (* equal contribution). Cited by 21.
DiffRate: Differentiable Compression Rate for Efficient Vision Transformers. M Chen, W Shao, P Xu, M Lin, K Zhang, F Chao, R Ji, Y Qiao, P Luo. Proceedings of the IEEE/CVF International Conference on Computer Vision, …, 2023. Cited by 19.
Fine-Grained Data Distribution Alignment for Post-Training Quantization. Y Zhong, M Lin, M Chen, K Li, Y Shen, F Chao, Y Wu, R Ji. European Conference on Computer Vision, 70-86, 2022. Cited by 19.
SMMix: Self-Motivated Image Mixing for Vision Transformers. M Chen, M Lin, Z Lin, Y Zhang, F Chao, R Ji. Proceedings of the IEEE/CVF International Conference on Computer Vision, …, 2023. Cited by 6.
OptG: Optimizing Gradient-Driven Criteria in Network Sparsity. Y Zhang, M Lin, M Chen, F Chao, R Ji. arXiv preprint arXiv:2201.12826, 2022. Cited by 3.
BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation. P Xu, W Shao, M Chen, S Tang, K Zhang, P Gao, F An, Y Qiao, P Luo. ICLR 2024. Cited by 2.
I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization. Y Zhong, J Hu, M Lin, M Chen, R Ji. arXiv preprint arXiv:2311.10126, 2023. Cited by 1.
EfficientQAT: Efficient Quantization-Aware Training for Large Language Models. M Chen, W Shao, P Xu, J Wang, P Gao, K Zhang, Y Qiao, P Luo. arXiv preprint arXiv:2407.11062, 2024.
Adapting LLaMA Decoder to Vision Transformer. J Wang, W Shao, M Chen, C Wu, Y Liu, K Zhang, S Zhang, K Chen, P Luo. arXiv preprint arXiv:2404.06773, 2024.