Linfeng Zhang (张林峰)
Other names: L. Zhang, 张林峰, 林峰 张
Shanghai Jiao Tong University
Verified email at mails.tsinghua.edu.cn - Homepage
Title
Cited by
Year
Be your own teacher: Improve the performance of convolutional neural networks via self distillation
L Zhang, J Song, A Gao, J Chen, C Bao, K Ma
Proceedings of the IEEE/CVF international conference on computer vision …, 2019
Cited by 902 · 2019
Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors
L Zhang, K Ma
The Ninth International Conference on Learning Representations (ICLR2021), 2021
Cited by 191 · 2021
Self-distillation: Towards efficient and compact neural networks
L Zhang, C Bao, K Ma
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (8), 4388-4403, 2021
Cited by 165 · 2021
Non-structured DNN weight pruning—Is it beneficial in any platform?
X Ma, S Lin, S Ye, Z He, L Zhang, G Yuan, SH Tan, Z Li, D Fan, X Qian, ...
IEEE transactions on neural networks and learning systems 33 (9), 4930-4944, 2021
Cited by 100 · 2021
Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation
L Zhang, X Chen, X Tu, P Wan, N Xu, K Ma
Proceedings of the IEEE/CVF conference on Computer Vision and Pattern …, 2022
Cited by 75 · 2022
SCAN: A scalable neural networks framework towards compact and efficient models
L Zhang, Z Tan, J Song, J Chen, C Bao, K Ma
NeurIPS2019, 2019
Cited by 74 · 2019
Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?
R Dong, Z Qi, L Zhang, J Zhang, J Sun, Z Ge, L Yi, K Ma
ICLR2023, 2022
Cited by 66 · 2022
Fine-grained emotion classification of Chinese microblogs based on graph convolution networks
Y Lai, L Zhang, D Han, R Zhou, G Wang
World Wide Web 23, 2771-2787, 2020
Cited by 65 · 2020
Task-oriented feature distillation
L Zhang, Y Shi, Z Shi, K Ma, C Bao
NeurIPS2020 33, 14759-14771, 2020
Cited by 48 · 2020
Auxiliary training: Towards accurate and robust models
L Zhang, M Yu, T Chen, Z Shi, C Bao, K Ma
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020
Cited by 44 · 2020
StructADMM: A systematic, high-efficiency framework of structured weight pruning for DNNs
T Zhang, S Ye, K Zhang, X Ma, N Liu, L Zhang, J Tang, K Ma, X Lin, ...
arXiv preprint arXiv:1807.11091, 2018
Cited by 35 · 2018
Contrastive Deep Supervision
L Zhang, X Chen, J Zhang, R Dong, K Ma
European Conference on Computer Vision (ECCV2022), 2022
Cited by 34 · 2022
PointDistiller: Structured knowledge distillation towards efficient and compact 3D detection
L Zhang, R Dong, HS Tai, K Ma
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2023), 2022
Cited by 32 · 2022
Non-structured DNN weight pruning considered harmful
Y Wang, S Ye, Z He, X Ma, L Zhang, S Lin, G Yuan, SH Tan, Z Li, D Fan, ...
arXiv preprint arXiv:1907.02124, 2019
Cited by 14 · 2019
Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks
R Dong, Z Tan, M Wu, L Zhang, K Ma
International Conference on Machine Learning (ICML2022), 2021
Cited by 11 · 2021
Region-aware knowledge distillation for efficient image-to-image translation
L Zhang, X Chen, R Dong, K Ma
The 34th British Machine Vision Conference 2023, 2022
Cited by 10 · 2022
SMART: screen-based gesture recognition on commodity mobile devices
Z Liao, Z Luo, Q Huang, L Zhang, F Wu, Q Zhang, Y Wang
Proceedings of the 27th Annual International Conference on Mobile Computing …, 2021
Cited by 10 · 2021
A Good Data Augmentation Policy Is Not All You Need: A Multi-Task Learning Perspective
L Zhang, K Ma
IEEE Transactions on Circuits and Systems for Video Technology, 2022
Cited by 9 · 2022
Structured knowledge distillation for accurate and efficient object detection
L Zhang, K Ma
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Cited by 8 · 2023
Multi-frequency representation enhancement with privilege information for video super-resolution
F Li, L Zhang, Z Liu, J Lei, Z Li
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 6 · 2023
Articles 1–20