Mingze Wang
School of Mathematical Sciences, Peking University
Verified email at stu.pku.edu.cn - Homepage
Title
Cited by
Year
The Alignment Property of SGD Noise and How it Helps Select Flat Minima: A Stability Analysis
L Wu, M Wang, WJ Su
Advances in Neural Information Processing Systems (NeurIPS 2022), 1-25, 2022
35* · 2022
Understanding Multi-phase Optimization Dynamics and Rich Nonlinear Behaviors of ReLU Networks
M Wang, C Ma
Advances in Neural Information Processing Systems (NeurIPS 2023, Spotlight), 2023
9 · 2023
Generalization Error Bounds for Deep Neural Networks Trained by SGD
M Wang, C Ma
arXiv preprint arXiv:2206.03299, 1-32, 2022
9 · 2022
Early Stage Convergence and Global Convergence of Training Mildly Parameterized Neural Networks
M Wang, C Ma
Advances in Neural Information Processing Systems (NeurIPS 2022), 1-73, 2022
5 · 2022
A Theoretical Analysis of Noise Geometry in Stochastic Gradient Descent
M Wang, L Wu
NeurIPS 2023 Workshop on M3L, 1-30, 2023
4* · 2023
Improving Generalization and Convergence by Enhancing Implicit Regularization
M Wang, H He, J Wang, Z Wang, G Huang, F Xiong, Z Li, W E, L Wu
arXiv preprint arXiv:2405.20763, 1-35, 2024
2024
Are AI-Generated Text Detectors Robust to Adversarial Perturbations?
G Huang, Y Zhang, Z Li, Y You, M Wang, Z Yang
Annual Meeting of the Association for Computational Linguistics (ACL 2024), 2024
2024
Loss Symmetry and Noise Equilibrium of Stochastic Gradient Descent
L Ziyin, M Wang, H Li, L Wu
arXiv preprint arXiv:2402.07193, 1-26, 2024
2024
Understanding the Expressive Power and Mechanisms of Transformer for Sequence Modeling
M Wang, W E
arXiv preprint arXiv:2402.00522, 1-70, 2024
2024
Achieving Margin Maximization Exponentially Fast via Progressive Norm Rescaling
M Wang, Z Min, L Wu
International Conference on Machine Learning (ICML 2024), 1-38, 2023
2023
Articles 1–10