Dachao Lin
Verified email at huawei.com
Title
Cited by
Year
Toward understanding the importance of noise in training neural networks
M Zhou, T Liu, Y Li, D Lin, E Zhou, T Zhao
International Conference on Machine Learning, 7594-7602, 2019
Cited by 91 · 2019
Explicit convergence rates of greedy and random quasi-Newton methods
D Lin, H Ye, Z Zhang
Journal of Machine Learning Research 23 (162), 1-40, 2022
Cited by 19 · 2022
Greedy and random quasi-Newton methods with faster explicit superlinear convergence
D Lin, H Ye, Z Zhang
Advances in Neural Information Processing Systems 34, 6646-6657, 2021
Cited by 16 · 2021
Towards explicit superlinear convergence rate for SR1
H Ye, D Lin, X Chang, Z Zhang
Mathematical Programming 199 (1-2), 1273-1303, 2023
Cited by 10 · 2023
Explicit superlinear convergence rates of Broyden's methods in nonlinear equations
D Lin, H Ye, Z Zhang
arXiv preprint arXiv:2109.01974, 2021
Cited by 9 · 2021
Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
D Lin, Y Han, H Ye, Z Zhang
Advances in Neural Information Processing Systems 36, 2024
Cited by 8 · 2024
Greedy and random Broyden's methods with explicit superlinear convergence rates in nonlinear equations
H Ye, D Lin, Z Zhang
arXiv preprint arXiv:2110.08572, 2021
Cited by 8 · 2021
Explicit superlinear convergence rates of the SR1 algorithm
H Ye, D Lin, Z Zhang, X Chang
arXiv preprint arXiv:2105.07162, 2021
Cited by 7 · 2021
On the landscape of one-hidden-layer sparse networks and beyond
D Lin, R Sun, Z Zhang
Artificial Intelligence 309, 103739, 2022
Cited by 5 · 2022
Faster directional convergence of linear neural networks under spherically symmetric data
D Lin, R Sun, Z Zhang
Advances in Neural Information Processing Systems 34, 4647-4660, 2021
Cited by 4 · 2021
Optimal quantization for batch normalization in neural network deployments and beyond
D Lin, P Sun, G Xie, S Zhou, Z Zhang
arXiv preprint arXiv:2008.13128, 2020
Cited by 4 · 2020
Global convergence analysis of deep linear networks with a one-neuron layer
K Chen, D Lin, Z Zhang
arXiv preprint arXiv:2201.02761, 2022
Cited by 2 · 2022
Towards better generalization: Bp-svrg in training deep neural networks
H Jin, D Lin, Z Zhang
arXiv preprint arXiv:1908.06395, 2019
Cited by 2 · 2019
On Non-local Convergence Analysis of Deep Linear Networks
K Chen, D Lin, Z Zhang
International Conference on Machine Learning, 3417-3443, 2022
Cited by 1 · 2022
Anderson Acceleration Without Restart: A Novel Method with n-Step Super Quadratic Convergence Rate
H Ye, D Lin, X Chang, Z Zhang
arXiv preprint arXiv:2403.16734, 2024
2024
On the Convergence of Policy in Unregularized Policy Mirror Descent
D Lin, Z Zhang
arXiv preprint arXiv:2205.08176, 2022
2022
Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate in Gradient Descent
G Xie, H Jin, D Lin, Z Zhang
arXiv preprint arXiv:2104.05447, 2021
2021
Articles 1–17