MiLeNAS: Efficient neural architecture search via mixed-level reformulation. C He, H Ye, L Shen, T Zhang. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020. Cited by 150.
Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems. L Luo, H Ye, Z Huang, T Zhang. Advances in Neural Information Processing Systems 33, 20566-20577, 2020. Cited by 115.
Multi-consensus decentralized accelerated gradient descent. H Ye, L Luo, Z Zhou, T Zhang. Journal of Machine Learning Research 24 (306), 1-50, 2023. Cited by 60.
Approximate Newton methods. H Ye, L Luo, Z Zhang. Journal of Machine Learning Research 22 (66), 1-41, 2021. Cited by 46*.
Hessian-aware zeroth-order optimization for black-box adversarial attack. H Ye, Z Huang, C Fang, CJ Li, T Zhang. arXiv preprint arXiv:1812.11377, 2018. Cited by 41.
Fast Fisher discriminant analysis with randomized algorithms. H Ye, Y Li, C Chen, Z Zhang. Pattern Recognition 72, 82-92, 2017. Cited by 36.
Decentralized accelerated proximal gradient descent. H Ye, Z Zhou, L Luo, T Zhang. Advances in Neural Information Processing Systems 33, 18308-18317, 2020. Cited by 30.
DeEPCA: Decentralized exact PCA with linear convergence rate. H Ye, T Zhang. Journal of Machine Learning Research 22 (238), 1-27, 2021. Cited by 24.
Nesterov's acceleration for approximate Newton. H Ye, L Luo, Z Zhang. Journal of Machine Learning Research 21 (142), 1-37, 2020. Cited by 22*.
Explicit convergence rates of greedy and random quasi-Newton methods. D Lin, H Ye, Z Zhang. Journal of Machine Learning Research 23 (162), 1-40, 2022. Cited by 18.
Towards explicit superlinear convergence rate for SR1. H Ye, D Lin, X Chang, Z Zhang. Mathematical Programming 199 (1), 1273-1303, 2023. Cited by 16*.
Greedy and random quasi-Newton methods with faster explicit superlinear convergence. D Lin, H Ye, Z Zhang. Advances in Neural Information Processing Systems 34, 6646-6657, 2021. Cited by 16.
PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction. H Ye, W Xiong, T Zhang. arXiv preprint arXiv:2012.15010, 2020. Cited by 16.
Explicit superlinear convergence rates of Broyden's methods in nonlinear equations. D Lin, H Ye, Z Zhang. arXiv preprint arXiv:2109.01974, 2021. Cited by 12.
Eigencurve: Optimal learning rate schedule for SGD on quadratic objectives with skewed Hessian spectrums. R Pan, H Ye, T Zhang. arXiv preprint arXiv:2110.14109, 2021. Cited by 9.
An optimal stochastic algorithm for decentralized nonconvex finite-sum optimization. L Luo, H Ye. arXiv preprint arXiv:2210.13931, 2022. Cited by 8.
Greedy and random Broyden's methods with explicit superlinear convergence rates in nonlinear equations. H Ye, D Lin, Z Zhang. arXiv preprint arXiv:2110.08572, 2021. Cited by 8.
Stochastic distributed optimization under average second-order similarity: Algorithms and analysis. D Lin, Y Han, H Ye, Z Zhang. Advances in Neural Information Processing Systems 36, 2024. Cited by 6.
Accelerated distributed approximate Newton method. H Ye, C He, X Chang. IEEE Transactions on Neural Networks and Learning Systems 34 (11), 8642-8653, 2022. Cited by 5.
Decentralized stochastic variance reduced extragradient method. L Luo, H Ye. arXiv preprint arXiv:2202.00509, 2022. Cited by 5.