Haishan Ye
Verified email at xjtu.edu.cn
Title · Cited by · Year
MiLeNAS: Efficient neural architecture search via mixed-level reformulation
C He, H Ye, L Shen, T Zhang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
150 · 2020
Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems
L Luo, H Ye, Z Huang, T Zhang
Advances in Neural Information Processing Systems 33, 20566-20577, 2020
115 · 2020
Multi-consensus decentralized accelerated gradient descent
H Ye, L Luo, Z Zhou, T Zhang
Journal of Machine Learning Research 24 (306), 1-50, 2023
60 · 2023
Approximate Newton methods
H Ye, L Luo, Z Zhang
Journal of Machine Learning Research 22 (66), 1-41, 2021
46* · 2021
Hessian-aware zeroth-order optimization for black-box adversarial attack
H Ye, Z Huang, C Fang, CJ Li, T Zhang
arXiv preprint arXiv:1812.11377, 2018
41 · 2018
Fast Fisher discriminant analysis with randomized algorithms
H Ye, Y Li, C Chen, Z Zhang
Pattern Recognition 72, 82-92, 2017
36 · 2017
Decentralized accelerated proximal gradient descent
H Ye, Z Zhou, L Luo, T Zhang
Advances in Neural Information Processing Systems 33, 18308-18317, 2020
30 · 2020
DeEPCA: Decentralized exact PCA with linear convergence rate
H Ye, T Zhang
Journal of Machine Learning Research 22 (238), 1-27, 2021
24 · 2021
Nesterov's acceleration for approximate Newton
H Ye, L Luo, Z Zhang
Journal of Machine Learning Research 21 (142), 1-37, 2020
22* · 2020
Explicit convergence rates of greedy and random quasi-Newton methods
D Lin, H Ye, Z Zhang
Journal of Machine Learning Research 23 (162), 1-40, 2022
18 · 2022
Towards explicit superlinear convergence rate for SR1
H Ye, D Lin, X Chang, Z Zhang
Mathematical Programming 199 (1), 1273-1303, 2023
16* · 2023
Greedy and random quasi-Newton methods with faster explicit superlinear convergence
D Lin, H Ye, Z Zhang
Advances in Neural Information Processing Systems 34, 6646-6657, 2021
16 · 2021
PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction
H Ye, W Xiong, T Zhang
arXiv preprint arXiv:2012.15010, 2020
16 · 2020
Explicit superlinear convergence rates of Broyden's methods in nonlinear equations
D Lin, H Ye, Z Zhang
arXiv preprint arXiv:2109.01974, 2021
12 · 2021
Eigencurve: Optimal learning rate schedule for SGD on quadratic objectives with skewed Hessian spectrums
R Pan, H Ye, T Zhang
arXiv preprint arXiv:2110.14109, 2021
9 · 2021
An optimal stochastic algorithm for decentralized nonconvex finite-sum optimization
L Luo, H Ye
arXiv preprint arXiv:2210.13931, 2022
8 · 2022
Greedy and random Broyden's methods with explicit superlinear convergence rates in nonlinear equations
H Ye, D Lin, Z Zhang
arXiv preprint arXiv:2110.08572, 2021
8 · 2021
Stochastic distributed optimization under average second-order similarity: Algorithms and analysis
D Lin, Y Han, H Ye, Z Zhang
Advances in Neural Information Processing Systems 36, 2024
6 · 2024
Accelerated distributed approximate Newton method
H Ye, C He, X Chang
IEEE Transactions on Neural Networks and Learning Systems 34 (11), 8642-8653, 2022
5 · 2022
Decentralized stochastic variance reduced extragradient method
L Luo, H Ye
arXiv preprint arXiv:2202.00509, 2022
5 · 2022
Articles 1–20