Qihang Lin
Title
Cited by
Year
Smoothing proximal gradient method for general structured sparse learning
X Chen, Q Lin, S Kim, JG Carbonell, EP Xing
Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial …, 2011
523* · 2011
Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning
H Rafique, M Liu, Q Lin, T Yang
Optimization Methods and Software, 1-35, 2021
251 · 2021
A Unified Analysis of Stochastic Momentum Methods for Deep Learning.
Y Yan, T Yang, Z Li, Q Lin, Y Yang
IJCAI, 2955-2961, 2018
225* · 2018
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
Q Lin, Z Lu, L Xiao
SIAM Journal on Optimization 25 (4), 2244–2273, 2015
153 · 2015
An accelerated proximal coordinate gradient method
Q Lin, Z Lu, L Xiao
Advances in Neural Information Processing Systems, 3059-3067, 2014
153 · 2014
Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing
X Chen, Q Lin, D Zhou
International Conference on Machine Learning, 64-72, 2013
153 · 2013
Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
JD Lee, Q Lin, T Ma, T Yang
Journal of Machine Learning Research 18 (122), 1-43, 2017
126* · 2017
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
Q Lin, L Xiao
Computational Optimization and Applications 60 (3), 633–674, 2015
113 · 2015
First-order convergence theory for weakly-convex-weakly-concave min-max problems
M Liu, H Rafique, Q Lin, T Yang
Journal of Machine Learning Research 22 (169), 1-34, 2021
112* · 2021
RSG: Beating subgradient method without smoothness and strong convexity
T Yang, Q Lin
Journal of Machine Learning Research 19 (6), 1-33, 2015
99 · 2015
Generalized inverse classification
MT Lash, Q Lin, N Street, JG Robinson, J Ohlmann
Proceedings of the 2017 SIAM International Conference on Data Mining, 162-170, 2017
78 · 2017
Optimal epoch stochastic gradient descent ascent methods for min-max optimization
Y Yan, Y Xu, Q Lin, W Liu, T Yang
Advances in Neural Information Processing Systems 33, 5789-5800, 2020
70* · 2020
Optimal regularized dual averaging methods for stochastic optimization
X Chen, Q Lin, J Pena
Advances in Neural Information Processing Systems 25, 2012
68 · 2012
Stochastic convex optimization: Faster local growth implies faster global convergence
Y Xu, Q Lin, T Yang
International Conference on Machine Learning, 3821-3830, 2017
66* · 2017
ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization
Y Xu, M Liu, Q Lin, T Yang
Advances in Neural Information Processing Systems 30, 2017
60 · 2017
Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
Q Lin, R Ma, Y Xu
Computational Optimization and Applications 82 (1), 175-224, 2022
57* · 2022
Sparse latent semantic analysis
X Chen, Y Qi, B Bai, Q Lin, JG Carbonell
Proceedings of the 2011 SIAM International Conference on Data Mining, 474-485, 2011
57 · 2011
Dscovr: Randomized primal-dual block coordinate algorithms for asynchronous distributed optimization
L Xiao, AW Yu, Q Lin, W Chen
Journal of Machine Learning Research 20 (43), 1-58, 2019
52 · 2019
Block-normalized gradient method: An empirical study for training deep neural network
AW Yu, L Huang, Q Lin, R Salakhutdinov, J Carbonell
arXiv preprint arXiv:1707.04822, 2017
47* · 2017
Hybrid predictive models: When an interpretable model collaborates with a black-box model
T Wang, Q Lin
Journal of Machine Learning Research 22 (137), 1-38, 2021
44 · 2021
Articles 1–20