Calibrating the adaptive learning rate to improve convergence of ADAM. Q. Tong, G. Liang, J. Bi. Neurocomputing 481, 333-356, 2022. Cited by 81.
Effective federated adaptive gradient methods with non-iid decentralized data. Q. Tong, G. Liang, J. Bi. arXiv preprint arXiv:2009.06557, 2020. Cited by 33.
Multi-view spectral graph convolution with consistent edge attention for molecular modeling. C. Shang, Q. Liu, Q. Tong, J. Sun, M. Song, J. Bi. Neurocomputing 445, 12-25, 2021. Cited by 19.
Federated nonconvex sparse learning. Q. Tong, G. Liang, T. Zhu, J. Bi. arXiv preprint arXiv:2101.00052, 2020. Cited by 12.
Asynchronous parallel stochastic quasi-Newton methods. Q. Tong, G. Liang, X. Cai, C. Zhu, J. Bi. Parallel Computing 101, 102721, 2021. Cited by 8.
An effective hard thresholding method based on stochastic variance reduction for nonconvex sparse learning. G. Liang, Q. Tong, C. Zhu, J. Bi. Proceedings of the AAAI Conference on Artificial Intelligence 34 (2), 1585-1592, 2020. Cited by 7.
Federated optimization of ℓ0-norm regularized sparse learning. Q. Tong, G. Liang, J. Ding, T. Zhu, M. Pan, J. Bi. Algorithms 15 (9), 319, 2022. Cited by 3.
Escaping saddle points with stochastically controlled stochastic gradient methods. G. Liang, Q. Tong, C. Zhu, J. Bi. arXiv preprint arXiv:2103.04413, 2021. Cited by 3.
Stochastic privacy-preserving methods for nonconvex sparse learning. G. Liang, Q. Tong, J. Ding, M. Pan, J. Bi. Information Sciences 630, 567-585, 2023. Cited by 2.
An effective tensor regression with latent sparse regularization. K. Chen, T. Xu, G. Liang, Q. Tong, M. Song, J. Bi. Journal of Data Science 20 (2), 2022. Cited by 2.
Stochastic variance-reduced iterative hard thresholding in graph sparsity optimization. D. Fox, S. Hernandez, Q. Tong. arXiv preprint arXiv:2407.16968, 2024.
Effective proximal methods for non-convex non-smooth regularized learning. G. Liang, Q. Tong, J. Ding, M. Pan, J. Bi. 2020 IEEE International Conference on Data Mining (ICDM), 342-351, 2020.
Parallel and federated algorithms for large-scale machine learning problems. Q. Tong.