Authors
Yangfan Zhou, Kaizhu Huang, Cheng Cheng, Xuguang Wang, Amir Hussain, Xin Liu
Publication date
2022/5/23
Journal
IEEE Transactions on Emerging Topics in Computational Intelligence
Volume
7
Issue
2
Pages
565-577
Publisher
IEEE
Description
The training process in deep learning and pattern recognition typically relies on convex and strongly convex optimization algorithms such as AdaBelief and SAdam, which process many “uninformative” samples that could be ignored, incurring extra computation. To address this open problem, we propose a bandit sampling method that makes these algorithms focus on “informative” samples during training. Our contribution is twofold: first, we propose a convex optimization algorithm with bandit sampling, termed AdaBeliefBS, and prove that it converges faster than its original version; second, we prove that bandit sampling works well for strongly convex algorithms, and propose a generalized SAdam, called SAdamBS, that converges faster than SAdam. Finally, we conduct a series of experiments on various benchmark datasets to verify the fast convergence rate of the proposed algorithms.
Scholar articles
Y Zhou, K Huang, C Cheng, X Wang, A Hussain, X Liu - IEEE Transactions on Emerging Topics in …, 2022
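The abstract's core idea, treating each training sample as a bandit arm so that samples with larger recent losses ("informative" ones) are drawn more often, can be illustrated with a generic EXP3-style sampler. This is a minimal sketch of the general bandit-sampling technique, not the paper's AdaBeliefBS/SAdamBS algorithm; the class name, the loss-as-reward update, and all parameters are illustrative assumptions.

```python
import numpy as np

class Exp3Sampler:
    """EXP3-style bandit sampler over training samples.

    Each sample index is an 'arm'; arms whose observed losses are
    larger get exponentially boosted weights, so they are drawn more
    often. Illustrative sketch only, not the paper's exact method.
    """

    def __init__(self, n_samples, eta=0.1, seed=0):
        self.n = n_samples
        self.eta = eta                        # bandit learning rate (assumed)
        self.weights = np.ones(n_samples)     # one weight per sample
        self.rng = np.random.default_rng(seed)

    def probabilities(self):
        """Current sampling distribution over the n samples."""
        return self.weights / self.weights.sum()

    def draw(self):
        """Draw one sample index; return it with its probability."""
        p = self.probabilities()
        i = int(self.rng.choice(self.n, p=p))
        return i, p[i]

    def update(self, i, p_i, loss):
        """Importance-weighted exponential update: a high loss on
        sample i (relative to its draw probability) raises its weight."""
        self.weights[i] *= np.exp(self.eta * loss / (p_i * self.n))


# Toy usage: sample 0 repeatedly incurs a high loss, so its
# sampling probability grows well above the uniform 1/4.
sampler = Exp3Sampler(n_samples=4, eta=0.5, seed=0)
for _ in range(20):
    p0 = sampler.probabilities()[0]
    sampler.update(0, p0, loss=1.0)
print(sampler.probabilities())
```

In an actual optimizer such a sampler would replace uniform minibatch selection: draw indices via `draw()`, take the gradient step (e.g. an AdaBelief or SAdam update), then feed the per-sample losses back through `update()`.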