Determinantal point processes for mini-batch diversification

C Zhang, H Kjellstrom, S Mandt - arXiv preprint arXiv:1705.00607, 2017 - arxiv.org
We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a non-uniform sampling scheme based on the Determinantal Point Process (DPP). The DPP relies on a similarity measure between data points and assigns low probabilities to mini-batches that contain redundant data, and higher probabilities to mini-batches with more diverse data. This simultaneously balances the data and leads to stochastic gradients with lower variance. We term this approach Diversified Mini-Batch SGD (DM-SGD). We show that regular SGD and a biased version of stratified sampling emerge as special cases. Furthermore, DM-SGD generalizes stratified sampling to cases where no discrete features exist to bin the data into groups. We show experimentally that our method results in more interpretable and diverse features in unsupervised setups, and in better classification accuracies in supervised setups.
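The core DPP mechanism the abstract describes can be sketched briefly: a DPP with a positive semi-definite similarity kernel L assigns a subset B of items an (unnormalized) probability proportional to det(L_B), the determinant of the kernel restricted to B, so near-duplicate items make the determinant collapse toward zero while diverse items keep it large. The minimal numpy sketch below illustrates this scoring; the linear kernel X Xᵀ and the `dpp_batch_scores` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dpp_batch_scores(X, batches):
    """Unnormalized DPP probabilities for candidate mini-batches.

    A DPP with kernel L scores a subset B by det(L_B). Here we use a
    linear similarity kernel L = X X^T (an assumption; any PSD kernel
    built from a similarity measure between data points would do).
    """
    L = X @ X.T
    # np.ix_ selects the square submatrix of L indexed by each batch.
    return np.array([np.linalg.det(L[np.ix_(b, b)]) for b in batches])

# Toy data: points 0 and 1 are near-duplicates, point 2 is dissimilar.
X = np.array([[1.00, 0.00],
              [0.99, 0.01],
              [0.00, 1.00]])
redundant = [0, 1]  # two nearly identical points
diverse = [0, 2]    # two dissimilar points
scores = dpp_batch_scores(X, [redundant, diverse])
# The diverse mini-batch receives a much higher unnormalized probability
# than the redundant one, which is the variance-reducing bias DM-SGD exploits.
assert scores[1] > scores[0]
```

Sampling mini-batches in proportion to these determinants (e.g. with a k-DPP sampler) is what downweights redundant batches relative to uniform SGD sampling.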