Large-scale image classification: Fast feature extraction and SVM training

Y Lin, F Lv, S Zhu, M Yang, T Cour, K Yu, L Cao… - CVPR …, 2011 - ieeexplore.ieee.org
Most research efforts on image classification so far have focused on medium-scale datasets, often defined as datasets that fit into the memory of a desktop (typically 4 GB to 48 GB). There are two main reasons for the limited effort on large-scale image classification. First, until the emergence of the ImageNet dataset, there was almost no publicly available large-scale benchmark data for image classification, largely because class labels are expensive to obtain. Second, large-scale classification is hard because it poses more challenges than its medium-scale counterpart. A key challenge is achieving efficiency in both feature extraction and classifier training without compromising performance. This paper shows how we address this challenge, using the ImageNet dataset as an example. For feature extraction, we develop a Hadoop scheme that performs feature extraction in parallel using hundreds of mappers, which allows us to extract fairly sophisticated features (with dimensions in the hundreds of thousands) from 1.2 million images within one day. For SVM training, we develop a parallel averaging stochastic gradient descent (ASGD) algorithm for training one-against-all 1000-class SVM classifiers. The ASGD algorithm can handle terabytes of training data and converges very fast: typically 5 epochs are sufficient. As a result, we achieve state-of-the-art performance on the ImageNet 1000-class classification, i.e., 52.9% classification accuracy and 71.8% top-5 hit rate.
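The core training idea, one-against-all linear SVMs fit with averaged SGD, can be sketched as follows. This is a minimal single-machine illustration, not the paper's parallel implementation; the hyperparameters (learning rate, regularization) and the incremental-averaging scheme are illustrative assumptions.

```python
import numpy as np

def train_ova_asgd(X, y, n_classes, epochs=5, lr=0.01, lam=1e-4):
    """One-vs-all linear SVMs via averaged SGD (ASGD).

    Runs hinge-loss SGD and maintains a running average of the
    iterates; the averaged weights are returned as the model.
    Hyperparameters here are assumptions for illustration.
    """
    n, d = X.shape
    W = np.zeros((n_classes, d))       # current SGD iterates
    W_avg = np.zeros((n_classes, d))   # running average of iterates
    t = 0
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            x = X[i]
            # +1 for the true class, -1 for all others (one-vs-all)
            targets = np.where(np.arange(n_classes) == y[i], 1.0, -1.0)
            margins = targets * (W @ x)
            viol = margins < 1.0           # hinge loss is active here
            W *= 1.0 - lr * lam            # L2 regularization shrinkage
            W[viol] += lr * targets[viol, None] * x
            W_avg += (W - W_avg) / t       # incremental iterate average
    return W_avg

def predict(W, X):
    # Highest one-vs-all score wins
    return np.argmax(X @ W.T, axis=1)
```

The averaging step is what distinguishes ASGD from plain SGD: the returned model is the mean of all iterates rather than the last one, which damps the noise of stochastic updates and is consistent with the fast convergence (around 5 epochs) reported in the abstract.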