Fast and scalable polynomial kernels via explicit feature maps

N Pham, R Pagh - Proceedings of the 19th ACM SIGKDD international …, 2013 - dl.acm.org
Approximation of non-linear kernels using random feature mapping has been successfully employed in large-scale data analysis applications, accelerating the training of kernel machines. While previous random feature mappings run in O(ndD) time for n training samples in d-dimensional space and D random feature maps, we propose a novel randomized tensor product technique, called Tensor Sketching, for approximating any polynomial kernel in O(n(d + D log D)) time. We also introduce both absolute and relative error bounds for our approximation to guarantee the reliability of our estimation algorithm. Empirically, Tensor Sketching achieves higher accuracy and often runs orders of magnitude faster than the state-of-the-art approach on large-scale real-world datasets.
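The core idea can be illustrated with a short sketch (not the authors' code; dimensions, hash choices, and the averaging loop below are illustrative): each input vector gets p independent CountSketches, and their circular convolution, computed as an elementwise product in the FFT domain, yields a D-dimensional feature vector whose inner products estimate the degree-p polynomial kernel <x, y>^p.

```python
import numpy as np

def tensor_sketch(x, hashes, signs, D):
    """Tensor Sketch of a vector x for the degree-p polynomial kernel.

    hashes: (p, d) int array of CountSketch bucket indices in [0, D)
    signs:  (p, d) array of random +/-1 signs
    Returns a length-D real vector; inner products of two such
    sketches (built with the same hashes/signs) estimate <x, y>**p.
    """
    p, d = hashes.shape
    # p independent CountSketches of x, each of length D
    counts = np.zeros((p, D))
    for i in range(p):
        np.add.at(counts[i], hashes[i], signs[i] * x)
    # Circular convolution of the p sketches via FFT:
    # elementwise product in the frequency domain, then inverse FFT
    prod = np.prod(np.fft.fft(counts, axis=1), axis=0)
    return np.real(np.fft.ifft(prod))

rng = np.random.default_rng(42)
d, D, p = 8, 256, 2
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
y = x + 0.3 * rng.standard_normal(d)
y /= np.linalg.norm(y)
exact = x.dot(y) ** p  # exact degree-p polynomial kernel value

# The estimator is unbiased, so averaging over independent sketches
# drives the estimate toward the exact kernel value.
trials = 300
est = 0.0
for _ in range(trials):
    hashes = rng.integers(0, D, size=(p, d))
    signs = rng.choice([-1.0, 1.0], size=(p, d))
    est += tensor_sketch(x, hashes, signs, D).dot(
        tensor_sketch(y, hashes, signs, D))
est /= trials
```

The FFT step is what gives the O(n(d + D log D)) cost per sketch dimension quoted in the abstract: the p CountSketches take O(d) each, and combining them costs O(D log D) instead of an explicit O(d^p) tensor product.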