Authors
Tony Jebara, Risi Kondor
Publication date
2003/8/11
Book
Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24-27, 2003. Proceedings
Pages
57-71
Publisher
Springer Berlin Heidelberg
Description
We introduce a new class of kernels between distributions. These induce a kernel on the input space between data points by associating to each datum a generative model fit to the data point individually. The kernel is then computed by integrating the product of the two generative models corresponding to two data points. This kernel permits discriminative estimation via, for instance, support vector machines, while exploiting the properties, assumptions, and invariances inherent in the choice of generative model. It satisfies Mercer’s condition and can be computed in closed form for a large class of models, including exponential family models, mixtures, hidden Markov models and Bayesian networks. For other models the kernel can be approximated by sampling methods. Experiments are shown for multinomial models in text classification and for hidden Markov models for protein sequence classification.
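The core construction in the abstract is to fit a generative model to each data point and take the kernel to be the integral of the product of the two resulting densities. For spherical Gaussians with a shared variance this integral has a simple closed form; the sketch below (an illustrative assumption, not code from the paper) computes it analytically and checks it against direct numerical integration in one dimension:

```python
import numpy as np

def product_kernel_gaussian(mu1, mu2, sigma2, d=1):
    """Closed-form integral of two spherical Gaussian densities
    N(z; mu1, sigma2*I) * N(z; mu2, sigma2*I) over R^d,
    which equals N(mu1; mu2, 2*sigma2*I)."""
    diff2 = np.sum((np.asarray(mu1, float) - np.asarray(mu2, float)) ** 2)
    return (4 * np.pi * sigma2) ** (-d / 2) * np.exp(-diff2 / (4 * sigma2))

def gauss_pdf(z, mu, sigma2):
    # 1-D Gaussian density, used only for the numerical check
    return np.exp(-(z - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# Numerical check: integrate the product of the two fitted densities in 1-D
z = np.linspace(-12.0, 12.0, 200001)
numeric = np.trapz(gauss_pdf(z, 0.5, 1.0) * gauss_pdf(z, -0.3, 1.0), z)
analytic = product_kernel_gaussian([0.5], [-0.3], 1.0, d=1)
```

Because the kernel is an inner product of density functions, it is symmetric and positive semi-definite, which is why it satisfies Mercer's condition; the multinomial, HMM, and Bayesian-network cases in the paper follow the same pattern with their own closed-form integrals.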