Combining information theoretic kernels with generative embeddings for classification

Bicego, M.; Ulas, A.; Castellani, U.; Perina, A.; Murino, V.; Aguiar, P.; Figueiredo, M. A. T.

Neurocomputing, Vol. 101, No. 1, pp. 161–169, January 2013.

ISSN (print): 0925-2312

Journal Impact Factor: 1.234 (in 2008)

Digital Object Identifier: 10.1016/j.neucom.2012.08.014

Abstract
Classical approaches to learning classifiers for structured objects (e.g., images, sequences) use generative models in a standard Bayesian framework. To exploit the state-of-the-art performance of discriminative learning, while also taking advantage of generative models of the data, generative embeddings have recently been proposed as a way of building hybrid discriminative/generative approaches. A generative embedding is a mapping, induced by a generative model (usually learned from data), from the object space into a fixed-dimensional space, adequate for discriminative classifier learning. Generative embeddings have been shown to often outperform the classifiers obtained directly from the generative models upon which they are built.
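To make the mapping concrete, the following is a minimal sketch, not the authors' pipeline, of one simple generative embedding: a Gaussian mixture is fit per class, and each object is mapped to its vector of normalized class likelihoods. The dataset, model family, and hyper-parameters (scikit-learn's iris data and GaussianMixture) are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

X, y = load_iris(return_X_y=True)

# Step (i): learn one generative model per class.
models = [GaussianMixture(n_components=2, random_state=0).fit(X[y == c])
          for c in np.unique(y)]

# Embed each object as its per-class log-likelihoods, normalized into a
# probability distribution over classes: a simple instance of a generative
# embedding into a fixed-dimensional space.
log_lik = np.column_stack([m.score_samples(X) for m in models])
log_lik -= log_lik.max(axis=1, keepdims=True)   # numerical stability
probs = np.exp(log_lik)
probs /= probs.sum(axis=1, keepdims=True)       # each row sums to 1
```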

Using a generative embedding for classification involves two main steps: (i) defining and learning a generative model and using it to build the embedding; (ii) discriminatively learning a (possibly kernel-based) classifier on the embedded data. The literature on generative embeddings is essentially focused on step (i), usually taking some standard off-the-shelf tool for step (ii). Here, we adopt a different approach, by focusing also on the discriminative learning step. In particular, we exploit the probabilistic nature of generative embeddings by using kernels defined on probability measures; specifically, we investigate the use of a recent family of non-extensive information-theoretic kernels on top of different generative embeddings. We show, in different medical applications, that the approach yields state-of-the-art performance.
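Continuing the sketch above, step (ii) can be illustrated with a kernel defined directly on probability measures, which is why the probabilistic nature of the embedding matters. The paper studies non-extensive (Tsallis-based) information-theoretic kernels; the Jensen-Shannon kernel below is, to our understanding, the classical (q = 1) special case of that family, paired here with an SVM on a precomputed Gram matrix. This is a sketch under assumed tools, not the authors' exact setup.

```python
import numpy as np
from sklearn.svm import SVC

def shannon_entropy(P, eps=1e-12):
    """Shannon entropy along the last axis of a stack of distributions."""
    return -np.sum(P * np.log(P + eps), axis=-1)

def js_kernel(P, Q):
    """Jensen-Shannon kernel k(p, q) = ln 2 - JS(p, q) between the row
    distributions of P (n x d) and Q (m x d); returns an n x m Gram matrix."""
    M = 0.5 * (P[:, None, :] + Q[None, :, :])        # pairwise midpoints
    js = shannon_entropy(M) - 0.5 * (shannon_entropy(P)[:, None]
                                     + shannon_entropy(Q)[None, :])
    return np.log(2.0) - js

# `probs` and `y` come from the previous sketch (embedded data and labels).
K = js_kernel(probs, probs)              # Gram matrix on the embedded data
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", clf.score(K, y))
```

Because each embedded point is itself a probability vector, the kernel applies without further transformation; this is the sense in which the discriminative step can exploit the embedding's probabilistic structure.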