IEEE Trans Neural Netw Learn Syst. 2016 Jun;27(6):1322-32. doi: 10.1109/TNNLS.2015.2497275. Epub 2016 Jan 28.

Learning in Variable-Dimensional Spaces.

Michelangelo Diligenti, Marco Gori, Claudio Saccà

PMID: 26849873 DOI: 10.1109/TNNLS.2015.2497275

Abstract

This paper proposes a unified approach to learning in environments in which patterns can be represented in variable-dimensional domains, which naturally includes the case of missing features. The proposal is based on representing the environment by pointwise constraints, which are shown to naturally model the pattern relationships that arise in information retrieval, computer vision, and related fields. This interpretation of learning captures the genuinely different aspects of similarity that come from the content, at different dimensions, and from the pattern links. It turns out that functions that process real-valued features and functions that operate on symbolic entities are learned within a unified regularization framework that can also be expressed with the mathematical and algorithmic apparatus of kernel machines. Interestingly, in the extreme cases in which only the content or only the links are available, the theory returns classic kernel machines or graph regularization, respectively. Experimental results on artificial and real-world benchmarks provide clear evidence of the marked improvements obtained when both types of similarity are exploited.
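The abstract does not spell out the unified framework, but a rough sense of how content similarity and link similarity can be combined in one regularized objective is given by the sketch below: Laplacian-regularized kernel ridge regression, in which an RBF kernel supplies the content similarity and a graph Laplacian built from a link matrix W supplies the link similarity. This is an illustrative sketch in the spirit of the abstract, not the authors' formulation; all names (rbf_kernel, laplacian_rls, lam, mu) are hypothetical, and it does not handle variable-dimensional inputs or missing features.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Content similarity: RBF kernel on real-valued features.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def laplacian_rls(X, y, W, lam=1e-2, mu=1e-1, gamma=1.0):
    # Fit f(x) = sum_i alpha_i k(x_i, x) by minimizing
    #   ||y - K alpha||^2 + lam * alpha^T K alpha + mu * alpha^T K L K alpha,
    # where L = D - W is the graph Laplacian of the link matrix W.
    # Setting the gradient to zero gives the linear system solved below.
    K = rbf_kernel(X, X, gamma)
    L = np.diag(W.sum(axis=1)) - W
    n = len(y)
    A = K @ K + lam * K + mu * K @ L @ K
    alpha = np.linalg.solve(A + 1e-9 * np.eye(n), K @ y)
    return alpha

# Toy usage: labels driven by content, links given by a thresholded kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0])                                    # labels in {-1, +1}
W = (rbf_kernel(X, X, gamma=2.0) > 0.5).astype(float)   # toy link graph
np.fill_diagonal(W, 0.0)
alpha = laplacian_rls(X, y, W, lam=1e-2, mu=1e-1, gamma=2.0)
preds = np.sign(rbf_kernel(X, X, gamma=2.0) @ alpha)

Setting mu=0 drops the link term and recovers plain kernel ridge regression, a classic kernel machine, which mirrors the abstract's content-only extreme case; conversely, the mu term alone enforces smoothness of f over the link graph, the graph-regularization extreme.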
