*Abstract*:
In this paper we study a framework introduced by
Vapnik (1998; 2006) that provides an alternative capacity concept to
the large-margin approach.
In the particular case of binary classification,
we are given a set of labeled examples,
and a collection of “non-examples” that do not belong to either class of interest.
This collection, called the *Universum*, allows one to encode prior knowledge
by representing meaningful concepts in the same domain
as the problem at hand.
We describe an algorithm to leverage the Universum by maximizing
the number of observed contradictions,
and show experimentally that this approach
delivers accuracy improvements over using labeled data alone.
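
The idea sketched in the abstract can be illustrated with a small, hedged example. The snippet below is a minimal linear Universum-style SVM trained by subgradient descent: labeled points incur the usual hinge loss, while Universum points incur an ε-insensitive loss that pushes the decision function toward zero on them. All function names, hyperparameter values, and the subgradient training scheme are illustrative assumptions for exposition, not the algorithm or settings from the paper.

```python
import numpy as np

def usvm_subgradient(X, y, X_univ, lam=0.1, C_u=1.0, eps=0.1,
                     lr=0.01, epochs=200):
    """Sketch of a linear Universum-style SVM (illustrative, not the paper's exact method).

    Labeled points (X, y in {-1, +1}) incur the hinge loss max(0, 1 - y * f(x)).
    Universum points X_univ incur an eps-insensitive loss max(0, |f(x)| - eps),
    encouraging the decision function to stay near zero on the Universum.
    Hyperparameters here are arbitrary illustrative choices.
    """
    n, d = X.shape
    m = len(X_univ)
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        # Subgradient of the averaged hinge loss on labeled data,
        # plus the L2 regularizer lam * ||w||^2 / 2.
        margins = y * (X @ w + b)
        active = margins < 1
        gw = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        gb = -y[active].sum() / n
        # Subgradient of the eps-insensitive loss on Universum points.
        fu = X_univ @ w + b
        out = np.abs(fu) > eps
        s = np.sign(fu[out])
        gw += C_u * (s[:, None] * X_univ[out]).sum(axis=0) / m
        gb += C_u * s.sum() / m
        w -= lr * gw
        b -= lr * gb
    return w, b
```

On a toy separable problem, placing Universum points between the two classes keeps the decision boundary passing near them, which is one intuitive reading of "maximizing the number of observed contradictions".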

Jason Weston, Ronan Collobert, Fabian Sinz, Léon Bottou and Vladimir Vapnik: **Inference with the Universum**, *Proceedings of the Twenty-third International Conference on Machine Learning (ICML 2006)*, IMLS/ICML, 2006.

@inproceedings{weston-collobert-sinz-bottou-vapnik-2006,
  author = {Weston, Jason and Collobert, Ronan and Sinz, Fabian and Bottou, L\'{e}on and Vapnik, Vladimir},
  title = {Inference with the Universum},
  year = {2006},
  booktitle = {Proceedings of the Twenty-third International Conference on Machine Learning (ICML 2006)},
  publisher = {IMLS/ICML},
  note = {ACM Digital Library},
  url = {http://leon.bottou.org/papers/weston-collobert-sinz-bottou-vapnik-2006},
}

- Home page of Jason Weston
- Home page of Ronan Collobert
- Home page of Fabian Sinz