Semi-Supervised Learning via Generalized Maximum Entropy
Authors
Abstract
Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in various forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions that reflect structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms that are convex, scalable, inherently multi-class, easy to implement, and naturally kernelizable. Experimental evaluation of special cases shows the competitiveness of our methodology.
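As a rough illustration (the notation below is ours, not taken from the paper), a common form of the generalized MaxEnt primal is

\[
\min_{p} \; -H(p) \;+\; \sum_j U_j\!\big(\mathbb{E}_p[\phi_j] - \tilde{\phi}_j\big),
\]

where \(H\) is an entropy function, the \(\phi_j\) are potential functions with empirical values \(\tilde{\phi}_j\) on the labeled sample, and each \(U_j\) encodes prior information either as a hard constraint (the indicator of an allowed set) or as a soft penalty. Under this reading, the semi-supervised extension described above amounts to letting the \(\phi_j\) and \(U_j\) also depend on the unlabeled data and its geometry; the convex dual then yields the corresponding regularized discriminative learners.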
Similar Articles
Generalized Expectation Criteria for Semi-Supervised Learning with Weakly Labeled Data
In this paper, we present an overview of generalized expectation criteria (GE), a simple, robust, scalable method for semi-supervised training using weakly-labeled data. GE fits model parameters by favoring models that match certain expectation constraints, such as marginal label distributions, on the unlabeled data. This paper shows how to apply generalized expectation criteria to two classes ...
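To make the notion of an expectation constraint concrete, the sketch below (purely illustrative; the function and variable names are ours, not from the paper) scores how far the model's average predicted label distribution on an unlabeled pool is from a target marginal. A weighted term of this kind, added to the supervised objective, is the flavor of constraint GE works with.

```python
# Hedged sketch of a GE-style label-marginal term; names are illustrative.
import numpy as np

def label_marginal_divergence(probs_unlabeled, target_marginal, eps=1e-12):
    """KL(target label marginal || model's average predicted label distribution).

    probs_unlabeled : (n_unlabeled, n_classes) predicted class probabilities
    target_marginal : (n_classes,) prior belief about the label distribution
    """
    model_marginal = np.clip(probs_unlabeled.mean(axis=0), eps, None)
    target = np.clip(target_marginal, eps, None)
    return float(np.sum(target * (np.log(target) - np.log(model_marginal))))
```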
A Maximum Entropy Approach to Semi-supervised Learning
Various supervised inference methods can be analyzed as convex duals of a generalized maximum entropy framework, where the goal is to find a distribution with maximum entropy subject to moment-matching constraints on the data. We extend this framework to semi-supervised learning using two approaches: 1) incorporating unlabeled data into the data constraints, and 2) imposing similarity ...
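For the second, similarity-based approach, one standard way such a term can be imposed (shown here only as a hedged sketch under our own assumptions, not as this paper's actual construction) is a graph-smoothness penalty asking predictions on nearby unlabeled points to agree:

```python
# Hedged sketch of a similarity penalty over unlabeled predictions; names are ours.
import numpy as np

def graph_smoothness(probs, weights):
    """Pairwise penalty sum_{i,j} W_ij * ||p_i - p_j||^2, up to a factor of 2.

    probs   : (n, n_classes) predicted class probabilities on unlabeled points
    weights : (n, n) symmetric similarity matrix, e.g. from a k-NN graph
    """
    laplacian = np.diag(weights.sum(axis=1)) - weights
    return float(np.trace(probs.T @ laplacian @ probs))
```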
Maximum margin semi-supervised learning with irrelevant data
Semi-supervised learning (SSL) is a learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled d...
Semi-supervised learning for text classification using feature affinity regularization
Most conventional semi-supervised learning methods attempt to directly include unlabeled data into training objectives. This paper presents an alternative approach that learns feature affinity information from unlabeled data, which is incorporated into the training objective as regularization of a maximum entropy model. The regularization favors models for which correlated features have similar...
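A minimal sketch of what such an affinity regularizer could look like, assuming a multinomial-logistic ("maximum entropy") model with weight matrix W and a feature-affinity matrix A estimated from unlabeled data; the names and the exact penalty form are our assumptions, not this paper's:

```python
# Hedged sketch: supervised log-loss plus a penalty pulling the weight vectors
# of high-affinity (correlated) features toward each other. Names are illustrative.
import numpy as np

def objective(W, X, y, A, lam=0.1):
    """W: (n_features, n_classes), X: (n, n_features), y: (n,) class indices,
    A: (n_features, n_features) symmetric feature-affinity matrix."""
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)               # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -np.mean(log_probs[np.arange(len(y)), y])           # supervised log-loss
    L = np.diag(A.sum(axis=1)) - A                            # affinity-graph Laplacian
    affinity = np.trace(W.T @ L @ W)                          # ∝ sum_fg A_fg ||W_f - W_g||^2
    return nll + lam * affinity
```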
Maximum Entropy Semi-Supervised Inverse Reinforcement Learning
A popular approach to apprenticeship learning (AL) is to formulate it as an inverse reinforcement learning (IRL) problem. The MaxEnt-IRL algorithm successfully integrates the maximum entropy principle into IRL and, unlike its predecessors, resolves the ambiguity arising from the fact that a possibly large number of policies could match the expert's behavior. In this paper, we study an AL sett...