Minimax-optimal semi-supervised regression on unknown manifolds: supplementary material
Authors
Abstract
Similar papers
Minimax-optimal semi-supervised regression on unknown manifolds
We consider the problem of semi-supervised regression when the predictor variables are drawn from an unknown manifold. A simple approach to this problem is to first use both the labeled and unlabeled data to estimate the manifold geodesic distance between pairs of points, and then apply a k nearest neighbor regressor based on these distance estimates. We prove that given sufficiently many unlab...
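The two-step procedure described in this abstract lends itself to a short sketch: build a neighborhood graph on the labeled and unlabeled points together, approximate geodesic distances by shortest paths in that graph (Isomap-style), and then run k-nearest-neighbor regression with those distances. The sketch below is illustrative rather than the authors' implementation; the function and parameter names (geodesic_knn_regress, n_graph_neighbors, k_reg) are assumptions.

```python
# Minimal sketch of geodesic-distance k-NN regression (not the paper's code).
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def geodesic_knn_regress(X_lab, y_lab, X_unlab, n_graph_neighbors=10, k_reg=5):
    """Predict at the unlabeled points via k-NN regression on estimated geodesic distances."""
    y_lab = np.asarray(y_lab, dtype=float)
    X_all = np.vstack([X_lab, X_unlab])
    n_lab = X_lab.shape[0]

    # Step 1: neighborhood graph on ALL points; shortest-path lengths in the
    # graph approximate geodesic distances on the underlying manifold.
    graph = kneighbors_graph(X_all, n_neighbors=n_graph_neighbors, mode="distance")
    geo = shortest_path(graph, directed=False)

    # Step 2: for each unlabeled point, average the labels of its k_reg
    # geodesically nearest labeled points.
    preds = np.empty(X_unlab.shape[0])
    for i in range(X_unlab.shape[0]):
        d_to_labeled = geo[n_lab + i, :n_lab]
        nearest = np.argsort(d_to_labeled)[:k_reg]
        preds[i] = y_lab[nearest].mean()
    return preds
```

If the neighborhood graph is disconnected, some shortest-path distances are infinite; in practice n_graph_neighbors is chosen large enough that the sample forms a connected graph.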
Statistical Analysis of Semi-Supervised Regression
Semi-supervised methods use unlabeled data in addition to labeled data to construct predictors. While existing semi-supervised methods have shown some promising empirical performance, their development has been based largely on heuristics. In this paper we study semi-supervised learning from the viewpoint of minimax theory. Our first result shows that some common methods based on regulari...
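As an illustration of what a regularization-based semi-supervised regressor looks like (an assumption about the family the truncated phrase "methods based on regulari..." refers to), the sketch below fits function values that agree with the labels on the labeled points while varying smoothly over a neighborhood graph built on all points, i.e. graph-Laplacian regularization. It is a generic example, not the specific estimator analyzed in the paper.

```python
# Minimal sketch of graph-Laplacian-regularized semi-supervised regression
# (illustrative; names such as laplacian_regularized_fit and lam are assumptions).
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_regularized_fit(X_lab, y_lab, X_unlab, n_neighbors=10, lam=1.0):
    """Estimate function values at all labeled + unlabeled points."""
    X_all = np.vstack([X_lab, X_unlab])
    n_all, n_lab = X_all.shape[0], X_lab.shape[0]

    # Symmetrized k-NN adjacency matrix W and graph Laplacian L = D - W.
    W = kneighbors_graph(X_all, n_neighbors=n_neighbors, mode="connectivity")
    W = (0.5 * (W + W.T)).toarray()
    L = np.diag(W.sum(axis=1)) - W

    # Minimize  sum_{labeled i} (f_i - y_i)^2 + lam * f^T L f,
    # i.e. solve (J + lam * L) f = J y, with a tiny ridge term for stability.
    J = np.zeros(n_all)
    J[:n_lab] = 1.0
    A = np.diag(J) + lam * L + 1e-8 * np.eye(n_all)
    b = np.zeros(n_all)
    b[:n_lab] = np.asarray(y_lab, dtype=float)
    return np.linalg.solve(A, b)
```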
Minimax Binary Classifier Aggregation with General Losses
We address the problem of aggregating an ensemble of predictors with known loss bounds in a semi-supervised binary classification setting, to minimize prediction loss incurred on the unlabeled data. We find the minimax optimal predictions for a very general class of loss functions including all convex and many non-convex losses, extending a recent analysis of the problem for misclassification e...
Optimal Binary Classifier Aggregation for General Losses
We address the problem of aggregating an ensemble of predictors with known loss bounds in a semi-supervised binary classification setting, to minimize prediction loss incurred on the unlabeled data. We find the minimax optimal predictions for a very general class of loss functions including all convex and many non-convex losses, extending a recent analysis of the problem for misclassification e...
Supplementary material: Gaussian process nonparametric tensor estimator and its minimax optimality
In this supplementary material, we give the comprehensive proof and the generalized theorems. We consider a more general regression setting: $y_i = f(x_i) + \epsilon_i$ (S-1), where $f: \mathcal{X} \to \mathbb{R}$ is the unknown true function. We suppose that the true function $f$ is well approximated by $f^* = \sum_{r=1}^{d^*} \prod_{k=1}^{K} f_r^{*(k)}$ (that is, $f \simeq f^*$). When $f = f^*$, this generalized regression problem is equivalent to that in the m...
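The low-rank structure assumed for $f^*$ can be made concrete with a small sketch: the input splits into $K$ blocks and the true function is a sum of $d^*$ rank-one products of per-block component functions. The helper below is purely illustrative; the component functions and names are placeholders, not taken from the supplementary material.

```python
# Minimal sketch of the functional low-rank form f*(x) = sum_r prod_k f_r^(k)(x^(k)).
import numpy as np

def make_rank_d_function(component_fns):
    """component_fns[r][k] is the component function f_r^(k) applied to block x^(k)."""
    def f_star(x_blocks):                      # x_blocks: list of K inputs, one per block
        total = 0.0
        for fns_r in component_fns:            # sum over r = 1, ..., d*
            prod = 1.0
            for k, f_rk in enumerate(fns_r):   # product over k = 1, ..., K
                prod *= f_rk(x_blocks[k])
            total += prod
        return total
    return f_star

# Example with d* = 2 and K = 2: f*(x) = sin(x1) * cos(x2) + x1^2 * exp(-x2).
f_star = make_rank_d_function([
    [np.sin, np.cos],
    [lambda x: x ** 2, lambda x: np.exp(-x)],
])
y = f_star([0.3, 1.2]) + 0.1 * np.random.randn()   # observation y_i = f(x_i) + eps_i
```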