Matrix regularization techniques for online multitask learning
Authors
Abstract
In this paper we examine the problem of prediction with expert advice in a setting where the learner is presented with a sequence of examples coming from different tasks. For the learner to benefit from handling multiple tasks simultaneously, we model task relatedness by constraining the comparator to use fewer best experts than there are tasks. We show how this corresponds naturally to learning under spectral or structural matrix constraints, and we propose regularization techniques to enforce these constraints. The regularization techniques proposed here are interesting in their own right, and multitask learning is just one application of the ideas. A theoretical analysis of one such regularizer is given, along with a regret bound that demonstrates the benefits of this setup.
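The abstract does not spell out an algorithm, but the general idea of coupling per-task expert-advice updates through a spectral constraint on the task-by-expert weight matrix can be illustrated with a toy sketch. The code below is a hypothetical illustration, not the paper's method: each task runs a standard Hedge (exponential-weights) update, and the matrix of log-weights is then soft-thresholded in its singular values, a trace-norm style proximal step that nudges the tasks toward sharing a small set of good experts. The function name and the parameters eta and tau are illustrative assumptions.

```python
import numpy as np

def hedge_multitask(losses, task_ids, n_tasks, n_experts, eta=0.5, tau=0.1):
    """Toy sketch: per-task Hedge updates coupled by soft-thresholding the
    singular values of the task-by-expert log-weight matrix, so that tasks
    are nudged toward sharing a small set of good experts. Illustrative
    only; not the algorithm analyzed in the paper."""
    # L holds (regularized) log-weights; row i is the log-weight vector of task i.
    L = np.zeros((n_tasks, n_experts))
    total_loss = 0.0
    for loss_vec, task in zip(losses, task_ids):
        # Current expert distribution for this task (softmax of log-weights).
        w = np.exp(L[task] - L[task].max())
        w /= w.sum()
        total_loss += w @ loss_vec          # learner's expected loss this round
        L[task] -= eta * loss_vec           # standard Hedge update for this task
        # Spectral coupling: shrink singular values of the log-weight matrix
        # (a trace-norm style proximal step), encouraging low effective rank.
        U, s, Vt = np.linalg.svd(L, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
    return total_loss

# Tiny usage example with random losses for 3 tasks and 5 experts.
rng = np.random.default_rng(0)
rounds, n_tasks, n_experts = 50, 3, 5
losses = rng.uniform(size=(rounds, n_experts))
task_ids = rng.integers(0, n_tasks, size=rounds)
print(hedge_multitask(losses, task_ids, n_tasks, n_experts))
```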
Related papers
Linear Algorithms for Online Multitask Classification
We design and analyze interacting online algorithms for multitask classification that perform better than independent learners whenever the tasks are related in a certain sense. We formalize task relatedness in different ways, and derive formal guarantees on the performance advantage provided by interaction. Our online analysis gives new stimulating insights into previously known co-regularizat...
Multi-Task Multiple Kernel Relationship Learning
This paper presents a novel multitask multiple kernel learning framework that efficiently learns the kernel weights leveraging the relationship across multiple tasks. The idea is to automatically infer this task relationship in the RKHS space corresponding to the given base kernels. The problem is formulated as a regularization-based approach called MultiTask Multiple Kernel Relationship Learni...
Multitask SVM learning for Remote Sensing Data Classification
This paper proposes multitask learning to tackle several problems in remote sensing data classification. The method alleviates sample selection bias by imposing cross-information in the classifiers through matrix regularization. We consider the support vector machine as core learner and two regularization schemes for multitask learning. In the first one, we use the Euclidean distance of the pre...
Self-Paced Multitask Learning with Shared Knowledge
This paper introduces self-paced task selection to multitask learning, where instances from more closely related tasks are selected in a progression of easier-to-harder tasks, to emulate an effective human education strategy, but applied to multitask machine learning. We develop the mathematical foundation for the approach based on iterative selection of the most appropriate task, learning the ...
Excess risk bounds for multitask learning with trace norm regularization
Trace norm regularization is a popular method of multitask learning. We give excess risk bounds with explicit dependence on the number of tasks, the number of examples per task and properties of the data distribution. The bounds are independent of the dimension of the input space, which may be infinite as in the case of reproducing kernel Hilbert spaces. A byproduct of the proof are bounds on t...