Search results for: domain adaptation

Number of results: 542537

Journal: J. Artif. Intell. Res. 2006
Hal Daumé, Daniel Marcu

The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the “in-domain” test data is drawn from a distribution that is related, but not identical, to the “out-of-domain” distribution of the training data. We consider the common case in which labeled out-of-domain data ...
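
As a rough formalization of this mismatch (notation is ours, not taken from the paper): the labeled training data come from an out-of-domain source distribution, while the test data come from a related but different in-domain target distribution.

```latex
% Standard domain-adaptation setting (illustrative notation, not the paper's):
% training data from the source distribution p_s, test data from the target p_t.
\[
  (x, y)_{\text{train}} \sim p_s(x, y), \qquad
  (x, y)_{\text{test}} \sim p_t(x, y), \qquad
  p_s \neq p_t ,
\]
% A hypothesis that minimizes risk under p_s therefore need not minimize risk under p_t.
```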

2011
John Blitzer, Sham M. Kakade, Dean P. Foster

Domain adaptation algorithms address a key issue in applied machine learning: How can we train a system under a source distribution but achieve high performance under a different target distribution? We tackle this question for divergent distributions where crucial predictive target features may not even have support under the source distribution. In this setting, the key intuition is that...

2015
Yaroslav Ganin, Victor S. Lempitsky

Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on lar...
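
A common mechanism in this line of work is a gradient reversal layer that trains features to fool a domain classifier. The sketch below is an illustrative PyTorch version under assumed module names and layer sizes; it shows the general technique rather than this paper's exact architecture.

```python
# Illustrative gradient reversal layer for domain-adversarial training (PyTorch).
# All names and sizes here are assumptions for the sketch, not the paper's.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient pushes the feature extractor to *confuse*
        # the domain classifier, encouraging domain-invariant features.
        return -ctx.lambd * grad_output, None

class DomainAdversarialHead(nn.Module):
    """Domain classifier applied to gradient-reversed features (illustrative sizes)."""

    def __init__(self, feat_dim: int = 256, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 100), nn.ReLU(), nn.Linear(100, 2)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        reversed_feats = GradReverse.apply(features, self.lambd)
        return self.classifier(reversed_feats)
```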

In this paper, a new method for image denoising based on incoherent dictionary learning and a domain-transfer technique is proposed. Sparse representation has attracted considerable research interest in this area. The goal of sparse coding is to approximately model the input data as a weighted linear combination of a small number of basis vectors. Two characteristics should b...
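
As a concrete illustration of the sparse-coding idea above (synthetic data, not this paper's denoising pipeline), one can approximate a signal as a combination of a few dictionary atoms using orthogonal matching pursuit:

```python
# Minimal sparse-coding sketch: model a signal as a weighted combination of a few
# dictionary atoms. Dictionary and signal are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

n_features, n_atoms, n_nonzero = 64, 128, 5
dictionary = rng.standard_normal((n_features, n_atoms))   # columns are basis vectors (atoms)
dictionary /= np.linalg.norm(dictionary, axis=0)           # unit-norm atoms

# Build a signal that truly is a sparse combination of a few atoms, plus noise.
true_coef = np.zeros(n_atoms)
support = rng.choice(n_atoms, size=n_nonzero, replace=False)
true_coef[support] = rng.standard_normal(n_nonzero)
signal = dictionary @ true_coef + 0.01 * rng.standard_normal(n_features)

# Sparse coding step: recover a small set of coefficients approximating the signal.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
omp.fit(dictionary, signal)
reconstruction = dictionary @ omp.coef_

print("reconstruction error:", np.linalg.norm(signal - reconstruction))
```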

2017
Saeid Motiian, Quinn Jones, Seyed Mehdi Iranmanesh, Gianfranco Doretto

This work provides a framework for addressing the problem of supervised domain adaptation with deep models. The main idea is to exploit adversarial learning to learn an embedded subspace that simultaneously maximizes the confusion between the two domains while semantically aligning their embeddings. The supervised setting becomes especially attractive when there are only a few target data samples th...
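
One way to implement the semantic-alignment idea described above is a contrastive-style loss over source/target embedding pairs. The sketch below is a hypothetical PyTorch version of such a loss, not the authors' exact objective.

```python
# Illustrative semantic-alignment loss: pull together source/target embeddings of the
# same class, push apart embeddings of different classes (contrastive-style objective).
import torch
import torch.nn.functional as F

def semantic_alignment_loss(src_emb: torch.Tensor,
                            tgt_emb: torch.Tensor,
                            src_labels: torch.Tensor,
                            tgt_labels: torch.Tensor,
                            margin: float = 1.0) -> torch.Tensor:
    """src_emb: (Ns, d), tgt_emb: (Nt, d); labels are integer class ids."""
    dists = torch.cdist(src_emb, tgt_emb)                          # pairwise distances (Ns, Nt)
    same_class = (src_labels[:, None] == tgt_labels[None, :]).float()

    align = same_class * dists.pow(2)                              # same class: small distance
    separate = (1 - same_class) * F.relu(margin - dists).pow(2)    # different class: keep margin apart
    return (align + separate).mean()
```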

Journal: J. Web Sem. 2016
John P. McCrae, Mihael Arcan, Kartik Asooja, Jorge Gracia, Paul Buitelaar, Philipp Cimiano

2008
Christian Widmer, Daniel Huson, Gunnar Rätsch

The machine learning approach to problems in computational biology is to learn models from labeled training examples. A common problem in supervised learning is the lack of sufficient training data. Often, however, there exists a large body of data on a different, but related, problem. In order to boost performance on our problem of interest, we would like to utilize information from this rela...

Journal: CoRR 2017
Kuan-Chuan Peng, Ziyan Wu, Jan Ernst

Current state-of-the-art approaches in domain adaptation and fusion show promising results with either labeled or unlabeled task-relevant target-domain training data. However, the fact that task-relevant target-domain training data can be unavailable is often ignored by prior work. To tackle this issue, instead of using task-relevant target-domain training data, we propose zero-shot...

Journal: CoRR 2018
Yunfei Teng, Anna Choromanska, Mariusz Bojarski

Unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training time. This is an ill-posed learning problem, since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings F_AB : A → B and F_BA : B → A is commonly used...
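
The cycle-consistency idea behind jointly learning F_AB and F_BA can be sketched as follows; the generators are placeholders, and this illustrates the general technique rather than this paper's specific method.

```python
# Illustrative cycle-consistency loss for coupled mappings F_AB and F_BA (PyTorch).
# Translating A -> B -> A (and B -> A -> B) should approximately recover the input.
import torch
import torch.nn.functional as F
from torch import nn

def cycle_consistency_loss(f_ab: nn.Module, f_ba: nn.Module,
                           batch_a: torch.Tensor, batch_b: torch.Tensor) -> torch.Tensor:
    recon_a = f_ba(f_ab(batch_a))   # A -> B -> A
    recon_b = f_ab(f_ba(batch_b))   # B -> A -> B
    return F.l1_loss(recon_a, batch_a) + F.l1_loss(recon_b, batch_b)
```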

2007
John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman Vaughan

Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain. In the real world, though, we often wish to adapt a classifier from a source domain with a large amount of training data to a different target domain with very little training data. In this work we give uniform convergence bounds for algorithms that minimize a convex combinatio...
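
The convex combination mentioned here can be written as an α-weighted empirical risk; the notation below is ours, as a sketch of the kind of objective analyzed in this line of work.

```latex
% Alpha-weighted combination of target and source empirical risks (illustrative notation).
\[
  \hat{\epsilon}_\alpha(h)
    \;=\; \alpha \, \hat{\epsilon}_T(h) \;+\; (1-\alpha)\, \hat{\epsilon}_S(h),
  \qquad \alpha \in [0, 1],
\]
% where \hat{\epsilon}_S and \hat{\epsilon}_T are the empirical errors of hypothesis h
% on the (large) source sample and the (small) target sample, respectively.
```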

[Chart: number of search results per year]