Bi-Classifier Determinacy Maximization for Unsupervised Domain Adaptation

Authors

Abstract

Unsupervised domain adaptation challenges the problem of transferring knowledge from a well-labelled source domain to an unlabelled target domain. Recently, adversarial learning with bi-classifier has been proven effective in pushing cross-domain distributions close. Prior approaches typically leverage the disagreement between the bi-classifier to learn transferable representations; however, they often neglect classifier determinacy in the target domain, which could result in a lack of feature discriminability. In this paper, we present a simple yet effective method, namely Bi-Classifier Determinacy Maximization (BCDM), to tackle this problem. Motivated by the observation that target samples cannot always be separated distinctly by the decision boundary, in the proposed BCDM we design a novel classifier determinacy disparity (CDD) metric, which formulates classifier discrepancy as the class relevance of distinct target predictions and implicitly introduces a constraint on target feature discriminability. To this end, BCDM can generate discriminative representations by encouraging target predictive outputs to be consistent and determined, while preserving the diversity of predictions in an adversarial manner. Furthermore, the properties of CDD as well as theoretical guarantees on BCDM's generalization bound are both elaborated. Extensive experiments show that BCDM compares favorably against existing state-of-the-art methods.
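To make the bi-classifier determinacy idea concrete, below is a minimal, self-contained PyTorch sketch of a determinacy-style discrepancy between two classifiers. The specific formulation shown (the off-diagonal mass of the outer product of the two softmax outputs, averaged over a batch) is an illustrative assumption and not necessarily the exact CDD definition from the paper; it is only meant to convey how "consistent and determined" predictions can be turned into a differentiable objective.

```python
# Sketch of a bi-classifier determinacy term, assuming CDD-like behavior:
# small when both classifiers place confident, consistent mass on the same
# class; large when their predictions are ambiguous or conflicting.
import torch
import torch.nn.functional as F

def determinacy_disparity(logits_1: torch.Tensor, logits_2: torch.Tensor) -> torch.Tensor:
    """logits_1, logits_2: (batch, num_classes) scores from the two classifiers."""
    p1 = F.softmax(logits_1, dim=1)                       # (B, C)
    p2 = F.softmax(logits_2, dim=1)                       # (B, C)
    # Per-sample class-relevance matrix A = p1 * p2^T, shape (B, C, C).
    relevance = torch.bmm(p1.unsqueeze(2), p2.unsqueeze(1))
    # Mass on the diagonal = both classifiers agree on the same class;
    # the remaining off-diagonal mass measures indeterminate predictions.
    agreement = relevance.diagonal(dim1=1, dim2=2).sum(dim=1)   # (B,)
    return (1.0 - agreement).mean()

# Toy usage: two 3-class classifiers on a batch of 4 target samples.
if __name__ == "__main__":
    l1, l2 = torch.randn(4, 3), torch.randn(4, 3)
    print(determinacy_disparity(l1, l2))  # scalar in [0, 1)
```

In an adversarial bi-classifier setup, a term like this would typically be maximized when updating the classifiers (to expose ambiguous target samples) and minimized when updating the feature extractor (to push target features away from the decision boundary).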


Similar Articles

Maximum Classifier Discrepancy for Unsupervised Domain Adaptation

In this work, we present a method for unsupervised domain adaptation (UDA), where we aim to transfer knowledge from a label-rich domain (i.e., a source domain) to an unlabeled domain (i.e., a target domain). Many adversarial learning methods have been proposed for this task. These methods train domain classifier networks (i.e., a discriminator) to distinguish the features as either...
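The adversarial scheme this snippet refers to can be illustrated with a short PyTorch sketch: a small domain discriminator is trained to tell source features from target features, while the feature extractor is trained to fool it. The network shapes and the gradient-reversal trick used here are illustrative assumptions, not the exact setup of this particular paper.

```python
# Minimal domain-adversarial objective: discriminator labels source features
# as 1 and target features as 0; gradient reversal makes the feature
# extractor receive the opposite (confusing) gradient.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, sign-flipped gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def domain_adversarial_loss(src_feat: torch.Tensor, tgt_feat: torch.Tensor) -> torch.Tensor:
    feats = GradReverse.apply(torch.cat([src_feat, tgt_feat], dim=0))
    labels = torch.cat([torch.ones(len(src_feat), 1), torch.zeros(len(tgt_feat), 1)])
    return bce(discriminator(feats), labels)

# Toy usage with an assumed 256-dimensional feature size.
loss = domain_adversarial_loss(torch.randn(8, 256), torch.randn(8, 256))
loss.backward()
```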


Deep Unsupervised Domain Adaptation for Image Classification via Low Rank Representation Learning

Domain adaptation is a powerful technique when a large amount of labeled data with similar attributes is available in a different domain. In real-world applications there is a huge amount of data, but most of it is unlabeled. Domain adaptation is therefore effective for image classification, where obtaining adequate labeled data is expensive and time-consuming. We propose a novel method named DALRRL, which consists of deep ...


Boosting for Unsupervised Domain Adaptation

To cope with machine learning problems where the learner receives data from different source and target distributions, a new learning framework named domain adaptation (DA) has emerged, opening the door for designing theoretically well-founded algorithms. In this paper, we present SLDAB, a self-labeling DA algorithm, which takes its origin from both the theory of boosting and the theory of DA. ...


Unsupervised Transductive Domain Adaptation

Supervised learning with large-scale labeled datasets and deep layered models has made a paradigm shift in diverse areas of learning and recognition. However, this approach still suffers from generalization issues in the presence of a domain shift between the training and the test data distributions. In this regard, unsupervised domain adaptation algorithms have been proposed to directly address t...


Deep Adversarial Attention Alignment for Unsupervised Domain Adaptation: the Benefit of Target Expectation Maximization

In this paper we make two contributions to unsupervised domain adaptation with convolutional neural networks. First, our approach transfers knowledge in the deep side of neural networks for all convolutional layers. Previous methods usually do so by directly aligning higher-level representations, e.g., aligning the activations of fully-connected layers. In this case, although the convolutional l...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i10.17027