Robust Ensembling Network for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2108.09473v1
- Date: Sat, 21 Aug 2021 09:19:13 GMT
- Title: Robust Ensembling Network for Unsupervised Domain Adaptation
- Authors: Han Sun, Lei Lin, Ningzhong Liu, Huiyu Zhou
- Abstract summary: We propose a Robust Ensembling Network (REN) for unsupervised domain adaptation (UDA)
REN mainly comprises a teacher network and a student network: the student performs standard domain adaptation training, and its weights are used to update the teacher network.
To further improve the basic ability of the student network, a consistency constraint is used to balance the error between the student network and the teacher network.
- Score: 20.152004296679138
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, in order to address the unsupervised domain adaptation (UDA)
problem, extensive studies have been proposed to achieve transferable models.
Among them, the most prevalent method is adversarial domain adaptation, which
can shorten the distance between the source domain and the target domain.
Although adversarial learning is very effective, it still leads to the
instability of the network and the drawbacks of confusing category information.
In this paper, we propose a Robust Ensembling Network (REN) for UDA, which
applies a robust time ensembling teacher network to learn global information
for domain transfer. Specifically, REN mainly comprises a teacher network and a
student network: the student performs standard domain adaptation training, and
its weights are used to update the teacher network. In addition, we propose a dual-network
conditional adversarial loss to improve the ability of the discriminator.
Finally, for the purpose of improving the basic ability of the student network,
we utilize the consistency constraint to balance the error between the student
network and the teacher network. Extensive experimental results on several UDA
datasets have demonstrated the effectiveness of our model by comparing with
other state-of-the-art UDA algorithms.
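The temporal-ensembling teacher update and the student-teacher consistency constraint described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the decay rate `alpha`, the mean-squared-error form of the consistency loss, and the function names are all assumptions for illustration.

```python
def ema_update(teacher_w, student_w, alpha=0.99):
    """Temporal-ensembling update: the teacher's weights are an
    exponential moving average of successive student weights.
    alpha is the EMA decay rate (an assumed value here)."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_w, student_w)]

def consistency_loss(student_out, teacher_out):
    """Consistency constraint sketched as a mean-squared error
    between student and teacher predictions on the same
    unlabeled target-domain input (the paper's exact loss may differ)."""
    n = len(student_out)
    return sum((s - t) ** 2 for s, t in zip(student_out, teacher_out)) / n
```

In a training loop, the student would be updated by gradient descent on the adaptation losses plus this consistency term, after which `ema_update` refreshes the teacher; the teacher itself receives no gradients.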
Related papers
- Adaptive Teaching with Shared Classifier for Knowledge Distillation [6.03477652126575]
Knowledge distillation (KD) is a technique used to transfer knowledge from a teacher network to a student network.
We propose adaptive teaching with a shared classifier (ATSC).
Our approach achieves state-of-the-art results on the CIFAR-100 and ImageNet datasets in both single-teacher and multi-teacher scenarios.
arXiv Detail & Related papers (2024-06-12T08:51:08Z) - Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
arXiv Detail & Related papers (2024-01-12T02:48:51Z) - Common Knowledge Learning for Generating Transferable Adversarial
Examples [60.1287733223249]
This paper focuses on an important type of black-box attacks, where the adversary generates adversarial examples by a substitute (source) model.
Existing methods tend to give unsatisfactory adversarial transferability when the source and target models are from different types of DNN architectures.
We propose a common knowledge learning (CKL) framework to learn better network weights to generate adversarial examples.
arXiv Detail & Related papers (2023-07-01T09:07:12Z) - Feature-domain Adaptive Contrastive Distillation for Efficient Single
Image Super-Resolution [3.2453621806729234]
CNN-based SISR models require numerous parameters and high computational cost to achieve better performance.
Knowledge Distillation (KD) transfers a teacher's useful knowledge to a student.
We propose a feature-domain adaptive contrastive distillation (FACD) method for efficiently training lightweight student SISR networks.
arXiv Detail & Related papers (2022-11-29T06:24:14Z) - Exploring Adversarially Robust Training for Unsupervised Domain
Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance the robustness of unlabeled data via adversarial training (AT) while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
arXiv Detail & Related papers (2022-02-18T17:05:19Z) - Understanding and Improvement of Adversarial Training for Network
Embedding from an Optimization Perspective [31.312873512603808]
Network Embedding aims to learn a function mapping nodes to a Euclidean space that contributes to multiple learning and analysis tasks on networks.
To tackle these problems, researchers utilize Adversarial Training for Network Embedding (AdvTNE) and achieve state-of-the-art performance.
arXiv Detail & Related papers (2021-05-17T16:41:53Z) - Teacher-Student Competition for Unsupervised Domain Adaptation [28.734814582911845]
This paper proposes an unsupervised domain adaptation approach with Teacher-Student Competition (TSC)
In particular, a student network is introduced to learn the target-specific feature space, and we design a novel competition mechanism to select more credible pseudo-labels for the training of student network.
Our proposed TSC framework significantly outperforms the state-of-the-art domain adaptation methods on Office-31 and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2020-10-19T14:58:29Z) - Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z) - Delta Schema Network in Model-based Reinforcement Learning [125.99533416395765]
This work is devoted to unresolved problems of Artificial General Intelligence - the inefficiency of transfer learning.
We extend the schema networks method, which extracts the logical relationships between objects and actions from environment data.
We present algorithms for training a Delta Schema Network (DSN), predicting future states of the environment, and planning actions that lead to positive reward.
arXiv Detail & Related papers (2020-06-17T15:58:25Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.