Cross-Inferential Networks for Source-free Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2306.16957v1
- Date: Thu, 29 Jun 2023 14:04:24 GMT
- Title: Cross-Inferential Networks for Source-free Unsupervised Domain
Adaptation
- Authors: Yushun Tang, Qinghai Guo, and Zhihai He
- Abstract summary: We propose to explore a new method called cross-inferential networks (CIN).
Our main idea is that, when we adapt the network model to predict the sample labels from encoded features, we use these prediction results to construct new training samples with derived labels.
Our experimental results on benchmark datasets demonstrate that our proposed CIN approach can significantly improve the performance of source-free UDA.
- Score: 17.718392065388503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One central challenge in source-free unsupervised domain adaptation (UDA) is
the lack of an effective approach to evaluate the prediction results of the
adapted network model in the target domain. To address this challenge, we
propose to explore a new method called cross-inferential networks (CIN). Our
main idea is that, when we adapt the network model to predict the sample labels
from encoded features, we use these prediction results to construct new
training samples with derived labels to learn a new examiner network that
performs a different but compatible task in the target domain. Specifically, in
this work, the base network model is performing image classification while the
examiner network is tasked to perform relative ordering of triplets of samples
whose training labels are carefully constructed from the prediction results of
the base network model. Two similarity measures, cross-network correlation
matrix similarity and attention consistency, are then developed to provide
important guidance for the UDA process. Our experimental results on benchmark
datasets demonstrate that our proposed CIN approach can significantly improve
the performance of source-free UDA.
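The abstract names the two guidance measures but gives no formulas. As a hypothetical sketch only (the function names and the Frobenius-cosine formulation are our assumptions, not taken from the paper), a cross-network correlation matrix similarity between batches of encoded features from the base and examiner networks might be computed like this:

```python
import numpy as np

def correlation_matrix(features):
    # features: (batch, dim) array of encoded features.
    # Center each feature dimension over the batch, then normalize,
    # so f.T @ f is a (dim, dim) correlation matrix.
    f = features - features.mean(axis=0, keepdims=True)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    return f.T @ f

def cross_network_similarity(base_feats, examiner_feats):
    # Compare the correlation structures of the two networks' features
    # via the normalized Frobenius inner product (a cosine similarity
    # between matrices): 1.0 means identical correlation structure.
    cb = correlation_matrix(base_feats)
    ce = correlation_matrix(examiner_feats)
    num = (cb * ce).sum()
    den = np.linalg.norm(cb) * np.linalg.norm(ce) + 1e-8
    return num / den
```

A score of this form could serve as an unsupervised adaptation signal: higher agreement between the two networks' feature correlation structures suggests more reliable predictions on the unlabeled target domain.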
Related papers
- Adversarial Semi-Supervised Domain Adaptation for Semantic Segmentation:
A New Role for Labeled Target Samples [7.199108088621308]
We design new training objective losses for cases where labeled target data behave as source samples or as real target samples.
To support our approach, we consider a complementary method that mixes source and labeled target data, then applies the same adaptation process.
We illustrate our findings through extensive experiments on the benchmarks GTA5, SYNTHIA, and Cityscapes.
arXiv Detail & Related papers (2023-12-12T15:40:22Z)
- Self-training through Classifier Disagreement for Cross-Domain Opinion
Target Extraction [62.41511766918932]
Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental task in opinion mining.
Recent work focuses on cross-domain OTE, which is typically encountered in real-world scenarios.
We propose a new SSL approach that selects unlabelled target samples on which the outputs of a domain-specific teacher network and a student network disagree.
arXiv Detail & Related papers (2023-02-28T16:31:17Z)
- Training a Bidirectional GAN-based One-Class Classifier for Network
Intrusion Detection [8.158224495708978]
Existing generative adversarial networks (GANs) are primarily used to create synthetic samples from real ones.
In our proposed method, we use the trained encoder and discriminator of a Bidirectional GAN (Bi-GAN) as a one-class classifier.
Our experimental results illustrate that our proposed method is highly effective for network intrusion detection tasks.
arXiv Detail & Related papers (2022-02-02T23:51:11Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D
Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Self-Challenging Improves Cross-Domain Generalization [81.99554996975372]
Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels.
We introduce a simple training heuristic, Self-Challenging Representation (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
RSC iteratively challenges the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels.
arXiv Detail & Related papers (2020-07-05T21:42:26Z)
- Calibrated Adversarial Refinement for Stochastic Semantic Segmentation [5.849736173068868]
We present a strategy for learning a calibrated predictive distribution over semantic maps, where the probability associated with each prediction reflects its ground truth correctness likelihood.
We demonstrate the versatility and robustness of the approach by achieving state-of-the-art results on the multigrader LIDC dataset and on a modified Cityscapes dataset with injected ambiguities.
We show that the core design can be adapted to other tasks requiring learning a calibrated predictive distribution by experimenting on a toy regression dataset.
arXiv Detail & Related papers (2020-06-23T16:39:59Z)
- A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning [72.30054522048553]
We present a new method, Transductive Multi-Head Few-Shot learning (TMHFS), to address the Cross-Domain Few-Shot Learning challenge.
The proposed methods greatly outperform the strong baseline, fine-tuning, on four different target domains.
arXiv Detail & Related papers (2020-06-08T02:39:59Z)
- Incremental Unsupervised Domain-Adversarial Training of Neural Networks [17.91571291302582]
In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn.
Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively.
Our results report a clear improvement with respect to the non-incremental case in several datasets, also outperforming other state-of-the-art domain adaptation algorithms.
arXiv Detail & Related papers (2020-01-13T09:54:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.