Domain-knowledge Inspired Pseudo Supervision (DIPS) for Unsupervised
Image-to-Image Translation Models to Support Cross-Domain Classification
- URL: http://arxiv.org/abs/2303.10310v4
- Date: Sat, 30 Sep 2023 14:38:44 GMT
- Title: Domain-knowledge Inspired Pseudo Supervision (DIPS) for Unsupervised
Image-to-Image Translation Models to Support Cross-Domain Classification
- Authors: Firas Al-Hindawi, Md Mahfuzur Rahman Siddiquee, Teresa Wu, Han Hu,
Ying Sun
- Abstract summary: This paper introduces a new method called Domain-knowledge Inspired Pseudo Supervision (DIPS)
DIPS uses domain-informed Gaussian Mixture Models to generate pseudo annotations to enable the use of traditional supervised metrics.
It proves its effectiveness by outperforming various GAN evaluation metrics, including FID, when selecting the optimal saved checkpoint model.
- Score: 16.4151067682813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to classify images depends on having access to large labeled
datasets and on testing data from the same domain the model was trained on.
Classification becomes more challenging when dealing with new data from a
different domain, where gathering and especially labeling a larger image
dataset for retraining a classification model requires labor-intensive human
effort. Cross-domain classification frameworks were developed to handle this
data domain shift problem by utilizing unsupervised image-to-image translation
models to translate an input image from the unlabeled domain to the labeled
domain. The drawback of these translation models is precisely their unsupervised
nature: without annotations, traditional supervised metrics cannot be used to
evaluate them and select the best saved checkpoint. This paper introduces a new
method called Domain-knowledge Inspired Pseudo Supervision (DIPS), which
utilizes domain-informed Gaussian Mixture Models to generate pseudo annotations,
enabling the use of traditional supervised metrics. Unlike commonly used metrics
such as FID, which was designed to evaluate generated images in terms of their
perceptual quality from a human-eye perspective, DIPS was designed specifically
to support cross-domain classification applications. DIPS proves
its effectiveness by outperforming various GAN evaluation metrics, including
FID, when selecting the optimal saved checkpoint model. It is also evaluated
against truly supervised metrics. Furthermore, DIPS showcases its robustness
and interpretability by demonstrating a strong correlation with truly
supervised metrics, highlighting its superiority over existing state-of-the-art
alternatives. The code and data to replicate the results are available in the
official GitHub repository: https://github.com/Hindawi91/DIPS
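The checkpoint-selection idea described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: the 2-D features, the helper names (`pseudo_labels`, `score_checkpoint`), and the use of macro F1 as the supervised metric are all assumptions made for the example.

```python
# Toy sketch of GMM-based pseudo supervision for checkpoint selection.
# Assumption: features from the labeled domain form clusters whose count
# is known from domain knowledge; GMM component assignments then serve
# as pseudo annotations for translated images.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Stand-in features: two well-separated clusters in the labeled domain
# (e.g. two physical classes known a priori).
labeled_feats = np.vstack([
    rng.normal(0.0, 0.5, size=(200, 2)),
    rng.normal(3.0, 0.5, size=(200, 2)),
])

# Fit a GMM whose component count comes from domain knowledge.
gmm = GaussianMixture(n_components=2, random_state=0).fit(labeled_feats)

def pseudo_labels(features):
    """Pseudo annotations: GMM component assignments."""
    return gmm.predict(features)

def score_checkpoint(translated_feats, classifier_preds):
    """Supervised metric (macro F1) between pseudo labels of the
    translated images and the downstream classifier's predictions."""
    return f1_score(pseudo_labels(translated_feats), classifier_preds,
                    average="macro")

# Toy usage: a "good" checkpoint agrees with the pseudo labels,
# a "bad" one disagrees everywhere; the score ranks them accordingly.
feats = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
                   rng.normal(3.0, 0.5, size=(50, 2))])
good_preds = pseudo_labels(feats)
bad_preds = 1 - good_preds
good_score = score_checkpoint(feats, good_preds)
bad_score = score_checkpoint(feats, bad_preds)
```

In practice one would compute this score for every saved translation-model checkpoint and keep the one that maximizes it, instead of relying on FID.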
Related papers
- SiamSeg: Self-Training with Contrastive Learning for Unsupervised Domain Adaptation Semantic Segmentation in Remote Sensing [14.007392647145448]
UDA enables models to learn from unlabeled target domain data while training on labeled source domain data.
We propose integrating contrastive learning into UDA, enhancing the model's capacity to capture semantic information.
Our SiamSeg method outperforms existing approaches, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-10-17T11:59:39Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source domain data to utilize their distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- Unsupervised domain adaptation for clinician pose estimation and instance segmentation in the OR [4.024513066910992]
We study how joint person pose estimation and instance segmentation can be performed on low-resolution images downsampled from 1x to 12x.
We propose a novel unsupervised domain adaptation method, called AdaptOR, to adapt a model from an in-the-wild labeled source domain to a statistically different unlabeled target domain.
We show the generality of our method as a semi-supervised learning (SSL) method on the large-scale COCO dataset.
arXiv Detail & Related papers (2021-08-26T14:07:43Z)
- Efficient Pre-trained Features and Recurrent Pseudo-Labeling in Unsupervised Domain Adaptation [6.942003070153651]
We show how to efficiently select the best pre-trained features from seventeen well-known ImageNet models for unsupervised DA problems.
We propose a recurrent pseudo-labeling model using the best pre-trained features (termed PRPL) to improve classification performance.
arXiv Detail & Related papers (2021-04-27T21:35:28Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can help address domain shift.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Robust Domain-Free Domain Generalization with Class-aware Alignment [4.442096198968069]
Domain-Free Domain Generalization (DFDG) is a model-agnostic method to achieve better generalization performance on the unseen test domain.
DFDG uses novel strategies to learn domain-invariant class-discriminative features.
It obtains competitive performance on both time series sensor and image classification public datasets.
arXiv Detail & Related papers (2021-02-17T17:46:06Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation [6.320141734801679]
We propose a novel approach that exploits the scale-invariance property of semantic segmentation models for self-supervised domain adaptation.
Our algorithm is based on the reasonable assumption that, in general, regardless of the scale of objects and stuff (given the context), the semantic labeling should remain unchanged.
We show that this constraint is violated over the images of the target domain, and hence could be used to transfer labels in-between differently scaled patches.
arXiv Detail & Related papers (2020-07-28T19:40:45Z)
- Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency in the combination of a local one-hot classification and a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.