Self-Supervision & Meta-Learning for One-Shot Unsupervised Cross-Domain
Detection
- URL: http://arxiv.org/abs/2106.03496v1
- Date: Mon, 7 Jun 2021 10:33:04 GMT
- Title: Self-Supervision & Meta-Learning for One-Shot Unsupervised Cross-Domain
Detection
- Authors: F. Cappio Borlino, S. Polizzotto, A. D'Innocente, S. Bucci, B. Caputo,
T. Tommasi
- Abstract summary: We present an object detection algorithm able to perform unsupervised adaptation across domains by using only one target sample, seen at test time.
We exploit meta-learning to simulate single-sample cross-domain learning episodes and better align with the test condition.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep detection models have proven extremely powerful in controlled
settings, but appear brittle and fail when applied off-the-shelf to unseen
domains. All the adaptive approaches developed to address this issue access a
sizable number of target samples at training time, a strategy not suitable
when the target is unknown and its data are not available in advance.
Consider, for instance, the task of monitoring image feeds from social media:
as every image is uploaded by a different user, it belongs to a different target
domain that is impossible to foresee during training. Our work addresses this
setting, presenting an object detection algorithm able to perform unsupervised
adaptation across domains by using only one target sample, seen at test time.
We introduce a multi-task architecture that one-shot adapts to any incoming
sample by iteratively solving a self-supervised task on it. We further exploit
meta-learning to simulate single-sample cross-domain learning episodes and
better align with the test condition. Moreover, a cross-task pseudo-labeling
procedure allows the model to focus on the image foreground and enhances the adaptation
process. A thorough benchmark analysis against the most recent cross-domain
detection methods and a detailed ablation study show the advantage of our
approach.
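
The abstract describes a test-time routine: for every incoming target image, a self-supervised auxiliary task is solved iteratively on that single sample so that the shared backbone adapts before detection runs. Below is a minimal, hypothetical sketch of such a loop in PyTorch, assuming a detector that exposes a `backbone` module and a rotation-classification head; names such as `adapt_and_detect`, `rotation_head`, and `num_adapt_steps` are illustrative and not taken from the authors' code.

```python
# Hedged sketch of one-shot test-time adaptation via a rotation-prediction
# self-supervised task. `detector`, `rotation_head` and the `backbone`
# attribute are assumed interfaces, not the paper's actual implementation.
import copy
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def adapt_and_detect(detector, rotation_head, target_image,
                     num_adapt_steps=10, lr=1e-4):
    """Fine-tune the shared backbone on one unlabeled target image by
    predicting its rotation, then run detection with the adapted weights."""
    # Adapt copies so the source-trained weights are reused for the next sample.
    detector = copy.deepcopy(detector)
    rotation_head = copy.deepcopy(rotation_head)
    params = list(detector.backbone.parameters()) + list(rotation_head.parameters())
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9)

    detector.train()
    for _ in range(num_adapt_steps):
        # Self-supervised task: classify which multiple of 90 degrees was applied.
        k = torch.randint(0, 4, (1,)).item()
        rotated = TF.rotate(target_image, angle=90.0 * k)
        features = detector.backbone(rotated.unsqueeze(0))
        logits = rotation_head(features)          # assumed shape (1, 4)
        loss = F.cross_entropy(logits, torch.tensor([k]))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Detect on the original (unrotated) image with the adapted backbone.
    detector.eval()
    with torch.no_grad():
        return detector([target_image])
```

The meta-learning component mentioned in the abstract would additionally train the source model so that it benefits from exactly this kind of few-step, single-sample update, and the cross-task pseudo-labeling step would restrict the self-supervised task to detected foreground regions; neither refinement is shown in this sketch.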
Related papers
- Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation [72.70876977882882]
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) follow different distributions.
We propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training.
arXiv Detail & Related papers (2023-09-03T16:02:01Z) - You Only Train Once: Learning a General Anomaly Enhancement Network with
Random Masks for Hyperspectral Anomaly Detection [31.984085248224574]
We introduce a new approach to address the challenge of generalization in hyperspectral anomaly detection (AD).
Our method eliminates the need for adjusting parameters or retraining on new test scenes as required by most existing methods.
Our method achieves competitive performance when the training and test set are captured by different sensor devices.
arXiv Detail & Related papers (2023-03-31T12:23:56Z) - Style Mixing and Patchwise Prototypical Matching for One-Shot
Unsupervised Domain Adaptive Semantic Segmentation [21.01132797297286]
In one-shot unsupervised domain adaptation, segmentors only see one unlabeled target image during training.
We propose a new OSUDA method that effectively relieves the computational burden of existing approaches.
Our method achieves new state-of-the-art performance on two commonly used benchmarks for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2021-12-09T02:47:46Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning [86.6929930921905]
This paper studies how much it helps to address domain shift when a few labeled target samples are additionally available.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - Unsupervised and self-adaptative techniques for cross-domain person
re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it adapts feature learning from a model trained on a source domain to a target domain without identity-label annotations.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z) - Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection [34.18382705952121]
Unsupervised domain adaptive object detection aims to adapt detectors from a labelled source domain to an unlabelled target domain.
However, adversarial learning may impair the alignment of well-aligned samples, as it merely aligns the global distributions across domains.
We design an uncertainty-aware domain adaptation network (UaDAN) that introduces conditional adversarial learning to align well-aligned and poorly-aligned samples separately.
arXiv Detail & Related papers (2021-02-27T15:04:07Z) - Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z) - Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining
and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets; a minimal sketch of the consistency-regularization idea appears after this list.
arXiv Detail & Related papers (2021-01-29T18:40:17Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - One-Shot Unsupervised Cross-Domain Detection [33.04327634746745]
This paper presents an object detection algorithm able to perform unsupervised adaptation across domains by using only one target sample, seen at test time.
We achieve this by introducing a multi-task architecture that one-shot adapts to any incoming sample by iteratively solving a self-supervised task on it.
arXiv Detail & Related papers (2020-05-23T22:12:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.