Few-Shot Object Detection in Unseen Domains
- URL: http://arxiv.org/abs/2204.05072v1
- Date: Mon, 11 Apr 2022 13:16:41 GMT
- Title: Few-Shot Object Detection in Unseen Domains
- Authors: Karim Guirguis, George Eskandar, Matthias Kayser, Bin Yang, Juergen
Beyerer
- Abstract summary: Few-shot object detection (FSOD) has thrived in recent years to learn novel object classes with limited data.
We propose various data augmentation techniques on the few shots of novel classes to account for all possible domain-specific information.
Our experiments on the T-LESS dataset show that the proposed approach succeeds in alleviating the domain gap considerably.
- Score: 4.36080478413575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot object detection (FSOD) has thrived in recent years to learn novel
object classes with limited data by transferring knowledge gained on abundant
base classes. FSOD approaches commonly assume that both the scarcely provided
examples of novel classes and test-time data belong to the same domain.
However, this assumption does not hold in various industrial and robotics
applications (e.g., object grasping and manipulation), where a model can learn
novel classes from a source domain while inferring on classes from a different
target domain. In this work, we address the task of zero-shot domain
adaptation, also known as domain generalization, for FSOD. Specifically, we
assume that neither images nor labels of the novel classes in the target domain
are available during training. Our approach for solving the domain gap is
two-fold. First, we leverage a meta-training paradigm, where we learn
domain-invariant features on the base classes. Second, we propose various data
augmentation techniques on the few shots of novel classes to account for all
possible domain-specific information. To further constrain the network to
encode only domain-agnostic, class-specific representations, a contrastive
loss is proposed to maximize the mutual information between foreground
proposals and class prototypes, and to reduce the network's bias to the
background information. Our experiments on the T-LESS dataset show that the
proposed approach succeeds in alleviating the domain gap considerably without
utilizing labels or images of novel categories from the target domain.
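The contrastive loss described in the abstract, which maximizes mutual information between foreground proposals and class prototypes, can be sketched as an InfoNCE-style objective. The following is an illustrative reconstruction, not the paper's exact formulation; the function name, temperature value, and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(proposals, labels, prototypes, temperature=0.1):
    """InfoNCE-style loss between foreground proposal embeddings and
    class prototypes (an illustrative sketch, not the paper's exact loss).

    proposals:  (N, D) embeddings of foreground region proposals
    labels:     (N,) class index of each proposal
    prototypes: (C, D) one prototype embedding per class
    """
    p = F.normalize(proposals, dim=1)   # compare in cosine-similarity space
    c = F.normalize(prototypes, dim=1)
    logits = p @ c.t() / temperature    # (N, C) proposal-to-prototype similarities
    # Maximizing the log-probability of each proposal's true prototype is a
    # lower bound on the mutual information between proposals and prototypes,
    # pulling foreground features toward class-specific, domain-agnostic codes.
    return F.cross_entropy(logits, labels)
```

Because only foreground proposals enter the loss, background regions contribute no positive pairs, which is one way to reduce the network's bias toward background information.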
Related papers
- Domain Adaptive Few-Shot Open-Set Learning [36.39622440120531]
We propose Domain Adaptive Few-Shot Open Set Recognition (DA-FSOS) and introduce a meta-learning-based architecture named DAFOSNET.
Our training approach ensures that DAFOS-NET can generalize well to new scenarios in the target domain.
We present three benchmarks for DA-FSOS based on the Office-Home, mini-ImageNet/CUB, and DomainNet datasets.
arXiv Detail & Related papers (2023-09-22T12:04:47Z) - Few-Shot Classification in Unseen Domains by Episodic Meta-Learning
Across Visual Domains [36.98387822136687]
Few-shot classification aims to carry out classification given only a few labeled examples for the categories of interest.
In this paper, we present a unique learning framework for domain-generalized few-shot classification.
By advancing meta-learning strategies, our learning framework exploits data across multiple source domains to capture domain-invariant features.
arXiv Detail & Related papers (2021-12-27T06:54:11Z) - Structured Latent Embeddings for Recognizing Unseen Classes in Unseen
Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Towards Recognizing New Semantic Concepts in New Visual Domains [9.701036831490768]
We argue that it is crucial to design deep architectures that can operate in previously unseen visual domains and recognize novel semantic concepts.
In the first part of the thesis, we describe different solutions to enable deep models to generalize to new visual domains.
In the second part, we show how to extend the knowledge of a pretrained deep model to new semantic concepts, without access to the original training set.
arXiv Detail & Related papers (2020-12-16T16:23:40Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have far fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z) - Handling new target classes in semantic segmentation with domain
adaptation [34.11498666008825]
We propose a framework to enable "boundless" adaptation in the target domain.
It relies on a novel architecture, along with a dedicated learning scheme, to bridge the source-target domain gap.
Our framework outperforms the baselines by significant margins.
arXiv Detail & Related papers (2020-04-02T16:59:57Z) - Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.