Achieving Domain Generalization in Underwater Object Detection by Image
Stylization and Domain Mixup
- URL: http://arxiv.org/abs/2104.02230v1
- Date: Tue, 6 Apr 2021 01:45:07 GMT
- Title: Achieving Domain Generalization in Underwater Object Detection by Image
Stylization and Domain Mixup
- Authors: Pinhao Song, Linhui Dai, Peipei Yuan, Hong Liu and Runwei Ding
- Abstract summary: Existing underwater object detection methods degrade severely when facing the domain shift caused by complicated underwater environments.
We propose a domain generalization method from the perspective of data augmentation.
Comprehensive experiments on the S-UODAC2020 dataset demonstrate that the proposed method is able to learn domain-invariant representations.
- Score: 8.983901488753967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of existing underwater object detection methods degrades
severely when facing the domain shift caused by complicated underwater
environments. Because the dataset covers only a limited number of domains,
deep detectors tend to simply memorize the few seen domains, which leads to low
generalization ability. It can therefore be inferred that a detector trained on
as many domains as possible will be more domain-invariant. Based on this
viewpoint, we propose a domain generalization method from the perspective of data
augmentation. First, a style transfer model transforms images from one source
domain to another, enriching the domain diversity of the training data. Second,
by interpolating different domains at the feature level, new domains can be
sampled on the domain manifold. With our method, detectors become robust to
domain shift. Comprehensive experiments on the S-UODAC2020 dataset demonstrate
that the proposed method learns domain-invariant representations and outperforms
other domain generalization methods. The source code is available at
https://github.com/mousecpn.
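
As a concrete illustration of the two augmentation steps described in the abstract, below is a minimal PyTorch sketch. The AdaIN-style channel-statistics swap merely stands in for the trained style transfer model used in the paper, and the function names, tensor shapes, and the Beta(alpha, alpha) mixing prior are illustrative assumptions rather than the authors' exact formulation.

```python
# Minimal sketch, not the authors' implementation. Assumptions: PyTorch tensors
# of shape (N, C, H, W), an AdaIN-style statistics swap in place of the paper's
# trained style transfer model, and a Beta(alpha, alpha) mixing prior.
import torch


def adain_stylize(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Shift per-channel statistics of `content` toward those of `style`,
    approximating a transfer of `content` into the style image's domain."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mu) / c_std + s_mu


def domain_mixup(feats_a: torch.Tensor, feats_b: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Interpolate backbone features of the same images rendered in two
    different source domains, sampling a new point on the domain manifold."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * feats_a + (1.0 - lam) * feats_b
```

In a training loop, one would stylize a batch toward several other source domains, extract backbone features for two stylized variants of the same images, and feed the mixed features to the detection head so the detector is exposed to interpolated domains as well as the original ones.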
Related papers
- Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation [40.667166043101076]
We propose a small adapter for rectifying diverse target domain styles to the source domain.
The adapter is trained to rectify the image features from diverse synthesized target domains to align with the source domain.
Our method achieves promising results on cross-domain few-shot semantic segmentation tasks.
arXiv Detail & Related papers (2024-04-16T07:07:40Z)
- Compositional Semantic Mix for Domain Adaptation in Point Cloud Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g., synthetic) and point clouds from a target domain (e.g., real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z)
- Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization [9.577254317971933]
We argue that the domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propose a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space.
arXiv Detail & Related papers (2023-02-05T09:48:57Z)
- Multi-Scale Multi-Target Domain Adaptation for Angle Closure Classification [50.658613573816254]
We propose a novel Multi-scale Multi-target Domain Adversarial Network (M2DAN) for angle closure classification.
Based on these domain-invariant features at different scales, the deep model trained on the source domain is able to classify angle closure on multiple target domains.
arXiv Detail & Related papers (2022-08-25T15:27:55Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Multilevel Knowledge Transfer for Cross-Domain Object Detection [26.105283273950942]
Domain shift is a well-known problem where a model trained on a particular domain (source) does not perform well when exposed to samples from a different domain (target).
In this work, we address the domain shift problem for the object detection task.
Our approach relies on gradually removing the domain shift between the source and the target domains.
arXiv Detail & Related papers (2021-08-02T15:24:40Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Improved Multi-Source Domain Adaptation by Preservation of Factors [0.0]
Domain Adaptation (DA) is a highly relevant research topic when it comes to image classification with deep neural networks.
In this paper, based on a theory of visual factors, we describe how real-world scenes appear in images in general.
We show that different domains can be described by a set of so-called domain factors, whose values are consistent within a domain but can change across domains.
arXiv Detail & Related papers (2020-10-15T14:19:57Z)
- Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
arXiv Detail & Related papers (2020-07-17T22:05:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.