Cross-Modality Domain Adaptation for Freespace Detection: A Simple yet
Effective Baseline
- URL: http://arxiv.org/abs/2210.02991v1
- Date: Thu, 6 Oct 2022 15:31:49 GMT
- Title: Cross-Modality Domain Adaptation for Freespace Detection: A Simple yet
Effective Baseline
- Authors: Yuanbin Wang, Leyan Zhu, Shaofei Huang, Tianrui Hui, Xiaojie Li, Fei
Wang, Si Liu
- Abstract summary: Freespace detection aims at classifying each pixel of the image captured by the camera as drivable or non-drivable.
We develop a cross-modality domain adaptation framework which exploits both RGB images and surface normal maps generated from depth images.
To better bridge the domain gap between source domain (synthetic data) and target domain (real-world data), we also propose a Selective Feature Alignment (SFA) module.
- Score: 21.197212665408262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As one of the fundamental functions of autonomous driving system, freespace
detection aims at classifying each pixel of the image captured by the camera as
drivable or non-drivable. Current freespace detection methods rely heavily on
large amounts of densely labeled training data for accuracy and robustness,
which are time-consuming and laborious to collect and annotate. To the best of
our knowledge, ours is the first work to explore unsupervised domain adaptation
for freespace detection, alleviating this data limitation with synthetic
data. We develop a cross-modality domain adaptation framework which exploits
both RGB images and surface normal maps generated from depth images. A
Collaborative Cross Guidance (CCG) module is proposed to leverage the context
information of one modality to guide the other modality in a cross manner, thus
realizing inter-modality intra-domain complement. To better bridge the domain
gap between source domain (synthetic data) and target domain (real-world data),
we also propose a Selective Feature Alignment (SFA) module which only aligns
the features of consistent foreground area between the two domains, thus
realizing inter-domain intra-modality adaptation. Extensive experiments are
conducted by adapting each of three different synthetic datasets to one
real-world freespace detection dataset. Our method performs comparably to
fully supervised freespace detection methods (93.08 vs. 97.50 F1 score) and
outperforms general unsupervised domain adaptation methods for semantic
segmentation by large margins, which shows the promising potential of domain
adaptation for freespace detection.
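The abstract states that surface normal maps are generated from depth images but does not describe the procedure. Below is a minimal sketch of one common way to derive normals from a depth map (back-projection with camera intrinsics followed by finite-difference cross products); the function name, the PyTorch framing, and the intrinsics parameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: depth -> surface normals via back-projection and cross products.
# Assumes a pinhole camera with intrinsics (fx, fy, cx, cy); not the paper's exact code.
import torch
import torch.nn.functional as F

def depth_to_normals(depth, fx, fy, cx, cy):
    """depth: (B, 1, H, W) metric depth map. Returns unit normals of shape (B, 3, H, W)."""
    B, _, H, W = depth.shape
    dtype, device = depth.dtype, depth.device
    # Pixel coordinate grid (v = row index, u = column index).
    v, u = torch.meshgrid(
        torch.arange(H, device=device, dtype=dtype),
        torch.arange(W, device=device, dtype=dtype),
        indexing="ij",
    )
    # Back-project every pixel to a 3D point in camera coordinates.
    z = depth.squeeze(1)                               # (B, H, W)
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    points = torch.stack([x, y, z], dim=1)             # (B, 3, H, W)
    # Finite differences along the image axes approximate local tangent vectors.
    du = points[:, :, :, 1:] - points[:, :, :, :-1]    # (B, 3, H, W-1)
    dv = points[:, :, 1:, :] - points[:, :, :-1, :]    # (B, 3, H-1, W)
    du = F.pad(du, (0, 1, 0, 0))                       # pad back to (B, 3, H, W)
    dv = F.pad(dv, (0, 0, 0, 1))
    # The surface normal is the (normalized) cross product of the two tangents.
    normals = torch.cross(du, dv, dim=1)
    return F.normalize(normals, dim=1, eps=1e-6)
```

Similarly, the Selective Feature Alignment (SFA) module is described only as aligning features of the consistent foreground area between the two domains. The sketch below illustrates that idea with a simple masked prototype-matching loss; the loss form and all names are assumptions for illustration, not the authors' actual objective.

```python
import torch.nn.functional as F

def masked_alignment_loss(feat_src, feat_tgt, mask_src, mask_tgt, eps=1e-6):
    """feat_*: (B, C, H, W) features; mask_*: (B, 1, H, W) binary freespace masks."""
    # Pool features over the foreground (freespace) area of each domain.
    src_proto = (feat_src * mask_src).sum(dim=(2, 3)) / (mask_src.sum(dim=(2, 3)) + eps)
    tgt_proto = (feat_tgt * mask_tgt).sum(dim=(2, 3)) / (mask_tgt.sum(dim=(2, 3)) + eps)
    # Penalize the gap between batch-averaged foreground prototypes only.
    return F.mse_loss(src_proto.mean(dim=0), tgt_proto.mean(dim=0))
```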
Related papers
- Source-free Domain Adaptive Object Detection in Remote Sensing Images [11.19538606490404]
We propose a source-free object detection (SFOD) setting for RS images.
It aims to perform target domain adaptation using only the source pre-trained model.
Our method does not require access to source domain RS images.
arXiv Detail & Related papers (2024-01-31T15:32:44Z)
- Compositional Semantic Mix for Domain Adaptation in Point Cloud Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g. synthetic) and point clouds from a target domain (e.g. real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z)
- Improving Anomaly Segmentation with Multi-Granularity Cross-Domain Alignment [17.086123737443714]
Anomaly segmentation plays a pivotal role in identifying atypical objects in images, crucial for hazard detection in autonomous driving systems.
While existing methods demonstrate noteworthy results on synthetic data, they often fail to consider the disparity between synthetic and real-world data domains.
We introduce the Multi-Granularity Cross-Domain Alignment framework, tailored to harmonize features across domains at both the scene and individual sample levels.
arXiv Detail & Related papers (2023-08-16T22:54:49Z)
- An Unsupervised Domain Adaptive Approach for Multimodal 2D Object Detection in Adverse Weather Conditions [5.217255784808035]
We propose an unsupervised domain adaptation framework to bridge the domain gap between source and target domains.
We use a data augmentation scheme that simulates weather distortions to add domain confusion and prevent overfitting on the source data.
Experiments performed on the DENSE dataset show that our method can substantially alleviate the domain gap.
arXiv Detail & Related papers (2022-03-07T18:10:40Z)
- Learning Cross-modal Contrastive Features for Video Domain Adaptation [138.75196499580804]
We propose a unified framework for video domain adaptation, which simultaneously regularizes cross-modal and cross-domain feature representations.
Specifically, we treat each modality in a domain as a view and leverage the contrastive learning technique with properly designed sampling strategies.
arXiv Detail & Related papers (2021-08-26T18:14:18Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z)
- Bi-Dimensional Feature Alignment for Cross-Domain Object Detection [71.85594342357815]
We propose a novel unsupervised cross-domain detection model.
It exploits the annotated data in a source domain to train an object detector for a different target domain.
The proposed model mitigates the cross-domain representation divergence for object detection.
arXiv Detail & Related papers (2020-11-14T03:03:11Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Spatial Attention Pyramid Network for Unsupervised Domain Adaptation [66.75008386980869]
Unsupervised domain adaptation is critical in various computer vision tasks.
We design a new spatial attention pyramid network for unsupervised domain adaptation.
Our method performs favorably against the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-29T09:03:23Z)