ADAS: A Simple Active-and-Adaptive Baseline for Cross-Domain 3D Semantic
Segmentation
- URL: http://arxiv.org/abs/2212.10390v2
- Date: Wed, 21 Dec 2022 12:47:03 GMT
- Title: ADAS: A Simple Active-and-Adaptive Baseline for Cross-Domain 3D Semantic
Segmentation
- Authors: Ben Fei, Siyuan Huang, Jiakang Yuan, Botian Shi, Bo Zhang, Tao Chen,
Min Dou, Yu Qiao
- Abstract summary: We propose an Active-and-Adaptive (ADAS) baseline to enhance the weak cross-domain generalization ability of a well-trained 3D segmentation model.
ADAS performs an active sampling operation to select a maximally-informative subset from both source and target domains for effective adaptation.
ADAS is verified to be effective in many cross-domain settings, including: 1) Unsupervised Domain Adaptation (UDA), where all samples from the target domain are unlabeled; and 2) Unsupervised Few-shot Domain Adaptation (UFDA), where only a few unlabeled samples are available from the target domain.
- Score: 38.66509154973051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art 3D semantic segmentation models are trained on
off-the-shelf public benchmarks, but they often face a major challenge when
these well-trained models are deployed to a new domain. In this paper, we
propose an Active-and-Adaptive Segmentation (ADAS) baseline to enhance the weak
cross-domain generalization ability of a well-trained 3D segmentation model
and to bridge the point-distribution gap between domains. Specifically, before
the cross-domain adaptation stage begins, ADAS performs an active sampling
operation to select a maximally-informative subset from both the source and
target domains for effective adaptation, reducing the adaptation difficulty
under 3D scenarios. Benefiting from the rise of multi-modal 2D-3D datasets,
ADAS utilizes a cross-modal attention-based feature fusion module that extracts
a representative pair of image and point features to achieve a bi-directional
image-point feature interaction for safer adaptation. Experimentally, ADAS is
verified to be effective in many cross-domain settings, including: 1)
Unsupervised Domain Adaptation (UDA), where all samples from the target domain
are unlabeled; 2) Unsupervised Few-shot Domain Adaptation (UFDA), where only a
few unlabeled samples are available from the target domain; and 3) Active
Domain Adaptation (ADA), where the target samples selected by ADAS are manually
annotated. The results demonstrate that ADAS achieves significant accuracy
gains and can be easily coupled with self-training methods or off-the-shelf
UDA works.
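
To make the two components described in the abstract more concrete, below is a minimal, hypothetical sketch of (1) an entropy-based active sampling step that keeps the most informative scans and (2) a bi-directional cross-modal attention block that fuses image and point features. All names, shapes, and the entropy criterion are illustrative assumptions; the paper's actual selection criterion and fusion design may differ.

```python
# Hypothetical sketch only: names, shapes, and criteria are assumptions,
# not the authors' released implementation.
import torch
import torch.nn as nn


def select_informative_subset(logits_per_scan, budget):
    """Rank scans by mean per-point prediction entropy and keep the top `budget`.

    logits_per_scan: list of (num_points, num_classes) tensors, one per scan.
    Returns indices of the scans with the highest mean entropy, used here as an
    assumed proxy for "maximally informative".
    """
    scores = []
    for logits in logits_per_scan:
        probs = logits.softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
        scores.append(entropy.mean())
    order = torch.argsort(torch.stack(scores), descending=True)
    return order[:budget].tolist()


class BiDirectionalCrossModalFusion(nn.Module):
    """Points attend to pixels and pixels attend to points (illustrative only)."""

    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.img_to_pts = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pts_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, img_feats, pts_feats):
        # img_feats: (B, num_pixels, dim); pts_feats: (B, num_points, dim)
        fused_pts, _ = self.img_to_pts(pts_feats, img_feats, img_feats)
        fused_img, _ = self.pts_to_img(img_feats, pts_feats, pts_feats)
        # Residual connections keep the original uni-modal features.
        return img_feats + fused_img, pts_feats + fused_pts


if __name__ == "__main__":
    fake_logits = [torch.randn(2048, 10) for _ in range(8)]
    print(select_informative_subset(fake_logits, budget=3))

    fusion = BiDirectionalCrossModalFusion()
    out_img, out_pts = fusion(torch.randn(2, 100, 64), torch.randn(2, 2048, 64))
    print(out_img.shape, out_pts.shape)
```

Under the ADA setting described above, the indices returned by the selection step would correspond to the scans sent for manual annotation; under UDA/UFDA they would be the unlabeled scans used for adaptation.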
Related papers
- CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D
Object Detection [14.063365469339812]
LiDAR-based 3D Object Detection methods often do not generalize well to target domains outside the source (or training) data distribution.
We introduce a novel unsupervised domain adaptation (UDA) method, called CMDA, which leverages visual semantic cues from an image modality.
We also introduce a self-training-based learning strategy, wherein a model is adversarially trained to generate domain-invariant features.
arXiv Detail & Related papers (2024-03-06T14:12:38Z)
- D3GU: Multi-Target Active Domain Adaptation via Enhancing Domain Alignment [58.23851910855917]
A Multi-Target Active Domain Adaptation (MT-ADA) framework for image classification, named D3GU, is proposed.
D3GU applies Decomposed Domain Discrimination (D3) during training to achieve both source-target and target-target domain alignments.
Experiments on three benchmark datasets, Office31, OfficeHome, and DomainNet, validate the consistently superior performance of D3GU for MT-ADA.
arXiv Detail & Related papers (2024-01-10T13:45:51Z)
- Divide and Adapt: Active Domain Adaptation via Customized Learning [56.79144758380419]
We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instances into four categories with stratified transferable properties.
With a novel data subdivision protocol based on uncertainty and domainness, DiaNA can accurately recognize the most gainful samples.
Thanks to the "divide-and-adapt" spirit, DiaNA can handle data with large variations of domain gap.
arXiv Detail & Related papers (2023-07-21T14:37:17Z)
- ADAS: A Direct Adaptation Strategy for Multi-Target Domain Adaptive Semantic Segmentation [12.148050135641583]
We design a multi-target domain transfer network (MTDT-Net) that aligns visual attributes across domains.
We also propose a bi-directional adaptive region selection (BARS) that reduces the attribute ambiguity among the class labels.
Our method is the first MTDA method that directly adapts to multiple domains in semantic segmentation.
arXiv Detail & Related papers (2022-03-14T01:55:42Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models.
We propose a Contrastive Learning framework for Semi-Supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all the above datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Semi-Supervised Domain Adaptation via Adaptive and Progressive Feature Alignment [32.77436219094282]
SSDAS employs a few labeled target samples as anchors for adaptive and progressive feature alignment between labeled source samples and unlabeled target samples.
In addition, we continuously replace dissimilar source features with high-confidence target features during the iterative training process.
Extensive experiments show the proposed SSDAS greatly outperforms a number of baselines.
arXiv Detail & Related papers (2021-06-05T09:12:50Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can further help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)