Pseudo-Trilateral Adversarial Training for Domain Adaptive
Traversability Prediction
- URL: http://arxiv.org/abs/2306.14370v1
- Date: Mon, 26 Jun 2023 00:39:32 GMT
- Title: Pseudo-Trilateral Adversarial Training for Domain Adaptive
Traversability Prediction
- Authors: Zheng Chen, Durgakant Pushp, Jason M. Gregory, Lantao Liu
- Abstract summary: Traversability prediction is a fundamental perception capability for autonomous navigation.
We propose a novel perception model that adopts a coarse-to-fine alignment (CALI) to perform unsupervised domain adaptation (UDA).
We show the superiority of our proposed models over multiple baselines in several challenging domain adaptation setups.
- Score: 8.145900996884993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traversability prediction is a fundamental perception capability for
autonomous navigation. Deep neural networks (DNNs) have been widely used to
predict traversability during the last decade. The performance of DNNs is
significantly boosted by exploiting large amounts of data. However, the
diversity of data across domains creates significant gaps in prediction
performance. In this work, we aim to reduce these gaps by
proposing a novel pseudo-trilateral adversarial model that adopts a
coarse-to-fine alignment (CALI) to perform unsupervised domain adaptation
(UDA). Our aim is to transfer the perception model with high data efficiency,
eliminate the prohibitively expensive data labeling, and improve the
generalization capability during the adaptation from easy-to-access source
domains to various challenging target domains. Existing UDA methods usually
adopt a bilateral zero-sum game structure. We prove that our CALI model, which
uses a pseudo-trilateral game structure, is advantageous over existing
bilateral game structures. This work bridges theoretical analysis and algorithm
designs, leading to an efficient UDA model with easy and stable training. We
further develop a variant of CALI -- Informed CALI (ICALI), which is inspired
by the recent success of mixup data augmentation techniques and mixes
informative regions based on the results of CALI. This mixing step provides an
explicit bridge between the two domains and exposes underperforming classes
more during training. We show the superiority of our proposed models over
multiple baselines in several challenging domain adaptation setups. To further
validate the effectiveness of our proposed models, we then combine our
perception model with a visual planner to build a navigation system and show
the high reliability of our model in complex natural environments.
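The abstract describes an adversarial UDA model with coarse-to-fine alignment (CALI) and an ICALI variant that mixes informative regions across domains. As a rough illustration of this family of methods, the sketch below shows one adversarial alignment step (a segmentation network trained on labeled source data while fooling feature-level and output-level domain discriminators), followed by a confidence-based cross-domain mixup step. All module names, loss weights, and the region-selection rule are illustrative assumptions and do not reproduce the authors' CALI/ICALI implementation or its pseudo-trilateral game structure.

```python
# Hypothetical sketch of adversarial UDA with a cross-domain mixup step.
# Everything below (architectures, loss weights, confidence threshold) is an
# assumption for illustration, not the paper's CALI/ICALI implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNet(nn.Module):
    """Toy fully convolutional segmenter producing per-pixel class logits."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        feat = self.encoder(x)            # features for coarse (feature-level) alignment
        return feat, self.decoder(feat)   # logits for fine (output-level) alignment

class Discriminator(nn.Module):
    """Per-pixel domain classifier: 1 = source-like, 0 = target-like."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 1))

    def forward(self, x):
        return self.net(x)

def mix_informative_regions(src_img, src_lbl, tgt_img, tgt_pseudo, conf, thr=0.8):
    """Assumed mixup rule: paste confidently pseudo-labeled target regions into
    the source sample so one training image contains both domains."""
    mask = (conf > thr).unsqueeze(1).float()                 # B x 1 x H x W
    mixed_img = mask * tgt_img + (1.0 - mask) * src_img
    mixed_lbl = torch.where(mask.squeeze(1).bool(), tgt_pseudo, src_lbl)
    return mixed_img, mixed_lbl

num_classes = 2
seg = SegNet(num_classes)
d_feat, d_out = Discriminator(32), Discriminator(num_classes)
opt_g = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(list(d_feat.parameters()) + list(d_out.parameters()), lr=1e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

def bce_to(pred, real):
    """BCE against an all-ones (source) or all-zeros (target) domain label map."""
    return bce(pred, torch.ones_like(pred) if real else torch.zeros_like(pred))

# Dummy batch: labeled source images, unlabeled target images.
src_img = torch.randn(2, 3, 64, 64)
src_lbl = torch.randint(0, num_classes, (2, 64, 64))
tgt_img = torch.randn(2, 3, 64, 64)

# ---- Segmenter step: supervised source loss + fool both discriminators ----
opt_g.zero_grad()
s_feat, s_logits = seg(src_img)
t_feat, t_logits = seg(tgt_img)
loss_seg = ce(s_logits, src_lbl)
loss_adv = bce_to(d_feat(t_feat), True) + bce_to(d_out(F.softmax(t_logits, 1)), True)
(loss_seg + 0.01 * loss_adv).backward()   # 0.01 is an arbitrary adversarial weight
opt_g.step()

# ---- Discriminator step: tell source apart from target at both levels ----
opt_d.zero_grad()
loss_d = (bce_to(d_feat(s_feat.detach()), True) + bce_to(d_feat(t_feat.detach()), False)
          + bce_to(d_out(F.softmax(s_logits.detach(), 1)), True)
          + bce_to(d_out(F.softmax(t_logits.detach(), 1)), False))
loss_d.backward()
opt_d.step()

# ---- Mixup step (ICALI-flavoured, assumed): train on cross-domain mixtures ----
with torch.no_grad():
    conf, tgt_pseudo = F.softmax(seg(tgt_img)[1], 1).max(dim=1)
mixed_img, mixed_lbl = mix_informative_regions(src_img, src_lbl, tgt_img, tgt_pseudo, conf)
opt_g.zero_grad()
ce(seg(mixed_img)[1], mixed_lbl).backward()
opt_g.step()
```

In this sketch the mixup step explicitly exposes the network to images that straddle both domains, which is the role the abstract attributes to ICALI's informative-region mixing; the actual region-selection criterion used by the authors may differ from the simple confidence threshold assumed here.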
Related papers
- CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D
Object Detection [14.063365469339812]
LiDAR-based 3D Object Detection methods often do not generalize well to target domains outside the source (or training) data distribution.
We introduce a novel unsupervised domain adaptation (UDA) method, called CMDA, which leverages visual semantic cues from an image modality.
We also introduce a self-training-based learning strategy, wherein a model is adversarially trained to generate domain-invariant features.
arXiv Detail & Related papers (2024-03-06T14:12:38Z)
- ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation [48.039156140237615]
A Continual Test-Time Adaptation (CTTA) task is proposed to adapt the pre-trained model to continually changing target domains.
We design a Visual Domain Adapter (ViDA) for CTTA, explicitly handling both domain-specific and domain-shared knowledge.
Our proposed method achieves state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-06-07T11:18:53Z)
- CNN Feature Map Augmentation for Single-Source Domain Generalization [6.053629733936548]
Domain Generalization (DG) has gained significant traction during the past few years.
The goal in DG is to produce models which continue to perform well when presented with data distributions different from the ones available during training.
We propose an alternative regularization technique for convolutional neural network architectures in the single-source DG image classification setting.
arXiv Detail & Related papers (2023-05-26T08:48:17Z)
- CAusal and collaborative proxy-tasKs lEarning for Semi-Supervised Domain Adaptation [20.589323508870592]
Semi-supervised domain adaptation (SSDA) adapts a learner to a new domain by effectively utilizing source domain data and a few labeled target samples.
We show that the proposed model significantly outperforms SOTA methods in terms of effectiveness and generalisability on SSDA datasets.
arXiv Detail & Related papers (2023-03-30T16:48:28Z)
- Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder [23.860842627883187]
We teach the model to capture broader variations of the feature distributions with a novel noise-enhanced supervised autoencoder (NSAE).
NSAE trains the model by jointly reconstructing inputs and predicting the labels of inputs as well as their reconstructed pairs (see the sketch after this list).
We also take advantage of the NSAE structure and propose a two-step fine-tuning procedure that achieves better adaptation and improves classification performance in the target domain.
arXiv Detail & Related papers (2021-08-11T04:45:56Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Dual Attentive Sequential Learning for Cross-Domain Click-Through Rate Prediction [76.98616102965023]
Cross-domain recommender systems constitute a powerful method to tackle the cold-start and sparsity problems.
We propose a novel approach to cross-domain sequential recommendations based on the dual learning mechanism.
arXiv Detail & Related papers (2021-06-05T01:21:21Z)
- Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by more than 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
- Decomposed Adversarial Learned Inference [118.27187231452852]
We propose a novel approach, Decomposed Adversarial Learned Inference (DALI).
DALI explicitly matches prior and conditional distributions in both data and code spaces.
We validate the effectiveness of DALI on the MNIST, CIFAR-10, and CelebA datasets.
arXiv Detail & Related papers (2020-04-21T20:00:35Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
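The entry above on Boosting the Generalization Capability in Cross-Domain Few-shot Learning describes an NSAE that jointly reconstructs inputs and predicts the labels of both the inputs and their reconstructed pairs. Below is a minimal, hypothetical sketch of such a joint objective; the architecture, noise scale, and equal loss weighting are assumptions for illustration and do not reproduce the NSAE authors' implementation.

```python
# Hypothetical NSAE-style joint objective: reconstruct a noisy input and
# classify both the original input and its reconstruction.
# Architecture, noise scale, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    def __init__(self, in_dim=512, hid=128, num_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.decoder = nn.Linear(hid, in_dim)           # reconstruction head
        self.classifier = nn.Linear(hid, num_classes)   # label-prediction head

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = SupervisedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

x = torch.randn(8, 512)                   # pre-extracted features (assumed input)
y = torch.randint(0, 5, (8,))

noisy_x = x + 0.1 * torch.randn_like(x)   # noise-enhanced input
recon, logits = model(noisy_x)
_, recon_logits = model(recon)            # also classify the reconstructed pair

# Joint loss: reconstruct the clean input, classify both input and reconstruction.
loss = mse(recon, x) + ce(logits, y) + ce(recon_logits, y)
opt.zero_grad()
loss.backward()
opt.step()
```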