Synthetic-to-Real Domain Adaptation for Lane Detection
- URL: http://arxiv.org/abs/2007.04023v2
- Date: Mon, 9 Nov 2020 12:52:14 GMT
- Title: Synthetic-to-Real Domain Adaptation for Lane Detection
- Authors: Noa Garnett, Roy Uziel, Netalee Efrat, Dan Levi
- Abstract summary: We explore learning from abundant, randomly generated synthetic data, together with unlabeled or partially labeled target domain data.
This poses the challenge of adapting models learned on the unrealistic synthetic domain to real images.
We develop a novel autoencoder-based approach that uses synthetic labels unaligned with particular images for adapting to target domain data.
- Score: 5.811502603310248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate lane detection, a crucial enabler for autonomous driving, currently
relies on obtaining a large and diverse labeled training dataset. In this work,
we explore learning from abundant, randomly generated synthetic data, together
with unlabeled or partially labeled target domain data, instead. Randomly
generated synthetic data has the advantage of controlled variability in the
lane geometry and lighting, but it is limited in terms of photo-realism. This
poses the challenge of adapting models learned on the unrealistic synthetic
domain to real images. To this end we develop a novel autoencoder-based
approach that uses synthetic labels unaligned with particular images for
adapting to target domain data. In addition, we explore existing domain
adaptation approaches, such as image translation and self-supervision, and
adjust them to the lane detection task. We test all approaches in the
unsupervised domain adaptation setting in which no target domain labels are
available and in the semi-supervised setting in which a small portion of the
target images are labeled. In extensive experiments using three different
datasets, we demonstrate that costly target domain labeling effort can largely
be saved. For example, using our proposed autoencoder approach on the LLAMAS
and TuSimple lane datasets, we almost recover the fully supervised accuracy
with only 10% of the labeled data. In addition, our autoencoder approach
outperforms all other methods in the semi-supervised domain adaptation
scenario.
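As a rough illustration of the training signal described above, the sketch below combines a reconstruction term on unlabeled target images with a supervised term on synthetic image/label pairs through a shared encoder. This is a generic toy in NumPy, not the paper's architecture: the abstract does not specify the networks, and its key twist (synthetic labels unaligned with particular images) is not captured here; all names and matrices are hypothetical stand-ins.

```python
import numpy as np

def adaptation_loss(enc, dec_img, dec_lab, x_target, x_syn, y_syn, w_recon=1.0):
    # A shared encoder (here just a matrix) embeds both domains.
    z_t = x_target @ enc
    z_s = x_syn @ enc
    # Unlabeled target images supply a reconstruction term ...
    recon = np.mean((z_t @ dec_img - x_target) ** 2)
    # ... while synthetic image/label pairs supply a supervised term.
    sup = np.mean((z_s @ dec_lab - y_syn) ** 2)
    return w_recon * recon + sup
```

The weight `w_recon` trades off how strongly the unlabeled target domain shapes the shared representation versus the synthetic supervision.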
Related papers
- SiamSeg: Self-Training with Contrastive Learning for Unsupervised Domain Adaptation Semantic Segmentation in Remote Sensing [14.007392647145448]
UDA enables models to learn from unlabeled target domain data while training on labeled source domain data.
We propose integrating contrastive learning into UDA, enhancing the model's capacity to capture semantic information.
Our SiamSeg method outperforms existing approaches, achieving state-of-the-art results.
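The summary above does not give SiamSeg's exact loss; methods of this kind typically build on an InfoNCE-style contrastive term, which can be sketched as follows (a generic illustration, not the paper's implementation):

```python
import numpy as np

def info_nce(anchors, candidates, temperature=0.1):
    # L2-normalize embeddings so the dot product is a cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    logits = (a @ c.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive for anchor i is candidate i (the diagonal).
    return float(-np.mean(np.diag(log_prob)))
```

Pulling each anchor toward its positive and away from the other candidates is what encourages the encoder to capture semantic structure in the unlabeled target domain.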
arXiv Detail & Related papers (2024-10-17T11:59:39Z)
- Adapt Anything: Tailor Any Image Classifiers across Domains And Categories Using Text-to-Image Diffusion Models [82.95591765009105]
We aim to study if a modern text-to-image diffusion model can tailor any task-adaptive image classifier across domains and categories.
We utilize only one off-the-shelf text-to-image model to synthesize images with category labels derived from the corresponding text prompts.
arXiv Detail & Related papers (2023-10-25T11:58:14Z)
- Compositional Semantic Mix for Domain Adaptation in Point Cloud Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g. synthetic) and from a target domain (e.g. real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z)
- Enhancing Visual Domain Adaptation with Source Preparation [5.287588907230967]
Domain Adaptation techniques fail to consider the characteristics of the source domain itself.
We propose Source Preparation (SP), a method to mitigate source domain biases.
We show that SP enhances UDA across a range of visual domains, with improvements up to 40.64% in mIoU over baseline.
arXiv Detail & Related papers (2023-06-16T18:56:44Z)
- Do More With What You Have: Transferring Depth-Scale from Labeled to Unlabeled Domains [43.16293941978469]
Self-supervised depth estimators result in up-to-scale predictions that are linearly correlated to their absolute depth values across the domain.
We show that aligning the field-of-view of two datasets prior to training results in a common linear relationship for both domains.
We use this observed property to transfer the depth-scale from source datasets that have absolute depth labels to new target datasets that lack these measurements.
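The depth-scale transfer described above amounts to fitting a linear map on the labeled source domain and reusing it on the target. A minimal sketch, assuming per-pixel (or per-image) depth values flattened into 1-D arrays; the function names are illustrative:

```python
import numpy as np

def fit_depth_scale(pred_src, gt_src):
    # Least-squares fit of gt ≈ scale * pred + shift on the labeled source domain.
    A = np.stack([pred_src, np.ones_like(pred_src)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, gt_src, rcond=None)
    return scale, shift

def rescale_target(pred_tgt, scale, shift):
    # Reuse the source-domain fit on the unlabeled target domain, relying on
    # the shared linear relationship the paper observes after FOV alignment.
    return scale * pred_tgt + shift
```

The paper's observation is that aligning the field-of-view of the two datasets before training makes this single linear relationship valid for both domains.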
arXiv Detail & Related papers (2023-03-14T07:07:34Z)
- Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion [51.11295961195151]
We exploit the characteristics of the foggy image sequence of driving scenes to densify the confident pseudo labels.
Based on the two discoveries of local spatial similarity and adjacent temporal correspondence of the sequential image data, we propose a novel Target-Domain driven pseudo label Diffusion scheme.
Our scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets.
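The mIoU figures quoted above are computed with the standard mean intersection-over-union metric for semantic segmentation, which can be sketched as:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    # pred and gt are integer class maps of the same shape.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return float(np.mean(ious))
```

Per-class IoU divides the overlap between prediction and ground truth by their union, and mIoU averages this over the classes that actually occur.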
arXiv Detail & Related papers (2022-06-10T05:16:50Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much a few labeled target samples can further help address domain shift.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
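The core of prototypical alignment, a per-class prototype computed from the few labeled landmarks and then used to relate unlabeled features to classes, can be sketched as follows (a generic illustration with hypothetical names, not the paper's full PA module):

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    # Mean embedding per class, computed from the few labeled "landmarks".
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def nearest_prototype(features, prototypes):
    # Assign each (e.g. unlabeled target) feature to its closest prototype.
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

Severely perturbing the labeled images, as the summary describes, makes matching features to these prototypes harder, which in turn pushes the model toward more robust representations.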
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are aplenty, but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
arXiv Detail & Related papers (2020-04-10T06:58:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.