Domain Adaptation Through Task Distillation
- URL: http://arxiv.org/abs/2008.11911v1
- Date: Thu, 27 Aug 2020 04:44:49 GMT
- Title: Domain Adaptation Through Task Distillation
- Authors: Brady Zhou, Nimit Kalra, Philipp Krähenbühl
- Abstract summary: Deep networks devour millions of precisely annotated images to build their powerful representations.
Recognition datasets exist in any interesting domain, simulated or real, and are easy to label and extend.
We use these recognition datasets to link up a source and target domain to transfer models between them in a task distillation framework.
- Score: 5.371337604556311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep networks devour millions of precisely annotated images to build their
complex and powerful representations. Unfortunately, tasks like autonomous
driving have virtually no real-world training data. Repeatedly crashing a car
into a tree is simply too expensive. The commonly prescribed solution is
simple: learn a representation in simulation and transfer it to the real world.
However, this transfer is challenging since simulated and real-world visual
experiences vary dramatically. Our core observation is that for certain tasks,
such as image recognition, datasets are plentiful. They exist in any
interesting domain, simulated or real, and are easy to label and extend. We use
these recognition datasets to link up a source and target domain to transfer
models between them in a task distillation framework. Our method can
successfully transfer navigation policies between drastically different
simulators: ViZDoom, SuperTuxKart, and CARLA. Furthermore, it shows promising
results on standard domain adaptation benchmarks.
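The sketch below illustrates one plausible reading of this bridge as a minimal PyTorch example. It is not the authors' code, and every module name is hypothetical. The idea shown: a navigation policy that reads the output of a recognition model (here, semantic segmentation) instead of raw pixels can move between simulators, because recognition models are cheap to train in each domain.
```python
# Hedged sketch of using a recognition task as a domain bridge.
# Hypothetical modules; not the paper's implementation.
import torch
import torch.nn as nn

class SegmentationNet(nn.Module):
    """Recognition model, trained per domain on easy-to-collect labels."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, kernel_size=1),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # (B, 3, H, W) -> (B, num_classes, H, W) class probabilities
        return self.net(img).softmax(dim=1)

class Policy(nn.Module):
    """Navigation policy that consumes recognition output, so it never
    sees the low-level appearance that differs between simulators."""
    def __init__(self, num_classes: int = 8, num_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_actions),
        )

    def forward(self, seg: torch.Tensor) -> torch.Tensor:
        return self.net(seg)

# Transfer recipe: train `policy` on source_seg(frame) in the source
# simulator, then run it on target_seg(frame) in the target simulator.
source_seg, target_seg, policy = SegmentationNet(), SegmentationNet(), Policy()
frame = torch.randn(1, 3, 64, 64)  # dummy target-domain frame
action_logits = policy(target_seg(frame))
```
Under this assumption, the per-domain segmentation nets do the heavy lifting; the paper's task distillation framework is more general than this sketch.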
Related papers
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views through the transformation of pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- Natural Language Can Help Bridge the Sim2Real Gap [9.458180590551715]
Sim2Real is a promising paradigm for overcoming data scarcity in the real-world target domain.
We propose using natural language descriptions of images as a unifying signal across domains.
We demonstrate that training the image encoder to predict the language description serves as a useful, data-efficient pretraining step (see the sketch after this list).
arXiv Detail & Related papers (2024-05-16T12:02:02Z)
- Learning Interactive Real-World Simulators [96.5991333400566]
We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
Video captioning models can benefit from training with simulated experience, opening up even wider applications.
arXiv Detail & Related papers (2023-10-09T19:42:22Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation and deploy it directly in the real world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Sim2Real for Self-Supervised Monocular Depth and Segmentation [7.376636976924]
Image-based learning methods for autonomous vehicle perception tasks require large quantities of labelled, real data in order to properly train without overfitting.
Recent advances in domain adaptation have indicated that a shared latent space assumption can help to bridge the gap between the simulation and real domains.
We demonstrate that a twin VAE-based architecture with a shared latent space and auxiliary decoders is able to bridge the sim2real gap without requiring any paired, ground-truth data in the real domain.
arXiv Detail & Related papers (2020-12-01T03:25:02Z)
- Deep Traffic Sign Detection and Recognition Without Target Domain Real Images [52.079665469286496]
We propose a novel database generation method that requires only (i) arbitrary natural images, i.e., no real images from the target domain, and (ii) templates of the traffic signs.
The method does not aim to outperform training with real data, but to be a viable alternative when real data is unavailable.
On large data sets, training with a fully synthetic data set almost matches the performance of training with a real one.
arXiv Detail & Related papers (2020-07-30T21:06:47Z)
- Virtual to Real adaptation of Pedestrian Detectors [9.432150710329607]
ViPeD is a new synthetically generated set of images collected with the graphical engine of the video game Grand Theft Auto V (GTA V).
We propose two different Domain Adaptation techniques suitable for the pedestrian detection task, but possibly applicable to general object detection.
Experiments show that the network trained with ViPeD can generalize over unseen real-world scenarios better than the detector trained over real-world data.
arXiv Detail & Related papers (2020-01-09T14:50:11Z)
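As referenced in the "Natural Language Can Help Bridge the Sim2Real Gap" entry above, here is a minimal, hypothetical PyTorch sketch of language-supervised pretraining. It is not that paper's implementation: the frozen text encoder is stood in for by precomputed caption embeddings, and the encoder architecture is arbitrary.
```python
# Hedged sketch of caption-prediction pretraining as a cross-domain signal.
# Hypothetical names; caption_emb stands in for a frozen text encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def pretrain_step(encoder, opt, images, caption_emb):
    """Regress each frame onto its caption embedding. The same description
    fits a simulated and a real frame of the same scene, so features from
    both domains are pulled toward a shared, language-defined space."""
    pred = F.normalize(encoder(images), dim=-1)
    target = F.normalize(caption_emb, dim=-1)
    loss = 1.0 - (pred * target).sum(dim=-1).mean()  # mean cosine distance
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

encoder = ImageEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
images = torch.randn(4, 3, 64, 64)   # dummy frames (sim or real)
caption_emb = torch.randn(4, 256)    # dummy frozen-text-encoder output
print(pretrain_step(encoder, opt, images, caption_emb))
```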