Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry
- URL: http://arxiv.org/abs/2103.11204v1
- Date: Sat, 20 Mar 2021 16:29:01 GMT
- Title: Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry
- Authors: Qadeer Khan, Patrick Wenzel, Daniel Cremers
- Abstract summary: We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
- Score: 55.11913183006984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-based learning methods for self-driving cars have primarily used
supervised approaches that require a large number of labels for training.
However, those labels are usually difficult and expensive to obtain. In this
paper, we demonstrate how a model can be trained to control a vehicle's
trajectory using camera poses estimated through visual odometry methods in an
entirely self-supervised fashion. We propose a scalable framework that
leverages trajectory information from several different runs using a camera
setup placed at the front of a car. Experimental results on the CARLA simulator
demonstrate that our proposed approach performs on par with the model trained
with supervision.
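The abstract does not spell out how the estimated camera poses are converted into steering targets, so the following is only a minimal, hypothetical sketch of the idea: the heading change over a short look-ahead horizon along the visual-odometry trajectory is mapped to a normalized steering value. The function names, look-ahead horizon, and gain are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): turning visual-odometry poses into
# steering pseudo-labels. Poses are 4x4 camera-to-world matrices, one per frame,
# with the camera's forward axis assumed to be +z.
import numpy as np

def relative_yaw(pose_t, pose_t_ahead):
    """Heading change (radians) between two camera-to-world poses."""
    rel = np.linalg.inv(pose_t) @ pose_t_ahead      # pose of the later frame in frame t
    fwd = rel[:3, 2]                                # forward axis after the motion
    return np.arctan2(fwd[0], fwd[2])               # yaw in the ground (x-z) plane

def steering_pseudo_label(poses, t, horizon=5, gain=2.0):
    """Map the heading change over a short look-ahead horizon to a value in [-1, 1]."""
    t_ahead = min(t + horizon, len(poses) - 1)
    return float(np.clip(gain * relative_yaw(poses[t], poses[t_ahead]), -1.0, 1.0))

# Toy trajectory: the VO estimate of a gentle, constant-rate turn.
poses = []
for i in range(10):
    yaw = 0.02 * i
    T = np.eye(4)
    T[:3, :3] = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                          [0.0,         1.0, 0.0],
                          [-np.sin(yaw), 0.0, np.cos(yaw)]])
    poses.append(T)
print(steering_pseudo_label(poses, t=0))   # small positive steering command
```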
Related papers
- Guiding Attention in End-to-End Driving Models [49.762868784033785]
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving.
We study how to guide the attention of these models to improve their driving quality by adding a loss term during training; an illustrative sketch of such an attention-guidance loss follows this entry.
In contrast to previous work, our method does not require the salient semantic maps used for guidance to be available at test time.
arXiv Detail & Related papers (2024-04-30T23:18:51Z)
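The entry above only says that a loss term guides attention using salient semantic maps at training time; the block below is a generic stand-in for such a term, not the paper's actual formulation. The KL-divergence form, the map shapes, and the normalization are assumptions.

```python
# Generic attention-guidance term (an assumption, not the paper's exact loss):
# encourage the model's spatial attention to match a salient semantic map that is
# only needed during training.
import torch
import torch.nn.functional as F

def attention_guidance_loss(attention_logits, saliency_map, eps=1e-8):
    """KL divergence between the normalised attention map and a target saliency map."""
    B = attention_logits.size(0)
    attn_log_prob = F.log_softmax(attention_logits.view(B, -1), dim=1)
    target = saliency_map.view(B, -1)
    target = target / (target.sum(dim=1, keepdim=True) + eps)   # normalise to a distribution
    return F.kl_div(attn_log_prob, target, reduction="batchmean")

# During training the total objective would be: imitation_loss + lambda * guidance.
attn = torch.randn(4, 1, 14, 14)   # model's attention logits over a feature grid
sal = torch.rand(4, 1, 14, 14)     # salient semantic map (training-time only)
print(attention_guidance_loss(attn, sal))
```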
- Safe Navigation: Training Autonomous Vehicles using Deep Reinforcement Learning in CARLA [0.0]
The goal of this project is to train autonomous vehicles to make navigation decisions in uncertain environments using deep reinforcement learning techniques.
The CARLA simulator provides a realistic urban environment for training and testing self-driving models.
arXiv Detail & Related papers (2023-10-23T04:23:07Z)
- Robust Autonomous Vehicle Pursuit without Expert Steering Labels [41.168074206046164]
We present a learning method for lateral and longitudinal motion control of an ego-vehicle for vehicle pursuit.
The car being controlled does not have a pre-defined route; rather, it reactively adapts to follow a target vehicle while maintaining a safe distance.
We extensively validate our approach using the CARLA simulator on a wide range of terrains.
arXiv Detail & Related papers (2023-08-16T14:09:39Z)
- Policy Pre-training for End-to-end Autonomous Driving via Self-supervised Geometric Modeling [96.31941517446859]
We propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward, fully self-supervised framework for policy pretraining in visuomotor driving.
We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes from large-scale unlabeled and uncalibrated YouTube driving videos.
In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input.
In the second stage, the visual encoder learns a driving policy representation by predicting future ego-motion and optimizing a photometric error based on the current visual observation only; an illustrative sketch of such a photometric loss follows this entry.
arXiv Detail & Related papers (2023-01-03T08:52:49Z)
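The photometric supervision named in the entry above is a standard component of self-supervised depth/ego-motion pipelines; the block below is a condensed, hedged sketch of that idea (toy tensors, an assumed pinhole intrinsics matrix), not the PPGeo code itself.

```python
# Hedged sketch of photometric self-supervision (not the PPGeo code): frame t+1 is
# inverse-warped into frame t using predicted depth, predicted relative pose, and
# camera intrinsics, and compared with the real frame t. Toy tensors throughout.
import torch
import torch.nn.functional as F

def inverse_warp(img_next, depth_t, T_t_to_next, K):
    """Sample frame t+1 at the pixels where frame t's back-projected points land."""
    B, _, H, W = img_next.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)   # (3, H*W)
    rays = torch.linalg.inv(K) @ pix                                         # back-project
    pts = rays.unsqueeze(0) * depth_t.reshape(B, 1, -1)                      # (B, 3, H*W)
    pts = torch.cat([pts, torch.ones(B, 1, H * W)], dim=1)                   # homogeneous
    proj = K @ (T_t_to_next @ pts)[:, :3]                                    # into frame t+1
    u = proj[:, 0] / proj[:, 2].clamp(min=1e-6)
    v = proj[:, 1] / proj[:, 2].clamp(min=1e-6)
    grid = torch.stack([2 * u / (W - 1) - 1, 2 * v / (H - 1) - 1], dim=-1)   # to [-1, 1]
    return F.grid_sample(img_next, grid.reshape(B, H, W, 2), align_corners=True)

def photometric_loss(img_t, img_next, depth_t, T_t_to_next, K):
    return (img_t - inverse_warp(img_next, depth_t, T_t_to_next, K)).abs().mean()

# With the correct (here: identity) motion and depth, the current view is reconstructed.
H = W = 8
K = torch.tensor([[10.0, 0.0, W / 2], [0.0, 10.0, H / 2], [0.0, 0.0, 1.0]])
img = torch.rand(1, 3, H, W)
print(photometric_loss(img, img, torch.ones(1, 1, H, W), torch.eye(4).unsqueeze(0), K))
```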
- Masked Visual Pre-training for Motor Control [118.18189211080225]
Self-supervised visual pre-training from real-world images is effective for learning motor control tasks from pixels.
We freeze the visual encoder and train neural network controllers on top with reinforcement learning; an illustrative sketch of this frozen-encoder setup follows this entry.
This is the first self-supervised model to exploit real-world images at scale for motor control.
arXiv Detail & Related papers (2022-03-11T18:58:10Z)
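The entry above describes freezing a pre-trained visual encoder and training only a controller on top. The snippet below sketches that frozen/trainable split; the ResNet-18 stand-in, the head sizes, and the dummy regression step are assumptions (the paper pre-trains with masked image modeling and trains the controller with reinforcement learning).

```python
# Hedged sketch of the frozen-encoder setup (not the authors' code). A torchvision
# ResNet-18 stands in for the self-supervised encoder; only the control head trains.
import torch
import torch.nn as nn
from torchvision.models import resnet18

encoder = resnet18(weights=None)              # stand-in for the pre-trained encoder
encoder.fc = nn.Identity()                    # expose 512-d features
for p in encoder.parameters():
    p.requires_grad = False                   # freeze: gradients stop at the features
encoder.eval()

action_dim = 2                                # e.g. steering and throttle (assumption)
policy_head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, action_dim))
optimizer = torch.optim.Adam(policy_head.parameters(), lr=1e-4)

# One illustrative update. The paper optimises an RL objective instead of this dummy
# regression target; the point here is only which parameters receive gradients.
obs = torch.rand(8, 3, 224, 224)
with torch.no_grad():
    feats = encoder(obs)                      # frozen features
optimizer.zero_grad()
loss = (policy_head(feats) - torch.zeros(8, action_dim)).pow(2).mean()
loss.backward()
optimizer.step()
print(loss.item())
```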
- Self-Supervised Moving Vehicle Detection from Audio-Visual Cues [29.06503735149157]
We propose a self-supervised approach that leverages audio-visual cues to detect moving vehicles in videos.
Our approach employs contrastive learning to localize vehicles in images from corresponding pairs of images and recorded audio; an illustrative sketch of such a contrastive objective follows this entry.
We show that our model can be used as a teacher to supervise an audio-only detection model.
arXiv Detail & Related papers (2022-01-30T09:52:14Z)
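The audio-visual pairing described in the entry above can be illustrated with a generic InfoNCE-style objective over per-clip embeddings. The sketch below is such a stand-in; the embedding sizes and temperature are assumptions, and it omits the paper's localization head entirely.

```python
# Generic InfoNCE-style contrastive loss between image and audio embeddings from the
# same clip (positives) vs. other clips in the batch (negatives). An illustration of
# the idea, not the paper's actual loss or architecture.
import torch
import torch.nn.functional as F

def audio_visual_infonce(img_emb, aud_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=1)
    aud = F.normalize(aud_emb, dim=1)
    logits = img @ aud.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(img.size(0))            # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

print(audio_visual_infonce(torch.randn(16, 128), torch.randn(16, 128)))
```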
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work exploits sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a large-scale object detection benchmark for autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are collected at one frame every ten seconds across 32 different cities under varied weather conditions, time periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- Vehicle trajectory prediction in top-view image sequences based on deep learning method [1.181206257787103]
Estimating and predicting surrounding vehicles' movement is essential for an automated vehicle and advanced safety systems.
A model with low computational complexity is proposed, trained on top-view (aerial) images of the road.
The proposed model can predict a vehicle's future path on any freeway using only images of the movement history of the target vehicle and its neighbors.
arXiv Detail & Related papers (2021-02-02T20:48:19Z)
- What My Motion tells me about Your Pose: A Self-Supervised Monocular 3D Vehicle Detector [41.12124329933595]
We demonstrate the use of monocular visual odometry for the self-supervised fine-tuning of an orientation-estimation model pre-trained on a reference domain; a hypothetical sketch of such a self-supervision signal follows this entry.
We subsequently demonstrate an optimization-based monocular 3D bounding box detector built on top of the self-supervised vehicle orientation estimator.
arXiv Detail & Related papers (2020-07-29T12:58:40Z)
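The entry above does not state the fine-tuning objective. One plausible formulation, given here purely as an assumption, is a cross-frame consistency term: for a static target vehicle, its orientation expressed in the camera frame should change by exactly the ego-rotation that visual odometry reports between the two frames.

```python
# Hypothetical sketch (an assumption, not taken from the paper): a cross-frame
# consistency loss for orientation estimation. For a static target vehicle, its
# yaw in the camera frame should change by minus the ego yaw change estimated by VO.
import numpy as np

def yaw_consistency_loss(yaw_pred_t, yaw_pred_t1, ego_yaw_delta):
    """Penalise disagreement between the predicted yaw change and the VO ego-rotation."""
    residual = (yaw_pred_t1 - yaw_pred_t + ego_yaw_delta + np.pi) % (2 * np.pi) - np.pi
    return residual ** 2

# If the camera rotates by +0.1 rad, a static car appears rotated by -0.1 rad.
print(yaw_consistency_loss(0.50, 0.40, 0.10))   # ~0: predictions consistent with VO
print(yaw_consistency_loss(0.50, 0.50, 0.10))   # > 0: inconsistency is penalised
```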
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.