Incorporating Orientations into End-to-end Driving Model for Steering
Control
- URL: http://arxiv.org/abs/2103.05846v1
- Date: Wed, 10 Mar 2021 03:14:41 GMT
- Title: Incorporating Orientations into End-to-end Driving Model for Steering
Control
- Authors: Peng Wan, Zhenbo Song, Jianfeng Lu
- Abstract summary: We present a novel end-to-end deep neural network model for autonomous driving.
It takes a monocular image sequence as input and directly generates the steering control angle.
Our dataset includes multiple driving scenarios, such as urban, country, and off-road.
- Score: 12.163394005517766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel end-to-end deep neural network model for
autonomous driving that takes a monocular image sequence as input and directly
generates the steering control angle. First, we model the end-to-end driving
problem as a local path planning process. Inspired by the environmental
representation used in classical planning algorithms (i.e., the beam curvature
method), pixel-wise orientations are fed into the network to learn
direction-aware features. Next, to handle the imbalanced distribution of
steering values in training datasets, we propose SteeringLoss2, an improvement
on an existing cost-sensitive loss function. In addition, we present a new
end-to-end driving dataset that provides corresponding LiDAR and image
sequences, as well as standard driving behaviors. Our dataset covers multiple
driving scenarios, such as urban, country, and off-road. Extensive experiments
are conducted on both the publicly available LiVi-Set and our own dataset, and
the results show that a model using the proposed methods predicts the steering
angle accurately.
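The abstract's key training idea is a cost-sensitive loss that counteracts the dominance of near-zero steering angles in driving data. The paper does not reproduce the SteeringLoss2 formula here, so the sketch below shows only the general principle under an assumed weighting scheme: samples with larger (rarer) target angles are up-weighted relative to straight-driving samples. The function name and the `alpha` parameter are illustrative, not the paper's.

```python
import numpy as np

def cost_sensitive_steering_loss(pred, target, alpha=0.5):
    """Illustrative cost-sensitive regression loss for steering prediction.

    Samples with larger |target| angles (sharp turns, which are rare in
    driving datasets) receive larger weights, so training is not dominated
    by the abundant near-zero angles. This is a generic sketch of the
    cost-sensitive idea, not the exact SteeringLoss2 formulation.
    """
    error = pred - target
    # Weight grows with the magnitude of the target angle;
    # straight-ahead samples keep a weight close to 1.
    weight = 1.0 + alpha * np.abs(target)
    return np.mean(weight * error ** 2)

# Toy batch (angles in radians): mostly straight driving plus one sharp turn.
pred = np.array([0.01, -0.02, 0.00, 0.30])
target = np.array([0.00, -0.01, 0.02, 0.45])
loss = cost_sensitive_steering_loss(pred, target)
```

Because every weight is at least 1, this loss upper-bounds the plain MSE; the gap grows with how much of the batch consists of large-angle samples.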
Related papers
- DriveCoT: Integrating Chain-of-Thought Reasoning with End-to-End Driving [81.04174379726251]
This paper collects a comprehensive end-to-end driving dataset named DriveCoT.
It contains sensor data, control decisions, and chain-of-thought labels to indicate the reasoning process.
We propose a baseline model called DriveCoT-Agent, trained on our dataset, to generate chain-of-thought predictions and final decisions.
arXiv Detail & Related papers (2024-03-25T17:59:01Z)
- Pioneering SE(2)-Equivariant Trajectory Planning for Automated Driving [45.18582668677648]
Planning the trajectory of the controlled ego vehicle is a key challenge in automated driving.
We propose a lightweight equivariant planning model that generates multi-modal joint predictions for all vehicles.
We also propose equivariant route attraction to guide the ego vehicle along a high-level route provided by an off-the-shelf GPS navigation system.
arXiv Detail & Related papers (2024-03-17T18:53:46Z)
- A Tricycle Model to Accurately Control an Autonomous Racecar with Locked Differential [71.53284767149685]
We present a novel formulation to model the effects of a locked differential on the lateral dynamics of an autonomous open-wheel racecar.
We include a micro-steps discretization approach to accurately linearize the dynamics and produce a prediction suitable for real-time implementation.
arXiv Detail & Related papers (2023-12-22T16:29:55Z)
- Partial End-to-end Reinforcement Learning for Robustness Against Modelling Error in Autonomous Racing [0.0]
This paper addresses improving the performance of reinforcement learning (RL) solutions for autonomous racing cars.
We propose a partial end-to-end algorithm that decouples the planning and control tasks.
By leveraging the robustness of a classical controller, our partial end-to-end driving algorithm exhibits better robustness towards model mismatches than standard end-to-end algorithms.
arXiv Detail & Related papers (2023-12-11T14:27:10Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Pre-training on Synthetic Driving Data for Trajectory Prediction [61.520225216107306]
We propose a pipeline-level solution to mitigate the issue of data scarcity in trajectory forecasting.
We adopt HD map augmentation and trajectory synthesis for generating driving data, and then we learn representations by pre-training on them.
We conduct extensive experiments to demonstrate the effectiveness of our data expansion and pre-training strategies.
arXiv Detail & Related papers (2023-09-18T19:49:22Z)
- Policy Pre-training for End-to-end Autonomous Driving via Self-supervised Geometric Modeling [96.31941517446859]
We propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive, fully self-supervised framework for policy pretraining in visuomotor driving.
We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos.
In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input.
In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only.
arXiv Detail & Related papers (2023-01-03T08:52:49Z)
- Fully End-to-end Autonomous Driving with Semantic Depth Cloud Mapping and Multi-Agent [2.512827436728378]
We propose a novel deep learning model trained in an end-to-end, multi-task learning manner to perform both perception and control tasks simultaneously.
The model is evaluated on the CARLA simulator with various scenarios composed of normal and adversarial situations and different weather conditions to mimic real-world conditions.
arXiv Detail & Related papers (2022-04-12T03:57:01Z)
- Multi-modal Scene-compliant User Intention Estimation for Navigation [1.9117798322548485]
A framework to generate user intention distributions when operating a mobile vehicle is proposed in this work.
The model learns from past observed trajectories and leverages traversability information derived from the visual surroundings.
Experiments were conducted on a dataset collected with a custom wheelchair model built onto the open-source urban driving simulator CARLA.
arXiv Detail & Related papers (2021-06-13T05:11:33Z)
- A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectory information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving satisfactory performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z)
- A Deep Learning Framework for Generation and Analysis of Driving Scenario Trajectories [2.908482270923597]
We propose a unified deep learning framework for the generation and analysis of driving scenario trajectories.
We experimentally investigate the performance of the proposed framework on real-world scenario trajectories obtained from in-field data collection.
arXiv Detail & Related papers (2020-07-28T23:33:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.