Action-Based Representation Learning for Autonomous Driving
- URL: http://arxiv.org/abs/2008.09417v2
- Date: Mon, 9 Nov 2020 15:45:26 GMT
- Title: Action-Based Representation Learning for Autonomous Driving
- Authors: Yi Xiao, Felipe Codevilla, Christopher Pal, Antonio M. Lopez
- Abstract summary: We propose to use action-based driving data for learning representations.
Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery.
- Score: 8.296684637620551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human drivers produce a vast amount of data which could, in principle, be
used to improve autonomous driving systems. Unfortunately, seemingly
straightforward approaches for creating end-to-end driving models that map
sensor data directly into driving actions are problematic in terms of
interpretability, and typically have significant difficulty dealing with
spurious correlations. Alternatively, we propose to use this kind of
action-based driving data for learning representations. Our experiments show
that an affordance-based driving model pre-trained with this approach can
leverage a relatively small amount of weakly annotated imagery and outperform
pure end-to-end driving models, while being more interpretable. Further, we
demonstrate how this strategy outperforms previous methods based on learning
inverse dynamics models as well as other methods based on heavy human
supervision (ImageNet).
Related papers
- GenFollower: Enhancing Car-Following Prediction with Large Language Models [11.847589952558566]
We propose GenFollower, a novel zero-shot prompting approach that leverages large language models (LLMs) for car-following behavior prediction.
We reframe car-following behavior as a language modeling problem and integrate heterogeneous inputs into structured prompts for LLMs.
Experiments on open datasets demonstrate GenFollower's superior performance and its ability to provide interpretable insights.
arXiv Detail & Related papers (2024-07-08T04:54:42Z)
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
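The Intelligent Driver Model (IDM) combined with the LSTM above is a standard car-following rule. As a minimal sketch of textbook IDM (illustrative parameter values, not the paper's implementation):

```python
import math

def idm_accel(v, v_lead, gap,
              v0=30.0,    # desired speed (m/s)
              T=1.5,      # desired time headway (s)
              a_max=1.0,  # maximum acceleration (m/s^2)
              b=2.0,      # comfortable deceleration (m/s^2)
              s0=2.0,     # minimum jam gap (m)
              delta=4.0): # free-flow acceleration exponent
    """Intelligent Driver Model acceleration for a following vehicle."""
    dv = v - v_lead  # closing speed (positive when approaching the leader)
    # Desired dynamic gap: jam gap + headway term + braking interaction term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    # Free-road term minus interaction term with the leading vehicle.
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

From standstill on an empty road the model accelerates at roughly `a_max`; at the desired speed it coasts; with a short gap to a slower leader it brakes (negative acceleration).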
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- Guiding Attention in End-to-End Driving Models [49.762868784033785]
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving.
We study how to guide the attention of these models to improve their driving quality by adding a loss term during training.
In contrast to previous work, our method does not require these salient semantic maps to be available during testing time.
arXiv Detail & Related papers (2024-04-30T23:18:51Z)
- Evaluation of Differentially Constrained Motion Models for Graph-Based Trajectory Prediction [1.1947990549568765]
This research investigates the performance of various motion models in combination with numerical solvers for the prediction task.
The study shows that simpler models, such as low-order integrator models, are preferred over more complex ones, such as kinematic models, for achieving accurate predictions.
arXiv Detail & Related papers (2023-04-11T10:15:20Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Non-objective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- A Hybrid Rule-Based and Data-Driven Approach to Driver Modeling through Particle Filtering [6.9485501711137525]
We propose a methodology that combines rule-based modeling with data-driven learning.
Our results show that driver models based on our hybrid rule-based and data-driven approach can accurately capture real-world driving behavior.
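The general idea of fitting a rule-based driver model to data with a particle filter can be sketched as follows. This is a toy illustration, not the paper's method: the rule-based model is a hypothetical speed-relaxation law `dv = k * (v_des - v) * dt`, and the data-driven part infers the driver parameter `v_des` from noisy speed observations with a bootstrap particle filter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "rule-based" driver model: speed relaxes toward a desired speed.
k, dt, noise = 0.5, 0.1, 0.2
v_des_true = 25.0

# Simulate noisy speed observations of a driver starting from rest.
v, obs = 0.0, []
for _ in range(200):
    v += k * (v_des_true - v) * dt
    obs.append(v + rng.normal(scale=noise))

# Bootstrap particle filter over the static parameter v_des.
# Each particle carries a v_des hypothesis and its own speed state
# (initialized at 0, matching the simulated driver's known start).
n = 1000
particles = rng.uniform(10.0, 40.0, size=n)  # prior over v_des
v_hat = np.zeros(n)
weights = np.full(n, 1.0 / n)
for z in obs:
    v_hat += k * (particles - v_hat) * dt          # propagate rule-based model
    lik = np.exp(-0.5 * ((z - v_hat) / noise) ** 2)  # Gaussian likelihood
    weights *= lik + 1e-300                        # avoid all-zero weights
    weights /= weights.sum()
    # Resample when the effective sample size drops below n/2.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, v_hat = particles[idx], v_hat[idx]
        particles += rng.normal(scale=0.1, size=n)  # jitter the static parameter
        weights[:] = 1.0 / n

estimate = float(np.sum(weights * particles))  # posterior mean of v_des
```

The posterior mean `estimate` converges near the true desired speed of 25 m/s, illustrating how a hand-written driving rule and observed data can be combined through filtering.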
arXiv Detail & Related papers (2021-08-29T11:07:14Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
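The core pattern of using a forward model to evaluate candidate driving trajectories can be sketched minimally. Everything here is an illustrative assumption rather than the paper's method: a unicycle forward model, a small discrete steering set, and a lane-centering objective (stay close to `y = 0`).

```python
import math

def forward_model(x, y, heading, steer, v=5.0, dt=0.1, steps=10):
    """Roll out a simple unicycle model; return the final lateral deviation."""
    for _ in range(steps):
        heading += steer * dt
        x += v * math.cos(heading) * dt  # longitudinal position (unused in the score)
        y += v * math.sin(heading) * dt  # lateral position relative to lane center
    return abs(y)

def select_action(x, y, heading, candidates=(-0.5, -0.25, 0.0, 0.25, 0.5)):
    # Score each candidate steering command by its predicted outcome
    # and pick the one that best recenters the vehicle in the lane.
    return min(candidates, key=lambda s: forward_model(x, y, heading, s))
```

Starting offset to one side of the lane, the selected steering command points back toward the lane center, since the forward model predicts that outcome scores best.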
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.