Learning a Directional Soft Lane Affordance Model for Road Scenes Using
Self-Supervision
- URL: http://arxiv.org/abs/2002.11477v2
- Date: Wed, 15 Apr 2020 13:19:45 GMT
- Title: Learning a Directional Soft Lane Affordance Model for Road Scenes Using
Self-Supervision
- Authors: Robin Karlsson, Erik Sjoberg
- Abstract summary: Humans navigate complex environments in an organized yet flexible manner, adapting to the context and implicit social rules.
This work proposes a novel self-supervised method for training a probabilistic network model to estimate the regions humans are most likely to drive in.
The model is shown to successfully generalize to new road scenes, demonstrating potential for real-world application.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans navigate complex environments in an organized yet flexible manner,
adapting to the context and implicit social rules. Understanding these
naturally learned patterns of behavior is essential for applications such as
autonomous vehicles. However, algorithmically defining these implicit rules of
human behavior remains difficult. This work proposes a novel self-supervised
method for training a probabilistic network model to estimate the regions
humans are most likely to drive in as well as a multimodal representation of
the inferred direction of travel at each point. The model is trained on
individual human trajectories conditioned on a representation of the driving
environment. The model is shown to successfully generalize to new road scenes,
demonstrating potential for real-world application as a prior for socially
acceptable driving behavior in challenging or ambiguous scenarios which are
poorly handled by explicit traffic rules.
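The self-supervised idea in the abstract (cells visited by recorded human trajectories serve as positive training signal for a probabilistic drivability model) can be illustrated with a toy sketch. This is not the paper's implementation: the grid size, the environment features, and the logistic model are all invented for illustration.

```python
# Toy sketch of self-supervised "soft lane affordance" learning: grid cells
# crossed by human trajectories act as positive labels, and a logistic model
# maps hand-made environment features to a per-cell drivability probability.
# Features and labels here are synthetic, not from the paper.
import numpy as np

H, W = 8, 8
# Hypothetical per-cell features: distance to road centerline, paved-surface flag
dist = np.abs(np.arange(W) - W / 2)[None, :].repeat(H, axis=0) / W
paved = (dist < 0.3).astype(float)
X = np.stack([dist.ravel(), paved.ravel(), np.ones(H * W)], axis=1)  # + bias

# Self-supervised labels: in this toy setup, trajectories stay on the paved band
y = paved.ravel().copy()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the binary cross-entropy
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.5 * X.T @ (p - y) / len(y)

drivable = sigmoid(X @ w).reshape(H, W)  # per-cell probability of being driven in
print(drivable.round(2))
```

The paper additionally predicts a multimodal direction of travel per cell; a faithful version would replace the logistic model with a deep network conditioned on a full scene representation.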
Related papers
- Dream to Drive with Predictive Individual World Model [12.05377034777257]
This paper presents a novel model-based reinforcement learning (MBRL) method with a predictive individual world model (PIWM) for autonomous driving.
PIWM describes the driving environment from an individual-level perspective and captures vehicles' interactive relations and their intentions.
The driving policy is trained in PIWM's imagination and navigates urban driving scenes effectively by leveraging intention-aware latent states.
arXiv Detail & Related papers (2025-01-28T06:18:29Z)
- Resolving uncertainty on the fly: Modeling adaptive driving behavior as active inference [6.935068505791817]
Existing traffic psychology models of adaptive driving behavior either lack computational rigor or only address specific scenarios and/or behavioral phenomena.
This paper proposes such a model based on active inference, a behavioral modeling framework originating in computational neuroscience.
We show how human-like adaptive driving behavior emerges from the single principle of expected free energy minimization.
arXiv Detail & Related papers (2023-11-10T22:40:41Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Transferable and Adaptable Driving Behavior Prediction [34.606012573285554]
We propose HATN, a hierarchical framework to generate high-quality, transferable, and adaptable predictions for driving behaviors.
We demonstrate our algorithms on the task of trajectory prediction using real traffic data at intersections and roundabouts from the INTERACTION dataset.
arXiv Detail & Related papers (2022-02-10T16:46:24Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- Calibration of Human Driving Behavior and Preference Using Naturalistic Traffic Data [5.926030548326619]
We show how the model can be inverted to estimate driver preferences from naturalistic traffic data.
One distinct advantage of our approach is the drastically reduced computational burden.
arXiv Detail & Related papers (2021-05-05T01:20:03Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network to predict both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Generalizing Decision Making for Automated Driving with an Invariant Environment Representation using Deep Reinforcement Learning [55.41644538483948]
Current approaches either do not generalize well beyond the training data or cannot handle a variable number of traffic participants.
We propose an invariant environment representation from the perspective of the ego vehicle.
We show that, thanks to this abstraction, the agents generalize successfully to unseen scenarios.
arXiv Detail & Related papers (2021-02-12T20:37:29Z)
- Behavioral decision-making for urban autonomous driving in the presence of pedestrians using Deep Recurrent Q-Network [0.0]
Decision making for autonomous driving in urban environments is challenging due to the complexity of the road structure and the uncertainty in the behavior of diverse road users.
In this work, a deep reinforcement learning based decision-making approach for high-level driving behavior is proposed for urban environments in the presence of pedestrians.
The proposed method is evaluated in dense urban scenarios and compared with a rule-based approach; results show that the proposed DRQN-based driving behavior decision maker outperforms the rule-based approach.
arXiv Detail & Related papers (2020-10-26T08:08:06Z)
- Goal-Directed Occupancy Prediction for Lane-Following Actors [5.469556349325342]
Predicting the possible future behaviors of vehicles that drive on shared roads is a crucial task for safe autonomous driving.
We propose a new method that leverages the mapped road topology to reason over possible goals and predict the future spatial occupancy of dynamic road actors.
arXiv Detail & Related papers (2020-09-06T20:44:59Z)
- Intelligent Roundabout Insertion using Deep Reinforcement Learning [68.8204255655161]
We present a maneuver planning module able to negotiate entering busy roundabouts.
The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver.
arXiv Detail & Related papers (2020-01-03T11:16:41Z)
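Several of the reinforcement-learning entries above (intersection handling, roundabout insertion) reduce to learning when a maneuver is safe to execute. A toy illustration of that decision, using tabular Q-learning rather than the neural networks in the cited papers, with invented states and rewards:

```python
# Toy "when to enter" decision learned with tabular Q-learning: states are
# discretised gap sizes to the nearest oncoming vehicle, actions are
# wait (0) or enter (1). Rewards and bucket counts are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_gaps = 5                    # gap-size buckets: 0 = no gap, 4 = large gap
Q = np.zeros((n_gaps, 2))

def reward(gap, action):
    if action == 0:
        return -0.1           # small penalty for waiting
    return 1.0 if gap >= 2 else -5.0   # entering on a small gap risks a crash

for _ in range(5000):
    gap = int(rng.integers(n_gaps))
    # epsilon-greedy action selection
    a = int(rng.integers(2)) if rng.random() < 0.2 else int(Q[gap].argmax())
    # single-step episodes, so the TD target is just the immediate reward
    Q[gap, a] += 0.1 * (reward(gap, a) - Q[gap, a])

policy = Q.argmax(axis=1)
print(policy)   # expect [0 0 1 1 1]: enter only when the gap is large enough
```

The cited modules replace the gap-bucket table with a neural network over a continuous scene representation and predict continuous controls (acceleration, steering) rather than a binary enter/wait choice.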
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.