Nocturne: a scalable driving benchmark for bringing multi-agent learning
one step closer to the real world
- URL: http://arxiv.org/abs/2206.09889v1
- Date: Mon, 20 Jun 2022 16:51:44 GMT
- Title: Nocturne: a scalable driving benchmark for bringing multi-agent learning
one step closer to the real world
- Authors: Eugene Vinitsky, Nathan Lichtlé, Xiaomeng Yang, Brandon Amos, Jakob Foerster
- Abstract summary: We introduce Nocturne, a new 2D driving simulator for investigating multi-agent coordination under partial observability.
The focus of Nocturne is to enable research into inference and theory of mind in real-world multi-agent settings without the computational overhead of computer vision and feature extraction from images.
- Score: 11.069445871185744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Nocturne, a new 2D driving simulator for investigating
multi-agent coordination under partial observability. The focus of Nocturne is
to enable research into inference and theory of mind in real-world multi-agent
settings without the computational overhead of computer vision and feature
extraction from images. Agents in this simulator only observe an obstructed
view of the scene, mimicking human visual sensing constraints. Unlike existing
benchmarks that are bottlenecked by rendering human-like observations directly
using a camera input, Nocturne uses efficient intersection methods to compute a
vectorized set of visible features in a C++ back-end, allowing the simulator to
run at 2000+ steps per second. Using open-source trajectory and map data, we
construct a simulator to load and replay arbitrary trajectories and scenes from
real-world driving data. Using this environment, we benchmark
reinforcement-learning and imitation-learning agents and demonstrate that the
agents are quite far from human-level coordination ability and deviate
significantly from the expert trajectories.
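The abstract's key technical point is that each agent's obstructed view is computed with geometric intersection tests over vectorized road and agent features rather than by rendering camera images. As a rough illustration only, the Python sketch below marks which target points are visible from an ego position by casting a ray to each target and testing it against obstacle segments; the function names and data layout are hypothetical and are not Nocturne's actual (C++) API.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]
Segment = Tuple[Point, Point]

def ray_segment_distance(origin: Point, angle: float, seg: Segment) -> float:
    """Distance along the ray (origin, angle) to the segment, or inf if it misses."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (x1, y1), (x2, y2) = seg
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                           # ray parallel to the segment
        return math.inf
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom    # distance along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom    # position along the segment
    return t if t >= 0.0 and 0.0 <= u <= 1.0 else math.inf

def visible_points(origin: Point, targets: List[Point],
                   obstacles: List[Segment], max_range: float = 80.0) -> List[bool]:
    """A target is visible if it is in range and no obstacle segment blocks the ray to it."""
    visible = []
    for tx, ty in targets:
        dist = math.hypot(tx - origin[0], ty - origin[1])
        angle = math.atan2(ty - origin[1], tx - origin[0])
        blocked = any(ray_segment_distance(origin, angle, s) < dist for s in obstacles)
        visible.append(dist <= max_range and not blocked)
    return visible

# Hypothetical usage: an ego vehicle at the origin, one wall, two other agents.
ego = (0.0, 0.0)
wall = [((5.0, -2.0), (5.0, 2.0))]
others = [(10.0, 0.0), (3.0, 3.0)]
print(visible_points(ego, others, wall))             # [False, True]
```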
Related papers
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views through the transformation of pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as more intuitive, human-like handling overall.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
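The entry above trains AV controllers with analytic policy gradients by differentiating through the simulator. The sketch below shows only the general pattern, using PyTorch and a toy single-integrator dynamics model; it is not the authors' simulator, controller, or code.

```python
import torch

# Toy differentiable "simulator": single-integrator dynamics, purely illustrative.
def step(state, action, dt=0.1):
    return state + dt * action              # differentiable w.r.t. state and action

policy = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
goal = torch.tensor([5.0, 5.0])

for it in range(200):
    state, loss = torch.zeros(2), torch.zeros(())
    for t in range(20):                      # unrolled rollout: gradients of the
        action = policy(state - goal)        # dynamics flow back into the policy
        state = step(state, action)
        loss = loss + (state - goal).pow(2).sum() + 1e-3 * action.pow(2).sum()
    opt.zero_grad()
    loss.backward()                          # analytic policy gradient via autodiff
    opt.step()
```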
- Structured Graph Network for Constrained Robot Crowd Navigation with Low Fidelity Simulation [10.201765067255147]
We investigate the feasibility of deploying reinforcement learning (RL) policies for constrained crowd navigation using a low-fidelity simulator.
We introduce a representation of the dynamic environment, separating human and obstacle representations.
This representation enables RL policies trained in a low-fidelity simulator to be deployed in the real world with a reduced sim2real gap.
arXiv Detail & Related papers (2024-05-27T04:53:09Z)
- Social-Transmotion: Promptable Human Trajectory Prediction [65.80068316170613]
Social-Transmotion is a generic Transformer-based model that exploits diverse and numerous visual cues to predict human behavior.
Our approach is validated on multiple datasets, including JTA, JRDB, Pedestrians and Cyclists in Road Traffic, and ETH-UCY.
arXiv Detail & Related papers (2023-12-26T18:56:49Z)
- Video Killed the HD-Map: Predicting Multi-Agent Behavior Directly From Aerial Images [14.689298253430568]
We propose an aerial image-based map (AIM) representation that requires minimal annotation and provides rich road context information for traffic agents like pedestrians and vehicles.
Our results demonstrate competitive multi-agent trajectory prediction performance, especially for pedestrians in the scene, when using our AIM representation.
arXiv Detail & Related papers (2023-05-19T17:48:01Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the Waymo Open Motion Dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Imagining The Road Ahead: Multi-Agent Trajectory Prediction via Differentiable Simulation [17.953880589741438]
We develop a deep generative model built on a fully differentiable simulator for trajectory prediction.
We achieve state-of-the-art results on the INTERACTION dataset, using standard neural architectures and a standard variational training objective.
We name our model ITRA, for "Imagining the Road Ahead".
arXiv Detail & Related papers (2021-04-22T17:48:08Z)
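The ITRA entry above mentions a deep generative model trained with a standard variational objective on top of a differentiable simulator. As a generic illustration of what such an objective looks like for trajectory prediction (a conditional VAE trained with an ELBO loss), and not the authors' actual architecture or code, a minimal PyTorch sketch might be:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryCVAE(nn.Module):
    """Toy conditional VAE: encode (past, future) into a latent z, decode (past, z) into future."""
    def __init__(self, past_dim=16, future_dim=20, z_dim=8, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(past_dim + future_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * z_dim))
        self.decoder = nn.Sequential(nn.Linear(past_dim + z_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, future_dim))

    def forward(self, past, future):
        mu, logvar = self.encoder(torch.cat([past, future], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()           # reparameterization trick
        recon = self.decoder(torch.cat([past, z], -1))
        rec = F.mse_loss(recon, future)                                 # reconstruction term
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL to N(0, I)
        return rec + kl                                                 # negative ELBO

model = TrajectoryCVAE()
past, future = torch.randn(32, 16), torch.randn(32, 20)   # flattened past/future positions
loss = model(past, future)
loss.backward()
```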
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.