Lane Graph Estimation for Scene Understanding in Urban Driving
- URL: http://arxiv.org/abs/2105.00195v1
- Date: Sat, 1 May 2021 08:38:18 GMT
- Title: Lane Graph Estimation for Scene Understanding in Urban Driving
- Authors: Jannik Zürn, Johan Vertens, Wolfram Burgard
- Abstract summary: We propose a novel approach for lane geometry estimation from bird's-eye-view images.
We train a graph estimation model on multimodal bird's-eye-view data processed from the popular NuScenes dataset.
Our model shows promising performance for most evaluated urban scenes and can serve as a step towards automated generation of HD lane annotations for autonomous driving.
- Score: 34.82775302794312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane-level scene annotations provide invaluable data for autonomous
vehicles performing trajectory planning in complex environments such as urban areas and cities.
However, obtaining such data is time-consuming and expensive, since lane
annotations have to be created manually by humans and are therefore hard to
scale to large areas. In this work, we propose a novel approach for lane
geometry estimation from bird's-eye-view images. We formulate the estimation
of lane shape and lane connectivity as a graph estimation problem in which
lane anchor points are graph nodes and lane segments are graph edges. We train
a graph estimation model on multimodal bird's-eye-view data processed from the
popular NuScenes dataset and its map expansion pack. We furthermore estimate
the driving direction of each lane segment with a separate model, which
results in a directed lane graph. We illustrate the performance of
our LaneGraphNet model on the challenging NuScenes dataset and provide
extensive qualitative and quantitative evaluation. Our model shows promising
performance for most evaluated urban scenes and can serve as a step towards
automated generation of HD lane annotations for autonomous driving.
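The abstract frames lane estimation as predicting a directed graph: lane anchor
points are the nodes, lane segments are the edges, and a separate model assigns
each segment its driving direction. As a rough illustration of that output
representation only (not the authors' LaneGraphNet code), here is a minimal
Python sketch; all class and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AnchorPoint:
    """Graph node: a lane anchor point in bird's-eye-view coordinates (meters)."""
    x: float
    y: float

@dataclass
class LaneGraph:
    """Directed lane graph: nodes are anchor points, edges are lane segments.

    Edge direction stands in for the driving direction that the paper
    predicts with its separate direction model.
    """
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)  # (start, end) anchor pairs

    def add_segment(self, start: AnchorPoint, end: AnchorPoint) -> None:
        """Insert a directed lane segment from start to end."""
        self.nodes.update((start, end))
        self.edges.append((start, end))

    def successors(self, node: AnchorPoint) -> list:
        """Anchor points reachable from node via a single lane segment."""
        return [v for u, v in self.edges if u == node]

# Toy usage: a straight lane that forks into two branches.
a, b = AnchorPoint(0.0, 0.0), AnchorPoint(5.0, 0.0)
left, right = AnchorPoint(10.0, 2.0), AnchorPoint(10.0, -2.0)
graph = LaneGraph()
graph.add_segment(a, b)
graph.add_segment(b, left)   # fork: left branch
graph.add_segment(b, right)  # fork: right branch
print(graph.successors(b))   # [AnchorPoint(x=10.0, y=2.0), AnchorPoint(x=10.0, y=-2.0)]
```

A model in the paper's sense would predict the anchor positions and segment
connectivity from the multimodal bird's-eye-view input; this container only
shows how the resulting directed graph can be stored and queried.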
Related papers
- LMT-Net: Lane Model Transformer Network for Automated HD Mapping from Sparse Vehicle Observations [11.395749549636868]
Lane Model Transformer Network (LMT-Net) is an encoder-decoder neural network architecture that performs polyline encoding and predicts lane pairs and their connectivity.
We evaluate the performance of LMT-Net on an internal dataset that consists of multiple vehicle observations as well as human annotations as ground truth (GT).
arXiv Detail & Related papers (2024-09-19T02:14:35Z)
- Prior Based Online Lane Graph Extraction from Single Onboard Camera Image [133.68032636906133]
We tackle online estimation of the lane graph from a single onboard camera image.
A lane graph prior is extracted from the training dataset through a transformer-based Wasserstein Autoencoder.
The autoencoder is then used to enhance the initial lane graph estimates.
arXiv Detail & Related papers (2023-07-25T08:58:26Z)
- AutoGraph: Predicting Lane Graphs from Traffic Observations [35.73868803802196]
We propose to use the motion patterns of traffic participants, recorded as tracklets, as lane graph annotations.
Based on the locations of these tracklets, we predict the successor lane graph from an initial position.
In a subsequent stage, we show how the individual successor predictions can be aggregated into a consistent lane graph.
arXiv Detail & Related papers (2023-06-27T12:11:22Z)
- Online Lane Graph Extraction from Onboard Video [133.68032636906133]
We use the video stream from an onboard camera for online extraction of the lane graph of the surroundings.
Using video instead of a single image as input brings both benefits and challenges for combining information from different timesteps.
A single model of this simple yet effective method can process any number of images, including one, to produce accurate lane graphs.
arXiv Detail & Related papers (2023-04-03T12:36:39Z)
- Learning and Aggregating Lane Graphs for Urban Automated Driving [26.34702432184092]
Lane graph estimation is an essential and highly challenging task in automated driving and HD map learning.
We propose a novel bottom-up approach to lane graph estimation from aerial imagery that aggregates multiple overlapping graphs into a single consistent graph (a toy aggregation sketch follows this list).
We make our large-scale urban lane graph dataset and code publicly available at http://urbanlanegraph.cs.uni-freiburg.de.
arXiv Detail & Related papers (2023-02-13T08:23:35Z)
- DAGMapper: Learning to Map by Discovering Lane Topology [84.12949740822117]
We focus on drawing the lane boundaries of complex highways with many lanes that contain topology changes due to forks and merges.
We formulate the problem as inference in a directed acyclic graphical model (DAG), where the nodes of the graph encode geometric and topological properties of the local regions of the lane boundaries.
We show the effectiveness of our approach on two major North American highways in two different states, achieving high precision and recall as well as 89% correct topology.
arXiv Detail & Related papers (2020-12-22T21:58:57Z)
- Road Scene Graph: A Semantic Graph-Based Scene Representation Dataset for Intelligent Vehicles [72.04891523115535]
We propose the road scene graph, a special scene graph for intelligent vehicles.
It provides not only object proposals but also their pairwise relationships.
Organized in a topological graph, these data are explainable, fully connected, and can be easily processed by GCNs.
arXiv Detail & Related papers (2020-11-27T07:33:11Z)
- Learning Lane Graph Representations for Motion Forecasting [92.88572392790623]
We construct a lane graph from raw map data to preserve the map structure.
We exploit a fusion network consisting of four types of interactions: actor-to-lane, lane-to-lane, lane-to-actor, and actor-to-actor.
Our approach significantly outperforms the state-of-the-art on the large scale Argoverse motion forecasting benchmark.
arXiv Detail & Related papers (2020-07-27T17:59:49Z)
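One recurring idea in this list, stated explicitly in "Learning and Aggregating Lane Graphs for Urban Automated Driving", is fusing several overlapping graph predictions into a single consistent lane graph. The sketch below is a deliberately simplified toy version of such aggregation, not the paper's algorithm; the MERGE_RADIUS threshold and the endpoint-snapping rule are assumptions made for illustration.

```python
import math

MERGE_RADIUS = 1.0  # hypothetical snapping threshold in meters

def snap(point, anchors):
    """Return an existing anchor within MERGE_RADIUS of point; else register point."""
    for anchor in anchors:
        if math.dist(point, anchor) <= MERGE_RADIUS:
            return anchor
    anchors.append(point)
    return point

def aggregate(graphs):
    """Fuse overlapping edge lists into one set of directed lane segments.

    Each graph is a list of directed edges ((x1, y1), (x2, y2)) in a shared
    bird's-eye-view frame; endpoints closer than MERGE_RADIUS are merged.
    """
    anchors, merged = [], set()
    for edges in graphs:
        for start, end in edges:
            merged.add((snap(start, anchors), snap(end, anchors)))
    return merged

# Two overlapping predictions of the same lane, slightly misaligned.
g1 = [((0.0, 0.0), (5.0, 0.0))]
g2 = [((0.2, 0.1), (5.1, -0.1)), ((5.1, -0.1), (10.0, 0.0))]
print(aggregate([g1, g2]))
# {((0.0, 0.0), (5.0, 0.0)), ((5.0, 0.0), (10.0, 0.0))}  (set order may vary)
```

Greedy snapping like this depends on processing order; the paper's method resolves such conflicts more carefully, but the sketch conveys the merge-nearby-nodes-then-union-edges idea.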
This list is automatically generated from the titles and abstracts of the papers on this site.