PillarFlow: End-to-end Birds-eye-view Flow Estimation for Autonomous
Driving
- URL: http://arxiv.org/abs/2008.01179v3
- Date: Sat, 29 Aug 2020 13:35:09 GMT
- Title: PillarFlow: End-to-end Birds-eye-view Flow Estimation for Autonomous
Driving
- Authors: Kuan-Hui Lee, Matthew Kliemann, Adrien Gaidon, Jie Li, Chao Fang,
Sudeep Pillai, Wolfram Burgard
- Abstract summary: We propose an end-to-end deep learning framework for LIDAR-based flow estimation in bird's eye view (BeV).
Our method takes consecutive point cloud pairs as input and produces a 2-D BeV flow grid describing the dynamic state of each cell.
The experimental results show that the proposed method not only estimates 2-D BeV flow accurately but also improves tracking performance of both dynamic and static objects.
- Score: 42.8479177012748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In autonomous driving, accurately estimating the state of surrounding
obstacles is critical for safe and robust path planning. However, this
perception task is difficult, particularly for generic obstacles/objects, due
to appearance and occlusion changes. To tackle this problem, we propose an
end-to-end deep learning framework for LIDAR-based flow estimation in bird's
eye view (BeV). Our method takes consecutive point cloud pairs as input and
produces a 2-D BeV flow grid describing the dynamic state of each cell. The
experimental results show that the proposed method not only estimates 2-D BeV
flow accurately but also improves tracking performance of both dynamic and
static objects.
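As an illustration of the BeV representation the abstract describes, the sketch below rasterizes a point cloud into a top-down grid and pairs it with a per-cell 2-D flow field. The grid extents, cell size, and binary occupancy features are hypothetical stand-ins for the paper's learned pillar features, not its actual pipeline:

```python
import numpy as np

def points_to_bev_occupancy(points, x_range=(-40.0, 40.0),
                            y_range=(-40.0, 40.0), cell=0.5):
    """Discretize an (N, 3) LiDAR point cloud into a binary BeV occupancy grid.

    Illustrative pre-processing only: the paper's network consumes learned
    pillar features, but a simple occupancy grid shows the BeV rasterization.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    # Map each point's (x, y) coordinate to a cell index.
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[keep], iy[keep]] = 1
    return grid

# A 2-D BeV flow grid stores one (dx, dy) displacement per cell, i.e. an
# array of shape (nx, ny, 2); the network regresses these values from two
# consecutive rasterized sweeps.
```

With an 80 m × 80 m extent and 0.5 m cells, this yields a 160 × 160 grid, so the companion flow grid would have shape (160, 160, 2).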
Related papers
- SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving [18.88208422580103]
Scene flow estimation predicts the 3D motion at each point in successive LiDAR scans.
Current state-of-the-art methods require annotated data to train scene flow networks.
We propose SeFlow, a self-supervised method that integrates efficient dynamic classification into a learning-based scene flow pipeline.
arXiv Detail & Related papers (2024-07-01T18:22:54Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- ICP-Flow: LiDAR Scene Flow Estimation with ICP [2.9290232815049926]
Scene flow characterizes the 3D motion between two LiDAR scans captured by an autonomous vehicle at nearby timesteps.
We propose ICP-Flow, a learning-free flow estimator, to associate objects over scans and then estimate the locally rigid transformations.
We outperform state-of-the-art baselines, including supervised models, on the Waymo dataset and perform competitively on Argoverse-v2 and nuScenes.
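The locally rigid transformations mentioned above can be estimated in closed form once points are associated; the sketch below shows the classic Kabsch alignment step at the core of ICP. It is a conceptual illustration only and omits ICP-Flow's clustering, association, and histogram-based initialization:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst, both (N, 3).

    This is the closed-form alignment step inside classic ICP; a per-object
    rigid fit like this implies a flow of (R @ p + t) - p for each point p.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Because the solution is closed-form, no training data is required, which is the sense in which such estimators are "learning-free."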
arXiv Detail & Related papers (2024-02-27T09:41:59Z)
- VADv2: End-to-End Vectorized Autonomous Driving via Probabilistic Planning [42.681012361021224]
VADv2 is an end-to-end driving model based on probabilistic planning.
It runs stably in a fully end-to-end manner, even without the rule-based wrapper.
arXiv Detail & Related papers (2024-02-20T18:55:09Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- BEVScope: Enhancing Self-Supervised Depth Estimation Leveraging Bird's-Eye-View in Dynamic Scenarios [12.079195812249747]
Current self-supervised depth estimation methods grapple with several limitations.
We present BEVScope, an innovative approach to self-supervised depth estimation.
We propose an adaptive loss function, specifically designed to mitigate the complexities associated with moving objects.
arXiv Detail & Related papers (2023-06-20T15:16:35Z)
- An Effective Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds [50.19288542498838]
3D single object tracking in LiDAR point clouds (LiDAR SOT) plays a crucial role in autonomous driving.
Current approaches all follow the Siamese paradigm based on appearance matching.
We introduce a motion-centric paradigm to handle LiDAR SOT from a new perspective.
arXiv Detail & Related papers (2023-03-21T17:28:44Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple object velocities and propose Velocity-aware streaming AP (VsAP) to jointly evaluate accuracy.
Our simple method achieves state-of-the-art performance on the Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- Do not trust the neighbors! Adversarial Metric Learning for Self-Supervised Scene Flow Estimation [0.0]
Scene flow is the task of estimating 3D motion vectors to individual points of a dynamic 3D scene.
We propose a 3D scene flow benchmark and a novel self-supervised setup for training flow models.
We find that our setup is able to maintain motion coherence and preserve local geometries, which many self-supervised baselines fail to capture.
arXiv Detail & Related papers (2020-11-01T17:41:32Z)
- Cascaded Regression Tracking: Towards Online Hard Distractor Discrimination [202.2562153608092]
We propose a cascaded regression tracker with two sequential stages.
In the first stage, we filter out abundant easily-identified negative candidates.
In the second stage, a discrete sampling based ridge regression is designed to double-check the remaining ambiguous hard samples.
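The ridge regression underlying that second stage has a simple closed form. The sketch below shows only this generic regression step; the paper's discrete candidate sampling and feature extraction are not reproduced, and the function names are hypothetical:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y.

    Illustrative only: a tracker's regression stage would fit w on features
    of training samples, then score candidate regions with the same w.
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

def ridge_score(X, w):
    """Score candidate samples by their predicted regression response."""
    return X @ w
```

The regularizer `lam` trades fitting accuracy against robustness to the ambiguous hard samples such a stage is meant to double-check.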
arXiv Detail & Related papers (2020-06-18T07:48:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.