LEF: Late-to-Early Temporal Fusion for LiDAR 3D Object Detection
- URL: http://arxiv.org/abs/2309.16870v1
- Date: Thu, 28 Sep 2023 21:58:25 GMT
- Title: LEF: Late-to-Early Temporal Fusion for LiDAR 3D Object Detection
- Authors: Tong He, Pei Sun, Zhaoqi Leng, Chenxi Liu, Dragomir Anguelov, Mingxing Tan
- Abstract summary: We propose a late-to-early recurrent feature fusion scheme for 3D object detection using temporal LiDAR point clouds.
Our main motivation is fusing object-aware latent embeddings into the early stages of a 3D object detector.
- Score: 40.267769862404684
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a late-to-early recurrent feature fusion scheme for 3D object
detection using temporal LiDAR point clouds. Our main motivation is fusing
object-aware latent embeddings into the early stages of a 3D object detector.
This feature fusion strategy enables the model to better capture the shapes and
poses for challenging objects, compared with learning from raw points directly.
Our method conducts late-to-early feature fusion in a recurrent manner. This is
achieved by enforcing window-based attention blocks upon temporally calibrated
and aligned sparse pillar tokens. Leveraging bird's eye view foreground pillar
segmentation, we reduce the number of sparse history features that our model
needs to fuse into its current frame by 10$\times$. We also propose a
stochastic-length FrameDrop training technique, which generalizes the model to
variable frame lengths at inference for improved performance without
retraining. We evaluate our method on the widely adopted Waymo Open Dataset and
demonstrate improvement on 3D object detection against the baseline model,
especially for the challenging category of large objects.
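Two of the training-time ideas in the abstract — keeping only BEV foreground pillars from history, and stochastic-length FrameDrop — can be illustrated with a minimal sketch. All function names, shapes, and thresholds below are assumptions for illustration, not the authors' implementation:

```python
import random

def keep_foreground_pillars(pillar_tokens, fg_scores, thresh=0.5):
    """Hypothetical BEV foreground filtering: keep only the pillar tokens
    that a foreground segmentation head scores above a threshold, shrinking
    the set of history features fused into the current frame (the paper
    reports roughly a 10x reduction)."""
    return [tok for tok, s in zip(pillar_tokens, fg_scores) if s > thresh]

def framedrop_sample(history, max_len=8):
    """Hypothetical stochastic-length FrameDrop: draw a random history
    window length each training step, so the detector sees variable-length
    histories and can run with any frame count at inference without
    retraining."""
    n = random.randint(1, min(max_len, len(history)))
    return history[-n:]  # keep the n most recent frames
```

For example, `keep_foreground_pillars(tokens, scores)` would discard background pillars before the window-based attention fusion step, while `framedrop_sample(frames)` would be applied per training example to vary the temporal context length.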
Related papers
- Future Does Matter: Boosting 3D Object Detection with Temporal Motion Estimation in Point Cloud Sequences [25.74000325019015]
We introduce a novel LiDAR 3D object detection framework, namely LiSTM, to facilitate spatial-temporal feature learning with cross-frame motion forecasting information.
We have conducted experiments on the Waymo and nuScenes datasets to demonstrate that the proposed framework achieves superior 3D detection performance.
arXiv Detail & Related papers (2024-09-06T16:29:04Z)
- PTT: Point-Trajectory Transformer for Efficient Temporal 3D Object Detection [66.94819989912823]
We propose a point-trajectory transformer with long short-term memory for efficient temporal 3D object detection.
We use point clouds of current-frame objects and their historical trajectories as input to minimize the memory bank storage requirement.
We conduct extensive experiments on the large-scale dataset to demonstrate that our approach performs well against state-of-the-art methods.
arXiv Detail & Related papers (2023-12-13T18:59:13Z)
- What You See Is What You Detect: Towards better Object Densification in 3D detection [2.3436632098950456]
The widely used full-shape completion approach actually leads to a higher error upper bound, especially for faraway objects and small objects like pedestrians.
We introduce a visible part completion method that requires only 11.3% of the prediction points that previous methods generate.
To recover the dense representation, we propose a mesh-deformation-based method to augment the point set associated with visible foreground objects.
arXiv Detail & Related papers (2023-10-27T01:46:37Z)
- DetZero: Rethinking Offboard 3D Object Detection with Long-term Sequential Point Clouds [55.755450273390004]
Existing offboard 3D detectors always follow a modular pipeline design to take advantage of unlimited sequential point clouds.
We find that the full potential of offboard 3D detectors remains unexplored for two main reasons: (1) the onboard multi-object tracker cannot generate sufficiently complete object trajectories, and (2) the motion state of objects poses an inevitable challenge for the object-centric refining stage.
To tackle these problems, we propose a novel paradigm of offboard 3D object detection, named DetZero.
arXiv Detail & Related papers (2023-06-09T16:42:00Z)
- Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR based 3D Object Detection [50.959453059206446]
This paper aims for high-performance offline LiDAR-based 3D object detection.
We first observe that experienced human annotators annotate objects from a track-centric perspective.
We propose a high-performance offline detector in a track-centric perspective instead of the conventional object-centric perspective.
arXiv Detail & Related papers (2023-04-24T17:59:05Z)
- RBGNet: Ray-based Grouping for 3D Object Detection [104.98776095895641]
We propose the RBGNet framework, a voting-based 3D detector for accurate 3D object detection from point clouds.
We propose a ray-based feature grouping module, which aggregates the point-wise features on object surfaces using a group of determined rays.
Our model achieves state-of-the-art 3D detection performance on ScanNet V2 and SUN RGB-D with remarkable performance gains.
arXiv Detail & Related papers (2022-04-05T14:42:57Z)
- CVFNet: Real-time 3D Object Detection by Learning Cross View Features [11.402076835949824]
We present a real-time view-based single stage 3D object detector, namely CVFNet.
We first propose a novel Point-Range feature fusion module that deeply integrates point and range view features in multiple stages.
Then, a special Slice Pillar is designed to well maintain the 3D geometry when transforming the obtained deep point-view features into bird's eye view.
arXiv Detail & Related papers (2022-03-13T06:23:18Z)
- Lifting 2D Object Locations to 3D by Discounting LiDAR Outliers across Objects and Views [70.1586005070678]
We present a system for automatically converting 2D mask object predictions and raw LiDAR point clouds into full 3D bounding boxes of objects.
Our method significantly outperforms previous work even though those methods rely on significantly more complex pipelines, 3D models, and additional human-annotated external sources of prior information.
arXiv Detail & Related papers (2021-09-16T13:01:13Z)
- Temp-Frustum Net: 3D Object Detection with Temporal Fusion [0.0]
3D object detection is a core component of automated driving systems.
Frame-by-frame 3D object detection suffers from noise, field-of-view obstruction, and sparsity.
We propose a novel Temporal Fusion Module to mitigate these problems.
arXiv Detail & Related papers (2021-04-25T09:08:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.