VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow
- URL: http://arxiv.org/abs/2503.22328v2
- Date: Wed, 16 Apr 2025 07:36:24 GMT
- Title: VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow
- Authors: Yancong Lin, Shiming Wang, Liangliang Nan, Julian Kooij, Holger Caesar
- Abstract summary: Scene flow estimation aims to recover per-point motion from two adjacent LiDAR scans. In real-world applications such as autonomous driving, points rarely move independently of others. We introduce a lightweight add-on module in neural network design, enabling end-to-end learning.
- Score: 4.515315183243291
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scene flow estimation aims to recover per-point motion from two adjacent LiDAR scans. However, in real-world applications such as autonomous driving, points rarely move independently of others, especially for nearby points belonging to the same object, which often share the same motion. Incorporating this locally rigid motion constraint has been a key challenge in self-supervised scene flow estimation, which is often addressed by post-processing or appending extra regularization. While these approaches are able to improve the rigidity of predicted flows, they lack an architectural inductive bias for local rigidity within the model structure, leading to suboptimal learning efficiency and inferior performance. In contrast, we enforce local rigidity with a lightweight add-on module in neural network design, enabling end-to-end learning. We design a discretized voting space that accommodates all possible translations and then identify the one shared by nearby points by differentiable voting. Additionally, to ensure computational efficiency, we operate on pillars rather than points and learn representative features for voting per pillar. We plug the Voting Module into popular model designs and evaluate its benefit on Argoverse 2 and Waymo datasets. We outperform baseline works with only marginal compute overhead. Code is available at https://github.com/tudelft-iv/VoteFlow.
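To make the voting mechanism concrete, here is a minimal, illustrative PyTorch sketch, not the authors' actual implementation: all names (`VotingModule`, `max_shift`, `vote_kernel`) are hypothetical. It scores a discrete grid of candidate translations per BEV pillar by feature correlation, pools the scores over a local window so that nearby pillars vote for a shared translation, and reads out a per-pillar flow with a differentiable soft-argmax.

```python
# A minimal sketch of differentiable translation voting over BEV pillar
# features, assuming PyTorch. Names and design details are illustrative,
# not the authors' exact module.
import torch
import torch.nn.functional as F


class VotingModule(torch.nn.Module):
    """Scores a discrete grid of candidate translations per pillar, pools
    (votes) the scores over a local neighborhood so nearby pillars share
    evidence, and reads out a soft-argmax flow estimate."""

    def __init__(self, max_shift: int = 3, vote_kernel: int = 5):
        super().__init__()
        # Candidate translations: all integer (dx, dy) with |dx|, |dy| <= max_shift.
        offsets = torch.arange(-max_shift, max_shift + 1)
        dy, dx = torch.meshgrid(offsets, offsets, indexing="ij")
        self.register_buffer("candidates",
                             torch.stack([dx, dy], -1).reshape(-1, 2).float())
        self.vote_kernel = vote_kernel

    def forward(self, feat_t: torch.Tensor, feat_t1: torch.Tensor) -> torch.Tensor:
        # feat_t, feat_t1: [B, C, H, W] pillar features of two consecutive frames.
        scores = []
        for dx, dy in self.candidates.long().tolist():
            # Shift frame t+1 features by the candidate translation.
            # torch.roll wraps around at the borders, a simplification here.
            shifted = torch.roll(feat_t1, shifts=(dy, dx), dims=(2, 3))
            scores.append((feat_t * shifted).sum(dim=1))  # correlation, [B, H, W]
        cost = torch.stack(scores, dim=1)  # [B, K, H, W], K = number of candidates
        # Voting: average scores over a local window so neighboring pillars
        # agree on a shared translation (the local-rigidity bias).
        cost = F.avg_pool2d(cost, self.vote_kernel, stride=1,
                            padding=self.vote_kernel // 2)
        prob = cost.softmax(dim=1)  # differentiable vote over candidates
        # Soft-argmax: expected translation per pillar, [B, 2, H, W].
        flow = torch.einsum("bkhw,kc->bchw", prob, self.candidates)
        return flow
```

The soft-argmax readout keeps the whole module differentiable, so in this sketch the rigidity bias is trained end-to-end rather than imposed by post-processing.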
Related papers
- SuperFlow++: Enhanced Spatiotemporal Consistency for Cross-Modal Data Pretraining [62.433137130087445]
SuperFlow++ is a novel framework that integrates pretraining and downstream tasks using consecutive LiDAR-camera pairs. We show that SuperFlow++ outperforms state-of-the-art methods across diverse tasks and driving conditions. With strong generalizability and computational efficiency, SuperFlow++ establishes a new benchmark for data-efficient LiDAR-based perception in autonomous driving.
arXiv Detail & Related papers (2025-03-25T17:59:57Z)
- SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving [18.88208422580103]
Scene flow estimation predicts the 3D motion at each point in successive LiDAR scans.
Current state-of-the-art methods require annotated data to train scene flow networks.
We propose SeFlow, a self-supervised method that integrates efficient dynamic classification into a learning-based scene flow pipeline.
arXiv Detail & Related papers (2024-07-01T18:22:54Z)
- STARFlow: Spatial Temporal Feature Re-embedding with Attentive Learning for Real-world Scene Flow [5.476991379461233]
We propose a global attentive flow embedding to match all-to-all point pairs in both feature and Euclidean space.
We leverage novel domain-adaptive losses to bridge the motion inference gap from synthetic to real-world data.
Our approach achieves state-of-the-art performance across various datasets, with particularly outstanding results on real-world LiDAR-scanned datasets.
arXiv Detail & Related papers (2024-03-11T04:56:10Z)
- Self-Supervised 3D Scene Flow Estimation and Motion Prediction using Local Rigidity Prior [100.98123802027847]
We investigate self-supervised 3D scene flow estimation and class-agnostic motion prediction on point clouds.
We generate pseudo scene flow labels for self-supervised learning through piecewise rigid motion estimation.
Our method achieves new state-of-the-art performance in self-supervised scene flow learning.
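As a rough illustration of pseudo-label generation via piecewise rigid motion estimation, the NumPy sketch below fits one rigid transform per cluster with the Kabsch (SVD) algorithm and converts it into per-point flow. It assumes precomputed correspondences and cluster labels; the function names are ours, not the paper's.

```python
# A minimal sketch of piecewise rigid pseudo-flow labels, assuming NumPy
# and that per-cluster correspondences are already available.
import numpy as np


def fit_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src -> dst via the
    Kabsch algorithm; src, dst are [N, 3] matched point sets."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t


def pseudo_flow_labels(points: np.ndarray, matches: np.ndarray,
                       labels: np.ndarray) -> np.ndarray:
    """Per-point pseudo flow under the assumption that each cluster moves
    rigidly. points, matches: [N, 3]; labels: [N] cluster ids."""
    flow = np.zeros_like(points)
    for cid in np.unique(labels):
        m = labels == cid
        R, t = fit_rigid_transform(points[m], matches[m])
        flow[m] = points[m] @ R.T + t - points[m]  # R p + t - p
    return flow
```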
arXiv Detail & Related papers (2023-10-17T14:06:55Z)
- ContrastMotion: Self-supervised Scene Motion Learning for Large-Scale LiDAR Point Clouds [21.6511040107249]
We propose a novel self-supervised motion estimator for LiDAR-based autonomous driving via BEV representation.
We predict scene motion via feature-level consistency between pillars in consecutive frames, which suppresses the effects of noise points and view-changing point clouds in dynamic scenes.
arXiv Detail & Related papers (2023-04-25T05:46:24Z)
- Self-Point-Flow: Self-Supervised Scene Flow Estimation from Point Clouds with Optimal Transport and Random Walk [59.87525177207915]
We develop a self-supervised method to establish correspondences between two point clouds to approximate scene flow.
Our method achieves state-of-the-art performance among self-supervised learning methods.
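A minimal sketch of the optimal-transport ingredient, assuming PyTorch: entropy-regularized Sinkhorn iterations produce a soft transport plan between two point clouds, from which a flow can be read off as a barycentric displacement. The random-walk refinement is omitted and all names are illustrative.

```python
# A minimal Sinkhorn sketch for soft point-cloud correspondences; this is
# not Self-Point-Flow's exact formulation.
import torch


def sinkhorn_correspondence(p1, p2, eps=0.1, iters=50):
    """p1: [N, 3], p2: [M, 3]. Returns a transport plan T [N, M] whose
    rows softly match p1 points onto p2, plus a barycentric flow."""
    cost = torch.cdist(p1, p2)                  # pairwise Euclidean cost
    K = torch.exp(-cost / eps)                  # Gibbs kernel; eps trades
                                                # sharpness for stability
    a = torch.full((p1.shape[0],), 1.0 / p1.shape[0])
    b = torch.full((p2.shape[0],), 1.0 / p2.shape[0])
    u = torch.ones_like(a)
    for _ in range(iters):                      # Sinkhorn scaling updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]             # transport plan
    # Soft flow: move each point toward its transport barycenter.
    flow = (T / T.sum(1, keepdim=True)) @ p2 - p1
    return T, flow
```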
arXiv Detail & Related papers (2021-05-18T03:12:42Z)
- LiDAR-based Panoptic Segmentation via Dynamic Shifting Network [56.71765153629892]
LiDAR-based panoptic segmentation aims to parse both objects and scenes in a unified manner.
We propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm.
Our proposed DS-Net achieves superior accuracies over current state-of-the-art methods.
arXiv Detail & Related papers (2020-11-24T08:44:46Z)
- LoCo: Local Contrastive Representation Learning [93.98029899866866]
We show that by overlapping local blocks stacked on top of each other, we effectively increase the decoder depth and allow upper blocks to implicitly send feedback to lower blocks.
This simple design closes the performance gap between local learning and end-to-end contrastive learning algorithms for the first time.
arXiv Detail & Related papers (2020-08-04T05:41:29Z)
- Region-based Non-local Operation for Video Classification [11.746833714322154]
This paper presents region-based non-local (RNL) operations as a family of self-attention mechanisms.
By combining a channel attention module with the proposed RNL, we design an attention chain that can be integrated into off-the-shelf CNNs for end-to-end training.
In experiments, our method outperforms other attention mechanisms and achieves state-of-the-art performance on the Something-Something V1 dataset.
arXiv Detail & Related papers (2020-07-17T14:57:05Z)
- Scope Head for Accurate Localization in Object Detection [135.9979405835606]
We propose a novel detector, coined ScopeNet, which models the anchors at each location as mutually dependent.
With our concise and effective design, the proposed ScopeNet achieves state-of-the-art results on COCO.
arXiv Detail & Related papers (2020-05-11T04:00:09Z)