EgoFlowNet: Non-Rigid Scene Flow from Point Clouds with Ego-Motion Support
- URL: http://arxiv.org/abs/2407.02920v1
- Date: Wed, 3 Jul 2024 08:53:50 GMT
- Title: EgoFlowNet: Non-Rigid Scene Flow from Point Clouds with Ego-Motion Support
- Authors: Ramy Battrawy, René Schuster, Didier Stricker
- Abstract summary: We propose a point-level scene flow estimation network trained in a weakly-supervised manner and without object-based abstraction.
Our approach predicts a binary segmentation mask that implicitly drives two parallel branches for ego-motion and scene flow.
On realistic KITTI scenes, we show that our EgoFlowNet performs better than state-of-the-art methods in the presence of ground surface points.
- Score: 15.138542932078916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent weakly-supervised methods for scene flow estimation from LiDAR point clouds are limited to explicit reasoning at the object level. These methods perform multiple iterative optimizations for each rigid object, which makes them sensitive to errors in clustering. In this paper, we propose EgoFlowNet - a point-level scene flow estimation network trained in a weakly-supervised manner and without object-based abstraction. Our approach predicts a binary segmentation mask that implicitly drives two parallel branches for ego-motion and scene flow. Unlike previous methods, we provide both branches with all input points and carefully integrate the binary mask into the feature extraction and losses. We also use a shared cost volume with local refinement that is updated at multiple scales without explicit clustering or rigidity assumptions. On realistic KITTI scenes, we show that our EgoFlowNet performs better than state-of-the-art methods in the presence of ground surface points.
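As a rough, hypothetical illustration of the two-branch idea described in the abstract, the sketch below shows a shared per-point feature feeding a binary static/dynamic mask, a per-point scene-flow head, and an ego-motion head that pools features weighted by the mask. All names and layer sizes are illustrative; EgoFlowNet's actual architecture, mask integration, and losses are not reproduced here.

```python
import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    """Illustrative head: a predicted binary mask implicitly drives an ego-motion
    branch and a per-point scene-flow branch that share the same point features."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.mask_head = nn.Linear(feat_dim, 1)   # per-point static/dynamic logit
        self.flow_head = nn.Linear(feat_dim, 3)   # per-point scene flow vector
        self.ego_head = nn.Linear(feat_dim, 6)    # pooled -> translation + axis-angle rotation

    def forward(self, feats):                     # feats: (N, feat_dim) per-point features
        mask = torch.sigmoid(self.mask_head(feats))   # (N, 1), ~1 for static points
        flow = self.flow_head(feats)                  # (N, 3)
        # Weight features by the mask so ego-motion is estimated mainly from static points.
        pooled = (mask * feats).sum(dim=0) / mask.sum().clamp_min(1e-6)
        ego = self.ego_head(pooled)                   # (6,)
        return mask, flow, ego
```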
Related papers
- SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving [18.88208422580103]
Scene flow estimation predicts the 3D motion at each point in successive LiDAR scans.
Current state-of-the-art methods require annotated data to train scene flow networks.
We propose SeFlow, a self-supervised method that integrates efficient dynamic classification into a learning-based scene flow pipeline.
arXiv Detail & Related papers (2024-07-01T18:22:54Z)
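To make the task in the SeFlow entry above concrete (per-point 3D motion between successive LiDAR scans), the sketch below shows the kind of annotation-free consistency signal that self-supervised scene-flow methods commonly exploit: warp the first scan by the predicted flow and score how close it lands to the second scan. This is a generic nearest-neighbor objective, not SeFlow's actual loss, which additionally integrates dynamic classification.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_consistency(p1, flow, p2):
    """p1: (N, 3) scan at time t, flow: (N, 3) predicted per-point motion,
    p2: (M, 3) scan at time t+1. Returns the mean nearest-neighbor distance."""
    warped = p1 + flow                     # apply the predicted 3D motion to every point
    dists, _ = cKDTree(p2).query(warped)   # distance to the closest point in the next scan
    return dists.mean()
```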
- Let-It-Flow: Simultaneous Optimization of 3D Flow and Object Clustering [2.763111962660262]
We study the problem of self-supervised 3D scene flow estimation from real large-scale raw point cloud sequences.
We propose a novel clustering approach that allows for a combination of overlapping soft clusters as well as non-overlapping rigid clusters.
Our method especially excels in resolving flow in complicated dynamic scenes with multiple independently moving objects close to each other.
arXiv Detail & Related papers (2024-04-12T10:04:03Z)
- CPCM: Contextual Point Cloud Modeling for Weakly-supervised Point Cloud Semantic Segmentation [60.0893353960514]
We study the task of weakly-supervised point cloud semantic segmentation with sparse annotations.
We propose a Contextual Point Cloud Modeling (CPCM) method that consists of two parts: a region-wise masking (RegionMask) strategy and a contextual masked training (CMT) method.
arXiv Detail & Related papers (2023-07-19T04:41:18Z)
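The region-wise masking in the CPCM entry above can be pictured as hiding whole local neighborhoods of a sparse point cloud rather than isolated points, so that masked training must rely on surrounding context. Below is a loose sketch of such masking with made-up parameters; it is not CPCM's RegionMask implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_wise_mask(points, num_regions=16, region_size=128, rng=None):
    """Mask contiguous regions (a random seed point plus its nearest neighbors).
    Returns a boolean array that is True for masked points; parameters are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    tree = cKDTree(points)
    seeds = rng.choice(len(points), size=num_regions, replace=False)
    masked = np.zeros(len(points), dtype=bool)
    for s in seeds:
        _, idx = tree.query(points[s], k=region_size)  # one local region per seed
        masked[idx] = True
    return masked
```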
- FreePoint: Unsupervised Point Cloud Instance Segmentation [72.64540130803687]
We propose FreePoint for the underexplored task of unsupervised class-agnostic instance segmentation on point clouds.
We represent point features by combining coordinates, colors, and self-supervised deep features.
Based on the point features, we segment point clouds into coarse instance masks as pseudo labels, which are used to train a point cloud instance segmentation model.
arXiv Detail & Related papers (2023-05-11T16:56:26Z)
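The grouping step in the FreePoint entry above (coarse instance masks used as pseudo labels) can be approximated by clustering a combined per-point representation, as sketched below. DBSCAN is only a stand-in here; the paper's actual feature combination and segmentation procedure is more elaborate.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def coarse_pseudo_labels(coords, colors, deep_feats, eps=0.5, min_samples=10):
    """coords: (N, 3), colors: (N, 3), deep_feats: (N, D). Returns per-point cluster ids
    used as pseudo instance labels; -1 marks noise. eps/min_samples are illustrative."""
    feats = np.concatenate([coords, colors, deep_feats], axis=1)
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
```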
- PointFlowHop: Green and Interpretable Scene Flow Estimation from Consecutive Point Clouds [49.7285297470392]
An efficient 3D scene flow estimation method called PointFlowHop is proposed in this work.
PointFlowHop takes two consecutive point clouds and determines the 3D flow vectors for every point in the first point cloud.
It decomposes the scene flow estimation task into a set of subtasks, including ego-motion compensation, object association and object-wise motion estimation.
arXiv Detail & Related papers (2023-02-27T23:06:01Z)
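Of the subtasks listed for PointFlowHop above, ego-motion compensation is the simplest to write down: once the sensor's own rigid motion is known or estimated, it is undone on the second scan so that any remaining displacement is due to object motion. A minimal sketch, assuming the ego transform maps frame-t coordinates to frame-(t+1) coordinates:

```python
import numpy as np

def ego_motion_compensate(p2, R_ego, t_ego):
    """p2: (M, 3) scan at time t+1; R_ego (3x3), t_ego (3,) map frame t to frame t+1.
    After compensation, static structure aligns with frame t; what remains is object motion."""
    return (p2 - t_ego) @ R_ego   # inverse rigid transform, R^T applied to row vectors
```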
- Learning Scene Flow in 3D Point Clouds with Noisy Pseudo Labels [71.11151016581806]
We propose a novel scene flow method that captures 3D motions from point clouds without relying on ground-truth scene flow annotations.
Our method not only outperforms state-of-the-art self-supervised approaches, but also outperforms some supervised approaches that use accurate ground-truth flows.
arXiv Detail & Related papers (2022-03-23T18:20:03Z)
- Weakly Supervised Learning of Rigid 3D Scene Flow [81.37165332656612]
We propose a data-driven scene flow estimation algorithm exploiting the observation that many 3D scenes can be explained by a collection of agents moving as rigid bodies.
We showcase the effectiveness and generalization capacity of our method on four different autonomous driving datasets.
arXiv Detail & Related papers (2021-02-17T18:58:02Z)
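The rigid-body observation in the entry above reduces the motion of each agent to a single rigid transform, which has a closed-form least-squares solution given matched points (Kabsch/Umeyama without scale), shown below. The hard part that the weakly supervised pipeline actually learns, namely which points belong to which rigid agent, is not shown.

```python
import numpy as np

def fit_rigid_motion(src, dst):
    """src, dst: (K, 3) matched points of one rigidly moving agent in two frames.
    Returns R (3x3) and t (3,) such that dst ~= src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# The rigid scene flow of that agent's points is then: flow = src @ R.T + t - src
```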
- Adversarial Self-Supervised Scene Flow Estimation [15.278302535191866]
This work proposes a metric learning approach for self-supervised scene flow estimation.
We outline a benchmark for self-supervised scene flow estimation: the Scene Flow Sandbox.
arXiv Detail & Related papers (2020-11-01T16:37:37Z)
- Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion [63.18340058854517]
We present an alternative method for end-to-end scene flow learning by joint estimation of non-rigid residual flow and ego-motion flow for dynamic 3D scenes.
We extend the supervised framework with self-supervisory signals based on the temporal consistency property of a point cloud sequence.
arXiv Detail & Related papers (2020-09-22T11:39:19Z)
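The decomposition in the entry above, which is the formulation closest to EgoFlowNet's setting, treats the total per-point motion as the flow induced by ego-motion plus a learned non-rigid residual that should vanish on static background. A minimal sketch of that composition, with illustrative names:

```python
import numpy as np

def total_flow(points, R_ego, t_ego, residual_flow):
    """points: (N, 3) first-frame cloud; R_ego, t_ego: estimated ego-motion;
    residual_flow: (N, 3) learned non-rigid residual (near zero for static points)."""
    ego_flow = points @ R_ego.T + t_ego - points   # motion explained by the sensor alone
    return ego_flow + residual_flow
```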