Scene Flow from Point Clouds with or without Learning
- URL: http://arxiv.org/abs/2011.00320v1
- Date: Sat, 31 Oct 2020 17:24:48 GMT
- Title: Scene Flow from Point Clouds with or without Learning
- Authors: Jhony Kaesemodel Pontes, James Hays, and Simon Lucey
- Abstract summary: Scene flow is the three-dimensional (3D) motion field of a scene.
Current learning-based approaches seek to estimate the scene flow directly from point clouds.
We present a simple and interpretable objective function to recover the scene flow from point clouds.
- Score: 47.03163552693887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scene flow is the three-dimensional (3D) motion field of a scene. It provides
information about the spatial arrangement and rate of change of objects in
dynamic environments. Current learning-based approaches seek to estimate the
scene flow directly from point clouds and have achieved state-of-the-art
performance. However, supervised learning methods are inherently domain
specific and require a large amount of labeled data. Annotation of scene flow
on real-world point clouds is expensive and challenging, and the lack of such
datasets has recently sparked interest in self-supervised learning methods. How
to accurately and robustly learn scene flow representations without labeled
real-world data is still an open problem. Here we present a simple and
interpretable objective function to recover the scene flow from point clouds.
We use the graph Laplacian of a point cloud to regularize the scene flow to be
"as-rigid-as-possible". Our proposed objective function can be used with or
without learning: as a self-supervisory signal to learn scene flow
representations, or as a non-learning-based method in which the scene flow is
optimized at runtime. Our approach outperforms related works on many datasets.
We also demonstrate two immediate applications of our proposed method: motion
segmentation and point cloud densification.
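To make the objective concrete, below is a minimal, illustrative sketch of the non-learning variant: a per-point flow is optimized at runtime by minimizing a nearest-neighbor data term plus a graph-Laplacian "as-rigid-as-possible" penalty. The Chamfer-style data term, the kNN graph construction, and all hyperparameters are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree


def knn_graph_laplacian(points, k=8):
    """Unnormalized graph Laplacian L = D - W of a symmetrized kNN graph."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)  # k+1: each point is its own NN
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i, 1:]] = 1.0
    W = np.maximum(W, W.T)  # symmetrize binary weights
    return np.diag(W.sum(axis=1)) - W


def optimize_flow(src, dst, lam=0.1, lr=0.02, iters=500, k=8):
    """Runtime optimization of per-point flow f by gradient descent on
    sum_i ||(src_i + f_i) - NN_dst(src_i + f_i)||^2 + lam * tr(f^T L f)."""
    L = knn_graph_laplacian(src, k)
    dst_tree = cKDTree(dst)
    flow = np.zeros_like(src)
    for _ in range(iters):
        warped = src + flow
        _, nn = dst_tree.query(warped)      # closest target point per source point
        resid = warped - dst[nn]            # data-term residual (nn held fixed per step)
        grad = 2.0 * resid + 2.0 * lam * (L @ flow)  # rigidity gradient: 2*lam*L*f
        flow -= lr * grad
    return flow


# Toy check: a rigidly translated cluster should yield a near-constant flow,
# since constant fields lie in the null space of the graph Laplacian.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
dst = src + np.array([0.3, 0.0, 0.0])
print(optimize_flow(src, dst).mean(axis=0))  # roughly [0.3, 0.0, 0.0]
```

The rigidity term tr(f^T L f) penalizes flow differences between neighboring points while leaving rigid translations unpenalized, which is what makes the recovered motion "as-rigid-as-possible".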
Related papers
- SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving [18.88208422580103]
Scene flow estimation predicts the 3D motion at each point in successive LiDAR scans.
Current state-of-the-art methods require annotated data to train scene flow networks.
We propose SeFlow, a self-supervised method that integrates efficient dynamic classification into a learning-based scene flow pipeline.
arXiv Detail & Related papers (2024-07-01T18:22:54Z)
- PointFlowHop: Green and Interpretable Scene Flow Estimation from Consecutive Point Clouds [49.7285297470392]
An efficient 3D scene flow estimation method called PointFlowHop is proposed in this work.
PointFlowHop takes two consecutive point clouds and determines the 3D flow vectors for every point in the first point cloud.
It decomposes the scene flow estimation task into a set of subtasks, including ego-motion compensation, object association and object-wise motion estimation.
arXiv Detail & Related papers (2023-02-27T23:06:01Z)
- SCOOP: Self-Supervised Correspondence and Optimization-Based Scene Flow [25.577386156273256]
Scene flow estimation is a long-standing problem in computer vision, where the goal is to find the 3D motion of a scene from its consecutive observations.
We introduce SCOOP, a new method for scene flow estimation that can be learned on a small amount of data without employing ground-truth flow supervision.
arXiv Detail & Related papers (2022-11-25T10:52:02Z)
- Unsupervised Learning of 3D Scene Flow with 3D Odometry Assistance [20.735976558587588]
Scene flow estimation is used in various applications such as autonomous driving, activity recognition, and virtual reality.
It is challenging to annotate scene flow with ground truth for real-world data.
We propose to use odometry information to assist the unsupervised learning of scene flow and use real-world LiDAR data to train our network.
arXiv Detail & Related papers (2022-09-11T21:53:43Z)
- CO^3: Cooperative Unsupervised 3D Representation Learning for Autonomous Driving [57.16921612272783]
We propose CO3, namely Cooperative Contrastive Learning and Contextual Shape Prediction, to learn 3D representations for outdoor-scene point clouds in an unsupervised manner.
We believe CO3 will facilitate understanding of LiDAR point clouds in outdoor scenes.
arXiv Detail & Related papers (2022-06-08T17:37:58Z)
- Learning Scene Flow in 3D Point Clouds with Noisy Pseudo Labels [71.11151016581806]
We propose a novel scene flow method that captures 3D motions from point clouds without relying on ground-truth scene flow annotations.
Our method not only outperforms state-of-the-art self-supervised approaches, but also outperforms some supervised approaches that use accurate ground-truth flows.
arXiv Detail & Related papers (2022-03-23T18:20:03Z)
- SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation [71.2856098776959]
Estimating 3D motions for point clouds is challenging, since a point cloud is unordered and its density is significantly non-uniform.
We propose a novel architecture named Sparse Convolution-Transformer Network (SCTN) that equips the sparse convolution with the transformer.
We show that the learned relation-based contextual information is rich and helpful for matching corresponding points, benefiting scene flow estimation.
arXiv Detail & Related papers (2021-05-10T15:16:14Z)
- Weakly Supervised Learning of Rigid 3D Scene Flow [81.37165332656612]
We propose a data-driven scene flow estimation algorithm exploiting the observation that many 3D scenes can be explained by a collection of agents moving as rigid bodies.
We showcase the effectiveness and generalization capacity of our method on four different autonomous driving datasets.
arXiv Detail & Related papers (2021-02-17T18:58:02Z)
- Adversarial Self-Supervised Scene Flow Estimation [15.278302535191866]
This work proposes a metric learning approach for self-supervised scene flow estimation.
We outline a benchmark for self-supervised scene flow estimation: the Scene Flow Sandbox.
arXiv Detail & Related papers (2020-11-01T16:37:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.