RCP: Recurrent Closest Point for Scene Flow Estimation on 3D Point Clouds
- URL: http://arxiv.org/abs/2205.11028v2
- Date: Tue, 24 May 2022 04:11:44 GMT
- Title: RCP: Recurrent Closest Point for Scene Flow Estimation on 3D Point Clouds
- Authors: Xiaodong Gu, Chengzhou Tang, Weihao Yuan, Zuozhuo Dai, Siyu Zhu, Ping Tan
- Abstract summary: 3D motion estimation including scene flow and point cloud registration has drawn increasing interest.
Recent methods employ deep neural networks to construct the cost volume for estimating accurate 3D flow.
We decompose the problem into two interlaced stages, where the 3D flows are optimized point-wise in the first stage and then globally regularized by a recurrent network in the second stage.
- Score: 44.034836961967144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D motion estimation including scene flow and point cloud registration has
drawn increasing interest. Inspired by 2D flow estimation, recent methods
employ deep neural networks to construct the cost volume for estimating
accurate 3D flow. However, these methods are limited by the fact that it is
difficult to define a search window on point clouds because of the irregular
data structure. In this paper, we avoid this irregularity with a simple yet
effective method. We decompose the problem into two interlaced stages, where the
3D flows are optimized point-wise in the first stage and then globally
regularized by a recurrent network in the second stage. Therefore, the
recurrent network receives only regular point-wise information as the
input. In the experiments, we evaluate the proposed method on both the 3D scene
flow estimation and the point cloud registration task. For 3D scene flow
estimation, we make comparisons on the widely used FlyingThings3D and
KITTI datasets. For point cloud registration, we follow previous works and
evaluate on data pairs from ModelNet40 with large pose changes and partial
overlap. The results show that our method outperforms previous methods
and achieves a new state-of-the-art performance on both 3D scene flow
estimation and point cloud registration, which demonstrates the superiority of
the proposed zero-order method on irregular point cloud data.
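To make the two-stage decomposition concrete, here is a minimal sketch assuming a Euclidean closest-point search for the point-wise stage and a simple k-NN smoothing loop standing in for the learned recurrent regularizer; `cKDTree`, `k`, and `iters` are illustrative choices, not details from the paper.

```python
# Minimal sketch of the two-stage idea: point-wise flow proposals from a
# closest-point search (stage 1), then a global regularization pass (stage 2).
# The paper learns stage 2 with a recurrent network; the k-NN averaging loop
# here is only an illustrative stand-in.
import numpy as np
from scipy.spatial import cKDTree

def pointwise_flow(src, dst):
    """Stage 1: propose, for each source point, the flow to its closest
    point in the target cloud -- no search window on the irregular data."""
    _, idx = cKDTree(dst).query(src)   # nearest target index per source point
    return dst[idx] - src              # (N, 3) per-point flow proposals

def regularize_flow(src, flow, k=8, iters=10):
    """Stage 2 stand-in: repeatedly average each point's flow over its k
    spatial neighbors, mimicking the role of the recurrent refinement."""
    _, nbrs = cKDTree(src).query(src, k=k)  # (N, k) neighbor indices in src
    for _ in range(iters):
        flow = flow[nbrs].mean(axis=1)      # smooth flow per neighborhood
    return flow

rng = np.random.default_rng(0)
src = rng.normal(size=(1024, 3))
dst = src + np.array([0.1, 0.0, 0.0])       # toy scene: a known rigid shift
flow = regularize_flow(src, pointwise_flow(src, dst))
print(np.abs(flow - [0.1, 0.0, 0.0]).max())  # small for this toy case
```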
Related papers
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, which boosts 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- DELFlow: Dense Efficient Learning of Scene Flow for Large-Scale Point Clouds [42.64433313672884]
We regularize raw points to a dense format by storing 3D coordinates in 2D grids.
Unlike the sampling operation commonly used in existing works, the dense 2D representation preserves most points.
We also present a novel warping projection technique to alleviate the information loss problem.
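As a rough illustration of this dense-grid idea, the sketch below scatters raw points into a 2D grid that stores their 3D coordinates via a range-image-style spherical projection; the projection and grid resolution are assumptions for illustration, not DELFlow's actual formulation.

```python
# Toy illustration of regularizing raw points into a dense 2D format by
# storing 3D coordinates in a 2D grid (a range-image-style projection).
# The spherical projection and grid size are assumptions for illustration.
import numpy as np

def points_to_grid(points, h=64, w=512):
    """Scatter (N, 3) points into an (h, w, 3) grid of xyz coordinates.
    Empty cells stay zero; colliding points keep the last write."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-9
    azimuth = np.arctan2(y, x)                         # [-pi, pi)
    elevation = np.arcsin(np.clip(z / r, -1.0, 1.0))   # [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((elevation + np.pi / 2) / np.pi * (h - 1)).astype(int)
    grid = np.zeros((h, w, 3), dtype=points.dtype)
    grid[v, u] = points                                # dense 2D layout of 3D coords
    return grid

pts = np.random.default_rng(1).normal(size=(8192, 3)).astype(np.float32)
grid = points_to_grid(pts)
print(grid.shape, (np.abs(grid).sum(axis=-1) > 0).mean())  # occupancy ratio
```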
arXiv Detail & Related papers (2023-08-08T16:37:24Z)
- PointFlowHop: Green and Interpretable Scene Flow Estimation from Consecutive Point Clouds [49.7285297470392]
An efficient 3D scene flow estimation method called PointFlowHop is proposed in this work.
PointFlowHop takes two consecutive point clouds and determines the 3D flow vectors for every point in the first point cloud.
It decomposes the scene flow estimation task into a set of subtasks, including ego-motion compensation, object association and object-wise motion estimation.
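The ego-motion compensation subtask can be pictured as fitting a single rigid transform to matched point pairs and subtracting that global motion. The sketch below uses the standard SVD-based (Kabsch) least-squares fit as a generic stand-in; it is not PointFlowHop's actual implementation.

```python
# Sketch of the ego-motion compensation idea: fit one rigid transform (R, t)
# to matched point pairs with the standard SVD (Kabsch) solution, then
# subtract that global motion so only object-wise motion remains.
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform so that dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(2)
src = rng.normal(size=(500, 3))
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                      # make it a proper rotation
dst = src @ R_true.T + np.array([0.5, -0.2, 0.1])
R, t = fit_rigid(src, dst)
residual = dst - (src @ R.T + t)            # ego-motion-compensated flow
print(np.abs(residual).max())               # ~0 for a purely rigid scene
```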
arXiv Detail & Related papers (2023-02-27T23:06:01Z)
- SCOOP: Self-Supervised Correspondence and Optimization-Based Scene Flow [25.577386156273256]
Scene flow estimation is a long-standing problem in computer vision, where the goal is to find the 3D motion of a scene from its consecutive observations.
We introduce SCOOP, a new method for scene flow estimation that can be learned on a small amount of data without employing ground-truth flow supervision.
arXiv Detail & Related papers (2022-11-25T10:52:02Z)
- 3D Scene Flow Estimation on Pseudo-LiDAR: Bridging the Gap on Estimating Point Motion [19.419030878019974]
3D scene flow characterizes how points at the current time step move to their positions at the next time step in 3D Euclidean space.
The stability of the predicted scene flow is improved by introducing the dense nature of 2D pixels into the 3D space.
Disparity consistency loss is proposed to achieve more effective unsupervised learning of 3D scene flow.
arXiv Detail & Related papers (2022-09-27T03:27:09Z)
- What Matters for 3D Scene Flow Network [44.02710380584977]
3D scene flow estimation from point clouds is a low-level 3D motion perception task in computer vision.
We propose a novel all-to-all flow embedding layer with backward reliability validation during the initial scene flow estimation.
Our proposed model surpasses all existing methods by at least 38.2% on the FlyingThings3D dataset and 24.7% on the KITTI Scene Flow dataset in terms of the EPE3D metric.
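For reference, EPE3D is the standard end-point error in 3D: the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal sketch assuming the common definition (per-dataset evaluation protocols may add masks or accuracy thresholds):

```python
# EPE3D: mean Euclidean end-point error between predicted and ground-truth
# 3D flow vectors (the standard definition used by scene flow benchmarks).
import numpy as np

def epe3d(flow_pred, flow_gt):
    """Mean L2 distance between (N, 3) predicted and ground-truth flows."""
    return float(np.linalg.norm(flow_pred - flow_gt, axis=1).mean())

rng = np.random.default_rng(3)
gt = rng.normal(size=(2048, 3)).astype(np.float32)
pred = gt + 0.01 * rng.normal(size=gt.shape).astype(np.float32)
print(f"EPE3D: {epe3d(pred, gt):.4f} m")   # small error for this toy case
```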
arXiv Detail & Related papers (2022-07-19T09:27:05Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner to eliminate kNN searches.
The proposed framework, namely PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
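A toy sketch of the underlying idea, replacing kNN grouping with global self-attention so that every point can attend to every other point; the single-head, randomly initialized projections below are purely illustrative and much simpler than PointAttN's actual architecture.

```python
# Sketch of replacing kNN grouping with attention: every point attends to
# every other point, so no local neighborhood search is needed. Single-head
# and unbatched, with random projections, purely for illustration.
import numpy as np

def global_point_attention(feats, d=32):
    """Plain softmax self-attention over an (N, C) point feature matrix."""
    rng = np.random.default_rng(5)
    c = feats.shape[1]
    Wq, Wk, Wv = (rng.normal(size=(c, d)) / np.sqrt(c) for _ in range(3))
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    logits = q @ k.T / np.sqrt(d)                      # (N, N) all-pairs scores
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ v                                 # (N, d) mixed features

feats = np.random.default_rng(6).normal(size=(256, 16))
print(global_point_attention(feats).shape)             # (256, 32)
```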
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
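For context, the Chamfer Distance loss mentioned above measures the average nearest-neighbor distance from each cloud to the other. A minimal numpy sketch of the common squared form (real training code uses a differentiable GPU implementation):

```python
# Chamfer Distance (CD) in its common squared form: average nearest-neighbor
# distance from each point set to the other, summed over both directions.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3)."""
    d_ab, _ = cKDTree(b).query(a)          # each point in a -> nearest in b
    d_ba, _ = cKDTree(a).query(b)          # each point in b -> nearest in a
    return float((d_ab ** 2).mean() + (d_ba ** 2).mean())

rng = np.random.default_rng(4)
full = rng.normal(size=(2048, 3))
partial = full[:1024]                      # a crude "incomplete" scan
print(chamfer_distance(partial, full))
```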
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation [71.2856098776959]
Estimating 3D motions for point clouds is challenging, since a point cloud is unordered and its density is significantly non-uniform.
We propose a novel architecture named Sparse Convolution-Transformer Network (SCTN) that equips the sparse convolution with the transformer.
We show that the learned relation-based contextual information is rich and helpful for matching corresponding points, benefiting scene flow estimation.
arXiv Detail & Related papers (2021-05-10T15:16:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.