Bi-PointFlowNet: Bidirectional Learning for Point Cloud Based Scene Flow Estimation
- URL: http://arxiv.org/abs/2207.07522v1
- Date: Fri, 15 Jul 2022 15:14:53 GMT
- Title: Bi-PointFlowNet: Bidirectional Learning for Point Cloud Based Scene Flow Estimation
- Authors: Wencan Cheng and Jong Hwan Ko
- Abstract summary: This paper presents a novel scene flow estimation architecture using bidirectional flow embedding layers.
The proposed bidirectional layer learns features along both forward and backward directions, enhancing the estimation performance.
In addition, hierarchical feature extraction and warping improve the performance and reduce computational overhead.
- Score: 3.1869033681682124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scene flow estimation, which extracts point-wise motion between scenes, is becoming a crucial task in many computer vision applications. However, existing estimation methods utilize only unidirectional features, restricting their accuracy and generality. This paper presents a novel scene flow estimation architecture using bidirectional flow embedding layers. The proposed bidirectional layer learns features along both forward and backward directions, enhancing the estimation performance. In addition, hierarchical feature extraction and warping improve the performance and reduce computational overhead. Experimental results show that the proposed architecture achieves a new state of the art, outperforming other approaches by a large margin on both the FlyingThings3D and KITTI benchmarks. Code is available at https://github.com/cwc1260/BiFlow.
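As a rough illustration of what a bidirectional flow embedding layer computes, the PyTorch-style sketch below gathers k-nearest-neighbour correlation features in both directions (frame 1 → frame 2 and frame 2 → frame 1) and fuses them with a shared MLP. This is an assumption-based sketch for intuition only, not the released BiFlow code; the class name, the MLP layout, and k are illustrative choices.

```python
import torch
import torch.nn as nn


class BidirectionalFlowEmbedding(nn.Module):
    """Illustrative sketch of a bidirectional flow embedding layer.

    Each point aggregates correlation features from its k nearest neighbours
    in the *other* frame, in both the forward (P1 -> P2) and backward
    (P2 -> P1) directions, and fuses them with a shared MLP.
    Hypothetical sketch, not the authors' implementation.
    """

    def __init__(self, feat_dim: int, out_dim: int, k: int = 16):
        super().__init__()
        self.k = k
        # input: [own feature, neighbour feature, relative offset]
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + 3, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def _embed(self, xyz_q, feat_q, xyz_s, feat_s):
        # xyz_*: (B, N, 3), feat_*: (B, N, C)
        dist = torch.cdist(xyz_q, xyz_s)                            # (B, Nq, Ns)
        knn_idx = dist.topk(self.k, dim=-1, largest=False).indices  # (B, Nq, k)
        B = xyz_q.shape[0]
        batch = torch.arange(B, device=xyz_q.device).view(B, 1, 1)
        nn_xyz = xyz_s[batch, knn_idx]                              # (B, Nq, k, 3)
        nn_feat = feat_s[batch, knn_idx]                            # (B, Nq, k, C)
        rel = nn_xyz - xyz_q.unsqueeze(2)                           # relative offsets
        own = feat_q.unsqueeze(2).expand(-1, -1, self.k, -1)
        corr = torch.cat([own, nn_feat, rel], dim=-1)
        return self.mlp(corr).max(dim=2).values                     # (B, Nq, out_dim)

    def forward(self, xyz1, feat1, xyz2, feat2):
        fwd = self._embed(xyz1, feat1, xyz2, feat2)  # frame 1 attends to frame 2
        bwd = self._embed(xyz2, feat2, xyz1, feat1)  # frame 2 attends to frame 1
        return fwd, bwd
```

A forward pass returns one embedding per frame, which a downstream decoder could use for forward and backward flow prediction at each level of a hierarchical pipeline.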
Related papers
- RMS-FlowNet++: Efficient and Robust Multi-Scale Scene Flow Estimation for Large-Scale Point Clouds [15.138542932078916]
RMS-FlowNet++ is a novel end-to-end learning-based architecture for accurate and efficient scene flow estimation.
Our architecture provides faster predictions than state-of-the-art methods, avoids high memory requirements, and enables efficient scene flow estimation on dense point clouds of more than 250K points at once.
arXiv Detail & Related papers (2024-07-01T09:51:17Z)
- Self-Supervised Monocular Depth Estimation by Direction-aware Cumulative Convolution Network [80.19054069988559]
We find that self-supervised monocular depth estimation shows direction sensitivity and environmental dependency.
We propose a new Direction-aware Cumulative Convolution Network (DaCCN), which improves the depth representation in two aspects.
Experiments show that our method achieves significant improvements on three widely used benchmarks.
arXiv Detail & Related papers (2023-08-10T14:32:18Z)
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
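A toy sketch of that two-branch layout, with a confidence-based early exit in the classification branch, is given below. It is a hypothetical illustration only: the module names, the concatenation-based mixing between branches, and the exit rule are all assumptions, not the Dyn-Perceiver architecture itself.

```python
import torch
import torch.nn as nn


class TwoBranchEarlyExit(nn.Module):
    """Toy two-branch classifier with an early exit (illustrative only).

    A feature branch extracts image features; a separate classification
    branch updates a latent code and hosts the exit classifiers, so
    low-level features never need to be linearly separable themselves.
    """

    def __init__(self, num_classes: int = 10, latent_dim: int = 64):
        super().__init__()
        self.feat1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.feat2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.latent0 = nn.Parameter(torch.zeros(1, latent_dim))
        self.mix1 = nn.Linear(latent_dim + 32, latent_dim)
        self.mix2 = nn.Linear(latent_dim + 64, latent_dim)
        self.exit1 = nn.Linear(latent_dim, num_classes)  # early exit
        self.exit2 = nn.Linear(latent_dim, num_classes)  # final exit

    def forward(self, x, exit_threshold: float = 0.9):
        z = self.latent0.expand(x.size(0), -1)
        f1 = self.feat1(x)
        z = torch.relu(self.mix1(torch.cat([z, f1.mean(dim=(2, 3))], dim=1)))
        logits1 = self.exit1(z)
        # exit only if every sample in the batch is confident (simplification)
        if logits1.softmax(dim=1).max(dim=1).values.min() > exit_threshold:
            return logits1
        f2 = self.feat2(f1)
        z = torch.relu(self.mix2(torch.cat([z, f2.mean(dim=(2, 3))], dim=1)))
        return self.exit2(z)
```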
arXiv Detail & Related papers (2023-06-20T03:00:22Z)
- PointFlowHop: Green and Interpretable Scene Flow Estimation from Consecutive Point Clouds [49.7285297470392]
An efficient 3D scene flow estimation method called PointFlowHop is proposed in this work.
PointFlowHop takes two consecutive point clouds and determines the 3D flow vectors for every point in the first point cloud.
It decomposes the scene flow estimation task into a set of subtasks, including ego-motion compensation, object association and object-wise motion estimation.
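That decomposition reads naturally as a small three-stage pipeline. The sketch below is a hypothetical illustration of the structure, not PointFlowHop's actual implementation: the rigid-fit helper, the assumed correspondences, and the object labels are placeholders for the steps the paper names.

```python
import numpy as np


def fit_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform src -> dst (Kabsch); both (N, 3)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t


def decomposed_scene_flow(pc1, pc2, matches, labels):
    """Hypothetical three-stage decomposition of scene flow estimation.

    pc1, pc2 : (N, 3) consecutive point clouds
    matches  : index into pc2 for each point of pc1 (assumed given)
    labels   : per-point object labels for pc1 (assumed given by association)
    """
    # 1) ego-motion compensation: one global rigid transform
    R_ego, t_ego = fit_rigid_transform(pc1, pc2[matches])
    pc1_comp = pc1 @ R_ego.T + t_ego

    # 2) + 3) object association and object-wise motion estimation
    flow = np.zeros_like(pc1)
    for obj in np.unique(labels):
        m = labels == obj
        R_o, t_o = fit_rigid_transform(pc1_comp[m], pc2[matches][m])
        flow[m] = (pc1_comp[m] @ R_o.T + t_o) - pc1[m]
    return flow
```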
arXiv Detail & Related papers (2023-02-27T23:06:01Z)
- What Matters for 3D Scene Flow Network [44.02710380584977]
3D scene flow estimation from point clouds is a low-level 3D motion perception task in computer vision.
We propose a novel all-to-all flow embedding layer with backward reliability validation during the initial scene flow estimation.
Our proposed model surpasses all existing methods by at least 38.2% on FlyingThings3D dataset and 24.7% on KITTI Scene Flow dataset for EPE3D metric.
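Backward reliability validation can be thought of as a forward-backward consistency check on correspondences. The snippet below is a minimal, hypothetical sketch of that idea, not the paper's all-to-all flow embedding layer: a point is marked reliable only if its nearest neighbour in the second frame maps back close to the original point.

```python
import torch


def backward_reliability_mask(pc1, pc2, thresh: float = 0.5):
    """Hypothetical forward-backward consistency check between two clouds.

    pc1, pc2 : (N, 3) and (M, 3) point clouds.
    Returns a boolean mask over pc1 marking points whose nearest neighbour
    in pc2 points back to (roughly) themselves.
    """
    d12 = torch.cdist(pc1, pc2)        # (N, M) pairwise distances
    fwd = d12.argmin(dim=1)            # nearest pc2 index for each pc1 point
    bwd = d12.argmin(dim=0)            # nearest pc1 index for each pc2 point
    round_trip = pc1[bwd[fwd]]         # pc1 -> pc2 -> pc1
    return (round_trip - pc1).norm(dim=1) < thresh
```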
arXiv Detail & Related papers (2022-07-19T09:27:05Z)
- FlowNAS: Neural Architecture Search for Optical Flow Estimation [65.44079917247369]
We propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task.
Experimental results show that the discovered architecture with the weights inherited from the super-network achieves 4.67% F1-all error on KITTI.
arXiv Detail & Related papers (2022-07-04T09:05:25Z)
- RMS-FlowNet: Efficient and Robust Multi-Scale Scene Flow Estimation for Large-Scale Point Clouds [13.62166506575236]
RMS-FlowNet is a novel end-to-end learning-based architecture for accurate and efficient scene flow estimation.
We show that our model generalizes competitively to the real-world scenes of the KITTI dataset without fine-tuning.
arXiv Detail & Related papers (2022-04-01T11:02:58Z)
- Densely Nested Top-Down Flows for Salient Object Detection [137.74130900326833]
This paper revisits the role of top-down modeling in salient object detection.
It designs a novel densely nested top-down flows (DNTDF)-based framework.
In every stage of DNTDF, features from higher levels are read in via progressive compression shortcut paths (PCSP).
arXiv Detail & Related papers (2021-02-18T03:14:02Z)
- FlowStep3D: Model Unrolling for Self-Supervised Scene Flow Estimation [87.74617110803189]
Estimating the 3D motion of points in a scene, known as scene flow, is a core problem in computer vision.
We present a recurrent architecture that learns a single step of an unrolled iterative alignment procedure for refining scene flow predictions.
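The model-unrolling idea can be pictured as repeating one learned refinement step at inference time. The loop below is a generic, hypothetical illustration of that pattern; the per-step update here is a plain nearest-neighbour correction, not FlowStep3D's learned recurrent unit.

```python
import torch


def refine_flow(pc1, pc2, n_iters: int = 4):
    """Unrolled iterative alignment sketch: repeat one simple update step.

    pc1, pc2 : (N, 3) and (M, 3) point clouds. In FlowStep3D the per-step
    update is learned; here a nearest-neighbour correction stands in for it.
    """
    flow = torch.zeros_like(pc1)
    for _ in range(n_iters):
        warped = pc1 + flow                              # warp frame 1 by current flow
        nn_idx = torch.cdist(warped, pc2).argmin(dim=1)  # closest points in frame 2
        target = pc2[nn_idx]
        flow = flow + 0.5 * (target - warped)            # damped step toward matches
    return flow
```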
arXiv Detail & Related papers (2020-11-19T23:23:48Z)
- Hierarchical Attention Learning of Scene Flow in 3D Point Clouds [28.59260783047209]
This paper studies the problem of scene flow estimation from two consecutive 3D point clouds.
A novel hierarchical neural network with double attention is proposed for learning the correlation of point features in adjacent frames.
Experiments show that the proposed network outperforms the state of the art in 3D scene flow estimation.
arXiv Detail & Related papers (2020-10-12T14:56:08Z)