ScaleRAFT: Cross-Scale Recurrent All-Pairs Field Transforms for 3D Motion Estimation
- URL: http://arxiv.org/abs/2407.09797v1
- Date: Sat, 13 Jul 2024 07:58:48 GMT
- Title: ScaleRAFT: Cross-Scale Recurrent All-Pairs Field Transforms for 3D Motion Estimation
- Authors: Han Ling, Quansen Sun
- Abstract summary: We propose a normalized scene flow framework, ScaleRAFT, based on cross-scale matching.
Our method achieves the best foreground performance to date on motion estimation tasks in driving scenarios.
- Score: 15.629496237910999
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, we study the problem of estimating the 3D motion of dense pixels from consecutive image pairs. Most previous methods build on mature optical flow baselines and depth values, projecting 2D motion on the pixel plane into 3D space and further optimizing the result by combining a depth-motion branch with other sub-modules. This stacked framework can neither exploit the complementarity between optical flow and the other modules nor escape the dependence on accurate depth information. To address these challenges, we propose a normalized scene flow framework, ScaleRAFT, based on cross-scale matching. Its core idea is to match objects between two frames directly in 3D scale space, i.e. to match features at the correct location and scale. Unlike previous methods, ScaleRAFT integrates optical flow and depth motion estimation into a unified architecture, allowing the two to mutually reinforce each other. Moreover, ScaleRAFT estimates motion in the depth direction from feature matching alone, breaking the dependence on accurate depth information. Experimentally, our method achieves the best foreground performance to date on motion estimation tasks in driving scenarios and significantly improves various downstream 3D tasks.
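A hedged sketch to make cross-scale matching concrete: correlate frame-1 features against frame-2 features resampled at several scales, so that a correlation peak at a scale other than 1 indicates motion in the depth direction. All names (`cross_scale_correlation`, `feat1`, `feat2`, `scales`) are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def cross_scale_correlation(feat1, feat2, scales=(0.5, 1.0, 2.0)):
    """Correlate frame-1 features with frame-2 features resampled at
    several scales. feat1, feat2: (B, C, H, W). Returns (B, S, H, W)
    same-pixel matching scores (a full all-pairs volume, as in RAFT,
    would be (B, S, H, W, H, W); this is trimmed for brevity)."""
    B, C, H, W = feat1.shape
    corrs = []
    for s in scales:
        # Resample frame-2 features to simulate an object scale change,
        # then bring them back to the reference resolution.
        f2s = F.interpolate(feat2, scale_factor=s, mode="bilinear",
                            align_corners=False)
        f2s = F.interpolate(f2s, size=(H, W), mode="bilinear",
                            align_corners=False)
        # Per-pixel dot product, normalized by sqrt(C) as in RAFT.
        corrs.append((feat1 * f2s).sum(dim=1) / C ** 0.5)
    return torch.stack(corrs, dim=1)
```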
Related papers
- Motion-aware 3D Gaussian Splatting for Efficient Dynamic Scene Reconstruction [89.53963284958037]
We propose a novel motion-aware enhancement framework for dynamic scene reconstruction.
Specifically, we first establish a correspondence between 3D Gaussian movements and pixel-level flow.
For the prevalent deformation-based paradigm, which poses a harder optimization problem, a transient-aware deformation auxiliary module is proposed.
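The correspondence between a Gaussian's 3D movement and pixel-level flow reduces to projecting its center at two timesteps; a minimal pinhole sketch, with `gaussian_flow` and the intrinsics below being illustrative assumptions rather than the paper's code:

```python
import numpy as np

def gaussian_flow(center_t0, center_t1, K):
    """2D flow induced by a Gaussian center moving from center_t0 to
    center_t1 (3-vectors in camera coordinates); K: 3x3 intrinsics."""
    def project(X):
        x = K @ X
        return x[:2] / x[2]  # perspective divide
    return project(center_t1) - project(center_t0)

K = np.array([[720.0, 0.0, 320.0],
              [0.0, 720.0, 240.0],
              [0.0, 0.0, 1.0]])
# A point moving 0.1 m right and 0.2 m closer induces this pixel flow:
print(gaussian_flow(np.array([0.0, 0.0, 5.0]),
                    np.array([0.1, 0.0, 4.8]), K))
```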
arXiv Detail & Related papers (2024-03-18T03:46:26Z)
- DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and Depth from Monocular Videos [76.01906393673897]
We propose a self-supervised method to jointly learn 3D motion and depth from monocular videos.
Our system contains a depth estimation module to predict depth, and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion.
Our model delivers superior performance in all evaluated settings.
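A minimal sketch of the decomposition this describes, assuming per-pixel 3D points from the depth module; names are illustrative, not DO3D's interface:

```python
import numpy as np

def total_3d_motion(P, R_ego, t_ego, obj_motion):
    """P: (N, 3) 3D points; (R_ego, t_ego): camera ego-motion as a
    rotation matrix and translation; obj_motion: (N, 3) residual
    per-point object motion. Returns the full decomposed 3D motion."""
    ego_part = P @ R_ego.T + t_ego - P  # motion induced by the camera
    return ego_part + obj_motion        # ego-motion + object motion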
arXiv Detail & Related papers (2024-03-09T12:22:46Z)
- UFD-PRiME: Unsupervised Joint Learning of Optical Flow and Stereo Depth through Pixel-Level Rigid Motion Estimation [4.445751695675388]
Both optical flow and stereo disparity are forms of image correspondence and can therefore benefit from joint training.
We design a first network that estimates flow and disparity jointly and is trained without supervision.
A second network, trained with optical flow from the first as pseudo-labels, takes disparities from the first network, estimates 3D rigid motion at every pixel, and reconstructs optical flow again.
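A hedged sketch of that reconstruction step under a rectified pinhole stereo model; for brevity a single global `(R, t)` stands in for the per-pixel rigid motion field, and all names are illustrative:

```python
import numpy as np

def flow_from_rigid_motion(disparity, R, t, K, baseline):
    """Optical flow from disparity plus rigid motion: backproject each
    pixel with its disparity-derived depth, move it by (R, t), and
    reproject; the displacement is the reconstructed flow (H, W, 2)."""
    H, W = disparity.shape
    depth = K[0, 0] * baseline / np.maximum(disparity, 1e-6)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    P = (pix @ np.linalg.inv(K).T) * depth[..., None]  # backproject
    P2 = P @ R.T + t                                   # rigid motion
    proj = P2 @ K.T
    uv2 = proj[..., :2] / proj[..., 2:3]               # reproject
    return uv2 - np.stack([u, v], axis=-1)
```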
arXiv Detail & Related papers (2023-10-07T07:08:25Z)
- OPA-3D: Occlusion-Aware Pixel-Wise Aggregation for Monocular 3D Object Detection [51.153003057515754]
OPA-3D is a single-stage, end-to-end, Occlusion-Aware Pixel-Wise Aggregation network.
It jointly estimates dense scene depth with depth-bounding box residuals and object bounding boxes.
It outperforms state-of-the-art methods on the main Car category.
arXiv Detail & Related papers (2022-11-02T14:19:13Z)
- DeepMLE: A Robust Deep Maximum Likelihood Estimator for Two-view Structure from Motion [9.294501649791016]
Two-view structure from motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM (vSLAM).
We formulate the two-view SfM problem as a maximum likelihood estimation (MLE) and solve it with the proposed framework, denoted as DeepMLE.
Our method significantly outperforms the state-of-the-art end-to-end two-view SfM approaches in accuracy and generalization capability.
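In our notation (not necessarily the paper's), the two-view MLE objective reads roughly:

$$\hat{R},\hat{t},\hat{d} = \arg\max_{R,\,t,\,d}\; \sum_i \log p\!\left(x_i' - \pi\!\big(K\,(R\, d_i K^{-1}\tilde{x}_i + t)\big)\right),$$

where $\tilde{x}_i$ is pixel $x_i$ in homogeneous coordinates, $d_i$ its depth, $\pi$ the perspective division, and $p$ a learned likelihood over reprojection residuals.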
arXiv Detail & Related papers (2022-10-11T15:07:25Z)
- DiffPoseNet: Direct Differentiable Camera Pose Estimation [11.941057800943653]
We introduce NFlowNet, a network for normal flow estimation that is used to enforce robust and direct constraints.
We perform extensive qualitative and quantitative evaluation of the proposed DiffPoseNet's sensitivity to noise and its generalization across datasets.
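For context, normal flow is the component of optical flow along the image gradient, which brightness constancy determines directly (it sidesteps the aperture problem); a generic NumPy sketch of the definition, not NFlowNet itself:

```python
import numpy as np

def normal_flow(I1, I2):
    """Normal flow u_n = -I_t * grad(I) / |grad(I)|^2 from two
    grayscale frames; returns an (H, W, 2) field in (x, y) order."""
    Iy, Ix = np.gradient(I1.astype(np.float64))      # image gradients
    It = I2.astype(np.float64) - I1.astype(np.float64)  # temporal diff
    mag2 = Ix**2 + Iy**2 + 1e-8                      # avoid div by zero
    return -It[..., None] * np.stack([Ix, Iy], axis=-1) / mag2[..., None]
```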
arXiv Detail & Related papers (2022-03-21T17:54:30Z)
- Deep Two-View Structure-from-Motion Revisited [83.93809929963969]
Two-view structure-from-motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM.
We propose to revisit the problem of deep two-view SfM by leveraging the well-posedness of the classic pipeline.
Our method consists of 1) an optical flow estimation network that predicts dense correspondences between two frames; 2) a normalized pose estimation module that computes relative camera poses from the 2D optical flow correspondences; and 3) a scale-invariant depth estimation network that leverages epipolar geometry to reduce the search space, refine the dense correspondences, and estimate relative depth maps.
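A hedged sketch of step 2, using OpenCV's five-point solver as a generic stand-in for the paper's normalized pose module; `pts1`/`pts2` would come from the optical flow network, and all names are illustrative:

```python
import cv2
import numpy as np

def pose_from_flow(pts1, pts2, K):
    """Relative camera pose from 2D flow correspondences.
    pts1, pts2: (N, 2) float pixel arrays; K: 3x3 intrinsics.
    Returns R (3x3) and t (3x1, unit norm: scale is unobservable)."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```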
arXiv Detail & Related papers (2021-04-01T15:31:20Z)
- Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency [114.02182755620784]
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
arXiv Detail & Related papers (2021-02-04T14:26:42Z)
- Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
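Parameterizing a segment by a 3D rigid transform is typically a least-squares (Kabsch/Procrustes) fit between the object's 3D points in the two frames, which depth plus flow provide; a generic solver sketch, not the paper's module:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares R, t with Q ~ P @ R.T + t for (N, 3) point sets
    (Kabsch algorithm, with a reflection guard on the rotation)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP
```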
arXiv Detail & Related papers (2021-01-11T04:20:30Z)