DEFLOW: Self-supervised 3D Motion Estimation of Debris Flow
- URL: http://arxiv.org/abs/2304.02569v1
- Date: Wed, 5 Apr 2023 16:40:14 GMT
- Title: DEFLOW: Self-supervised 3D Motion Estimation of Debris Flow
- Authors: Liyuan Zhu, Yuru Jia, Shengyu Huang, Nicholas Meyer, Andreas Wieser,
Konrad Schindler, Jordan Aaron
- Abstract summary: We propose DEFLOW, a model for 3D motion estimation of debris flows.
We adopt a novel multi-level sensor fusion architecture and self-supervision to incorporate the inductive biases of the scene.
Our model achieves state-of-the-art optical flow and depth estimation on our dataset, and fully automates the motion estimation for debris flows.
- Score: 19.240172015210586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing work on scene flow estimation focuses on autonomous driving and
mobile robotics, while automated solutions are lacking for motion in nature,
such as that exhibited by debris flows. We propose DEFLOW, a model for 3D
motion estimation of debris flows, together with a newly captured dataset. We
adopt a novel multi-level sensor fusion architecture and self-supervision to
incorporate the inductive biases of the scene. We further adopt a multi-frame
temporal processing module to enable flow speed estimation over time. Our model
achieves state-of-the-art optical flow and depth estimation on our dataset, and
fully automates the motion estimation for debris flows. The source code and
dataset are available on the project page.
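As a rough illustration of how a multi-frame temporal processing module of this kind can be wired up, here is a minimal PyTorch sketch that fuses per-frame features with a convolutional GRU and regresses a per-pixel flow field. All module and parameter names are assumptions for illustration, not the authors' DEFLOW implementation.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell over feature maps of shape (B, C, H, W)."""
    def __init__(self, channels: int):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_new

class TemporalFlowHead(nn.Module):
    """Fuses a sequence of per-frame features, then regresses per-pixel flow."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.cell = ConvGRUCell(channels)
        self.flow = nn.Conv2d(channels, 2, 3, padding=1)  # (u, v) per pixel

    def forward(self, feats):  # feats: (B, T, C, H, W), T frames
        hidden = torch.zeros_like(feats[:, 0])
        for i in range(feats.shape[1]):
            hidden = self.cell(feats[:, i], hidden)
        # flow is in pixels/frame; dividing by the frame interval yields speed
        return self.flow(hidden)

flow = TemporalFlowHead()(torch.randn(1, 4, 64, 32, 32))  # -> (1, 2, 32, 32)
```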
Related papers
- Let Occ Flow: Self-Supervised 3D Occupancy Flow Prediction [14.866463843514156]
Let Occ Flow is the first self-supervised work for joint 3D occupancy and occupancy flow prediction using only camera inputs.
Our approach incorporates a novel attention-based temporal fusion module to capture dynamic object dependencies.
Our method extends differentiable rendering to 3D volumetric flow fields.
arXiv Detail & Related papers (2024-07-10T12:20:11Z)
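A minimal sketch of what an attention-based temporal fusion module of the kind mentioned above can look like, assuming flattened voxel or BEV tokens as input; names and shapes are illustrative, not the Let Occ Flow implementation.

```python
import torch
import torch.nn as nn

class AttentionTemporalFusion(nn.Module):
    """Current-frame tokens attend over tokens aggregated from past frames."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, current, history):
        # current: (B, N, C) tokens of the current frame (e.g. flattened voxels)
        # history: (B, T*N, C) tokens stacked from T past frames
        fused, _ = self.attn(query=current, key=history, value=history)
        return self.norm(current + fused)  # residual connection

fused = AttentionTemporalFusion()(torch.randn(2, 100, 128), torch.randn(2, 300, 128))
```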
- DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and Depth from Monocular Videos [76.01906393673897]
We propose a self-supervised method to jointly learn 3D motion and depth from monocular videos.
Our system contains a depth estimation module and a new decomposed object-wise 3D motion (DO3D) estimation module that predicts ego-motion and per-object 3D motion.
Our model delivers superior performance in all evaluated settings.
arXiv Detail & Related papers (2024-03-09T12:22:46Z)
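A minimal sketch of the decomposition idea above, assuming points back-projected from predicted depth, a predicted rigid ego-motion, and object motion restricted to dynamic-object masks; all names are hypothetical, not the DO3D code.

```python
import torch

def compose_motion(points, R, t, obj_motion, obj_mask):
    """points: (N, 3) back-projected from predicted depth.
    R: (3, 3), t: (3,): predicted rigid ego-motion (frame t -> t+1).
    obj_motion: (N, 3) per-point object motion; obj_mask: (N,) dynamic mask."""
    ego_flow = points @ R.T + t - points      # motion explained by the camera
    return ego_flow + obj_mask[:, None].float() * obj_motion

pts = torch.randn(1000, 3)
flow = compose_motion(pts, torch.eye(3), torch.tensor([0.1, 0.0, 0.0]),
                      0.05 * torch.randn(1000, 3), torch.rand(1000) > 0.9)
```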
- DetFlowTrack: 3D Multi-object Tracking based on Simultaneous Optimization of Object Detection and Scene Flow Estimation [23.305159598648924]
We propose a 3D MOT framework based on simultaneous optimization of object detection and scene flow estimation.
To obtain more accurate scene flow labels, especially for motion with rotation, a box-transformation-based method for computing scene flow ground truth is proposed (a sketch follows below).
Experimental results on the KITTI MOT dataset show performance competitive with the state of the art and robustness under extreme rotational motion.
arXiv Detail & Related papers (2022-03-04T07:06:47Z)
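The box-transformation ground-truth idea can be sketched as follows: points inside a tracked box are carried along by the box's rigid motion between frames, which stays exact even under rotation. This is an illustrative reconstruction, not the paper's code.

```python
import math
import torch

def box_scene_flow(points, R1, t1, R2, t2):
    """points: (N, 3) inside the box at frame t.
    (R1, t1): box pose at frame t; (R2, t2): box pose at frame t+1."""
    local = (points - t1) @ R1        # world -> box coordinates (applies R1^T)
    moved = local @ R2.T + t2         # box -> world at frame t+1
    return moved - points             # per-point scene flow label

def yaw(angle: float) -> torch.Tensor:
    c, s = math.cos(angle), math.sin(angle)
    return torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

pts = torch.randn(100, 3)
flow = box_scene_flow(pts, yaw(0.0), torch.zeros(3),
                      yaw(0.2), torch.tensor([1.0, 0.5, 0.0]))
```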
- Occlusion Guided Self-supervised Scene Flow Estimation on 3D Point Clouds [4.518012967046983]
Understanding the flow in 3D space of sparsely sampled points between two consecutive frames is the cornerstone of modern geometry-driven systems.
This work presents a new self-supervised training method and an architecture for the 3D scene flow estimation under occlusions.
arXiv Detail & Related papers (2021-04-10T09:55:19Z)
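One common way to realize occlusion-aware self-supervision, sketched here under the assumption of a predicted per-point visibility score, is to mask a nearest-neighbour (Chamfer-style) loss so that occluded points, which have no correspondence, are down-weighted. This is a generic sketch, not the paper's exact loss.

```python
import torch

def occlusion_masked_nn_loss(src, flow, tgt, visible):
    """src: (N, 3) source cloud; flow: (N, 3) predicted flow; tgt: (M, 3)
    target cloud; visible: (N,) predicted probability of being non-occluded."""
    warped = src + flow                          # source cloud moved by the flow
    nn_dist = torch.cdist(warped, tgt).min(dim=1).values
    # occluded points have no counterpart in the target, so down-weight them
    return (visible * nn_dist).sum() / visible.sum().clamp(min=1.0)

loss = occlusion_masked_nn_loss(torch.randn(512, 3), 0.01 * torch.randn(512, 3),
                                torch.randn(512, 3), torch.rand(512))
```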
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Weakly Supervised Learning of Rigid 3D Scene Flow [81.37165332656612]
We propose a data-driven scene flow estimation algorithm exploiting the observation that many 3D scenes can be explained by a collection of agents moving as rigid bodies.
We showcase the effectiveness and generalization capacity of our method on four different autonomous driving datasets.
arXiv Detail & Related papers (2021-02-17T18:58:02Z)
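The rigid-body observation above admits a closed-form building block: given corresponding points of one agent in two frames, the best-fit rigid motion follows from the Kabsch/Procrustes solution, and its induced flow can replace the raw per-point predictions. A hedged sketch of that building block, not the authors' pipeline:

```python
import torch

def kabsch_rigid_flow(p, q):
    """p, q: (N, 3) corresponding points of one rigid agent at frames t, t+1.
    Returns the flow induced by the best-fit rigid transform q ~ R p + t."""
    pc, qc = p.mean(0), q.mean(0)
    H = (p - pc).T @ (q - qc)                            # 3x3 cross-covariance
    U, _, Vt = torch.linalg.svd(H)
    d = torch.sign(torch.linalg.det(Vt.T @ U.T)).item()  # avoid reflections
    R = Vt.T @ torch.diag(torch.tensor([1.0, 1.0, d])) @ U.T
    t = qc - R @ pc
    return p @ R.T + t - p                               # rigidified flow

p = torch.randn(200, 3)
flow = kabsch_rigid_flow(p, p + torch.tensor([0.5, 0.0, 0.0]))  # ~constant flow
```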
- Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency [114.02182755620784]
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
arXiv Detail & Related papers (2021-02-04T14:26:42Z)
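Projection consistency of this kind is typically realized as a photometric reprojection loss: predicted depth and 6-DoF motion are judged by how well the next frame, warped into the current view, reproduces the current image, with per-instance masks assigning each object its own transform. Below is a generic single-transform sketch; the intrinsics K and the combined transform T_total are assumed inputs, not the paper's API.

```python
import torch
import torch.nn.functional as F

def reprojection_loss(img_t, img_t1, depth, K, T_total):
    """img_*: (1, 3, H, W) frames; depth: (1, 1, H, W) predicted for frame t;
    K: (3, 3) intrinsics; T_total: (4, 4) combined ego + object motion."""
    _, _, H, W = img_t.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(3, -1)
    cam = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)   # back-project
    cam = torch.cat([cam, torch.ones(1, cam.shape[1])])      # homogeneous
    proj = K @ (T_total @ cam)[:3]                           # into frame t+1
    uv = proj[:2] / proj[2].clamp(min=1e-6)
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,             # normalize to [-1, 1]
                        uv[1] / (H - 1) * 2 - 1], dim=-1).reshape(1, H, W, 2)
    warped = F.grid_sample(img_t1, grid, align_corners=True)
    return F.l1_loss(warped, img_t)

loss = reprojection_loss(torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32),
                         torch.ones(1, 1, 32, 32), torch.eye(3), torch.eye(4))
```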
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
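The shared-backbone multi-task idea above can be sketched as one feature extractor over a rasterized LiDAR-plus-map input feeding lightweight heads for detection, intention, and trajectory, so forecasting adds little extra compute. Channel counts and head layouts below are invented for illustration, not IntentNet's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskPerception(nn.Module):
    def __init__(self, in_ch: int = 20, num_intents: int = 8, horizon: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(                   # shared across tasks
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.det_head = nn.Conv2d(128, 1 + 6, 1)         # objectness + box params
        self.intent_head = nn.Conv2d(128, num_intents, 1)
        self.traj_head = nn.Conv2d(128, 2 * horizon, 1)  # (x, y) per future step

    def forward(self, bev):  # bev: (B, C, H, W) rasterized LiDAR + map channels
        f = self.backbone(bev)
        return self.det_head(f), self.intent_head(f), self.traj_head(f)

det, intent, traj = MultiTaskPerception()(torch.randn(1, 20, 128, 128))
```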
- Do not trust the neighbors! Adversarial Metric Learning for Self-Supervised Scene Flow Estimation [0.0]
Scene flow is the task of estimating 3D motion vectors for the individual points of a dynamic 3D scene.
We propose a 3D scene flow benchmark and a novel self-supervised setup for training flow models.
We find that our setup is able to maintain motion coherence and preserve local geometries, which many self-supervised baselines fail to do.
arXiv Detail & Related papers (2020-11-01T17:41:32Z)
- Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion [63.18340058854517]
We present an alternative method for end-to-end scene flow learning by joint estimation of non-rigid residual flow and ego-motion flow for dynamic 3D scenes.
We extend the supervised framework with self-supervisory signals based on the temporal consistency property of a point cloud sequence.
arXiv Detail & Related papers (2020-09-22T11:39:19Z)
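A minimal sketch of the decomposition above and of a temporal-consistency signal, with all function names and the flow-composition notation assumed rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def decompose_flow(points, T_ego, residual):
    """points: (N, 3); T_ego: (4, 4) rigid ego-motion; residual: (N, 3)."""
    R, t = T_ego[:3, :3], T_ego[:3, 3]
    ego_flow = points @ R.T + t - points      # flow explained by sensor motion
    return ego_flow + residual                # non-rigid residual on top

def temporal_consistency(flow_01, flow_12_at_warped, flow_02):
    """Composing flow t->t+1 with flow t+1->t+2 (sampled at the warped
    positions) should agree with a direct t->t+2 prediction."""
    return F.smooth_l1_loss(flow_01 + flow_12_at_warped, flow_02)

pts = torch.randn(2048, 3)
flow = decompose_flow(pts, torch.eye(4), 0.01 * torch.randn(2048, 3))
```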
- Any Motion Detector: Learning Class-agnostic Scene Dynamics from a Sequence of LiDAR Point Clouds [4.640835690336654]
We propose a novel real-time approach of temporal context aggregation for motion detection and motion parameters estimation.
We introduce an ego-motion compensation layer to achieve real-time inference with performance comparable to a naive odometric transform of the original point cloud sequence.
arXiv Detail & Related papers (2020-04-24T10:40:07Z)
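An ego-motion compensation step of the kind described above can be sketched as rigidly transforming each past point cloud into the current sensor frame using odometry before temporal aggregation, so that any remaining displacement reflects object motion. A minimal sketch under that assumption, not the paper's layer:

```python
import torch

def compensate_ego_motion(past_points, T_past_to_now):
    """past_points: (N, 3) in the past sensor frame; T_past_to_now: (4, 4)
    odometry transform from the past frame into the current frame."""
    R, t = T_past_to_now[:3, :3], T_past_to_now[:3, 3]
    return past_points @ R.T + t   # expressed in the current sensor frame

aligned = compensate_ego_motion(torch.randn(4096, 3), torch.eye(4))
```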
This list is automatically generated from the titles and abstracts of the papers on this site.