FLOT: Scene Flow on Point Clouds Guided by Optimal Transport
- URL: http://arxiv.org/abs/2007.11142v1
- Date: Wed, 22 Jul 2020 00:15:30 GMT
- Title: FLOT: Scene Flow on Point Clouds Guided by Optimal Transport
- Authors: Gilles Puy and Alexandre Boulch and Renaud Marlet
- Abstract summary: We propose and study a method called FLOT that estimates scene flow on point clouds.
Inspired by recent works on graph matching, we build a method to find these correspondences by borrowing tools from optimal transport.
Our main finding is that FLOT can perform as well as the best existing methods on synthetic and real-world datasets.
- Score: 82.86743909483312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose and study a method called FLOT that estimates scene flow on point
clouds. We start the design of FLOT by noticing that scene flow estimation on
point clouds reduces to estimating a permutation matrix in a perfect world.
Inspired by recent works on graph matching, we build a method to find these
correspondences by borrowing tools from optimal transport. Then, we relax the
transport constraints to take into account real-world imperfections. The
transport cost between two points is given by the pairwise similarity between
deep features extracted by a neural network trained under full supervision
using synthetic datasets. Our main finding is that FLOT can perform as well as
the best existing methods on synthetic and real-world datasets while requiring
far fewer parameters and without using multiscale analysis. Our second finding
is that, on the training datasets considered, most of the performance can be
explained by the learned transport cost. This yields a simpler method,
FLOT$_0$, which is obtained using a particular choice of optimal transport
parameters and performs nearly as well as FLOT.
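The core of the method, as the abstract describes, is an optimal transport module whose cost is the pairwise dissimilarity between learned per-point features, with relaxed marginal constraints to accommodate real-world imperfections such as occlusions. The sketch below illustrates this idea with an entropy-regularized, relaxed Sinkhorn solver in PyTorch; the function name, the cosine-style cost, and the default parameter values are illustrative assumptions, not the authors' exact implementation.

```python
import torch


def sinkhorn_correspondences(feat_p, feat_q, epsilon=0.03, gamma=1.0, n_iters=50):
    """Soft correspondences between two point clouds via relaxed, entropy-regularized OT.

    feat_p: (n, d) learned features for the first cloud P
    feat_q: (m, d) learned features for the second cloud Q
    Returns a transport plan T of shape (n, m); row i gives the soft matches of P[i] in Q.
    Illustrative sketch only, not the authors' exact implementation.
    """
    # Transport cost: pairwise dissimilarity between deep features (cosine-style).
    feat_p = torch.nn.functional.normalize(feat_p, dim=1)
    feat_q = torch.nn.functional.normalize(feat_q, dim=1)
    cost = 1.0 - feat_p @ feat_q.t()                        # (n, m)

    # Entropic kernel; the exponent below softens the marginal constraints
    # (relaxed transport), instead of enforcing a strict permutation.
    K = torch.exp(-cost / epsilon)
    power = gamma / (gamma + epsilon)

    prob_p = torch.full((K.shape[0], 1), 1.0 / K.shape[0])  # uniform mass on P
    prob_q = torch.full((K.shape[1], 1), 1.0 / K.shape[1])  # uniform mass on Q
    a = torch.ones(K.shape[0], 1)

    # Sinkhorn iterations: alternate row/column scalings toward the relaxed marginals.
    for _ in range(n_iters):
        b = (prob_q / (K.t() @ a)) ** power
        a = (prob_p / (K @ b)) ** power
    return a * K * b.t()
```

Given the resulting plan T, a rough flow estimate for each source point p_i can be read off as a weighted barycenter of the target points, e.g. flow_i ≈ (Σ_j T_ij (q_j − p_i)) / Σ_j T_ij, which the paper then refines further; the simpler FLOT$_0$ variant mentioned above corresponds to a particular choice of the transport parameters.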
Related papers
- RMS-FlowNet++: Efficient and Robust Multi-Scale Scene Flow Estimation for Large-Scale Point Clouds [15.138542932078916]
RMS-FlowNet++ is a novel end-to-end learning-based architecture for accurate and efficient scene flow estimation.
Our architecture provides a faster prediction than state-of-the-art methods, avoids high memory requirements and enables efficient scene flow on dense point clouds of more than 250K points at once.
arXiv Detail & Related papers (2024-07-01T09:51:17Z)
- Dense Optical Tracking: Connecting the Dots [82.79642869586587]
DOT is a novel, simple and efficient method for solving the problem of point tracking in a video.
We show that DOT is significantly more accurate than current optical flow techniques, outperforms sophisticated "universal trackers" like OmniMotion, and is on par with, or better than, the best point tracking algorithms like CoTracker.
arXiv Detail & Related papers (2023-12-01T18:59:59Z)
- PointFlowHop: Green and Interpretable Scene Flow Estimation from Consecutive Point Clouds [49.7285297470392]
An efficient 3D scene flow estimation method called PointFlowHop is proposed in this work.
PointFlowHop takes two consecutive point clouds and determines the 3D flow vectors for every point in the first point cloud.
It decomposes the scene flow estimation task into a set of subtasks, including ego-motion compensation, object association and object-wise motion estimation.
arXiv Detail & Related papers (2023-02-27T23:06:01Z)
- SCOOP: Self-Supervised Correspondence and Optimization-Based Scene Flow [25.577386156273256]
Scene flow estimation is a long-standing problem in computer vision, where the goal is to find the 3D motion of a scene from its consecutive observations.
We introduce SCOOP, a new method for scene flow estimation that can be learned on a small amount of data without employing ground-truth flow supervision.
arXiv Detail & Related papers (2022-11-25T10:52:02Z)
- InfoOT: Information Maximizing Optimal Transport [58.72713603244467]
InfoOT is an information-theoretic extension of optimal transport.
It maximizes the mutual information between domains while minimizing geometric distances.
This formulation yields a new projection method that is robust to outliers and generalizes to unseen samples.
arXiv Detail & Related papers (2022-10-06T18:55:41Z)
- Manifold Interpolating Optimal-Transport Flows for Trajectory Inference [64.94020639760026]
We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow).
MIOFlow learns continuous population dynamics from static snapshot samples taken at sporadic timepoints.
We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.
arXiv Detail & Related papers (2022-06-29T22:19:03Z)
- RMS-FlowNet: Efficient and Robust Multi-Scale Scene Flow Estimation for Large-Scale Point Clouds [13.62166506575236]
RMS-FlowNet is a novel end-to-end learning-based architecture for accurate and efficient scene flow estimation.
We show that our model presents a competitive ability to generalize towards the real-world scenes of KITTI data set without fine-tuning.
arXiv Detail & Related papers (2022-04-01T11:02:58Z)
- Fast and Scalable Optimal Transport for Brain Tractograms [4.610968512889579]
We present a new multiscale algorithm for solving regularized Optimal Transport problems on a linear memory footprint.
We show the effectiveness of this approach on brain tractograms modeled either as bundles of fibers or as track density maps.
arXiv Detail & Related papers (2021-07-05T13:28:41Z)
- Feature Robust Optimal Transport for High-dimensional Data [125.04654605998618]
We propose feature-robust optimal transport (FROT) for high-dimensional data, which solves high-dimensional OT problems using feature selection to avoid the curse of dimensionality.
We show that the FROT algorithm achieves state-of-the-art performance in real-world semantic correspondence datasets.
arXiv Detail & Related papers (2020-05-25T14:07:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.