Particle Videos Revisited: Tracking Through Occlusions Using Point Trajectories
- URL: http://arxiv.org/abs/2204.04153v1
- Date: Fri, 8 Apr 2022 16:05:48 GMT
- Title: Particle Videos Revisited: Tracking Through Occlusions Using Point Trajectories
- Authors: Adam W. Harley, Zhaoyuan Fang, Katerina Fragkiadaki
- Abstract summary: We revisit Sand and Teller's "particle video" approach, and study pixel tracking as a long-range motion estimation problem.
We re-build this classic approach using components that drive the current state-of-the-art in flow and object tracking.
We train our models using long-range amodal point trajectories mined from existing optical flow datasets.
- Score: 29.258861811749103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tracking pixels in videos is typically studied as an optical flow estimation
problem, where every pixel is described with a displacement vector that locates
it in the next frame. Even though wider temporal context is freely available,
prior efforts to take this into account have yielded only small gains over
2-frame methods. In this paper, we revisit Sand and Teller's "particle video"
approach, and study pixel tracking as a long-range motion estimation problem,
where every pixel is described with a trajectory that locates it in multiple
future frames. We re-build this classic approach using components that drive
the current state-of-the-art in flow and object tracking, such as dense cost
maps, iterative optimization, and learned appearance updates. We train our
models using long-range amodal point trajectories mined from existing optical
flow datasets that we synthetically augment with occlusions. We test our
approach in trajectory estimation benchmarks and in keypoint label propagation
tasks, and compare favorably against state-of-the-art optical flow and feature
tracking methods.
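To make the abstract's core loop concrete, here is a heavily simplified sketch of the particle-video idea: a query pixel is represented by a T-frame trajectory that is iteratively refined against dense cost maps (correlations between the query's feature and each frame's feature map). The gradient-ascent update below is a hypothetical stand-in for the paper's learned iterative appearance updates, and every name, shape, and hyperparameter is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sample_cost(feat_maps, query_feat, traj):
    """Correlate the query feature with each frame's feature map at the
    current trajectory positions.
    feat_maps: (T, C, H, W); query_feat: (C,); traj: (T, 2) normalized xy."""
    grid = traj.view(-1, 1, 1, 2)                       # grid_sample wants (N, 1, 1, 2)
    local = F.grid_sample(feat_maps, grid, align_corners=True)      # (T, C, 1, 1)
    return (local.squeeze(-1).squeeze(-1) * query_feat).sum(dim=1)  # (T,) scores

def refine_trajectory(feat_maps, query_feat, traj, iters=4, lr=0.1):
    """Jointly nudge all T positions so the per-frame correlation scores rise.
    This plain gradient step stands in for the paper's learned updates."""
    traj = traj.clone().requires_grad_(True)
    for _ in range(iters):
        score = sample_cost(feat_maps, query_feat, traj).sum()
        (grad,) = torch.autograd.grad(score, traj)
        traj = (traj + lr * grad).detach().requires_grad_(True)
    return traj.detach()

T, C, H, W = 8, 64, 32, 32
feat_maps = torch.randn(T, C, H, W)        # stand-in CNN features for 8 frames
query_feat = feat_maps[0, :, 16, 16]       # feature of the pixel being tracked
init = torch.zeros(T, 2)                   # start every frame at the query point
print(refine_trajectory(feat_maps, query_feat, init).shape)  # torch.Size([8, 2])
```

In the real model the per-iteration update is predicted by a trained network from the sampled cost values and appearance features, which is what lets it keep estimating positions through occlusions instead of following the (occluded) local evidence.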
Related papers
- LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z)
- Dense Optical Tracking: Connecting the Dots [82.79642869586587]
DOT is a novel, simple and efficient method for solving the problem of point tracking in a video.
We show that DOT is significantly more accurate than current optical flow techniques, outperforms sophisticated "universal trackers" like OmniMotion, and is on par with, or better than, the best point tracking algorithms like CoTracker.
arXiv Detail & Related papers (2023-12-01T18:59:59Z)
- Tracking Everything Everywhere All at Once [111.00807055441028]
We present a new test-time optimization method for estimating dense and long-range motion from a video sequence.
We propose a complete and globally consistent motion representation, dubbed OmniMotion.
Our approach outperforms prior state-of-the-art methods by a large margin both quantitatively and qualitatively.
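As background on the paradigm this entry relies on, the sketch below shows test-time optimization for motion in its most stripped-down form: a displacement field is fitted to a single synthetic frame pair by minimizing photometric error with gradient descent. This only illustrates optimization-at-inference; OmniMotion's actual quasi-3D representation, losses, and long-range consistency machinery are not modeled here, and all shapes and constants are assumptions.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img (1, C, H, W) by a per-pixel flow (1, 2, H, W), in pixels."""
    _, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    gx = (xs + flow[0, 0]) / (W - 1) * 2 - 1            # normalize to [-1, 1]
    gy = (ys + flow[0, 1]) / (H - 1) * 2 - 1
    return F.grid_sample(img, torch.stack((gx, gy), -1).unsqueeze(0),
                         align_corners=True)

# A smooth synthetic frame pair related by a 2-pixel diagonal shift.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64),
                        indexing="ij")
frame0 = torch.exp(-(xs ** 2 + ys ** 2) * 8).view(1, 1, 64, 64)
frame1 = torch.roll(frame0, shifts=(2, 2), dims=(2, 3))

flow = torch.zeros(1, 2, 64, 64, requires_grad=True)   # optimized at test time
opt = torch.optim.Adam([flow], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = F.l1_loss(warp(frame1, flow), frame0)        # photometric loss
    loss.backward()
    opt.step()
print(loss.item())  # typically drops well below the initial misalignment error
```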
arXiv Detail & Related papers (2023-06-08T17:59:29Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
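The trajectory-building step that such flow-based pipelines start from can be illustrated generically: chain consecutive pairwise flow fields to advect seed points through the video. The nearest-neighbor flow lookup and constant synthetic flows below are assumptions for illustration, not ParticleSfM's implementation, which additionally learns to process and clean up these trajectories.

```python
import numpy as np

def chain_flows(flows, starts):
    """flows: list of (H, W, 2) forward flow fields between consecutive frames.
    starts: (N, 2) float xy seed positions in frame 0.
    Returns (len(flows) + 1, N, 2) trajectories via nearest-neighbor lookup."""
    H, W, _ = flows[0].shape
    traj = [np.asarray(starts, dtype=float)]
    for flow in flows:
        pts = traj[-1]
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, W - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, H - 1)
        traj.append(pts + flow[yi, xi])   # advect each point by its local flow
    return np.stack(traj)

flows = [np.ones((48, 64, 2)) for _ in range(5)]  # constant (1, 1) px motion
print(chain_flows(flows, [[8.0, 8.0], [20.0, 30.0]])[-1])
# -> [[13. 13.] [25. 35.]] after five chained steps
```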
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Dense Continuous-Time Optical Flow from Events and Frames [27.1850072968441]
We show that it is possible to compute per-pixel, continuous-time optical flow using events from an event camera.
We leverage these benefits to predict pixel trajectories densely in continuous time via parameterized Bézier curves.
Our model is the first method that can regress dense pixel trajectories from event data.
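As a small illustration of the continuous-time parameterization named above (a generic sketch, not the paper's model), a Bézier curve with a few control points lets a pixel's position be queried at any timestamp t in [0, 1]; the control points themselves would be regressed by a network. The helper and test values below are assumptions.

```python
import numpy as np

def bezier_point(control, t):
    """Evaluate a Bézier curve with de Casteljau's algorithm.
    control: (K, 2) xy control points; t: scalar timestamp in [0, 1]."""
    pts = np.asarray(control, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]  # repeated linear interpolation
    return pts[0]

# Four control points define a cubic curve; one such curve per tracked pixel.
control = np.array([[10.0, 10.0], [14.0, 20.0], [22.0, 21.0], [30.0, 12.0]])
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, bezier_point(control, t))  # xy position at continuous time t
```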
arXiv Detail & Related papers (2022-03-25T14:29:41Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image, though usually treated as an artifact, can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.