Joint Unsupervised Learning of Optical Flow and Egomotion with Bi-Level
Optimization
- URL: http://arxiv.org/abs/2002.11826v1
- Date: Wed, 26 Feb 2020 22:28:00 GMT
- Title: Joint Unsupervised Learning of Optical Flow and Egomotion with Bi-Level
Optimization
- Authors: Shihao Jiang, Dylan Campbell, Miaomiao Liu, Stephen Gould, Richard
Hartley
- Abstract summary: We exploit the global relationship between optical flow and camera motion using epipolar geometry.
We use implicit differentiation to enable back-propagation through the lower-level geometric optimization layer independent of its implementation.
- Score: 59.9673626329892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of joint optical flow and camera motion estimation in
rigid scenes by incorporating geometric constraints into an unsupervised deep
learning framework. Unlike existing approaches which rely on brightness
constancy and local smoothness for optical flow estimation, we exploit the
global relationship between optical flow and camera motion using epipolar
geometry. In particular, we formulate the prediction of optical flow and camera
motion as a bi-level optimization problem, consisting of an upper-level problem
to estimate the flow that conforms to the predicted camera motion, and a
lower-level problem to estimate the camera motion given the predicted optical
flow. We use implicit differentiation to enable back-propagation through the
lower-level geometric optimization layer independent of its implementation,
allowing end-to-end training of the network. With globally-enforced geometric
constraints, we are able to improve the quality of the estimated optical flow
in challenging scenarios and obtain better camera motion estimates compared to
other unsupervised learning methods.
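For concreteness, the following is a minimal sketch of the bi-level formulation described above, written in generic notation of our own rather than the paper's. Let F denote the predicted flow, g = (R, t) the camera motion, and x_i <-> x_i' = x_i + F(x_i) the flow-induced correspondences in normalized coordinates; the robust penalty \rho, photometric loss L_photo, and weight \lambda are illustrative assumptions.

  g^*(F) = \arg\min_g C(F, g), \qquad C(F, g) = \sum_i \rho\!\left( x_i'^{\top} E(g)\, x_i \right), \qquad E(g) = [t]_{\times} R

  \min_\theta \; L_{\mathrm{photo}}(F_\theta) + \lambda \sum_i \left( x_i'^{\top} E\!\left(g^*(F_\theta)\right) x_i \right)^{2}

Because \partial C / \partial g = 0 at the lower-level optimum, the implicit function theorem gives the gradient needed for back-propagation without unrolling the solver:

  \frac{\partial g^*}{\partial F} = -\left( \frac{\partial^2 C}{\partial g^2} \right)^{-1} \frac{\partial^2 C}{\partial g\, \partial F}

This is why the geometric optimization layer can be implemented with any solver and still be trained end to end.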
Related papers
- Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation [34.529280562470746]
We introduce a novel self-supervised loss combining the Contrast Maximization framework with a non-linear motion prior in the form of pixel-level trajectories.
Their effectiveness is demonstrated in two scenarios: In dense continuous-time motion estimation, our method improves the zero-shot performance of a synthetically trained model by 29%.
arXiv Detail & Related papers (2024-07-15T15:18:28Z)
- Skin the sheep not only once: Reusing Various Depth Datasets to Drive the Learning of Optical Flow [25.23550076996421]
We propose to leverage the geometric connection between optical flow estimation and stereo matching.
We turn the monocular depth datasets into stereo ones via virtual disparity.
We also introduce virtual camera motion into stereo data to produce additional flows along the vertical direction; a rough illustrative sketch of this depth-to-flow conversion follows this entry.
arXiv Detail & Related papers (2023-10-03T06:56:07Z)
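As a rough illustration of the virtual-disparity idea above (not the authors' code), the sketch below converts a monocular depth map into a purely horizontal or purely vertical flow field under an assumed rectified pin-hole model; the function name, focal length, baseline, and sign convention are all illustrative assumptions.

# Hedged sketch: virtual-stereo flow from a monocular depth map.
import numpy as np

def depth_to_virtual_flow(depth, focal_px, baseline_m, vertical=False):
    """Return a dense (H, W, 2) flow field induced by a virtual translation.

    depth      : (H, W) metric depth map from a monocular depth dataset
    focal_px   : focal length in pixels
    baseline_m : virtual baseline in metres
    vertical   : if True, the virtual camera moves vertically, producing
                 flow along the y-axis instead of the x-axis
    """
    disparity = focal_px * baseline_m / np.clip(depth, 1e-3, None)  # d = f*B/Z
    flow = np.zeros(depth.shape + (2,), dtype=np.float32)
    axis = 1 if vertical else 0          # channel 0: x-flow, channel 1: y-flow
    flow[..., axis] = -disparity         # sign depends on the chosen convention
    return flow

# Example: a fronto-parallel plane 2 m away, seen with a 720 px focal length
depth = np.full((480, 640), 2.0, dtype=np.float32)
horizontal_flow = depth_to_virtual_flow(depth, focal_px=720.0, baseline_m=0.54)
vertical_flow = depth_to_virtual_flow(depth, focal_px=720.0, baseline_m=0.54,
                                      vertical=True)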
- Joint Self-supervised Depth and Optical Flow Estimation towards Dynamic Objects [3.794605440322862]
In this work, we construct a joint inter-frame-supervised depth and optical flow estimation framework.
For motion segmentation, we adaptively segment the preliminary estimated optical flow map with large areas of connectivity.
Our proposed joint depth and optical flow estimation outperforms existing depth estimators on the KITTI Depth dataset.
arXiv Detail & Related papers (2023-09-07T04:00:52Z)
- Unsupervised Learning Optical Flow in Multi-frame Dynamic Environment Using Temporal Dynamic Modeling [7.111443975103329]
In this paper, we explore the optical flow estimation from multiple-frame sequences of dynamic scenes.
We use motion priors of the adjacent frames to provide more reliable supervision of the occluded regions.
Experiments on KITTI 2012, KITTI 2015, Sintel Clean, and Sintel Final datasets demonstrate the effectiveness of our methods.
arXiv Detail & Related papers (2023-04-14T14:32:02Z)
- Dimensions of Motion: Learning to Predict a Subspace of Optical Flow from a Single Image [50.9686256513627]
We introduce the problem of predicting, from a single video frame, a low-dimensional subspace of optical flow which includes the actual instantaneous optical flow.
We show how several natural scene assumptions allow us to identify an appropriate flow subspace via a set of basis flow fields parameterized by disparity.
This provides a new approach to learning these tasks in an unsupervised fashion using monocular input video without requiring camera intrinsics or poses.
arXiv Detail & Related papers (2021-12-02T18:52:54Z)
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- Unsupervised Motion Representation Enhanced Network for Action Recognition [4.42249337449125]
Motion representation between consecutive frames has proven highly beneficial to video understanding.
The TV-L1 method, an effective optical flow solver, is time-consuming and requires expensive storage for caching the extracted optical flow.
We propose UF-TSN, a novel end-to-end action recognition approach enhanced with an embedded lightweight unsupervised optical flow estimator.
arXiv Detail & Related papers (2021-03-05T04:14:32Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Pushing the Envelope of Rotation Averaging for Visual SLAM [69.7375052440794]
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency and robustness of conventional monocular SLAM systems.
Our approach runs up to 10x faster, with comparable accuracy against the state of the art, on public benchmarks.
arXiv Detail & Related papers (2020-11-02T18:02:26Z)
- What Matters in Unsupervised Optical Flow [51.45112526506455]
We compare and analyze a set of key components in unsupervised optical flow.
We construct a number of novel improvements to unsupervised flow models.
We present a new unsupervised flow technique that significantly outperforms the previous state-of-the-art.
arXiv Detail & Related papers (2020-06-08T19:36:26Z)