OccInpFlow: Occlusion-Inpainting Optical Flow Estimation by Unsupervised
Learning
- URL: http://arxiv.org/abs/2006.16637v1
- Date: Tue, 30 Jun 2020 10:01:32 GMT
- Title: OccInpFlow: Occlusion-Inpainting Optical Flow Estimation by Unsupervised
Learning
- Authors: Kunming Luo, Chuan Wang, Nianjin Ye, Shuaicheng Liu, Jue Wang
- Abstract summary: Occlusion is an inevitable and critical problem in unsupervised optical flow learning.
We present OccInpFlow, an occlusion-inpainting framework to make full use of occlusion regions.
We conduct experiments on leading flow benchmark data sets such as Flying Chairs, KITTI and MPI-Sintel.
- Score: 29.802404790103665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occlusion is an inevitable and critical problem in unsupervised optical flow
learning. Existing methods either treat occluded regions the same as non-occluded
ones or simply discard them to avoid introducing errors. However, occlusion
regions can provide effective information for optical flow learning. In this
paper, we present OccInpFlow, an occlusion-inpainting framework to make full
use of occlusion regions. Specifically, a new appearance-flow network is
proposed to inpaint occluded flows based on the image content. Moreover, a
boundary warp is proposed to deal with occlusions caused by displacements beyond
the image border. We conduct experiments on multiple leading flow benchmark data
sets such as Flying Chairs, KITTI and MPI-Sintel, which demonstrate that the
performance is significantly improved by our proposed occlusion handling
framework.
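
To make the abstract's occlusion handling concrete, the sketch below shows one standard way such occlusion regions are detected in unsupervised flow learning, a forward-backward consistency check, together with a placeholder for where an inpainting network would fill the masked flow. This is an illustrative assumption, not the authors' code: the helper names (`backward_warp`, `occlusion_mask`, `inpaint_net`) and the thresholds `alpha1`/`alpha2` are hypothetical, and the paper's appearance-flow network and boundary warp are not reproduced here.

```python
# Hypothetical sketch (not the OccInpFlow authors' code): detect occluded pixels
# with a forward-backward consistency check; an inpainting network could then
# fill the flow inside the resulting mask.
import torch
import torch.nn.functional as F


def backward_warp(x, flow):
    """Bilinearly sample x (B, C, H, W) at locations displaced by flow (B, 2, H, W)."""
    B, _, H, W = flow.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).to(flow.device)   # (2, H, W), x first
    coords = base.unsqueeze(0) + flow                     # absolute sample positions
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0               # normalise to [-1, 1]
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                  # (B, H, W, 2)
    return F.grid_sample(x, grid, align_corners=True)


def occlusion_mask(flow_fw, flow_bw, alpha1=0.01, alpha2=0.5):
    """Return a (B, 1, H, W) mask that is 1 where the forward flow is judged occluded."""
    flow_bw_warped = backward_warp(flow_bw, flow_fw)
    mismatch = (flow_fw + flow_bw_warped).pow(2).sum(dim=1)
    magnitude = flow_fw.pow(2).sum(dim=1) + flow_bw_warped.pow(2).sum(dim=1)
    occluded = mismatch > alpha1 * magnitude + alpha2
    return occluded.float().unsqueeze(1)


# Usage sketch (inpaint_net and img1 are hypothetical placeholders):
# occ = occlusion_mask(flow_fw, flow_bw)
# flow_filled = occ * inpaint_net(img1, flow_fw * (1 - occ)) + (1 - occ) * flow_fw
```

In frameworks of this kind, the mask would typically also gate the photometric loss, so occluded pixels are supervised by the inpainted flow rather than by a brightness-constancy term that no longer holds.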
Related papers
- Rethink Predicting the Optical Flow with the Kinetics Perspective [1.7901503554839604]
Optical flow estimation is one of the fundamental tasks in low-level computer vision.
From the apparent aspect, optical flow can be viewed as the correlation between pixels in consecutive frames.
Motivated by this, we propose a method that combines the apparent and kinetics information.
arXiv Detail & Related papers (2024-05-21T05:47:42Z)
- CMU-Flownet: Exploring Point Cloud Scene Flow Estimation in Occluded Scenario [10.852258389804984]
Occlusions hinder point cloud frame alignment in LiDAR data, a challenge inadequately addressed by scene flow models.
We introduce the Correlation Matrix Upsampling Flownet (CMU-Flownet), incorporating an occlusion estimation module within its cost volume layer.
CMU-Flownet achieves state-of-the-art performance on the occluded FlyingThings3D and KITTI datasets.
arXiv Detail & Related papers (2024-04-16T13:47:21Z)
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z)
- Deep Dynamic Scene Deblurring from Optical Flow [53.625999196063574]
Deblurring can provide visually more pleasant pictures and make photography more convenient.
It is difficult to model the non-uniform blur mathematically.
We develop a convolutional neural network (CNN) to restore the sharp images from the deblurred features.
arXiv Detail & Related papers (2023-01-18T06:37:21Z)
- Locality-aware Channel-wise Dropout for Occluded Face Recognition [116.2355331029041]
Face recognition is a challenging task in unconstrained scenarios, especially when faces are partially occluded.
We propose a novel and elegant occlusion-simulation method that drops the activations of a group of neurons in elaborately selected channels.
Experiments on various benchmarks show that the proposed method outperforms state-of-the-art methods with a remarkable improvement.
arXiv Detail & Related papers (2021-07-20T05:53:14Z)
- NccFlow: Unsupervised Learning of Optical Flow With Non-occlusion from Geometry [11.394559627312743]
This paper reveals novel geometric laws of optical flow based on the insight and detailed definition of non-occlusion.
Two loss functions are proposed for the unsupervised learning of optical flow based on the geometric laws of non-occlusion.
arXiv Detail & Related papers (2021-07-08T05:19:54Z)
- Learning to Estimate Hidden Motions with Global Motion Aggregation [71.12650817490318]
Occlusions pose a significant challenge to optical flow algorithms that rely on local evidence.
We introduce a global motion aggregation module to find long-range dependencies between pixels in the first image.
We demonstrate that the optical flow estimates in the occluded regions can be significantly improved without damaging the performance in non-occluded regions (an illustrative sketch of this aggregation idea follows this list).
arXiv Detail & Related papers (2021-04-06T10:32:03Z)
- OAS-Net: Occlusion Aware Sampling Network for Accurate Optical Flow [4.42249337449125]
Existing deep networks have achieved satisfactory results by mostly employing a pyramidal coarse-to-fine paradigm.
We propose a lightweight yet efficient optical flow network, named OAS-Net, for accurate optical flow.
Experiments on the Sintel and KITTI datasets demonstrate the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-01-31T03:30:31Z)
- What Matters in Unsupervised Optical Flow [51.45112526506455]
We compare and analyze a set of key components in unsupervised optical flow.
We construct a number of novel improvements to unsupervised flow models.
We present a new unsupervised flow technique that significantly outperforms the previous state-of-the-art.
arXiv Detail & Related papers (2020-06-08T19:36:26Z)
- Learning to See Through Obstructions [117.77024641706451]
We present a learning-based approach for removing unwanted obstructions from a short sequence of images captured by a moving camera.
Our method leverages the motion differences between the background and the obstructing elements to recover both layers.
We show that training on synthetically generated data transfers well to real images.
arXiv Detail & Related papers (2020-04-02T17:59:12Z)
- Unsupervised Learning of Depth, Optical Flow and Pose with Occlusion from 3D Geometry [29.240108776329045]
In this paper, pixels in the middle frame are modeled into three parts: the rigid region, the non-rigid region, and the occluded region.
In joint unsupervised training of depth and pose, we can segment the occluded region explicitly.
In the occluded region, depth and camera motion provide more reliable motion estimation, so they can be used to guide the unsupervised learning of optical flow.
arXiv Detail & Related papers (2020-03-02T11:18:13Z)
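
As noted in the "Learning to Estimate Hidden Motions with Global Motion Aggregation" entry above, occluded pixels can borrow motion information from similar, visible pixels in the first image. The snippet below is a minimal, hypothetical attention-style aggregation sketch, not that paper's implementation: attention weights computed from frame-1 context features are used to pool motion features, with the projection size and residual form chosen only for illustration.

```python
# Hypothetical sketch of attention-style global motion aggregation
# (illustrative only, not the GMA authors' implementation).
import torch
import torch.nn as nn


class GlobalMotionAggregator(nn.Module):
    def __init__(self, context_dim, motion_dim, key_dim=128):
        super().__init__()
        self.to_q = nn.Conv2d(context_dim, key_dim, kernel_size=1)
        self.to_k = nn.Conv2d(context_dim, key_dim, kernel_size=1)
        self.scale = key_dim ** -0.5

    def forward(self, context, motion):
        """context: (B, Cc, H, W) features of frame 1; motion: (B, Cm, H, W)."""
        B, _, H, W = context.shape
        q = self.to_q(context).flatten(2).transpose(1, 2)  # (B, HW, key_dim)
        k = self.to_k(context).flatten(2)                   # (B, key_dim, HW)
        attn = torch.softmax(q @ k * self.scale, dim=-1)    # (B, HW, HW) frame-1 similarities
        v = motion.flatten(2).transpose(1, 2)               # (B, HW, Cm)
        agg = (attn @ v).transpose(1, 2).reshape(B, -1, H, W)
        return motion + agg  # occluded pixels borrow motion from similar, visible pixels


# Usage sketch:
# gma = GlobalMotionAggregator(context_dim=256, motion_dim=128)
# motion_aug = gma(context_features, motion_features)
```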