DistractFlow: Improving Optical Flow Estimation via Realistic
Distractions and Pseudo-Labeling
- URL: http://arxiv.org/abs/2303.14078v1
- Date: Fri, 24 Mar 2023 15:42:54 GMT
- Title: DistractFlow: Improving Optical Flow Estimation via Realistic
Distractions and Pseudo-Labeling
- Authors: Jisoo Jeong, Hong Cai, Risheek Garrepalli, Fatih Porikli
- Abstract summary: We propose a novel data augmentation approach, DistractFlow, for training optical flow estimation models.
We combine one of the frames in the pair with a distractor image depicting a similar domain, which allows for inducing visual perturbations congruent with natural objects and scenes.
Our approach allows increasing the number of available training pairs significantly without requiring additional annotations.
- Score: 49.46842536813477
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel data augmentation approach, DistractFlow, for training
optical flow estimation models by introducing realistic distractions to the
input frames. Based on a mixing ratio, we combine one of the frames in the pair
with a distractor image depicting a similar domain, which allows for inducing
visual perturbations congruent with natural objects and scenes. We refer to
such pairs as distracted pairs. Our intuition is that using semantically
meaningful distractors enables the model to learn related variations and attain
robustness against challenging deviations, compared to conventional
augmentation schemes focusing only on low-level aspects and modifications. More
specifically, in addition to the supervised loss computed between the estimated
flow for the original pair and its ground-truth flow, we include a second
supervised loss defined between the distracted pair's flow and the original
pair's ground-truth flow, weighted with the same mixing ratio. Furthermore,
when unlabeled data is available, we extend our augmentation approach to
self-supervised settings through pseudo-labeling and cross-consistency
regularization. Given an original pair and its distracted version, we enforce
the estimated flow on the distracted pair to agree with the flow of the
original pair. Our approach allows increasing the number of available training
pairs significantly without requiring additional annotations. It is agnostic to
the model architecture and can be applied to training any optical flow
estimation model. Our extensive evaluations on multiple benchmarks, including
Sintel, KITTI, and SlowFlow, show that DistractFlow improves existing models
consistently, outperforming the latest state of the art.
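For illustration, the sketch below (PyTorch-style, not the authors' released code) shows one way the distracted-pair supervised loss and the pseudo-label cross-consistency loss described in the abstract could be wired up. The model interface `model(frame1, frame2) -> flow`, the L1 loss form, the Beta sampling of the mixing ratio, and the choice of which frame to mix are all assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def distract_frame(frame, distractor, alpha):
    """Blend a frame with a distractor image from a similar domain.
    alpha is the mixing ratio kept from the original frame (assumption)."""
    return alpha * frame + (1.0 - alpha) * distractor

def supervised_step(model, frame1, frame2, distractor, gt_flow):
    # Sample a mixing ratio; a Beta distribution is a common choice (assumption).
    alpha = torch.distributions.Beta(1.0, 1.0).sample().item()

    # Supervised loss on the original pair against its ground-truth flow.
    flow_orig = model(frame1, frame2)
    loss_orig = F.l1_loss(flow_orig, gt_flow)

    # Second supervised loss: the distracted pair's flow against the same
    # ground truth, weighted by the mixing ratio as in the abstract.
    frame2_dist = distract_frame(frame2, distractor, alpha)
    flow_dist = model(frame1, frame2_dist)
    loss_dist = alpha * F.l1_loss(flow_dist, gt_flow)

    return loss_orig + loss_dist

def self_supervised_step(model, frame1, frame2, distractor, alpha=0.7):
    # Pseudo-label: flow estimated on the original (unlabeled) pair, no gradient.
    with torch.no_grad():
        pseudo_flow = model(frame1, frame2)

    # Cross-consistency: the distracted pair's flow should agree with the
    # pseudo-label predicted on the original pair.
    frame2_dist = distract_frame(frame2, distractor, alpha)
    flow_dist = model(frame1, frame2_dist)
    return F.l1_loss(flow_dist, pseudo_flow)
```

In practice the supervised and self-supervised terms would be combined with dataset-specific weights; the paper's exact loss functions, sampling scheme, and confidence handling may differ from this sketch.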
Related papers
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z) - Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves the sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, with a significant speedup in computation compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z) - The Surprising Effectiveness of Diffusion Models for Optical Flow and
Monocular Depth Estimation [42.48819460873482]
Denoising diffusion probabilistic models have transformed image generation with their impressive fidelity and diversity.
We show that they also excel in estimating optical flow and monocular depth, surprisingly, without task-specific architectures and loss functions.
arXiv Detail & Related papers (2023-06-02T21:26:20Z) - Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z) - Attentive Contractive Flow with Lipschitz-constrained Self-Attention [25.84621883831624]
We introduce a novel approach called Attentive Contractive Flow (ACF).
ACF utilizes a special category of flow-based generative models - contractive flows.
We demonstrate that ACF can be introduced into a variety of state-of-the-art flow models in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-24T18:02:49Z) - DeFlow: Learning Complex Image Degradations from Unpaired Data with
Conditional Flows [145.83812019515818]
We propose DeFlow, a method for learning image degradations from unpaired data.
We model the degradation process in the latent space of a shared flow-decoder network.
We validate our DeFlow formulation on the task of joint image restoration and super-resolution.
arXiv Detail & Related papers (2021-01-14T18:58:01Z) - What Matters in Unsupervised Optical Flow [51.45112526506455]
We compare and analyze a set of key components in unsupervised optical flow.
We construct a number of novel improvements to unsupervised flow models.
We present a new unsupervised flow technique that significantly outperforms the previous state-of-the-art.
arXiv Detail & Related papers (2020-06-08T19:36:26Z) - Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow [16.41460104376002]
We introduce subset flows, a class of flows that can transform finite volumes and allow exact computation of likelihoods for discrete data.
We identify ordinal discrete autoregressive models, including WaveNets, PixelCNNs and Transformers, as single-layer flows.
We demonstrate state-of-the-art results on CIFAR-10 for flow models trained with dequantization.
arXiv Detail & Related papers (2020-02-06T22:58:51Z)