Learning by Analogy: Reliable Supervision from Transformations for
Unsupervised Optical Flow Estimation
- URL: http://arxiv.org/abs/2003.13045v2
- Date: Sun, 29 Nov 2020 12:26:25 GMT
- Title: Learning by Analogy: Reliable Supervision from Transformations for
Unsupervised Optical Flow Estimation
- Authors: Liang Liu, Jiangning Zhang, Ruifei He, Yong Liu, Yabiao Wang, Ying
Tai, Donghao Luo, Chengjie Wang, Jilin Li, Feiyue Huang
- Abstract summary: Unsupervised learning of optical flow has emerged as a promising alternative to supervised methods.
We present a framework that uses more reliable supervision from transformations.
Our method consistently achieves a substantial leap in performance on several benchmarks, with the best accuracy among deep unsupervised methods.
- Score: 83.23707895728995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised learning of optical flow, which leverages the supervision from
view synthesis, has emerged as a promising alternative to supervised methods.
However, the objective of unsupervised learning is likely to be unreliable in
challenging scenes. In this work, we present a framework that uses more reliable
supervision from transformations. It modifies the general unsupervised learning
pipeline by running another forward pass on transformed data from augmentation,
and by using transformed predictions of the original data as the
self-supervision signal. In addition, we introduce a lightweight multi-frame
network built on a highly shared flow decoder. Our method consistently achieves
a substantial leap in performance on several benchmarks, with the best accuracy
among deep unsupervised methods. It also achieves results competitive with
recent fully supervised methods while using far fewer parameters.
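The transformation-based self-supervision described in the abstract (predict flow on the original pair, transform that prediction into a pseudo-label, and supervise the prediction made on the transformed pair) can be sketched in a minimal form. The toy flow estimator, the horizontal-flip transform, and all function names below are illustrative assumptions for a sketch, not the paper's actual implementation:

```python
import numpy as np

def hflip_image(img):
    # Horizontally flip an H x W image.
    return img[:, ::-1].copy()

def hflip_flow(flow):
    # Flip a flow field (H x W x 2); the horizontal component u
    # changes sign under a horizontal flip.
    out = flow[:, ::-1].copy()
    out[..., 0] *= -1.0
    return out

def transformation_consistency_loss(flow_net, img1, img2):
    # First pass: predict flow on the original pair and transform the
    # prediction into a pseudo-label (in training this prediction would
    # be gradient-stopped, since it serves as the supervision signal).
    pseudo = hflip_flow(flow_net(img1, img2))
    # Second pass: predict flow on the transformed (augmented) pair.
    pred = flow_net(hflip_image(img1), hflip_image(img2))
    # Simple L1 penalty between the two predictions.
    return float(np.mean(np.abs(pred - pseudo)))
```

A perfectly flip-equivariant estimator incurs zero loss here; any deviation between the prediction on transformed data and the transformed prediction is penalized, which is the self-supervision signal the framework exploits.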
Related papers
- CL-Flow: Strengthening the Normalizing Flows by Contrastive Learning for Better Anomaly Detection [1.951082473090397]
We propose a self-supervised anomaly detection approach that combines contrastive learning with 2D-Flow.
Compared to mainstream unsupervised approaches, our self-supervised method demonstrates superior detection accuracy, fewer additional model parameters, and faster inference speed.
Our approach showcases new state-of-the-art results, achieving a performance of 99.6% in image-level AUROC on the MVTecAD dataset and 96.8% in image-level AUROC on the BTAD dataset.
arXiv Detail & Related papers (2023-11-12T10:07:03Z)
- MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation [12.249680550252327]
Current approaches impose an augmentation regularization term for continual self-supervision.
We propose a novel mutual distillation framework to transfer reliable knowledge back and forth between the teacher and student networks.
Our approach, termed MDFlow, achieves state-of-the-art real-time accuracy and generalization ability on challenging benchmarks.
arXiv Detail & Related papers (2022-11-11T05:56:46Z)
- Semi-Supervised Learning of Optical Flow by Flow Supervisor [16.406213579356795]
We propose a practical fine tuning method to adapt a pretrained model to a target dataset without ground truth flows.
This design is aimed at stable convergence and better accuracy over conventional self-supervision methods.
We achieve meaningful improvements over state-of-the-art optical flow models on Sintel and KITTI benchmarks.
arXiv Detail & Related papers (2022-07-21T06:11:52Z)
- Unsupervised Learning of Accurate Siamese Tracking [68.58171095173056]
We present a novel unsupervised tracking framework, in which we can learn temporal correspondence both on the classification branch and regression branch.
Our tracker outperforms preceding unsupervised methods by a substantial margin, performing on par with supervised methods on large-scale datasets such as TrackingNet and LaSOT.
arXiv Detail & Related papers (2022-04-04T13:39:43Z)
- Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques in real-world scenarios require stronger generalization abilities of face forgery detectors.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised transformer based audio-visual contrastive learning method.
arXiv Detail & Related papers (2022-03-02T17:44:40Z)
- Learning by Distillation: A Self-Supervised Learning Framework for Optical Flow Estimation [71.76008290101214]
DistillFlow is a knowledge distillation approach to learning optical flow.
It achieves state-of-the-art unsupervised learning performance on both KITTI and Sintel datasets.
Our models ranked 1st among all monocular methods on the KITTI 2015 benchmark, and outperform all published methods on the Sintel Final benchmark.
arXiv Detail & Related papers (2021-06-08T09:13:34Z)
- Joint Generative and Contrastive Learning for Unsupervised Person Re-identification [15.486689594217273]
Recent self-supervised contrastive learning provides an effective approach for unsupervised person re-identification (ReID).
In this paper, we incorporate a Generative Adversarial Network (GAN) and a contrastive learning module into one joint training framework.
arXiv Detail & Related papers (2020-12-16T16:49:57Z)
- Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z)
- What Matters in Unsupervised Optical Flow [51.45112526506455]
We compare and analyze a set of key components in unsupervised optical flow.
We construct a number of novel improvements to unsupervised flow models.
We present a new unsupervised flow technique that significantly outperforms the previous state-of-the-art.
arXiv Detail & Related papers (2020-06-08T19:36:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.