Learning by Distillation: A Self-Supervised Learning Framework for
Optical Flow Estimation
- URL: http://arxiv.org/abs/2106.04195v1
- Date: Tue, 8 Jun 2021 09:13:34 GMT
- Title: Learning by Distillation: A Self-Supervised Learning Framework for
Optical Flow Estimation
- Authors: Pengpeng Liu and Michael R. Lyu and Irwin King and Jia Xu
- Abstract summary: DistillFlow is a knowledge distillation approach to learning optical flow.
It achieves state-of-the-art unsupervised learning performance on both KITTI and Sintel datasets.
Our models ranked 1st among all monocular methods on the KITTI 2015 benchmark and outperformed all published methods on the Sintel Final benchmark.
- Score: 71.76008290101214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present DistillFlow, a knowledge distillation approach to learning optical
flow. DistillFlow trains multiple teacher models and a student model, where
challenging transformations are applied to the input of the student model to
generate hallucinated occlusions as well as less confident predictions. Then, a
self-supervised learning framework is constructed: confident predictions from
teacher models serve as annotations to guide the student model to learn
optical flow for those less confident predictions. The self-supervised learning
framework enables us to effectively learn optical flow from unlabeled data, not
only for non-occluded pixels, but also for occluded pixels. DistillFlow
achieves state-of-the-art unsupervised learning performance on both KITTI and
Sintel datasets. Our self-supervised pre-trained model also provides an
excellent initialization for supervised fine-tuning, suggesting an alternate
training paradigm in contrast to current supervised learning methods that
rely heavily on pre-training on synthetic data. At the time of writing, our
fine-tuned models ranked 1st among all monocular methods on the KITTI 2015
benchmark and outperformed all published methods on the Sintel Final benchmark.
More importantly, we demonstrate the generalization capability of DistillFlow
in three aspects: framework generalization, correspondence generalization and
cross-dataset generalization.
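To make the training signal concrete, the following is a minimal PyTorch-style sketch of a confidence-masked teacher-to-student distillation step in the spirit of the abstract. It is illustrative only: it uses a single teacher, a random crop as the "challenging transformation", and a standard forward-backward consistency check as the confidence estimate; the interfaces (`teacher(img1, img2)` and `student(img1, img2)` returning [B, 2, H, W] flow) are assumptions, not the authors' code, and the full method additionally uses multiple teachers, more varied transformations, and photometric losses.

```python
import torch
import torch.nn.functional as F

def fb_consistency_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """Forward-backward consistency heuristic common in unsupervised flow:
    a pixel is trusted if the backward flow, sampled where the forward flow
    sends that pixel, roughly cancels the forward flow."""
    B, _, H, W = flow_fw.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=flow_fw.device),
                            torch.arange(W, device=flow_fw.device),
                            indexing="ij")
    grid = torch.stack((xs, ys)).float()                     # [2, H, W], (x, y) order
    coords = grid.unsqueeze(0) + flow_fw                     # pixel positions after flow
    norm = torch.stack((2 * coords[:, 0] / (W - 1) - 1,      # normalize to [-1, 1]
                        2 * coords[:, 1] / (H - 1) - 1), dim=-1)
    bw_at_fw = F.grid_sample(flow_bw, norm, align_corners=True)
    sq_diff = (flow_fw + bw_at_fw).pow(2).sum(1, keepdim=True)
    sq_mag = flow_fw.pow(2).sum(1, keepdim=True) + bw_at_fw.pow(2).sum(1, keepdim=True)
    return (sq_diff < alpha * sq_mag + beta).float()         # [B, 1, H, W]

def distillation_loss(teacher, student, img1, img2, margin=64):
    """The teacher sees the full frames; the student sees a randomly cropped,
    harder version of the same pair. Confident teacher flow, cropped to the
    same window, serves as pseudo ground truth for the student, including at
    the hallucinated occlusions created by the crop boundary."""
    B, _, H, W = img1.shape
    with torch.no_grad():
        flow_fw = teacher(img1, img2)
        flow_bw = teacher(img2, img1)
        conf = fb_consistency_mask(flow_fw, flow_bw)

    # "Challenging transformation": a random crop pushes matched pixels out
    # of view, so the student must predict flow for newly occluded regions.
    y = int(torch.randint(0, margin, (1,)))
    x = int(torch.randint(0, margin, (1,)))
    win = (slice(None), slice(None),
           slice(y, H - margin + y), slice(x, W - margin + x))

    flow_s = student(img1[win], img2[win])
    err = (flow_s - flow_fw[win]).abs().sum(1, keepdim=True)
    return (conf[win] * err).sum() / (conf[win].sum() + 1e-6)
```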
Related papers
- Self-Supervised Radio Pre-training: Toward Foundational Models for Spectrogram Learning [6.1339395157466425]
Foundational deep learning (DL) models are general models, trained on diverse and unlabelled datasets.
We introduce Masked Spectrogram Modeling, a novel self-supervised learning approach for pretraining foundational DL models on radio signals.
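As a rough, hedged sketch of the masked-modeling idea on spectrograms: hide a fraction of patches and train the model to reconstruct them. The encoder interface (a masked [B, 1, F, T] spectrogram mapped back to the same shape), patch size, and mask ratio are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def masked_spectrogram_loss(encoder, spec, patch=16, mask_ratio=0.6):
    """Masked-modeling sketch: zero out a random set of spectrogram patches
    and penalize reconstruction error only on the hidden patches."""
    B, C, Fq, T = spec.shape
    mask = (torch.rand(B, 1, Fq // patch, T // patch, device=spec.device)
            < mask_ratio).float()
    mask = F.interpolate(mask, size=(Fq, T), mode="nearest")   # patch-level mask
    recon = encoder(spec * (1.0 - mask))                       # encode visible content
    # Reconstruction loss restricted to the masked (hidden) patches.
    return ((recon - spec) ** 2 * mask).sum() / (mask.sum() * C + 1e-6)
```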
arXiv Detail & Related papers (2024-11-14T23:56:57Z)
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z)
- An Information Theoretic Approach to Machine Unlearning [45.600917449314444]
A key challenge in unlearning is forgetting the necessary data in a timely manner while preserving model performance.
In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten.
We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z)
- Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment [52.704331909850026]
We formulate a new knowledge distillation framework to transfer the knowledge from self-supervised pre-trained models to any other student network.
Inspired by the spirit of instance discrimination in self-supervised learning, we model the instance-instance relations by a graph formulation in the feature embedding space.
Our distillation scheme can be flexibly applied to transfer the self-supervised knowledge to enhance representation learning on various student networks.
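As a rough illustration of the idea, the sketch below builds an instance-instance similarity graph over a batch in both embedding spaces and pushes the student's graph toward the teacher's. The temperature, the KL objective, and the function name are illustrative choices, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def embedding_graph_distill(z_teacher, z_student, tau=0.1):
    """Relational distillation sketch: compare row-normalized similarity
    graphs built over the same batch in teacher and student embedding space.
    Teacher and student embedding dimensions may differ; the graphs are B x B."""
    zt = F.normalize(z_teacher, dim=1)          # [B, Dt]
    zs = F.normalize(z_student, dim=1)          # [B, Ds]
    g_t = F.softmax(zt @ zt.t() / tau, dim=1)       # teacher edge distributions
    g_s = F.log_softmax(zs @ zs.t() / tau, dim=1)   # student edge distributions (log)
    return F.kl_div(g_s, g_t, reduction="batchmean")

# Usage with random stand-in embeddings.
loss = embedding_graph_distill(torch.randn(32, 2048), torch.randn(32, 512))
```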
arXiv Detail & Related papers (2022-11-23T19:27:48Z)
- Self-Distillation for Further Pre-training of Transformers [83.84227016847096]
We propose self-distillation as a regularization for a further pre-training stage.
We empirically validate the efficacy of self-distillation on a variety of benchmark datasets for image and text classification tasks.
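A hedged sketch of what self-distillation as a regularizer for further pre-training can look like: a frozen snapshot of the model constrains its own continued training. The loop structure, the feature-matching term, and `task_loss_fn` (standing in for the further pre-training objective, e.g. masked prediction) are assumptions for illustration, not the paper's exact recipe.

```python
import copy
import torch
import torch.nn.functional as F

def further_pretrain_with_self_distillation(model, batches, optimizer,
                                            task_loss_fn, lam=1.0):
    """Continue pre-training `model` while regularizing its representations
    toward those of a frozen copy of itself (the self-distillation teacher)."""
    teacher = copy.deepcopy(model).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)

    for inputs in batches:
        with torch.no_grad():
            feat_t = teacher(inputs)             # frozen-teacher representation
        feat_s = model(inputs)                   # student representation
        loss = task_loss_fn(feat_s, inputs) + lam * F.mse_loss(feat_s, feat_t)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```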
arXiv Detail & Related papers (2022-09-30T02:25:12Z)
- Semi-Supervised Learning of Optical Flow by Flow Supervisor [16.406213579356795]
We propose a practical fine-tuning method to adapt a pretrained model to a target dataset without ground-truth flows.
This design is aimed at stable convergence and better accuracy over conventional self-supervision methods.
We achieve meaningful improvements over state-of-the-art optical flow models on Sintel and KITTI benchmarks.
arXiv Detail & Related papers (2022-07-21T06:11:52Z)
- Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gain in the presence of noisy and limited labels.
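A minimal sketch of single-stage online distillation between two self-supervised peers, assuming each model maps a batch of augmented views to an embedding. The symmetric KL over cross-view similarity distributions is one plausible instantiation; it is not the DoGo reference code, and each peer's own contrastive loss is omitted.

```python
import torch
import torch.nn.functional as F

def online_distillation_step(model_a, model_b, view1, view2, tau=0.1):
    """Two peers trained at the same time: each one's softened cross-view
    similarity scores supervise the other (knowledge is exchanged online,
    without a pre-trained teacher)."""
    za1 = F.normalize(model_a(view1), dim=1)
    za2 = F.normalize(model_a(view2), dim=1)
    zb1 = F.normalize(model_b(view1), dim=1)
    zb2 = F.normalize(model_b(view2), dim=1)

    sim_a = za1 @ za2.t() / tau          # [B, B] cross-view similarities, peer A
    sim_b = zb1 @ zb2.t() / tau          # peer B

    # Symmetric KL; detach the "teacher" side of each term.
    kd_ab = F.kl_div(F.log_softmax(sim_a, 1), F.softmax(sim_b.detach(), 1),
                     reduction="batchmean")
    kd_ba = F.kl_div(F.log_softmax(sim_b, 1), F.softmax(sim_a.detach(), 1),
                     reduction="batchmean")
    return kd_ab + kd_ba
```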
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
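As a toy illustration of the distill-to-code idea, the sketch below samples a learned 1-D curve, fits a piecewise-linear approximation on a handful of knots, and emits a small readable Python function. The evenly spaced knots and the emitted code layout are simplifying assumptions, not the paper's curve-fitting algorithm.

```python
import numpy as np

def distill_to_code(f, lo, hi, knots=6, name="score"):
    """Sample a black-box 1-D model `f` on [lo, hi], fit a piecewise-linear
    curve through `knots` evenly spaced points, and return the source of a
    human-readable Python function implementing that curve."""
    xs = np.linspace(lo, hi, knots)
    ys = np.array([f(x) for x in xs])
    lines = [f"def {name}(x):"]
    lines.append(f"    xs = {np.round(xs, 4).tolist()}")
    lines.append(f"    ys = {np.round(ys, 4).tolist()}")
    lines.append("    # Linear interpolation between the fitted knots,")
    lines.append("    # clamped at the ends of the sampled range.")
    lines.append("    import numpy as np")
    lines.append("    return float(np.interp(x, xs, ys))")
    return "\n".join(lines)

# Example: distill a black-box curve into code a reviewer can read.
print(distill_to_code(lambda x: 3.0 / (1.0 + np.exp(-x)), -5.0, 5.0))
```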
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Distilling Visual Priors from Self-Supervised Learning [24.79633121345066]
Convolutional Neural Networks (CNNs) are prone to overfit small training datasets.
We present a novel two-phase pipeline that leverages self-supervised learning and knowledge distillation to improve the generalization ability of CNN models for image classification under the data-deficient setting.
arXiv Detail & Related papers (2020-08-01T13:07:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.