MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge
Distillation
- URL: http://arxiv.org/abs/2211.06018v1
- Date: Fri, 11 Nov 2022 05:56:46 GMT
- Title: MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge
Distillation
- Authors: Lingtong Kong, Jie Yang
- Abstract summary: Current approaches impose an augmentation regularization term for continual self-supervision.
We propose a novel mutual distillation framework to transfer reliable knowledge back and forth between the teacher and student networks.
Our approach, termed MDFlow, achieves state-of-the-art real-time accuracy and generalization ability on challenging benchmarks.
- Score: 12.249680550252327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have shown that optical flow can be learned by deep networks
from unlabelled image pairs based on the brightness constancy assumption and a
smoothness prior. Current approaches additionally impose an augmentation
regularization term for continual self-supervision, which has proven effective
on difficult matching regions. However, this practice also amplifies the
inevitable mismatches of the unsupervised setting, blocking the learning
process from reaching the optimal solution. To break this dilemma, we propose a
novel mutual distillation framework that transfers reliable knowledge back and
forth between the teacher and student networks for alternate improvement.
Concretely, taking the estimates of an off-the-shelf unsupervised approach as
pseudo labels, our insight lies in defining a confidence selection mechanism
that extracts relatively good matches, and then adding diverse data
augmentations so that adequate and reliable knowledge is distilled from teacher
to student. Thanks to the decoupled nature of our method, we can choose a
stronger student architecture for sufficient learning. Finally, the improved
student predictions are adopted to transfer knowledge back to the efficient
teacher without additional cost in real deployment. Rather than formulating
this as a supervised task, we find that introducing an extra unsupervised term
for multi-target learning achieves the best final results.
Extensive experiments show that our approach, termed MDFlow, achieves
state-of-the-art real-time accuracy and generalization ability on challenging
benchmarks. Code is available at https://github.com/ltkong218/MDFlow.
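The abstract outlines the forward half of the mutual distillation: the teacher's estimates become pseudo labels, a confidence selection step keeps only reliable matches, the input pair is augmented (with the pseudo flow transformed consistently), and the stronger student is trained on the masked targets plus an extra unsupervised term. The sketch below is a minimal, hypothetical PyTorch rendering of such a step; the teacher interface returning a confidence map, the horizontal-flip augmentation, the threshold, and the smoothness stand-in for the unsupervised term are all assumptions for illustration, not the released MDFlow code.

```python
# Minimal sketch of a confidence-masked, augmentation-based distillation step
# (teacher -> student).  All names and interfaces here are illustrative
# assumptions; see https://github.com/ltkong218/MDFlow for the actual method.
import torch

def hflip_pair_and_flow(img1, img2, flow):
    """Horizontally flip an image pair and transform the flow consistently:
    spatial flip plus negation of the horizontal (u) component."""
    img1_f = torch.flip(img1, dims=[-1])
    img2_f = torch.flip(img2, dims=[-1])
    flow_f = torch.flip(flow, dims=[-1])
    flow_f = torch.cat([-flow_f[:, :1], flow_f[:, 1:]], dim=1)
    return img1_f, img2_f, flow_f

def distill_step(teacher, student, img1, img2, conf_thresh=0.95, alpha=1.0):
    # 1) Teacher pseudo labels plus a per-pixel confidence map (assumed API;
    #    in practice the confidence could come from e.g. forward-backward
    #    consistency or photometric error).
    with torch.no_grad():
        pseudo_flow, confidence = teacher(img1, img2)
    mask = (confidence > conf_thresh).float()        # keep reliable matches only

    # 2) Augment the inputs and transform the pseudo flow / mask the same way,
    #    so the selected matches stay valid targets under augmentation.
    img1_a, img2_a, pseudo_a = hflip_pair_and_flow(img1, img2, pseudo_flow)
    mask_a = torch.flip(mask, dims=[-1])

    # 3) Student is trained on the augmented pair, penalized only where the
    #    teacher's prediction was judged reliable.
    pred = student(img1_a, img2_a)
    err = (pred - pseudo_a).abs().mean(dim=1, keepdim=True)
    distill_loss = (mask_a * err).sum() / (mask_a.sum() + 1e-8)

    # 4) Extra unsupervised term, echoing the multi-target objective in the
    #    abstract; a first-order smoothness penalty stands in here for whatever
    #    unsupervised loss the method actually combines.
    unsup_loss = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().mean() + \
                 (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().mean()
    return distill_loss + alpha * unsup_loss
```

Per the abstract, the reverse transfer would reuse the better student predictions as targets for the lightweight teacher, again mixed with an unsupervised term rather than treated as a purely supervised task, so the deployed teacher improves without extra inference cost.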
Related papers
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
arXiv Detail & Related papers (2024-04-28T08:44:28Z) - Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge
Distillation [21.913044821863636]
We introduce a novel adaptive method for learning a sample-wise knowledge fusion ratio.
We exploit the correctness of both the teacher and the student, as well as how well the student mimics the teacher on each sample.
A simple neural network then learns the implicit mapping from the intra- and inter-sample relations to an adaptive, sample-wise knowledge fusion ratio (a hedged sketch of this idea appears after the list below).
arXiv Detail & Related papers (2023-12-22T23:16:13Z) - Uncertainty-aware Label Distribution Learning for Facial Expression
Recognition [13.321770808076398]
We propose a new uncertainty-aware label distribution learning method to improve the robustness of deep models against uncertainty and ambiguity.
Our method can be easily integrated into a deep network to obtain more training supervision and improve recognition accuracy.
arXiv Detail & Related papers (2022-09-21T15:48:41Z) - Contrastive Learning with Boosted Memorization [36.957895270908324]
Self-supervised learning has achieved great success in representation learning for visual and textual data.
Recent attempts at self-supervised long-tailed learning rebalance from either the loss perspective or the model perspective.
We propose a novel Boosted Contrastive Learning (BCL) method to enhance the long-tailed learning in the label-unaware context.
arXiv Detail & Related papers (2022-05-25T11:54:22Z) - Better Supervisory Signals by Observing Learning Paths [10.044413937134237]
We explain two existing label refining methods, label smoothing and knowledge distillation, in terms of our proposed criterion.
We observe the learning path, i.e., the trajectory of the model's predictions during training, for each training sample.
We find that the model can spontaneously refine "bad" labels through a "zig-zag" learning path, which occurs on both toy and real datasets.
arXiv Detail & Related papers (2022-03-04T18:31:23Z) - An Adaptive Framework for Learning Unsupervised Depth Completion [59.17364202590475]
We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
arXiv Detail & Related papers (2021-06-06T02:27:55Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z) - Knowledge Distillation Meets Self-Supervision [109.6400639148393]
Knowledge distillation involves extracting "dark knowledge" from a teacher network to guide the learning of a student network.
We show that the seemingly different self-supervision task can serve as a simple yet powerful solution.
By exploiting the similarity between those self-supervision signals as an auxiliary task, one can effectively transfer the hidden information from the teacher to the student.
arXiv Detail & Related papers (2020-06-12T12:18:52Z) - Learning by Analogy: Reliable Supervision from Transformations for
Unsupervised Optical Flow Estimation [83.23707895728995]
Unsupervised learning of optical flow has emerged as a promising alternative to supervised methods.
We present a framework to use more reliable supervision from transformations.
Our method consistently achieves a leap in performance on several benchmarks, with the best accuracy among deep unsupervised methods.
arXiv Detail & Related papers (2020-03-29T14:55:24Z) - Learning From Multiple Experts: Self-paced Knowledge Distillation for
Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
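For the "Less or More From Teacher" entry above, the sample-wise knowledge fusion ratio can be pictured as a tiny network that reads per-sample relations (proxies for teacher correctness, student correctness, and how closely the student already mimics the teacher) and outputs a weight that blends the distillation loss with the ground-truth loss. The sketch below is a hedged illustration under those assumptions; the exact features, network, and fusion used in that paper may differ.

```python
# Hypothetical sketch of a sample-wise knowledge fusion ratio ("Less or More
# From Teacher" above).  The per-sample features, the small MLP, and the
# KD/CE fusion are illustrative assumptions, not the authors' formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionRatioNet(nn.Module):
    """Maps per-sample relations to a fusion ratio alpha in (0, 1)."""
    def __init__(self, in_dim=3, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feats):                  # feats: (B, in_dim)
        return torch.sigmoid(self.mlp(feats))  # (B, 1)

def fused_loss(student_logits, teacher_logits, labels, ratio_net, T=4.0):
    # Per-sample relations: proxies for teacher/student correctness and for
    # how well the student mimics the teacher (simple assumed choices).
    with torch.no_grad():
        p_t = F.softmax(teacher_logits, dim=1)
        p_s = F.softmax(student_logits, dim=1)
        t_conf = p_t.gather(1, labels[:, None])
        s_conf = p_s.gather(1, labels[:, None])
        mimic = (p_t - p_s).abs().sum(dim=1, keepdim=True)
        feats = torch.cat([t_conf, s_conf, mimic], dim=1)
    alpha = ratio_net(feats).squeeze(1)        # (B,) sample-wise fusion ratio

    ce = F.cross_entropy(student_logits, labels, reduction="none")
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="none").sum(dim=1) * (T * T)
    return (alpha * kd + (1.0 - alpha) * ce).mean()
```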