Semi-Supervised Learning with Mutual Distillation for Monocular Depth Estimation
- URL: http://arxiv.org/abs/2203.09737v1
- Date: Fri, 18 Mar 2022 04:28:58 GMT
- Title: Semi-Supervised Learning with Mutual Distillation for Monocular Depth Estimation
- Authors: Jongbeom Baek, Gyeongnyeon Kim, and Seungryong Kim
- Abstract summary: We build two separate network branches, one for each loss, and distill them from each other through a mutual distillation loss function.
We conduct experiments to demonstrate the effectiveness of our framework over the latest methods and provide extensive ablation studies.
- Score: 27.782150368174413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a semi-supervised learning framework for monocular depth estimation. Existing semi-supervised methods inherit the limitations of both the sparse supervised loss and the unsupervised loss; we instead obtain the complementary advantages of the two by building a separate network branch for each loss and distilling the branches from each other through a mutual distillation loss function. We also propose applying different data augmentations to each branch, which improves robustness. We conduct experiments demonstrating the effectiveness of our framework over the latest methods and provide extensive ablation studies.
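The two-branch design described in the abstract can be summarized in a short sketch. The code below is illustrative only, assuming PyTorch: the L1 distances, the `unsupervised_loss_fn` placeholder (e.g., a photometric reprojection loss), the per-branch augmentations `aug_a`/`aug_b`, and the weight `lambda_md` are assumptions for exposition, not the authors' exact implementation.

```python
# Minimal sketch of two-branch training with mutual distillation
# (illustrative; not the paper's exact losses or hyperparameters).
import torch.nn.functional as F

def sparse_supervised_loss(pred, gt, valid_mask):
    # L1 loss only on pixels with ground-truth depth (e.g., projected LiDAR).
    return F.l1_loss(pred[valid_mask], gt[valid_mask])

def mutual_distillation_loss(pred_a, pred_b):
    # Each branch is pulled toward the other's detached prediction,
    # so the two branches act as teachers for each other.
    return F.l1_loss(pred_a, pred_b.detach()) + F.l1_loss(pred_b, pred_a.detach())

def training_step(sup_branch, unsup_branch, batch,
                  unsupervised_loss_fn, aug_a, aug_b, lambda_md=1.0):
    # Different (photometric-style) augmentations per branch improve robustness.
    img_a, img_b = aug_a(batch["image"]), aug_b(batch["image"])

    depth_a = sup_branch(img_a)    # branch trained with sparse supervision
    depth_b = unsup_branch(img_b)  # branch trained with an unsupervised loss

    loss = sparse_supervised_loss(depth_a, batch["gt_depth"], batch["valid_mask"])
    loss = loss + unsupervised_loss_fn(depth_b, batch)
    loss = loss + lambda_md * mutual_distillation_loss(depth_a, depth_b)
    return loss
```

Detaching the target prediction in the distillation term keeps each branch from back-propagating through the other, so each branch learns from the other's output rather than altering it.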
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of the embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss (a coding-rate-style sketch of the idea appears after this list).
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Provable Contrastive Continual Learning [7.6989463205452555]
We establish theoretical performance guarantees, which reveal how the performance of the model is bounded by training losses of previous tasks.
Inspired by our theoretical analysis of these guarantees, we propose a novel contrastive continual learning algorithm called CILA.
Our method shows great improvement on standard benchmarks and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2024-05-29T04:48:11Z)
- The Curse of Diversity in Ensemble-Based Exploration [7.209197316045156]
Training a diverse ensemble of data-sharing agents can significantly impair the performance of the individual ensemble members.
We name this phenomenon the curse of diversity.
We demonstrate the potential of representation learning to counteract the curse of diversity.
arXiv Detail & Related papers (2024-05-07T14:14:50Z)
- Robust Contrastive Learning With Theory Guarantee [25.57187964518637]
Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.
Our work develops rigorous theories to dissect and identify which components in the unsupervised loss can help improve the robust supervised loss.
arXiv Detail & Related papers (2023-11-16T08:39:58Z)
- Tuned Contrastive Learning [77.67209954169593]
We propose a novel contrastive loss function -- Tuned Contrastive Learning (TCL) loss.
TCL generalizes to multiple positives and negatives in a batch and offers parameters to tune and improve the gradient responses from hard positives and hard negatives.
We show how to extend TCL to the self-supervised setting and empirically compare it with various SOTA self-supervised learning methods.
arXiv Detail & Related papers (2023-05-18T03:26:37Z)
- Contrastive Bayesian Analysis for Deep Metric Learning [30.21464199249958]
We develop a contrastive Bayesian analysis to characterize and model the posterior probabilities of image labels conditioned on their feature similarity.
This contrastive Bayesian analysis leads to a new loss function for deep metric learning.
Our experimental results and ablation studies demonstrate that the proposed contrastive Bayesian metric learning method significantly improves the performance of deep metric learning.
arXiv Detail & Related papers (2022-10-10T02:24:21Z)
- Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for the classification of histopathology images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z)
- Deep Bregman Divergence for Contrastive Learning of Visual Representations [4.994260049719745]
Deep Bregman divergence measures the divergence between data points using neural networks, going beyond Euclidean distance.
We aim to enhance contrastive loss used in self-supervised learning by training additional networks based on functional Bregman divergence.
arXiv Detail & Related papers (2021-09-15T17:44:40Z)
- Unpaired Adversarial Learning for Single Image Deraining with Rain-Space Contrastive Constraints [61.40893559933964]
We develop an effective unpaired SID method, named CDR-GAN, which explores mutual properties of the unpaired exemplars in a contrastive learning manner within a GAN framework.
Our method performs favorably against existing unpaired deraining approaches on both synthetic and real-world datasets, and even outperforms several fully-supervised or semi-supervised models.
arXiv Detail & Related papers (2021-09-07T10:00:45Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Unsupervised Scale-consistent Depth Learning from Video [131.3074342883371]
We propose a monocular depth estimator SC-Depth, which requires only unlabelled videos for training.
Thanks to the capability of scale-consistent prediction, we show that our monocular-trained deep networks are readily integrated into the ORB-SLAM2 system.
The proposed hybrid Pseudo-RGBD SLAM shows compelling results on KITTI, and it generalizes well to the KAIST dataset without additional training.
arXiv Detail & Related papers (2021-05-25T02:17:56Z)
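As referenced in the Anti-Collapse Loss entry above, a coding-rate term can be used to penalize feature collapse. The sketch below assumes the standard rate-distortion form R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z) from the coding-rate literature; it illustrates the general idea only and is not the exact loss proposed in that paper.

```python
# Illustrative coding-rate-style anti-collapse regularizer (an assumption based
# on the rate-distortion form from the coding-rate literature; the cited
# paper's exact Anti-Collapse Loss may differ).
import torch
import torch.nn.functional as F

def coding_rate(z, eps=0.5):
    # z: (n, d) batch of L2-normalized embeddings.
    n, d = z.shape
    gram = z.T @ z * (d / (n * eps ** 2))          # (d, d)
    return 0.5 * torch.logdet(torch.eye(d, device=z.device) + gram)

def anti_collapse_penalty(embeddings, eps=0.5):
    # Collapsed embeddings span little volume and have a low coding rate,
    # so adding the negative rate to the training loss discourages collapse.
    return -coding_rate(F.normalize(embeddings, dim=1), eps)
```

In a deep metric learning setup this penalty would be added to the base embedding loss with a small weight, so that the batch's embeddings stay spread out instead of collapsing onto a few directions.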
This list is automatically generated from the titles and abstracts of the papers on this site.