KD-MVS: Knowledge Distillation Based Self-supervised Learning for
Multi-view Stereo
- URL: http://arxiv.org/abs/2207.10425v2
- Date: Sun, 20 Aug 2023 07:53:22 GMT
- Title: KD-MVS: Knowledge Distillation Based Self-supervised Learning for
Multi-view Stereo
- Authors: Yikang Ding, Qingtian Zhu, Xiangyue Liu, Wentao Yuan, Haotian Zhang
and Chi Zhang
- Abstract summary: Supervised multi-view stereo (MVS) methods have achieved remarkable progress in terms of reconstruction quality, but suffer from the challenge of collecting large-scale ground-truth depth.
We propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed KD-MVS.
- Score: 18.52931570395043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised multi-view stereo (MVS) methods have achieved remarkable progress
in terms of reconstruction quality, but suffer from the challenge of collecting
large-scale ground-truth depth. In this paper, we propose a novel
self-supervised training pipeline for MVS based on knowledge distillation,
termed KD-MVS, which mainly consists of self-supervised teacher training and
distillation-based student training. Specifically, the teacher model is trained
in a self-supervised fashion using both photometric and featuremetric
consistency. Then we distill the knowledge of the teacher model to the student
model through probabilistic knowledge transferring. With the supervision of
validated knowledge, the student model is able to outperform its teacher by a
large margin. Extensive experiments performed on multiple datasets show our
method can even outperform supervised methods.
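As a concrete illustration of the pipeline described above, the following is a minimal PyTorch-style sketch of a self-supervised teacher objective (photometric plus featuremetric consistency) and a probabilistic distillation loss for the student. The function names, tensor shapes, loss weighting, and the KL-divergence form of the knowledge transfer are illustrative assumptions, not the authors' exact implementation.

    # Illustrative sketch only: the loss forms below are assumptions, not the
    # paper's exact recipe.
    import torch
    import torch.nn.functional as F

    def photometric_loss(ref_img, warped_src, mask):
        # L1 consistency between the reference image and a source image
        # warped into the reference view using the predicted depth.
        return (mask * (ref_img - warped_src).abs()).sum() / mask.sum().clamp(min=1)

    def featuremetric_loss(ref_feat, warped_feat, mask):
        # The same consistency measured in a learned feature space, which is
        # more robust to lighting changes and textureless regions than raw pixels.
        return (mask * (ref_feat - warped_feat).abs()).sum() / mask.sum().clamp(min=1)

    def teacher_loss(ref_img, warped_src, ref_feat, warped_feat, mask, w_feat=1.0):
        # Self-supervised teacher objective: photometric + featuremetric terms.
        return photometric_loss(ref_img, warped_src, mask) + \
               w_feat * featuremetric_loss(ref_feat, warped_feat, mask)

    def distillation_loss(student_prob, teacher_prob, valid_mask, eps=1e-8):
        # Probabilistic knowledge transfer: pull the student's per-pixel depth
        # distribution toward the teacher's distribution with a KL divergence,
        # evaluated only where the teacher's pseudo label passes validation.
        kl = F.kl_div((student_prob + eps).log(), teacher_prob,
                      reduction="none").sum(dim=1)          # [B, H, W]
        return (valid_mask * kl).sum() / valid_mask.sum().clamp(min=1)

    if __name__ == "__main__":
        # Toy tensors: batch 2, 3-channel images, 8-channel features,
        # 32 depth hypotheses on a 16x16 grid.
        B, D, H, W = 2, 32, 16, 16
        ref_img, warped_src = torch.rand(B, 3, H, W), torch.rand(B, 3, H, W)
        ref_feat, warped_feat = torch.rand(B, 8, H, W), torch.rand(B, 8, H, W)
        mask = torch.ones(B, 1, H, W)
        teacher_prob = torch.softmax(torch.randn(B, D, H, W), dim=1)
        student_prob = torch.softmax(torch.randn(B, D, H, W), dim=1)
        print(teacher_loss(ref_img, warped_src, ref_feat, warped_feat, mask))
        print(distillation_loss(student_prob, teacher_prob, torch.ones(B, H, W)))

In a real MVS setting, the warped images and features and the validity mask would come from homography warping with the teacher's predicted depth and from cross-view pseudo-label validation, which this sketch leaves out.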
Related papers
- DMT: Comprehensive Distillation with Multiple Self-supervised Teachers [27.037140667247208]
We introduce Comprehensive Distillation with Multiple Self-supervised Teachers (DMT) for pretrained model compression.
Our experimental results on prominent benchmark datasets show that the proposed method significantly surpasses state-of-the-art competitors.
arXiv Detail & Related papers (2023-12-19T08:31:30Z)
- Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning [13.057037169495594]
We propose a Multi-mode Online Knowledge Distillation method (MOKD) to boost self-supervised visual representation learning.
In MOKD, two different models learn collaboratively in a self-supervised manner.
In addition, MOKD also outperforms existing SSL-KD methods for both the student and teacher models.
arXiv Detail & Related papers (2023-04-13T12:55:53Z)
- MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation [4.156535226615695]
We present a new method for self-supervised learning and knowledge distillation based on multi-views and multi-representations (MV-MR).
MV-MR is based on dependence between learnable embeddings from augmented and non-augmented views.
We show that the proposed method can be used for efficient self-supervised classification and model-agnostic knowledge distillation.
arXiv Detail & Related papers (2023-03-21T18:40:59Z)
- CMD: Self-supervised 3D Action Representation Learning with Cross-modal Mutual Distillation [130.08432609780374]
In 3D action recognition, there exists rich complementary information between skeleton modalities.
We propose a new Cross-modal Mutual Distillation (CMD) framework with the following designs.
Our approach outperforms existing self-supervised methods and sets a series of new records.
arXiv Detail & Related papers (2022-08-26T06:06:09Z)
- Learn From the Past: Experience Ensemble Knowledge Distillation [34.561007802532224]
We propose a novel knowledge distillation method that integrates the teacher's training experience for knowledge transfer.
We uniformly save a moderate number of intermediate models from the teacher's training process and then integrate the knowledge of these intermediate models with an ensemble technique.
A surprising finding is that strong ensemble teachers do not necessarily produce strong students; a minimal illustrative sketch of this checkpoint-ensemble idea appears after this list.
arXiv Detail & Related papers (2022-02-25T04:05:09Z)
- Semi-Online Knowledge Distillation [2.373824287636486]
Conventional knowledge distillation (KD) transfers knowledge from a large, well pre-trained teacher network to a small student network.
Deep mutual learning (DML) has been proposed to help student networks learn collaboratively and simultaneously.
We propose a Semi-Online Knowledge Distillation (SOKD) method that effectively improves the performance of the student and the teacher.
arXiv Detail & Related papers (2021-11-23T09:44:58Z)
- Collaborative Teacher-Student Learning via Multiple Knowledge Transfer [79.45526596053728]
We propose collaborative teacher-student learning via multiple knowledge transfer (CTSL-MKT).
It allows multiple students to learn knowledge from both individual instances and instance relations in a collaborative way.
The experiments and ablation studies on four image datasets demonstrate that the proposed CTSL-MKT significantly outperforms the state-of-the-art KD methods.
arXiv Detail & Related papers (2021-01-21T07:17:04Z)
- Reinforced Multi-Teacher Selection for Knowledge Distillation [54.72886763796232]
Knowledge distillation is a popular method for model compression.
Current methods assign each teacher model a fixed weight for the whole distillation, and most existing methods allocate an equal weight to every teacher.
In this paper, we observe that, due to the complexity of training examples and the differences in student model capability, learning differentially from teacher models can lead to better performance of the distilled student models.
arXiv Detail & Related papers (2020-12-11T08:56:39Z)
- Knowledge Distillation Meets Self-Supervision [109.6400639148393]
Knowledge distillation involves extracting "dark knowledge" from a teacher network to guide the learning of a student network.
We show that the seemingly different self-supervision task can serve as a simple yet powerful solution.
By exploiting the similarity between those self-supervision signals as an auxiliary task, one can effectively transfer the hidden information from the teacher to the student.
arXiv Detail & Related papers (2020-06-12T12:18:52Z)
- Heterogeneous Knowledge Distillation using Information Flow Modeling [82.83891707250926]
We propose a novel KD method that works by modeling the information flow through the various layers of the teacher model.
The proposed method overcomes the limitations of existing approaches by using an appropriate supervision scheme during the different phases of the training process.
arXiv Detail & Related papers (2020-05-02T06:56:56Z)
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
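As referenced in the "Learn From the Past: Experience Ensemble Knowledge Distillation" entry above, here is a minimal, hedged sketch of distilling from an ensemble of teacher checkpoints saved during training. The logit-averaging scheme, the temperature, and all names are assumptions made for illustration rather than that paper's exact method.

    # Illustrative sketch: average the softened predictions of several teacher
    # checkpoints saved during training and distill the ensemble into a student.
    # The simple logit-averaging scheme and all names are assumptions.
    import torch
    import torch.nn.functional as F

    def ensemble_soft_targets(checkpoint_models, x, temperature=4.0):
        # Average the temperature-softened probabilities of the saved
        # intermediate teacher models (the teacher's "experiences").
        with torch.no_grad():
            probs = [F.softmax(m(x) / temperature, dim=1) for m in checkpoint_models]
        return torch.stack(probs).mean(dim=0)

    def ensemble_kd_loss(student_logits, soft_targets, temperature=4.0):
        # Standard KD loss against the ensemble's soft targets.
        log_p = F.log_softmax(student_logits / temperature, dim=1)
        return F.kl_div(log_p, soft_targets, reduction="batchmean") * temperature ** 2

    if __name__ == "__main__":
        # Toy setup: three "checkpoints" of a tiny classifier and one student.
        teachers = [torch.nn.Linear(16, 10) for _ in range(3)]
        student = torch.nn.Linear(16, 10)
        x = torch.randn(4, 16)
        targets = ensemble_soft_targets(teachers, x)
        loss = ensemble_kd_loss(student(x), targets)
        print(loss.item())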