Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition
- URL: http://arxiv.org/abs/2409.05384v1
- Date: Mon, 9 Sep 2024 07:32:18 GMT
- Title: Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition
- Authors: Shiming Ge, Kangkai Zhang, Haolin Liu, Yingying Hua, Shengwei Zhao, Xin Jin, Hao Wen
- Abstract summary: We propose a teacher-student learning approach to facilitate low-resolution image recognition via hybrid order relational knowledge distillation.
The approach involves three streams: the teacher stream is pretrained to recognize high-resolution images with high accuracy, the student stream learns to identify low-resolution images by mimicking the teacher's behaviors, and an extra assistant stream is introduced as a bridge to help transfer knowledge from the teacher to the student.
- Score: 30.568519905346253
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In spite of the great success of recent deep models in many image recognition tasks, directly applying them to recognize low-resolution images may suffer from low accuracy due to the loss of informative details during resolution degradation. However, these images are still recognizable to subjects who are familiar with the corresponding high-resolution ones. Inspired by that, we propose a teacher-student learning approach to facilitate low-resolution image recognition via hybrid order relational knowledge distillation. The approach involves three streams: the teacher stream is pretrained to recognize high-resolution images with high accuracy, the student stream learns to identify low-resolution images by mimicking the teacher's behaviors, and an extra assistant stream is introduced as a bridge to help transfer knowledge from the teacher to the student. To extract sufficient knowledge for reducing the loss in accuracy, the learning of the student is supervised with multiple losses, which preserve the similarities in relational structures of various orders. In this way, the capability of recovering missing details of familiar low-resolution images can be effectively enhanced, leading to a better knowledge transfer. Extensive experiments on metric learning, low-resolution image classification and low-resolution face recognition tasks show the effectiveness of our approach, while using reduced models.
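To make the hybrid order idea concrete, here is a minimal PyTorch sketch of a multi-order relational distillation loss. The exact losses are not specified in this summary, so the decomposition into first-order (per-instance), second-order (pairwise distance) and third-order (triplet angle) terms follows the standard relational knowledge distillation recipe; all function names and weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def first_order_loss(f_s, f_t):
    # First order: each student feature directly mimics its teacher counterpart.
    return F.mse_loss(F.normalize(f_s, dim=1), F.normalize(f_t, dim=1))

def second_order_loss(f_s, f_t):
    # Second order: preserve the pairwise distance structure of the batch.
    def pdist(f):
        d = torch.cdist(f, f, p=2)
        return d / (d[d > 0].mean() + 1e-8)  # normalize scale across networks
    return F.smooth_l1_loss(pdist(f_s), pdist(f_t))

def third_order_loss(f_s, f_t):
    # Third order: preserve the angles formed by every triplet of samples.
    def angles(f):
        diff = F.normalize(f.unsqueeze(0) - f.unsqueeze(1), dim=2)  # (N, N, D)
        return torch.bmm(diff, diff.transpose(1, 2))                # (N, N, N) cosines
    return F.smooth_l1_loss(angles(f_s), angles(f_t))

def hybrid_order_kd_loss(f_student, f_teacher, w=(1.0, 1.0, 1.0)):
    # Teacher features are fixed targets; no gradient flows into the teacher.
    f_teacher = f_teacher.detach()
    return (w[0] * first_order_loss(f_student, f_teacher)
            + w[1] * second_order_loss(f_student, f_teacher)
            + w[2] * third_order_loss(f_student, f_teacher))
```

In training, f_student would be penultimate-layer features computed on low-resolution inputs and f_teacher the frozen teacher's features on the paired high-resolution inputs, with this term added to the usual classification loss; per the abstract, an assistant stream additionally bridges the transfer between teacher and student.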
Related papers
- Distilling Generative-Discriminative Representations for Very Low-Resolution Face Recognition [19.634712802639356]
Very low-resolution face recognition is challenging due to the loss of informative facial details during resolution degradation.
We propose a generative-discriminative representation distillation approach that combines generative representation with cross-resolution aligned knowledge distillation.
Our approach improves the recovery of the missing details in very low-resolution faces and achieves better knowledge transfer.
arXiv Detail & Related papers (2024-09-10T09:53:06Z)
- Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation [22.26932361388872]
We propose a cross-resolution relational contrastive distillation approach to facilitate low-resolution object recognition.
Our approach enables the student model to mimic the behavior of a well-trained teacher model.
In this manner, the capability of recovering missing details of familiar low-resolution objects can be effectively enhanced.
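As a rough illustration of the contrastive side of this entry, the sketch below pairs each student embedding of a low-resolution image with the teacher embedding of its high-resolution counterpart in an InfoNCE-style loss. The cited paper distills relational contrastive structure, whose exact form this summary does not give; the simpler instance-level variant here is an assumption for illustration, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def cross_resolution_contrastive_loss(z_student, z_teacher, temperature=0.1):
    # Positives: (low-res student, high-res teacher) embeddings of the same image.
    # Negatives: every other teacher embedding in the batch.
    z_s = F.normalize(z_student, dim=1)
    z_t = F.normalize(z_teacher.detach(), dim=1)  # teacher supplies fixed targets
    logits = z_s @ z_t.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z_s.size(0), device=z_s.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)
```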
arXiv Detail & Related papers (2024-09-04T09:21:13Z)
- Low-Resolution Face Recognition via Adaptable Instance-Relation Distillation [18.709870458307574]
Low-resolution face recognition is a challenging task due to the loss of informative details.
Recent approaches have proven that high-resolution clues can well guide low-resolution face recognition via proper knowledge transfer.
We propose an adaptable instance-relation distillation approach to facilitate low-resolution face recognition.
arXiv Detail & Related papers (2024-09-03T16:53:34Z)
- One Step Diffusion-based Super-Resolution with Time-Aware Distillation [60.262651082672235]
Diffusion-based image super-resolution (SR) methods have shown promise in reconstructing high-resolution images with fine details from low-resolution counterparts.
Recent techniques have been devised to enhance the sampling efficiency of diffusion-based SR models via knowledge distillation.
We propose a time-aware diffusion distillation method, named TAD-SR, to accomplish effective and efficient image super-resolution.
arXiv Detail & Related papers (2024-08-14T11:47:22Z)
- Attention to detail: inter-resolution knowledge distillation [1.927195358774599]
Development of computer vision solutions for gigapixel images in digital pathology is hampered by the large size of whole slide images.
Recent literature has proposed using knowledge distillation to enhance the model performance at reduced image resolutions.
In this work, we propose to distill this information by incorporating attention maps during training.
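A common way to realize this, sketched below under the usual attention-transfer recipe, is to match channel-pooled spatial attention maps between a teacher seen at full resolution and a student seen at a reduced one. The paper's exact formulation is not given in this summary, so treat the pooling and normalization choices here as assumptions.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Collapse (N, C, H, W) features to a (N, 1, H, W) spatial attention map
    # via channel-wise energy, as in standard attention transfer.
    return feat.pow(2).mean(dim=1, keepdim=True)

def attention_distill_loss(f_student, f_teacher):
    a_s = attention_map(f_student)
    a_t = attention_map(f_teacher).detach()  # teacher map is a fixed target
    if a_s.shape[-2:] != a_t.shape[-2:]:
        # Streams run at different resolutions: resample the teacher's map.
        a_t = F.interpolate(a_t, size=a_s.shape[-2:], mode='bilinear',
                            align_corners=False)
    a_s = F.normalize(a_s.flatten(1), dim=1)
    a_t = F.normalize(a_t.flatten(1), dim=1)
    return (a_s - a_t).pow(2).sum(dim=1).mean()
```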
arXiv Detail & Related papers (2024-01-11T16:16:20Z)
- Exploring Deep Learning Image Super-Resolution for Iris Recognition [50.43429968821899]
We propose the use of two deep learning single-image super-resolution approaches: Stacked Auto-Encoders (SAE) and Convolutional Neural Networks (CNN).
We validate the methods with a database of 1,872 near-infrared iris images; quality assessment and recognition experiments show the superiority of the deep learning approaches over the compared algorithms.
arXiv Detail & Related papers (2023-11-02T13:57:48Z)
- One-stage Low-resolution Text Recognition with High-resolution Knowledge Transfer [53.02254290682613]
Current solutions for low-resolution text recognition typically rely on a two-stage pipeline.
We propose an efficient and effective knowledge distillation framework to achieve multi-level knowledge transfer.
Experiments show that the proposed one-stage pipeline significantly outperforms super-resolution based two-stage frameworks.
arXiv Detail & Related papers (2023-08-05T02:33:45Z)
- Cross-resolution Face Recognition via Identity-Preserving Network and Knowledge Distillation [12.090322373964124]
Cross-resolution face recognition is a challenging problem for modern deep face recognition systems.
This paper proposes a new approach that enforces the network to focus on the discriminative information stored in the low-frequency components of a low-resolution image.
arXiv Detail & Related papers (2023-03-15T14:52:46Z)
- Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution [82.89021683451432]
We propose a model-agnostic meta knowledge distillation method under the teacher-student architecture for the single image super-resolution task.
Experiments conducted on various single image super-resolution datasets demonstrate that our proposed method outperforms existing knowledge-representation-based distillation methods.
arXiv Detail & Related papers (2022-07-18T02:41:04Z)
- Multi-Scale Aligned Distillation for Low-Resolution Detection [68.96325141432078]
This paper focuses on boosting the performance of low-resolution models by distilling knowledge from a high- or multi-resolution model.
On several instance-level detection tasks and datasets, the low-resolution models trained via our approach perform competitively with high-resolution models trained via conventional multi-scale training.
arXiv Detail & Related papers (2021-09-14T12:53:35Z)
- Learning Student-Friendly Teacher Networks for Knowledge Distillation [50.11640959363315]
We propose a novel knowledge distillation approach to facilitate the transfer of dark knowledge from a teacher to a student.
Contrary to most existing methods, which rely on effective training of student models given pretrained teachers, we aim to learn teacher models that are friendly to students.
arXiv Detail & Related papers (2021-02-12T07:00:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.