Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation
- URL: http://arxiv.org/abs/2409.02555v1
- Date: Wed, 4 Sep 2024 09:21:13 GMT
- Title: Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation
- Authors: Kangkai Zhang, Shiming Ge, Ruixin Shi, Dan Zeng
- Abstract summary: We propose a cross-resolution relational contrastive distillation approach to facilitate low-resolution object recognition.
Our approach enables the student model to mimic the behavior of a well-trained teacher model.
In this manner, the capability of recovering missing details of familiar low-resolution objects can be effectively enhanced.
- Score: 22.26932361388872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognizing objects in low-resolution images is a challenging task due to the lack of informative details. Recent studies have shown that knowledge distillation approaches can effectively transfer knowledge from a high-resolution teacher model to a low-resolution student model by aligning cross-resolution representations. However, these approaches still face limitations when the recognized objects exhibit significant representation discrepancies between training and testing images. In this study, we propose a cross-resolution relational contrastive distillation approach to facilitate low-resolution object recognition. Our approach enables the student model to mimic the behavior of a well-trained teacher model that delivers high accuracy in identifying high-resolution objects. To extract sufficient knowledge, the student's learning is supervised with a contrastive relational distillation loss, which preserves the similarities of various relational structures in the contrastive representation space. In this manner, the capability of recovering missing details of familiar low-resolution objects can be effectively enhanced, leading to better knowledge transfer. Extensive experiments on low-resolution object classification and low-resolution face recognition clearly demonstrate the effectiveness and adaptability of our approach.
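As a concrete illustration, here is a minimal PyTorch-style sketch of a cross-resolution relational contrastive distillation loss, assuming a frozen high-resolution teacher and a trainable low-resolution student. The relation-matching and instance-contrastive terms, names, and temperature below are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def relational_contrastive_loss(t_feat, s_feat, temperature=0.1):
    """Align the student's pairwise relation structure with the teacher's.

    t_feat: (B, D) teacher embeddings of high-resolution images.
    s_feat: (B, D) student embeddings of the low-resolution counterparts.
    """
    t = F.normalize(t_feat, dim=1)
    s = F.normalize(s_feat, dim=1)

    # Pairwise similarity (relation) matrices over the batch.
    t_rel = t @ t.t() / temperature  # (B, B)
    s_rel = s @ s.t() / temperature  # (B, B)

    # Treat each row as a distribution over relations to the other samples
    # and make the student's relation distribution match the teacher's.
    t_prob = F.softmax(t_rel, dim=1)
    s_logp = F.log_softmax(s_rel, dim=1)
    relation_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    # Instance-level contrastive term (InfoNCE-style): each low-resolution
    # student embedding should be closest to the teacher embedding of the
    # same image among all images in the batch.
    logits = s @ t.t() / temperature
    targets = torch.arange(s.size(0), device=s.device)
    instance_loss = F.cross_entropy(logits, targets)

    return relation_loss + instance_loss
```

In training, such a distillation term would typically be added to the standard classification loss computed on the low-resolution inputs.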
Related papers
- Distilling Generative-Discriminative Representations for Very Low-Resolution Face Recognition [19.634712802639356]
Very low-resolution face recognition is challenging due to the loss of informative facial details caused by resolution degradation.
We propose a generative-discriminative representation distillation approach that combines generative representation with cross-resolution aligned knowledge distillation.
Our approach improves the recovery of the missing details in very low-resolution faces and achieves better knowledge transfer.
arXiv Detail & Related papers (2024-09-10T09:53:06Z)
- Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition [30.568519905346253]
We propose a teacher-student learning approach to facilitate low-resolution image recognition via hybrid order relational knowledge distillation.
The approach comprises three streams: the teacher stream is pretrained to recognize high-resolution images with high accuracy, the student stream learns to identify low-resolution images by mimicking the teacher's behavior, and an extra assistant stream is introduced as a bridge to help transfer knowledge from the teacher to the student.
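A hypothetical sketch of such a three-stream setup, where an assistant trained on intermediate-resolution inputs bridges the teacher and the student; the bridging losses and stream interfaces below are assumptions for illustration, not that paper's exact method.

```python
import torch
import torch.nn.functional as F

def three_stream_loss(teacher, assistant, student, hr, mid, lr):
    """teacher/assistant/student map images to embeddings of equal size."""
    with torch.no_grad():
        t = teacher(hr)      # high-resolution stream, pretrained and frozen
    a = assistant(mid)       # intermediate-resolution bridge stream
    s = student(lr)          # low-resolution stream

    # Knowledge flows teacher -> assistant -> student, with a direct
    # teacher -> student path as well.
    loss_ta = F.mse_loss(a, t)
    loss_as = F.mse_loss(s, a.detach())
    loss_ts = F.mse_loss(s, t)
    return loss_ta + loss_as + loss_ts
```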
arXiv Detail & Related papers (2024-09-09T07:32:18Z)
- Low-Resolution Face Recognition via Adaptable Instance-Relation Distillation [18.709870458307574]
Low-resolution face recognition is a challenging task due to the lack of informative details.
Recent approaches have proven that high-resolution clues can well guide low-resolution face recognition via proper knowledge transfer.
We propose an adaptable instance-relation distillation approach to facilitate low-resolution face recognition.
arXiv Detail & Related papers (2024-09-03T16:53:34Z)
- One Step Diffusion-based Super-Resolution with Time-Aware Distillation [60.262651082672235]
Diffusion-based image super-resolution (SR) methods have shown promise in reconstructing high-resolution images with fine details from low-resolution counterparts.
Recent techniques have been devised to enhance the sampling efficiency of diffusion-based SR models via knowledge distillation.
We propose a time-aware diffusion distillation method, named TAD-SR, to accomplish effective and efficient image super-resolution.
arXiv Detail & Related papers (2024-08-14T11:47:22Z)
- Collaborative Knowledge Infusion for Low-resource Stance Detection [83.88515573352795]
Target-related knowledge is often needed to assist stance detection models.
We propose a collaborative knowledge infusion approach for low-resource stance detection tasks.
arXiv Detail & Related papers (2024-03-28T08:32:14Z)
- One-stage Low-resolution Text Recognition with High-resolution Knowledge Transfer [53.02254290682613]
Current solutions for low-resolution text recognition typically rely on a two-stage pipeline.
We propose an efficient and effective knowledge distillation framework to achieve multi-level knowledge transfer.
Experiments show that the proposed one-stage pipeline significantly outperforms super-resolution based two-stage frameworks.
arXiv Detail & Related papers (2023-08-05T02:33:45Z)
- Cross-resolution Face Recognition via Identity-Preserving Network and Knowledge Distillation [12.090322373964124]
Cross-resolution face recognition is a challenging problem for modern deep face recognition systems.
This paper proposes a new approach that enforces the network to focus on the discriminative information stored in the low-frequency components of a low-resolution image.
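As a rough illustration of restricting attention to low-frequency content, here is a hypothetical FFT-based low-pass filter that could precede feature extraction; the cutoff value and the circular frequency mask are assumptions, not that paper's design.

```python
import torch

def low_frequency(x, cutoff=0.25):
    """Keep only the low-frequency content of an image batch (B, C, H, W)."""
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    _, _, H, W = x.shape
    yy, xx = torch.meshgrid(
        torch.linspace(-0.5, 0.5, H),
        torch.linspace(-0.5, 0.5, W),
        indexing="ij",
    )
    # Circular mask that zeroes out high-frequency coefficients.
    mask = ((yy ** 2 + xx ** 2).sqrt() <= cutoff).to(x.device, x.dtype)
    freq = freq * mask
    return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real
```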
arXiv Detail & Related papers (2023-03-15T14:52:46Z)
- Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution [82.89021683451432]
We propose a model-agnostic meta knowledge distillation method under the teacher-student architecture for the single image super-resolution task.
Experiments conducted on various single image super-resolution datasets demonstrate that our proposed method outperforms existing knowledge-representation-based distillation methods.
arXiv Detail & Related papers (2022-07-18T02:41:04Z)
- Multi-Scale Aligned Distillation for Low-Resolution Detection [68.96325141432078]
This paper focuses on boosting the performance of low-resolution models by distilling knowledge from a high- or multi-resolution model.
On several instance-level detection tasks and datasets, the low-resolution models trained via our approach perform competitively with high-resolution models trained via conventional multi-scale training.
arXiv Detail & Related papers (2021-09-14T12:53:35Z)
- Cross-Resolution Adversarial Dual Network for Person Re-Identification and Beyond [59.149653740463435]
Person re-identification (re-ID) aims at matching images of the same person across camera views.
Due to varying distances between cameras and persons of interest, resolution mismatch can be expected.
We propose a novel generative adversarial network to address cross-resolution person re-ID.
arXiv Detail & Related papers (2020-02-19T07:21:38Z)