Efficient Low-Resolution Face Recognition via Bridge Distillation
- URL: http://arxiv.org/abs/2409.11786v1
- Date: Wed, 18 Sep 2024 08:10:35 GMT
- Title: Efficient Low-Resolution Face Recognition via Bridge Distillation
- Authors: Shiming Ge, Shengwei Zhao, Chenyu Li, Yu Zhang, Jia Li
- Abstract summary: We propose a bridge distillation approach to turn a complex face model pretrained on private high-resolution faces into a light-weight one for low-resolution face recognition.
Experimental results show that the student model performs impressively in recognizing low-resolution faces with only 0.21M parameters and 0.057MB memory.
- Score: 40.823152928253776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition in the wild is now advancing towards light-weight models, fast inference speed and resolution-adapted capability. In this paper, we propose a bridge distillation approach to turn a complex face model pretrained on private high-resolution faces into a light-weight one for low-resolution face recognition. In our approach, such a cross-dataset resolution-adapted knowledge transfer problem is solved via two-step distillation. In the first step, we conduct cross-dataset distillation to transfer the prior knowledge from private high-resolution faces to public high-resolution faces and generate compact and discriminative features. In the second step, the resolution-adapted distillation is conducted to further transfer the prior knowledge to synthetic low-resolution faces via multi-task learning. By learning low-resolution face representations and mimicking the adapted high-resolution knowledge, a light-weight student model can be constructed with high efficiency and promising accuracy in recognizing low-resolution faces. Experimental results show that the student model performs impressively in recognizing low-resolution faces with only 0.21M parameters and 0.057MB memory. Meanwhile, its speed reaches up to 14,705, ~934 and 763 faces per second on GPU, CPU and mobile phone, respectively.
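To make the two-step pipeline concrete, here is a minimal PyTorch sketch; the `teacher`, `adaptor`, `student`, and `classifier` modules and the unweighted sum of losses are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def cross_dataset_distill_loss(teacher, adaptor, public_hr_faces):
    """Step 1 (sketch): transfer the teacher's knowledge, learned on private
    high-resolution faces, to public high-resolution faces by training a
    compact adaptor to mimic the frozen teacher's features."""
    with torch.no_grad():
        t_feat = teacher(public_hr_faces)        # frozen teacher features
    a_feat = adaptor(public_hr_faces)            # compact, discriminative features
    return F.mse_loss(a_feat, t_feat)

def resolution_adapted_distill_loss(adaptor, student, classifier,
                                    hr_faces, lr_faces, labels):
    """Step 2 (sketch): multi-task learning on synthetic low-resolution faces.
    The student both mimics the adapted high-resolution knowledge and learns
    identity classification."""
    with torch.no_grad():
        target = adaptor(hr_faces)               # adapted HR knowledge (frozen)
    s_feat = student(lr_faces)                   # low-resolution representation
    mimic = F.mse_loss(s_feat, target)           # knowledge-mimicking task
    recog = F.cross_entropy(classifier(s_feat), labels)  # recognition task
    return mimic + recog                         # multi-task objective
```

In practice the two loss terms would carry tuned weights; the sketch keeps them equal for brevity.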
Related papers
- Distilling Generative-Discriminative Representations for Very Low-Resolution Face Recognition [19.634712802639356]
Very low-resolution face recognition is challenging due to the loss of informative facial details caused by resolution degradation.
We propose a generative-discriminative representation distillation approach that combines generative representation with cross-resolution aligned knowledge distillation.
Our approach improves the recovery of the missing details in very low-resolution faces and achieves better knowledge transfer.
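A minimal sketch of how a generative branch and cross-resolution alignment could be combined into one objective (the loss forms and equal weighting are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def gen_disc_distill_loss(student_feat, teacher_feat, recovered_hr, target_hr):
    # Generative term: encourage recovery of missing facial details.
    gen = F.l1_loss(recovered_hr, target_hr)
    # Discriminative term: align LR student features with HR teacher features.
    align = 1.0 - F.cosine_similarity(student_feat, teacher_feat, dim=1).mean()
    return gen + align
```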
arXiv Detail & Related papers (2024-09-10T09:53:06Z)
- Low-Resolution Face Recognition via Adaptable Instance-Relation Distillation [18.709870458307574]
Low-resolution face recognition is a challenging task due to the absence of informative facial details.
Recent approaches have shown that high-resolution clues can effectively guide low-resolution face recognition via proper knowledge transfer.
We propose an adaptable instance-relation distillation approach to facilitate low-resolution face recognition.
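A common way to realize instance- and relation-level distillation, shown as a hedged sketch (the paper's adaptable weighting is omitted here):

```python
import torch
import torch.nn.functional as F

def instance_relation_distill_loss(student_feat, teacher_feat):
    # Instance level: match each sample's embedding to the teacher's.
    instance = F.mse_loss(student_feat, teacher_feat)
    # Relation level: match the pairwise similarity structure of the batch.
    s = F.normalize(student_feat, dim=1)
    t = F.normalize(teacher_feat, dim=1)
    relation = F.mse_loss(s @ s.t(), t @ t.t())
    return instance + relation
```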
arXiv Detail & Related papers (2024-09-03T16:53:34Z)
- One Step Diffusion-based Super-Resolution with Time-Aware Distillation [60.262651082672235]
Diffusion-based image super-resolution (SR) methods have shown promise in reconstructing high-resolution images with fine details from low-resolution counterparts.
Recent techniques have been devised to enhance the sampling efficiency of diffusion-based SR models via knowledge distillation.
We propose a time-aware diffusion distillation method, named TAD-SR, to accomplish effective and efficient image super-resolution.
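One plausible reading of "time-aware" distillation, sketched with an assumed linear timestep weighting (not necessarily TAD-SR's actual scheme):

```python
import torch
import torch.nn.functional as F

def time_aware_distill_loss(student_out, teacher_out, t, t_max=1000):
    # Weight each sample's mimicking loss by its diffusion timestep,
    # emphasizing the noisier steps (assumption for illustration).
    w = t.float() / t_max
    per_sample = F.mse_loss(student_out, teacher_out,
                            reduction='none').flatten(1).mean(dim=1)
    return (w * per_sample).mean()
```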
arXiv Detail & Related papers (2024-08-14T11:47:22Z)
- Exploring Deep Learning Image Super-Resolution for Iris Recognition [50.43429968821899]
We propose the use of two deep learning single-image super-resolution approaches: Stacked Auto-Encoders (SAE) and Convolutional Neural Networks (CNN).
We validate the methods with a database of 1,872 near-infrared iris images, with quality assessment and recognition experiments showing the superiority of deep learning approaches over the compared algorithms.
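For flavor, a tiny CNN super-resolution model in the spirit of SRCNN (illustrative only; not the paper's exact architecture), taking a bicubically upscaled low-resolution iris image and refining it:

```python
import torch
import torch.nn as nn

class TinySRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):
        return x + self.body(x)  # predict the high-frequency residual

# sr = TinySRCNN()(torch.randn(1, 1, 128, 128))  # same-size refined output
```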
arXiv Detail & Related papers (2023-11-02T13:57:48Z)
- One-stage Low-resolution Text Recognition with High-resolution Knowledge Transfer [53.02254290682613]
Current solutions for low-resolution text recognition typically rely on a two-stage pipeline.
We propose an efficient and effective knowledge distillation framework to achieve multi-level knowledge transfer.
Experiments show that the proposed one-stage pipeline significantly outperforms super-resolution based two-stage frameworks.
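Multi-level knowledge transfer is commonly implemented as feature hints plus softened-logit distillation; a generic sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def multilevel_kd_loss(s_feats, t_feats, s_logits, t_logits, T=4.0):
    # Feature level: hint losses at several network depths.
    feat = sum(F.mse_loss(s, t) for s, t in zip(s_feats, t_feats))
    # Logit level: temperature-softened distillation.
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction='batchmean') * T * T
    return feat + soft
```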
arXiv Detail & Related papers (2023-08-05T02:33:45Z)
- Cross-resolution Face Recognition via Identity-Preserving Network and Knowledge Distillation [12.090322373964124]
Cross-resolution face recognition is a challenging problem for modern deep face recognition systems.
This paper proposes a new approach that forces the network to focus on the discriminative information stored in the low-frequency components of a low-resolution image.
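A simple stand-in for extracting the low-frequency component that such a network could be trained on (the paper's filtering may differ; a depthwise box blur is assumed here):

```python
import torch
import torch.nn.functional as F

def low_frequency(x, kernel_size=5):
    # Depthwise box blur keeps only low-frequency image content.
    pad = kernel_size // 2
    c = x.size(1)
    weight = torch.full((c, 1, kernel_size, kernel_size),
                        1.0 / kernel_size ** 2, device=x.device)
    return F.conv2d(F.pad(x, [pad] * 4, mode='reflect'), weight, groups=c)
```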
arXiv Detail & Related papers (2023-03-15T14:52:46Z)
- Efficient High-Resolution Deep Learning: A Survey [90.76576712433595]
Cameras in modern devices such as smartphones, satellites and medical equipment are capable of capturing very high resolution images and videos.
Such high-resolution data often need to be processed by deep learning models for cancer detection, automated road navigation, weather prediction, surveillance, optimizing agricultural processes and many other applications.
Using high-resolution images and videos as direct inputs for deep learning models creates many challenges, as it inflates model parameter counts, computation cost, inference latency and GPU memory consumption.
Several works in the literature propose better alternatives to deal with the challenges of high-resolution data and improve accuracy and speed while complying with hardware limitations.
arXiv Detail & Related papers (2022-07-26T17:13:53Z)
- Octuplet Loss: Make Face Recognition Robust to Image Resolution [5.257115841810258]
We propose a novel combination of triplet losses to improve robustness against image resolution.
We leverage the relationship between high-resolution images and their synthetically down-sampled variants jointly with their identity labels.
Fine-tuning several state-of-the-art approaches with our method proves that we can significantly boost performance for cross-resolution (high-to-low resolution) face verification.
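The octuplet idea can be sketched as four triplet losses tied across resolutions (margins and equal weighting are assumptions):

```python
import torch
import torch.nn.functional as F

def octuplet_loss(hr_a, hr_p, hr_n, lr_a, lr_p, lr_n, margin=0.5):
    # Embeddings of high-resolution (hr_*) and down-sampled (lr_*) images
    # for anchor (a), positive (p) and negative (n) samples.
    trip = lambda a, p, n: F.triplet_margin_loss(a, p, n, margin=margin)
    return (trip(hr_a, hr_p, hr_n) +  # HR anchor vs. HR pairs
            trip(hr_a, lr_p, lr_n) +  # HR anchor vs. LR pairs
            trip(lr_a, hr_p, hr_n) +  # LR anchor vs. HR pairs
            trip(lr_a, lr_p, lr_n))   # LR anchor vs. LR pairs
```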
arXiv Detail & Related papers (2022-07-14T08:22:58Z)
- Multi-Scale Aligned Distillation for Low-Resolution Detection [68.96325141432078]
This paper focuses on boosting the performance of low-resolution models by distilling knowledge from a high- or multi-resolution model.
On several instance-level detection tasks and datasets, the low-resolution models trained via our approach perform competitively with high-resolution models trained via conventional multi-scale training.
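A minimal sketch of aligning a higher-resolution teacher's feature map to the low-resolution student before distilling (the paper's alignment module is more elaborate):

```python
import torch
import torch.nn.functional as F

def aligned_distill_loss(student_feat, teacher_feat):
    # Resize the teacher's spatially larger feature map to the student's size.
    t = F.interpolate(teacher_feat, size=student_feat.shape[-2:],
                      mode='bilinear', align_corners=False)
    return F.mse_loss(student_feat, t)
```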
arXiv Detail & Related papers (2021-09-14T12:53:35Z)