PETALface: Parameter Efficient Transfer Learning for Low-resolution Face Recognition
- URL: http://arxiv.org/abs/2412.07771v1
- Date: Tue, 10 Dec 2024 18:59:45 GMT
- Title: PETALface: Parameter Efficient Transfer Learning for Low-resolution Face Recognition
- Authors: Kartik Narayan, Nithin Gopalakrishnan Nair, Jennifer Xu, Rama Chellappa, Vishal M. Patel
- Abstract summary: PETALface is the first work to leverage the power of PEFT for low-resolution face recognition.
We introduce two low-rank adaptation modules to the backbone, with weights adjusted based on the input image quality to account for the difference in quality between the gallery and probe images.
Experiments demonstrate that the proposed method outperforms full fine-tuning on low-resolution datasets while preserving performance on high-resolution and mixed-quality datasets.
- Score: 54.642714288448744
- License:
- Abstract: Pre-training on large-scale datasets and utilizing margin-based loss functions have been highly successful in training models for high-resolution face recognition. However, these models struggle with low-resolution face datasets, in which the faces lack the facial attributes necessary for distinguishing different faces. Full fine-tuning on low-resolution datasets, a naive method for adapting the model, yields inferior performance due to catastrophic forgetting of pre-trained knowledge. Additionally, the domain difference between high-resolution (HR) gallery images and low-resolution (LR) probe images in low-resolution datasets leads to poor convergence when a single model must adapt to both gallery and probe images after fine-tuning. To this end, we propose PETALface, a Parameter-Efficient Transfer Learning approach for low-resolution face recognition. Through PETALface, we attempt to solve both of the aforementioned problems. (1) We solve catastrophic forgetting by leveraging the power of parameter-efficient fine-tuning (PEFT). (2) We introduce two low-rank adaptation modules to the backbone, with weights adjusted based on the input image quality to account for the difference in quality between the gallery and probe images. To the best of our knowledge, PETALface is the first work leveraging the power of PEFT for low-resolution face recognition. Extensive experiments demonstrate that the proposed method outperforms full fine-tuning on low-resolution datasets while preserving performance on high-resolution and mixed-quality datasets, all while using only 0.48% of the parameters. Code: https://kartik-3004.github.io/PETALface/
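To make the quality-adaptive adapter idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: a frozen linear layer carries two low-rank (LoRA-style) branches, and a per-sample quality score, which in PETALface would come from an image-quality estimate, blends their contributions so that high-quality gallery images and low-quality probe images rely on different adapters. The class name, the rank, and the quality convention are hypothetical.

```python
# Hedged sketch (not the authors' code): a linear layer with two LoRA branches whose
# contributions are blended by a per-sample image-quality weight, mirroring the idea of
# quality-dependent adapters for HR gallery vs. LR probe images.
import torch
import torch.nn as nn


class DualLoRALinear(nn.Module):
    """Frozen pretrained linear layer plus two low-rank adapters (hypothetical names)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # keep the pretrained weights frozen
        self.base.bias.requires_grad_(False)
        # One adapter intended for high-quality (gallery) inputs, one for low-quality (probe) inputs.
        self.lora_hq_a = nn.Linear(in_features, rank, bias=False)
        self.lora_hq_b = nn.Linear(rank, out_features, bias=False)
        self.lora_lq_a = nn.Linear(in_features, rank, bias=False)
        self.lora_lq_b = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_hq_b.weight)  # adapters start as a zero delta
        nn.init.zeros_(self.lora_lq_b.weight)

    def forward(self, x: torch.Tensor, quality: torch.Tensor) -> torch.Tensor:
        # quality in [0, 1]: 1 means high-quality input, 0 means low-quality (assumed convention)
        alpha = quality.view(-1, 1)
        delta_hq = self.lora_hq_b(self.lora_hq_a(x))
        delta_lq = self.lora_lq_b(self.lora_lq_a(x))
        return self.base(x) + alpha * delta_hq + (1.0 - alpha) * delta_lq


if __name__ == "__main__":
    layer = DualLoRALinear(512, 512, rank=4)
    feats = torch.randn(8, 512)
    quality = torch.rand(8)  # e.g. from an off-the-shelf image-quality model
    print(layer(feats, quality).shape)  # torch.Size([8, 512])
```

Only the adapter parameters are trainable in this sketch, which is in the spirit of the small trainable-parameter budget quoted above, though the exact placement and rank of the adapters in PETALface are not specified here.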
Related papers
- Perceptual-Distortion Balanced Image Super-Resolution is a Multi-Objective Optimization Problem [23.833099288826045]
Training Single-Image Super-Resolution (SISR) models with pixel-based regression losses can achieve high scores on distortion metrics.
However, they often result in blurry images due to insufficient recovery of high-frequency details.
We propose a novel method that incorporates Multi-Objective Optimization (MOO) into the training process of SISR models to balance perceptual quality and distortion.
arXiv Detail & Related papers (2024-09-05T02:14:04Z)
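As a rough illustration of balancing the two objectives in the entry above, a scalarized loss with a sweep over the trade-off weight can be written as below. The Sobel-based "perceptual" term is a stand-in assumption, not the paper's actual perceptual metric or its multi-objective optimization procedure.

```python
# Hedged sketch: scalarize a pixel-wise distortion loss and a stand-in "perceptual" loss on
# high-frequency (edge) content, and sweep the trade-off weight to trace a crude
# distortion/perception front.
import torch
import torch.nn.functional as F


def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Cheap high-frequency proxy (assumption; real work would use a learned perceptual metric)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(-1, -2)
    gray = img.mean(dim=1, keepdim=True)
    return torch.cat([F.conv2d(gray, kx, padding=1), F.conv2d(gray, ky, padding=1)], dim=1)


def balanced_loss(sr: torch.Tensor, hr: torch.Tensor, lam: float) -> torch.Tensor:
    distortion = F.l1_loss(sr, hr)                            # favors high PSNR
    perceptual = F.l1_loss(sobel_edges(sr), sobel_edges(hr))  # favors sharp detail
    return (1.0 - lam) * distortion + lam * perceptual


if __name__ == "__main__":
    sr, hr = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    for lam in (0.0, 0.5, 1.0):  # sweep the trade-off weight
        print(lam, balanced_loss(sr, hr, lam).item())
```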
- Assessing UHD Image Quality from Aesthetics, Distortions, and Saliency [51.36674160287799]
We design a multi-branch deep neural network (DNN) to assess the quality of UHD images from three perspectives.
Aesthetic features are extracted from low-resolution images downsampled from the UHD ones.
Technical distortions are measured using a fragment image composed of mini-patches cropped from UHD images.
The salient content of UHD images is detected and cropped to extract quality-aware features from the salient regions.
arXiv Detail & Related papers (2024-09-01T15:26:11Z)
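A minimal sketch of the three-view design described in the entry above, with assumed layer sizes and a center crop standing in for the saliency detector:

```python
# Hedged sketch of the three-view idea: aesthetics from a downsampled image, distortions from
# a cropped mini-patch, and a center crop standing in for the detected salient region.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ThreeBranchIQA(nn.Module):
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.aesthetic, self.distortion, self.saliency = branch(), branch(), branch()
        self.head = nn.Linear(3 * feat_dim, 1)  # fuse the three views into one quality score

    def forward(self, uhd: torch.Tensor) -> torch.Tensor:
        low = F.interpolate(uhd, size=224, mode="bilinear", align_corners=False)  # aesthetics view
        h, w = uhd.shape[-2:]
        patch = uhd[..., :224, :224]  # a mini-patch for technical distortions
        center = uhd[..., h // 2 - 112 : h // 2 + 112, w // 2 - 112 : w // 2 + 112]  # saliency stand-in
        feats = torch.cat([self.aesthetic(low), self.distortion(patch), self.saliency(center)], dim=1)
        return self.head(feats)


if __name__ == "__main__":
    model = ThreeBranchIQA()
    print(model(torch.rand(1, 3, 512, 512)).shape)  # torch.Size([1, 1])
```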
- OmniSSR: Zero-shot Omnidirectional Image Super-Resolution using Stable Diffusion Model [6.83367289911244]
Omnidirectional images (ODIs) are commonly used in real-world visual tasks, and high-resolution ODIs help improve the performance of related visual tasks.
Most existing super-resolution methods for ODIs use end-to-end learning strategies, resulting in inferior realism of the generated images.
arXiv Detail & Related papers (2024-04-16T06:39:37Z)
- MetaF2N: Blind Image Super-Resolution by Learning Efficient Model Adaptation from Faces [51.42949911178461]
We propose a method dubbed MetaF2N to fine-tune model parameters for adapting to the whole natural image in a meta-learning framework.
Considering the gaps between the recovered faces and ground-truths, we deploy a MaskNet for adaptively predicting loss weights at different positions to reduce the impact of low-confidence areas.
arXiv Detail & Related papers (2023-09-15T02:45:21Z)
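A small sketch of the MaskNet idea from the entry above, under the assumption that it predicts per-pixel weights for an L1 reconstruction loss; the architecture and names are illustrative only.

```python
# Hedged sketch: a small MaskNet predicts per-pixel weights from the recovered face and the
# ground truth, down-weighting low-confidence regions in the reconstruction loss used for adaptation.
import torch
import torch.nn as nn


class MaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # weights in (0, 1)
        )

    def forward(self, recovered: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([recovered, gt], dim=1))


def weighted_recon_loss(recovered, gt, mask_net):
    weights = mask_net(recovered, gt)
    return (weights * (recovered - gt).abs()).mean()


if __name__ == "__main__":
    mask_net = MaskNet()
    rec, gt = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
    print(weighted_recon_loss(rec, gt, mask_net).item())
```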
- Perception-Distortion Balanced ADMM Optimization for Single-Image Super-Resolution [29.19388490351459]
We propose a novel super-resolution model with a low-frequency constraint (LFc-SR).
We introduce an ADMM-based alternating optimization method for the non-trivial learning of the constrained model.
Experiments show that our method achieves state-of-the-art performance without cumbersome post-processing procedures.
arXiv Detail & Related papers (2022-08-05T05:37:55Z)
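One plausible form of the low-frequency constraint from the entry above, sketched with a box filter as the low-pass operator; the ADMM-based alternating optimization itself is not reproduced, and the exact constraint used in the paper may differ.

```python
# Hedged sketch: encourage the low-pass content of the super-resolved image to match the
# upsampled LR input, one possible reading of a low-frequency constraint.
import torch
import torch.nn.functional as F


def low_pass(img: torch.Tensor, k: int = 9) -> torch.Tensor:
    """Box-filter low-pass as a simple stand-in for the paper's low-frequency extractor."""
    kernel = torch.full((img.shape[1], 1, k, k), 1.0 / (k * k))
    return F.conv2d(img, kernel, padding=k // 2, groups=img.shape[1])


def lf_constraint(sr: torch.Tensor, lr: torch.Tensor) -> torch.Tensor:
    lr_up = F.interpolate(lr, size=sr.shape[-2:], mode="bicubic", align_corners=False)
    return F.mse_loss(low_pass(sr), low_pass(lr_up))


if __name__ == "__main__":
    sr, lr = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 32, 32)
    print(lf_constraint(sr, lr).item())
```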
- AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127]
We introduce another aspect of adaptiveness in the loss function, namely the image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality.
Our method, AdaFace, improves the face recognition performance over the state-of-the-art (SoTA) on four datasets.
arXiv Detail & Related papers (2022-04-03T01:23:41Z)
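A heavily simplified sketch of a quality-adaptive margin in the spirit of the entry above, using the feature norm as a quality proxy; the published AdaFace margin functions are more elaborate than this, and the scaling scheme below is an assumption.

```python
# Hedged, simplified sketch: the per-sample feature norm serves as an image-quality proxy and
# scales the margin applied to the target logit before a softmax cross-entropy loss.
import torch
import torch.nn.functional as F


def adaptive_margin_loss(embeddings, class_weights, labels, s=64.0, m=0.4):
    norms = embeddings.norm(dim=1)
    quality = ((norms - norms.mean()) / (norms.std() + 1e-6)).clamp(-1, 1)  # normalized norm as quality proxy
    cos = F.normalize(embeddings, dim=1) @ F.normalize(class_weights, dim=1).t()  # cosine logits
    margin = m * (1.0 + quality)  # margin grows with the quality proxy (one possible adaptive scheme)
    cos = cos.clone()
    idx = torch.arange(len(labels))
    cos[idx, labels] = cos[idx, labels] - margin
    return F.cross_entropy(s * cos, labels)


if __name__ == "__main__":
    emb = torch.randn(8, 512)
    cls_weights = torch.randn(100, 512)
    labels = torch.randint(0, 100, (8,))
    print(adaptive_margin_loss(emb, cls_weights, labels).item())
```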
- Uncovering the Over-smoothing Challenge in Image Super-Resolution: Entropy-based Quantification and Contrastive Optimization [67.99082021804145]
We propose an explicit solution to the COO problem, called Detail Enhanced Contrastive Loss (DECLoss).
DECLoss utilizes the clustering property of contrastive learning to directly reduce the variance of the potential high-resolution distribution.
We evaluate DECLoss on multiple super-resolution benchmarks and demonstrate that it improves the perceptual quality of PSNR-oriented models.
arXiv Detail & Related papers (2022-01-04T08:30:09Z)
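An InfoNCE-style patch contrastive term is one way to realize the clustering idea mentioned in the entry above; the details below (patch embedding by flattening, the temperature value) are assumptions, not the DECLoss formulation.

```python
# Hedged sketch: pull each super-resolved patch toward its HR counterpart and away from other
# HR patches in the batch, a contrastive way to tighten the spread of plausible outputs.
import torch
import torch.nn.functional as F


def patch_contrastive_loss(sr_patches, hr_patches, temperature: float = 0.1):
    # sr_patches, hr_patches: (N, C, h, w) aligned patch pairs
    z_sr = F.normalize(sr_patches.flatten(1), dim=1)
    z_hr = F.normalize(hr_patches.flatten(1), dim=1)
    logits = z_sr @ z_hr.t() / temperature  # (N, N): diagonal entries are the positives
    targets = torch.arange(len(z_sr))
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    sr = torch.rand(16, 3, 32, 32)
    hr = torch.rand(16, 3, 32, 32)
    print(patch_contrastive_loss(sr, hr).item())
```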
- Boosting High-Level Vision with Joint Compression Artifacts Reduction and Super-Resolution [10.960291115491504]
We generate an artifact-free high-resolution image from a low-resolution one compressed with an arbitrary quality factor.
A context-aware joint CAR and SR neural network (CAJNN) integrates both local and non-local features to solve CAR and SR in one stage.
A deep reconstruction network is adopted to predict high-quality, high-resolution images.
arXiv Detail & Related papers (2020-10-18T04:17:08Z)
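A toy one-stage network for the joint task described in the entry above, mapping a compressed LR image straight to an HR estimate; the real CAJNN additionally fuses non-local features, which this sketch omits.

```python
# Hedged sketch: a small feature extractor followed by a PixelShuffle upsampler performs
# artifact reduction and super-resolution in a single stage.
import torch
import torch.nn as nn


class OneStageCARSR(nn.Module):
    def __init__(self, channels: int = 32, scale: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a higher-resolution image
        )

    def forward(self, lr_compressed: torch.Tensor) -> torch.Tensor:
        return self.upsample(self.features(lr_compressed))


if __name__ == "__main__":
    net = OneStageCARSR(scale=2)
    print(net(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 128, 128])
```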
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
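The latent-space search behind the entry above can be sketched as follows, with an untrained toy generator standing in for the pretrained face GAN that PULSE actually searches; the loss and optimizer settings are illustrative only.

```python
# Hedged sketch: optimize a latent code so the downscaled output of a generator matches the
# observed LR input (downscaling consistency), the core idea of latent-space exploration.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in generator: latent (B, 64) -> image (B, 3, 128, 128). PULSE uses a pretrained face GAN.
generator = nn.Sequential(
    nn.Linear(64, 3 * 128 * 128), nn.Tanh(), nn.Unflatten(1, (3, 128, 128)),
)
for p in generator.parameters():
    p.requires_grad_(False)

lr_image = torch.rand(1, 3, 32, 32)          # observed low-resolution face
z = torch.randn(1, 64, requires_grad=True)   # latent code to optimize
opt = torch.optim.Adam([z], lr=0.05)

for step in range(100):
    hr = generator(z)
    downscaled = F.interpolate(hr, size=32, mode="bilinear", align_corners=False)
    loss = F.mse_loss(downscaled, lr_image)  # downscaling-consistency objective
    opt.zero_grad()
    loss.backward()
    opt.step()

print(hr.shape, loss.item())  # the optimized output is a 128x128 candidate consistent with the LR input
```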
This list is automatically generated from the titles and abstracts of the papers on this site.