Test your samples jointly: Pseudo-reference for image quality evaluation
- URL: http://arxiv.org/abs/2304.03766v1
- Date: Fri, 7 Apr 2023 17:59:27 GMT
- Title: Test your samples jointly: Pseudo-reference for image quality evaluation
- Authors: Marcelin Tworski and Stéphane Lathuilière
- Abstract summary: We propose to jointly model different images depicting the same content to improve the precision of quality estimation.
Our experiments show that at test-time, our method successfully combines the features from multiple images depicting the same new content, improving estimation quality.
- Score: 3.2634122554914
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we address the well-known image quality assessment problem but,
in contrast to existing approaches that predict image quality independently
for every image, we propose to jointly model different images depicting the
same content to improve the precision of quality estimation. This proposal is
motivated by the idea that multiple distorted images can provide information to
disambiguate image features related to content and quality. To this aim, we
combine the feature representations from the different images to estimate a
pseudo-reference that we use to enhance score prediction. Our experiments show
that at test-time, our method successfully combines the features from multiple
images depicting the same new content, improving estimation quality.
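The core idea, combining features from multiple distorted views of the same content into a pseudo-reference used to condition score prediction, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the mean-pooling aggregation, and the concatenation-based score head are all choices made for the sketch.

```python
import numpy as np

def pseudo_reference_scores(features, score_head):
    """Estimate quality scores for a group of distorted images
    that depict the same content.

    features: (n_images, d) array of per-image feature vectors.
    score_head: callable mapping a (2*d,) vector to a scalar score.
    """
    # Combine features across the group into a pseudo-reference;
    # mean pooling is one simple aggregation choice (an assumption here).
    pseudo_ref = features.mean(axis=0)
    # Score each image jointly with the shared pseudo-reference.
    return [score_head(np.concatenate([f, pseudo_ref])) for f in features]
```

At test time, all available images of the new content are passed together, so each per-image prediction benefits from the group-level pseudo-reference.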
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z) - Quality-guided Skin Tone Enhancement for Portrait Photography [46.55401398142088]
We propose a quality-guided image enhancement paradigm that enables image enhancement models to learn the distribution of images with various quality ratings.
Our method can adjust the skin tone corresponding to different quality requirements.
arXiv Detail & Related papers (2024-06-22T13:36:30Z) - Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and from the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z) - Progressive Feature Fusion Network for Enhancing Image Quality Assessment [8.06731856250435]
We propose a new image quality assessment framework to decide which image is better in an image group.
To capture the subtle differences, a fine-grained network is adopted to acquire multi-scale features.
Experimental results show that compared with the current mainstream image quality assessment methods, the proposed network can achieve more accurate image quality assessment.
arXiv Detail & Related papers (2024-01-13T06:34:32Z) - ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z) - Feedback is Needed for Retakes: An Explainable Poor Image Notification Framework for the Visually Impaired [6.0158981171030685]
Our framework first determines the quality of images and then generates captions using only those images that are determined to be of high quality.
If image quality is low, the user is notified of the detected flaws and prompted to retake the photo; this cycle repeats until the input image is deemed to be of high quality.
arXiv Detail & Related papers (2022-11-17T09:22:28Z) - Multi-Scale Features and Parallel Transformers Based Image Quality Assessment [0.6554326244334866]
We propose a new architecture for image quality assessment using transformer networks and multi-scale feature extraction.
Our experimentation on various datasets, including the PIPAL dataset, demonstrates that the proposed integration technique outperforms existing algorithms.
arXiv Detail & Related papers (2022-04-20T20:38:23Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
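A contrastive pairwise objective of the kind described above can be sketched with a minimal InfoNCE-style loss; this is a generic illustration of contrastive pairing, not CONTRIQUE's exact formulation, and the function name, temperature value, and pairing scheme are assumptions for the sketch.

```python
import numpy as np

def contrastive_pairwise_loss(z_a, z_b, temperature=0.1):
    """Minimal InfoNCE-style loss over paired embeddings.

    z_a, z_b: (n, d) arrays; row i of z_a and row i of z_b form a
    positive pair (e.g., two views sharing the same distortion class),
    while all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T / temperature  # (n, n) similarity matrix
    # Cross-entropy with the diagonal (matching pairs) as targets.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls each positive pair together in embedding space while pushing it away from the other samples in the batch, yielding quality-aware representations without subjective labels.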
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of image restoration (IR) models, i.e., degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z) - Compare and Reweight: Distinctive Image Captioning Using Similar Images Sets [52.3731631461383]
We aim to improve the distinctiveness of image captions through training with sets of similar images.
Our metric shows that the human annotations of each image are not equivalent based on distinctiveness.
arXiv Detail & Related papers (2020-07-14T07:40:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.