An Image Quality Assessment Dataset for Portraits
- URL: http://arxiv.org/abs/2304.05772v1
- Date: Wed, 12 Apr 2023 11:30:06 GMT
- Title: An Image Quality Assessment Dataset for Portraits
- Authors: Nicolas Chahine, Ana-Stefania Calarasanu, Davide Garcia-Civiero, Theo
Cayla, Sira Ferradans, Jean Ponce (NYU)
- Abstract summary: This paper introduces PIQ23, a portrait-specific IQA dataset of 5116 images of 50 scenarios acquired by 100 smartphones.
The dataset includes individuals of various genders and ethnicities who have given explicit and informed consent for their photographs to be used in public research.
An in-depth statistical analysis of these annotations allows us to evaluate their consistency over PIQ23.
- Score: 0.9786690381850354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Year after year, the demand for ever-better smartphone photos continues to
grow, in particular in the domain of portrait photography. Manufacturers thus
use perceptual quality criteria throughout the development of smartphone
cameras. This costly procedure can be partially replaced by automated
learning-based methods for image quality assessment (IQA). Due to its
subjective nature, it is necessary to estimate and guarantee the consistency of
the IQA process, a characteristic lacking in the mean opinion scores (MOS)
widely used for crowdsourcing IQA. In addition, existing blind IQA (BIQA)
datasets pay little attention to the difficulty of cross-content assessment,
which may degrade the quality of annotations. This paper introduces PIQ23, a
portrait-specific IQA dataset of 5116 images of 50 predefined scenarios
acquired by 100 smartphones, covering a high variety of brands, models, and use
cases. The dataset includes individuals of various genders and ethnicities who
have given explicit and informed consent for their photographs to be used in
public research. It is annotated by pairwise comparisons (PWC) collected from
over 30 image quality experts for three image attributes: face detail
preservation, face target exposure, and overall image quality. An in-depth
statistical analysis of these annotations allows us to evaluate their
consistency over PIQ23. Finally, we show through an extensive comparison with
existing baselines that semantic information (image context) can be used to
improve IQA predictions. The dataset along with the proposed statistical
analysis and BIQA algorithms are available:
https://github.com/DXOMARK-Research/PIQ2023
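PIQ23 is annotated by pairwise comparisons (PWC) rather than mean opinion scores. A common way to convert such comparisons into per-image quality scores is the Bradley-Terry model; the sketch below is illustrative only and is not necessarily the statistical analysis used in the paper (the `bradley_terry` function and its MM-update loop are assumptions for demonstration):

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Estimate latent quality scores from a pairwise win-count matrix.

    wins[i][j] = number of times image i was preferred over image j.
    Returns relative strengths normalized to sum to 1.
    Uses the standard minorization-maximization (MM) update:
        p_i <- W_i / sum_j n_ij / (p_i + p_j)
    where W_i is the total wins of item i and n_ij the number of
    comparisons between items i and j.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    p = np.ones(n)  # start from uniform strengths
    for _ in range(n_iter):
        n_ij = wins + wins.T            # comparisons per pair (diagonal is 0)
        W = wins.sum(axis=1)            # total wins per item
        denom = (n_ij / (p[:, None] + p[None, :])).sum(axis=1)
        p = W / denom
        p /= p.sum()                    # normalize for identifiability
    return p

# Toy example: item 0 is preferred most often, item 2 least.
scores = bradley_terry([[0, 8, 9],
                        [2, 0, 7],
                        [1, 3, 0]])
```

With this win matrix, the recovered strengths are ordered `scores[0] > scores[1] > scores[2]`, matching the observed preference frequencies.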
Related papers
- UHD-IQA Benchmark Database: Pushing the Boundaries of Blind Photo Quality Assessment [4.563959812257119]
We introduce a novel Image Quality Assessment dataset comprising 6073 UHD-1 (4K) images, annotated at a fixed width of 3840 pixels.
Ours focuses on highly aesthetic photos of high technical quality, filling a gap in the literature.
The dataset is annotated with perceptual quality ratings obtained through a crowdsourcing study.
arXiv Detail & Related papers (2024-06-25T11:30:31Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Descriptive Image Quality Assessment in the Wild [25.503311093471076]
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild)
Our method includes a multi-functional IQA task paradigm that encompasses both assessment and comparison tasks, brief and detailed responses, full-reference and non-reference scenarios.
arXiv Detail & Related papers (2024-05-29T07:49:15Z)
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA)
We utilize two backbone networks (i.e., Swin Transformer-B) to extract quality-aware features from the entire portrait image and from the facial region cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z)
- Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z)
- Generalized Portrait Quality Assessment [26.8378202089832]
This paper presents a learning-based approach to portrait quality assessment (PQA)
The proposed approach is validated by extensive experiments on the PIQ23 benchmark.
The source code of FHIQA will be made publicly available on the PIQ23 GitHub repository.
arXiv Detail & Related papers (2024-02-14T13:47:18Z)
- Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective [93.56647950778357]
Blind image quality assessment (BIQA) predicts the human perception of image quality without any reference information.
We develop a general and automated multitask learning scheme for BIQA to exploit auxiliary knowledge from other tasks.
arXiv Detail & Related papers (2023-03-27T07:58:09Z)
- Going the Extra Mile in Face Image Quality Assessment: A Novel Database and Model [42.05084438912876]
We introduce the largest annotated IQA database developed to date, which contains 20,000 human faces.
We propose a novel deep learning model to accurately predict face image quality, which, for the first time, explores the use of generative priors for IQA.
arXiv Detail & Related papers (2022-07-11T14:28:18Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Parameterized Image Quality Score Distribution Prediction [40.397816495489295]
We describe image quality using a parameterized distribution rather than a mean opinion score (MOS).
An objective method is also proposed to predict the image quality score distribution (IQSD).
arXiv Detail & Related papers (2022-03-02T08:13:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.