Quality Evaluation of Arbitrary Style Transfer: Subjective Study and
Objective Metric
- URL: http://arxiv.org/abs/2208.00623v1
- Date: Mon, 1 Aug 2022 05:50:58 GMT
- Title: Quality Evaluation of Arbitrary Style Transfer: Subjective Study and
Objective Metric
- Authors: Hangwei Chen, Feng Shao, Xiongli Chai, Yuese Gu, Qiuping Jiang,
Xiangchao Meng, Yo-Sung Ho
- Abstract summary: We propose a new sparse representation-based image quality evaluation metric (SRQE) to measure the quality of arbitrary style transfer (AST) images.
Experimental results on the AST-IQAD have demonstrated the superiority of the proposed method.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Arbitrary neural style transfer is a vital topic with research value and
industrial application prospects, which strives to render the structure of one
image using the style of another. Recent research has devoted great effort
to the task of arbitrary style transfer (AST) to improve the stylization
quality. However, there are very few explorations of the quality evaluation
of AST images, even though it could potentially guide the design of different
algorithms. In this paper, we first construct a new AST image quality
assessment database (AST-IQAD) that consists of 150 content-style image pairs and
the corresponding 1200 stylized images produced by eight typical AST
algorithms. Then, a subjective study is conducted on our AST-IQAD database,
which obtains subjective rating scores for all stylized images along three
subjective dimensions, i.e., content preservation (CP), style resemblance
(SR), and overall visual quality (OV). To quantitatively measure the quality of an AST
image, we propose a new sparse representation-based image quality evaluation
metric (SRQE), which computes quality using sparse feature similarity.
Experimental results on AST-IQAD demonstrate the superiority of the
proposed method. The dataset and source code will be released at
https://github.com/Hangwei-Chen/AST-IQAD-SRQE
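The abstract states that SRQE scores an AST image via the similarity of sparse features. The sketch below illustrates that idea only in broad strokes, under assumptions not taken from the paper: a random normalized dictionary, a simple hard-thresholded top-k encoder in place of the authors' actual sparse coder, and mean cosine similarity between corresponding sparse codes as the quality score.

```python
import numpy as np

def sparse_codes(patches, dictionary, k=5):
    """Encode each patch as a k-sparse coefficient vector over the dictionary.
    (Hard-thresholded projection; SRQE's actual encoder may differ.)"""
    proj = patches @ dictionary.T                    # (n_patches, n_atoms)
    codes = np.zeros_like(proj)
    topk = np.argsort(-np.abs(proj), axis=1)[:, :k]  # keep the k largest atoms
    rows = np.arange(proj.shape[0])[:, None]
    codes[rows, topk] = proj[rows, topk]
    return codes

def sparse_feature_similarity(x, y, eps=1e-8):
    """Mean cosine similarity between corresponding sparse code vectors."""
    num = np.sum(x * y, axis=1)
    den = np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1) + eps
    return float(np.mean(num / den))

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 49))           # 64 atoms over 7x7 patches (assumed sizes)
D /= np.linalg.norm(D, axis=1, keepdims=True)
content = rng.standard_normal((100, 49))    # stand-in for content-image patches
stylized = content + 0.1 * rng.standard_normal((100, 49))  # stand-in for AST output
score = sparse_feature_similarity(sparse_codes(content, D),
                                  sparse_codes(stylized, D))
print(round(score, 3))                      # near 1 when structure is preserved
```

A full metric would compute such similarities separately against the content and style references, matching the CP and SR dimensions of the subjective study.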
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z) - UHD-IQA Benchmark Database: Pushing the Boundaries of Blind Photo Quality Assessment [4.563959812257119]
We introduce a novel Image Quality Assessment dataset comprising 6073 UHD-1 (4K) images, annotated at a fixed width of 3840 pixels.
Our dataset focuses on highly aesthetic photos of high technical quality, filling a gap in the literature.
The dataset is annotated with perceptual quality ratings obtained through a crowdsourcing study.
arXiv Detail & Related papers (2024-06-25T11:30:31Z) - Descriptive Image Quality Assessment in the Wild [25.503311093471076]
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild)
Our method includes a multi-functional IQA task paradigm that encompasses both assessment and comparison tasks, brief and detailed responses, and full-reference and no-reference scenarios.
arXiv Detail & Related papers (2024-05-29T07:49:15Z) - AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z) - Pairwise Comparisons Are All You Need [22.798716660911833]
Blind image quality assessment (BIQA) approaches often fall short in real-world scenarios due to their reliance on a generic quality standard applied uniformly across diverse images.
This paper introduces PICNIQ, a pairwise comparison framework designed to bypass the limitations of conventional BIQA.
By employing psychometric scaling algorithms, PICNIQ transforms pairwise comparisons into just-objectionable-difference (JOD) quality scores, offering a granular and interpretable measure of image quality.
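The PICNIQ blurb describes psychometric scaling that turns pairwise comparisons into interval quality scores. As a generic illustration of that step (not PICNIQ's actual solver), the sketch below applies Thurstone Case V scaling to a hypothetical win-count matrix; the resulting scores are on a relative scale, like JOD values.

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins):
    """Convert a pairwise win-count matrix into interval scale values
    (Thurstone Case V; PICNIQ's psychometric solver may differ)."""
    wins = np.asarray(wins, dtype=float)
    trials = wins + wins.T
    # Win proportion of i over j; clip away from 0/1 to keep z-scores finite.
    p = np.where(trials > 0, wins / np.maximum(trials, 1), 0.5)
    p = np.clip(p, 0.05, 0.95)
    z = np.vectorize(NormalDist().inv_cdf)(p)  # probit of win proportions
    np.fill_diagonal(z, 0.0)
    return z.mean(axis=1)                      # row means give scale values

# Hypothetical comparison counts among three images (rows beat columns).
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = thurstone_case_v(wins)
print(scores.argmax())  # the image winning most comparisons scores highest
```

Only score differences are meaningful on such a scale; an anchoring convention (as in JOD) fixes the units.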
arXiv Detail & Related papers (2024-03-13T23:43:36Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
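DeepDC's title points at distance correlation over pre-trained DNN features. The following is a generic empirical distance-correlation implementation, not the paper's pipeline; the "features" here are random stand-ins for pooled DNN activations.

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two feature sets (rows = samples)."""
    def centered_dist(a):
        # Pairwise Euclidean distances, double-centered.
        d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
        return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()
    A, B = centered_dist(x), centered_dist(y)
    dcov2 = (A * B).mean()                         # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(max(dcov2, 0.0) / denom)) if denom > 0 else 0.0

rng = np.random.default_rng(1)
ref = rng.standard_normal((50, 8))                 # stand-in reference features
dist = ref + 0.05 * rng.standard_normal((50, 8))   # mildly distorted features
dcor = distance_correlation(ref, dist)
print(round(dcor, 3))                              # close to 1 for similar features
```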
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - SPQE: Structure-and-Perception-Based Quality Evaluation for Image Super-Resolution [24.584839578742237]
Super-resolution (SR) techniques have greatly improved the visual quality of images by enhancing their resolution.
This also calls for an efficient SR image quality assessment (SR-IQA) to evaluate those algorithms and their generated images.
In emerging deep-learning-based SR, a generated high-quality, visually pleasing image may have a different structure from its corresponding low-quality image.
arXiv Detail & Related papers (2022-05-07T07:52:55Z) - Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z) - Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
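The CFIQA blurb says the distorted stimuli are produced by mixing reference images in pairs. A minimal way to realize such superimposition is an alpha blend, sketched below; the paper's exact mixing weights and procedure are assumptions here.

```python
import numpy as np

def superimpose(scene_virtual, scene_real, alpha=0.5):
    """Blend a virtual scene over a real one to create a 'confusing' stimulus.
    (Plain alpha blend; CFIQA's actual mixing procedure may differ.)"""
    return alpha * scene_virtual + (1.0 - alpha) * scene_real

a = np.zeros((4, 4))          # stand-in virtual scene
b = np.ones((4, 4))           # stand-in real scene
mixed = superimpose(a, b)
print(mixed[0, 0])            # 0.5
```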
arXiv Detail & Related papers (2022-04-11T07:03:06Z) - A combined full-reference image quality assessment approach based on convolutional activation maps [0.0]
The goal of full-reference image quality assessment (FR-IQA) is to predict the quality of an image as perceived by human observers using its pristine reference counterpart.
In this study, we explore a novel, combined approach which predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps.
arXiv Detail & Related papers (2020-10-19T10:00:29Z) - NPRportrait 1.0: A Three-Level Benchmark for Non-Photorealistic Rendering of Portraits [67.58044348082944]
This paper proposes a new structured, three-level benchmark dataset for the evaluation of stylised portrait images.
Rigorous criteria were used for its construction, and its consistency was validated by user studies.
A new methodology has been developed for evaluating portrait stylisation algorithms.
arXiv Detail & Related papers (2020-09-01T18:04:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.