Quantifying Visual Image Quality: A Bayesian View
- URL: http://arxiv.org/abs/2102.00195v1
- Date: Sat, 30 Jan 2021 09:34:23 GMT
- Title: Quantifying Visual Image Quality: A Bayesian View
- Authors: Zhengfang Duanmu, Wentao Liu, Zhongling Wang, Zhou Wang
- Abstract summary: Image quality assessment (IQA) models aim to establish a quantitative relationship between visual images and their perceptual quality by human observers.
IQA modeling plays a special bridging role between vision science and engineering practice.
- Score: 31.494753153095587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image quality assessment (IQA) models aim to establish a quantitative
relationship between visual images and their perceptual quality by human
observers. IQA modeling plays a special bridging role between vision science
and engineering practice, both as a test-bed for vision theories and
computational biovision models, and as a powerful tool that could potentially
make a profound impact on a broad range of image processing, computer vision, and
computer graphics applications, for design, optimization, and evaluation
purposes. IQA research has enjoyed an accelerated growth in the past two
decades. Here we present an overview of IQA methods from a Bayesian
perspective, with the goals of unifying a wide spectrum of IQA approaches under
a common framework and providing useful references to fundamental concepts
accessible to vision scientists and image processing practitioners. We discuss
the implications of the successes and limitations of modern IQA methods for
biological vision and the prospect for vision science to inform the design of
future artificial vision systems.
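To make the framing concrete for readers outside vision science, the Bayesian view can be sketched as treating quality assessment as probabilistic inference over images; the notation below is chosen here for illustration and is not quoted from the paper.

```latex
% Schematic Bayesian formulation of IQA (illustrative notation, not the paper's own).
% x: pristine reference image, y: test (possibly distorted) image.
\begin{align}
  p(\mathbf{x} \mid \mathbf{y}) &= \frac{p(\mathbf{y} \mid \mathbf{x})\, p(\mathbf{x})}{p(\mathbf{y})} \\
  q(\mathbf{y}) &= \mathbb{E}_{p(\mathbf{x} \mid \mathbf{y})}\big[\, s(\mathbf{x}, \mathbf{y}) \,\big]
\end{align}
% The prior p(x) plays the role of a natural-scene-statistics model, the likelihood
% p(y|x) describes the distortion process, and s(.,.) is a perceptual similarity
% measure, so the predicted quality q(y) is the expected similarity to plausible
% pristine images under the posterior.
```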
Related papers
- A Survey on Image Quality Assessment: Insights, Analysis, and Future Outlook [6.925820483833189]
Image quality assessment (IQA) represents a pivotal challenge in image-focused technologies.
IQA has witnessed a notable surge in innovative research efforts, driven by the emergence of novel architectural paradigms.
This survey delivers an extensive analysis of contemporary IQA methodologies, organized according to their application scenarios.
arXiv Detail & Related papers (2025-02-12T16:24:22Z)
- AI-generated Image Quality Assessment in Visual Communication [72.11144790293086]
AIGI-VC is a quality assessment database for AI-generated images in visual communication.
The dataset consists of 2,500 images spanning 14 advertisement topics and 8 emotion types.
It provides coarse-grained human preference annotations and fine-grained preference descriptions, benchmarking the abilities of IQA methods in preference prediction, interpretation, and reasoning.
arXiv Detail & Related papers (2024-12-20T08:47:07Z)
- Quality Prediction of AI Generated Images and Videos: Emerging Trends and Opportunities [32.03360188710995]
AI-generated and enhanced content must be visually accurate, adhere to intended use, and maintain high visual quality.
One way to monitor and control the visual "quality" of AI-generated and -enhanced content is by deploying Image Quality Assessment (IQA) and Video Quality Assessment (VQA) models.
This paper examines the current shortcomings and possibilities presented by AI-generated and enhanced image and video content.
arXiv Detail & Related papers (2024-10-11T05:08:44Z)
- Quality Assessment for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- Let's ViCE! Mimicking Human Cognitive Behavior in Image Generation Evaluation [96.74302670358145]
We introduce an automated method for Visual Concept Evaluation (ViCE) to assess consistency between a generated/edited image and the corresponding prompt/instructions.
ViCE combines the strengths of Large Language Models (LLMs) and Visual Question Answering (VQA) into a unified pipeline, aiming to replicate the human cognitive process in quality assessment.
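As a rough illustration of this kind of LLM-plus-VQA pipeline (a sketch, not the paper's implementation), the snippet below turns a prompt into verification questions, answers them with an off-the-shelf VQA model, and averages the answers into a consistency score; the question-generation step is stubbed out and the BLIP checkpoint name is an assumption.

```python
# Illustrative LLM + VQA consistency check, loosely following the ViCE idea.
# Assumptions: questions come from a stubbed LLM call; "Salesforce/blip-vqa-base"
# is a stand-in VQA model. This is a sketch, not the paper's pipeline.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

def generate_questions(prompt: str) -> list[str]:
    # In a full pipeline an LLM would decompose the prompt into verification
    # questions; here we hard-code a trivial decomposition for illustration.
    return [f"Does the image show {prompt}?"]

def consistency_score(image_path: str, prompt: str) -> float:
    processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
    model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
    image = Image.open(image_path).convert("RGB")
    questions = generate_questions(prompt)
    yes_votes = 0
    for question in questions:
        inputs = processor(image, question, return_tensors="pt")
        answer_ids = model.generate(**inputs)
        answer = processor.decode(answer_ids[0], skip_special_tokens=True)
        yes_votes += int(answer.strip().lower() == "yes")
    return yes_votes / len(questions)  # fraction of questions answered "yes"

# Example: consistency_score("generated.png", "a red bicycle leaning on a wall")
```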
arXiv Detail & Related papers (2023-07-18T16:33:30Z)
- Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop [113.75573175709573]
We make one of the first attempts to examine the perceptual robustness of NR-IQA models.
We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models.
We find that all four NR-IQA models are vulnerable to the proposed perceptual attack.
arXiv Detail & Related papers (2022-10-03T13:47:16Z)
- Exploring CLIP for Assessing the Look and Feel of Images [87.97623543523858]
We introduce Contrastive Language-Image Pre-training (CLIP) models for assessing both the quality perception (look) and abstract perception (feel) of images in a zero-shot manner.
Our results show that CLIP captures meaningful priors that generalize well to different perceptual assessments.
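As a minimal illustration of zero-shot perceptual scoring with CLIP, the sketch below scores an image against a paired "good"/"bad" prompt; the prompt pair and checkpoint are our own simplifications, not necessarily the paper's prompt design.

```python
# Zero-shot "look" scoring with CLIP via a paired-prompt heuristic.
# The prompt pair and checkpoint are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_quality_score(image_path: str) -> float:
    image = Image.open(image_path).convert("RGB")
    prompts = ["a good photo", "a bad photo"]  # antonym prompt pair
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, 2)
    probs = logits.softmax(dim=-1)
    return probs[0, 0].item()  # probability mass on the "good" prompt

# Example: clip_quality_score("test.jpg") -> value in [0, 1], higher = better "look"
```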
arXiv Detail & Related papers (2022-07-25T17:58:16Z)
- Image Quality Assessment in the Modern Age [53.19271326110551]
This tutorial provides the audience with the basic theories, methodologies, and current progress of image quality assessment (IQA).
We will first revisit several subjective quality assessment methodologies, with emphasis on how to properly select visual stimuli.
Both hand-engineered and (deep) learning-based methods will be covered.
arXiv Detail & Related papers (2021-10-19T02:38:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.