Deep No-reference Tone Mapped Image Quality Assessment
- URL: http://arxiv.org/abs/2002.03165v1
- Date: Sat, 8 Feb 2020 13:41:18 GMT
- Title: Deep No-reference Tone Mapped Image Quality Assessment
- Authors: Chandra Sekhar Ravuri (1), Rajesh Sureddi (2), Sathya Veera Reddy
Dendi (2), Shanmuganathan Raman (1), Sumohana S. Channappayya (2) ((1)
Department of Electrical Engineering, Indian Institute of Technology
Gandhinagar, India., (2) Department of Electrical Engineering, Indian
Institute of Technology Hyderabad, India.)
- Abstract summary: Tone mapping introduces distortions in the final image which may lead to visual displeasure.
We introduce a novel no-reference quality assessment technique for these tone mapped images.
We show that the proposed technique delivers competitive performance relative to the state-of-the-art techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The process of rendering high dynamic range (HDR) images to be viewed on
conventional displays is called tone mapping. However, tone mapping introduces
distortions in the final image which may lead to visual displeasure. To
quantify these distortions, we introduce a novel no-reference quality
assessment technique for these tone mapped images. This technique is composed
of two stages. In the first stage, we employ a convolutional neural network
(CNN) to generate quality aware maps (also known as distortion maps) from tone
mapped images by training it with the ground truth distortion maps. In the
second stage, we model the normalized image and distortion maps using an
Asymmetric Generalized Gaussian Distribution (AGGD). The parameters of the AGGD
model are then used to estimate the quality score using support vector
regression (SVR). We show that the proposed technique delivers competitive
performance relative to the state-of-the-art techniques. The novelty of this
work is its ability to visualize various distortions as quality maps
(distortion maps), especially in the no-reference setting, and to use these
maps as features to estimate the quality score of tone mapped images.
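To make the second stage of the abstract above concrete, here is a minimal Python sketch (assuming NumPy, SciPy, and scikit-learn) of BRISQUE-style MSCN normalization, moment-matching AGGD fitting, and SVR score regression. The function names, window parameters, and feature layout are illustrative assumptions, not the authors' released implementation, and the distortion map is treated as a generic 2D array standing in for the first-stage CNN output.

```python
"""Sketch of the second stage: AGGD features from normalized maps + SVR regression."""
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma
from sklearn.svm import SVR


def mscn(img, sigma=7.0 / 6.0, eps=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a grayscale map.

    eps stabilizes the division; 1.0 follows the common [0, 255] intensity convention.
    """
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    std = np.sqrt(np.maximum(var, 0.0))
    return (img - mu) / (std + eps)


def fit_aggd(coeffs):
    """Moment-matching AGGD fit; returns (shape alpha, left sigma, right sigma, mean eta)."""
    coeffs = coeffs.ravel()
    left = coeffs[coeffs < 0]
    right = coeffs[coeffs >= 0]
    sigma_l = np.sqrt(np.mean(left ** 2)) if left.size else 1e-6
    sigma_r = np.sqrt(np.mean(right ** 2)) if right.size else 1e-6
    gamma_hat = sigma_l / sigma_r
    r_hat = np.mean(np.abs(coeffs)) ** 2 / np.mean(coeffs ** 2)
    r_norm = r_hat * (gamma_hat ** 3 + 1) * (gamma_hat + 1) / (gamma_hat ** 2 + 1) ** 2

    # Search the shape parameter alpha whose generalized Gaussian ratio best matches r_norm.
    alphas = np.arange(0.2, 10.0, 0.001)
    rho = gamma(2.0 / alphas) ** 2 / (gamma(1.0 / alphas) * gamma(3.0 / alphas))
    alpha = alphas[np.argmin((rho - r_norm) ** 2)]

    eta = (sigma_r - sigma_l) * gamma(2.0 / alpha) / np.sqrt(gamma(1.0 / alpha) * gamma(3.0 / alpha))
    return alpha, sigma_l, sigma_r, eta


def features(tone_mapped, distortion_map):
    """Concatenate AGGD parameters of the normalized image and of the distortion map."""
    return np.hstack([fit_aggd(mscn(tone_mapped)), fit_aggd(mscn(distortion_map))])


if __name__ == "__main__":
    # Toy demonstration with random stand-in maps; real use pairs features with subjective MOS scores.
    rng = np.random.default_rng(0)
    X = np.stack([features(rng.random((64, 64)) * 255, rng.random((64, 64))) for _ in range(20)])
    y = rng.random(20) * 100
    model = SVR(kernel="rbf").fit(X, y)
    print(model.predict(X[:3]))
```

In the paper's pipeline, the AGGD parameters would be computed per map (and possibly per scale) and the SVR trained against subjective quality scores; the toy demo above only illustrates the data flow.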
Related papers
- BELE: Blur Equivalent Linearized Estimator [0.8192907805418581]
This paper introduces a novel parametric model that separates perceptual effects due to strong edge degradations from those caused by texture distortions.
The first component is the Blur Equivalent Linearized Estimator, designed to measure blur on strong, isolated edges.
The second is a Complex Peak Signal-to-Noise Ratio, which evaluates distortions affecting texture regions.
arXiv Detail & Related papers (2025-03-01T14:19:08Z)
- ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z)
- Image Quality Assessment: Learning to Rank Image Distortion Level [0.0]
We learn to compare the image quality of two registered images, with respect to a chosen distortion.
Our method exploits the fact that simulating an image distortion and then evaluating its relative quality is often easier than assessing its absolute quality.
arXiv Detail & Related papers (2022-08-04T18:33:33Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN), which learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Image Inpainting with Learnable Feature Imputation [8.293345261434943]
A regular convolution layer applying a filter in the same way over known and unknown areas causes visual artifacts in the inpainted image.
We propose (layer-wise) feature imputation of the missing input values to a convolution.
We present comparisons with the current state of the art on CelebA-HQ and Places2 to validate our model.
arXiv Detail & Related papers (2020-11-02T16:05:32Z)
- A combined full-reference image quality assessment approach based on convolutional activation maps [0.0]
The goal of full-reference image quality assessment (FR-IQA) is to predict the quality of an image as perceived by human observers using its pristine reference counterpart.
In this study, we explore a novel, combined approach which predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps.
arXiv Detail & Related papers (2020-10-19T10:00:29Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method outperforms the current state of the art.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution [70.78655569298923]
Integrated Gradients is an attribution method for deep neural network models that is simple to implement.
However, it suffers from noisy explanations, which hampers interpretability.
The SmoothGrad technique is proposed to address this noisiness and smooth the attribution maps of any gradient-based attribution method; a minimal sketch of this noise-averaging idea appears after this entry.
arXiv Detail & Related papers (2020-04-22T10:43:19Z)
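As referenced in the entry above, SmoothGrad averages gradient-based attribution maps computed on several noisy copies of the input. The sketch below is a generic, framework-agnostic illustration of that idea; `grad_fn` is a hypothetical callable returning the gradient of the model score with respect to the input, and the default noise fraction and sample count are assumptions rather than values from the cited paper.

```python
"""Minimal sketch of SmoothGrad-style noise-averaged attribution."""
import numpy as np


def smoothgrad(x, grad_fn, n_samples=50, noise_frac=0.15, rng=None):
    """Average grad_fn over noisy copies of x to reduce attribution noise."""
    rng = np.random.default_rng() if rng is None else rng
    # Noise level is set as a fraction of the input's dynamic range.
    sigma = noise_frac * (x.max() - x.min())
    acc = np.zeros_like(x, dtype=np.float64)
    for _ in range(n_samples):
        acc += grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
    return acc / n_samples


if __name__ == "__main__":
    # Toy check: for f(x) = sum(x**2) the true gradient is 2x, so the averaged map stays close to 2x.
    x = np.linspace(-1.0, 1.0, 16).reshape(4, 4)
    print(smoothgrad(x, grad_fn=lambda z: 2.0 * z, n_samples=200))
```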
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.