Quality Map Fusion for Adversarial Learning
- URL: http://arxiv.org/abs/2110.12338v1
- Date: Sun, 24 Oct 2021 03:01:46 GMT
- Title: Quality Map Fusion for Adversarial Learning
- Authors: Uche Osahor, Nasser M. Nasrabadi
- Abstract summary: We improve image quality adversarially by introducing a novel quality map fusion technique.
We extend the widely adopted l2 Wasserstein distance metric to other preferable quality norms.
We also show that incorporating a perceptual attention mechanism (PAM) that extracts global feature embeddings from the network bottleneck translates to better image quality.
- Score: 23.465747123791772
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative adversarial models that capture salient low-level features which
convey visual information in correlation with the human visual system (HVS)
still suffer from perceptible image degradations. The inability to convey such
highly informative features can be attributed to mode collapse, convergence
failure and vanishing gradients. In this paper, we improve image quality
adversarially by introducing a novel quality map fusion technique that
harnesses image features similar to the HVS and the perceptual properties of a
deep convolutional neural network (DCNN). We extend the widely adopted l2
Wasserstein distance metric to other preferable quality norms derived from
Banach spaces that capture richer image properties like structure, luminance,
contrast and the naturalness of images. We also show that incorporating a
perceptual attention mechanism (PAM) that extracts global feature embeddings
from the network bottleneck with aggregated perceptual maps derived from
standard image quality metrics translates to better image quality. We also
demonstrate improved performance over competing methods.
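For intuition, the fusion idea can be sketched as follows: compute an SSIM-style per-pixel quality map (capturing luminance, contrast and structure) between generated and reference images, then fold it into a Wasserstein-style generator objective as an extra penalty. This is a minimal sketch under assumed PyTorch conventions; the uniform window, the (1 - q) penalty and the 0.1 weight are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch (PyTorch): fuse an SSIM-style per-pixel quality map
# (luminance, contrast, structure) into a Wasserstein-style generator
# loss. The uniform window, penalty form and 0.1 weight are assumed
# placeholders, not the paper's exact design.
import torch
import torch.nn.functional as F

def quality_map(x, y, window=11, c1=0.01**2, c2=0.03**2):
    """Per-pixel SSIM map between images x and y of shape (B, C, H, W)."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def fused_generator_loss(critic, fake, real):
    q = quality_map(fake, real).clamp(0.0, 1.0)  # HVS-inspired quality map
    adv = -critic(fake).mean()                   # standard WGAN generator term
    return adv + 0.1 * (1.0 - q).mean()          # penalize low-quality regions
```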
Related papers
- Image Quality Assessment: Enhancing Perceptual Exploration and Interpretation with Collaborative Feature Refinement and Hausdorff distance [47.01352278293561]
Current full-reference image quality assessment (FR-IQA) methods often fuse features from reference and distorted images.
This work introduces a pioneering training-free FR-IQA method that accurately predicts image quality in alignment with the human visual system.
arXiv Detail & Related papers (2024-12-20T12:39:49Z)
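The core measurement in the entry above, the Hausdorff distance between two feature sets, is a standard metric and can be sketched directly; the paper's collaborative feature-refinement pipeline is not reproduced here.

```python
# Generic sketch: symmetric Hausdorff distance between two feature sets
# (rows are feature vectors). Illustrates the metric only, not the
# paper's collaborative feature-refinement pipeline.
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    # Pairwise Euclidean distances between every row of a and every row of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1).max()   # sup_{x in a} inf_{y in b} ||x - y||
    d_ba = d.min(axis=0).max()   # sup_{y in b} inf_{x in a} ||x - y||
    return max(d_ab, d_ba)

# Hypothetical feature sets from a reference and a distorted image.
ref = np.random.randn(128, 64)
dist = ref + 0.1 * np.random.randn(128, 64)
print(hausdorff(ref, dist))
```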
- DiffLoss: unleashing diffusion model as constraint for training image restoration network [4.8677910801584385]
We introduce a new perspective that implicitly leverages a diffusion model to assist the training of an image restoration network, called DiffLoss.
By combining these two designs, the overall loss function is able to improve the perceptual quality of image restoration, resulting in visually pleasing and semantically enhanced outcomes.
arXiv Detail & Related papers (2024-06-27T09:33:24Z)
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [78.21609845377644]
We leverage a novel class of state-of-the-art (SOTA) generative models, which exhibit the capability to model intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
- ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on a Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
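The distance-weighting idea in the entry above can be illustrated as self-attention whose logits are biased by spatial distance, so that nearby image components interact more strongly. The linear decay and the tau parameter below are assumptions for illustration, not the DWT paper's exact formulation.

```python
# Illustrative sketch: self-attention whose logits are biased by the
# spatial distance between token positions. The linear decay and tau
# are assumed choices, not the exact DWT design.
import torch

def distance_weighted_attention(q, k, v, coords, tau=10.0):
    """q, k, v: (n, d) token features; coords: (n, 2) spatial positions."""
    logits = q @ k.t() / q.shape[-1] ** 0.5   # standard scaled dot product
    dist = torch.cdist(coords, coords)        # pairwise spatial distances
    logits = logits - dist / tau              # distant tokens are down-weighted
    return torch.softmax(logits, dim=-1) @ v

n, d = 16, 32
q = k = v = torch.randn(n, d)
ys, xs = torch.meshgrid(torch.arange(4), torch.arange(4), indexing="ij")
coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()
out = distance_weighted_attention(q, k, v, coords)  # (16, 32)
```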
- DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z)
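Distance correlation, the statistic behind the entry above, is well defined (Székely et al.) and can be sketched directly; how DeepDC aggregates it across the stages of a pre-trained DNN is omitted here.

```python
# Sketch: empirical distance correlation between two feature sets.
# DeepDC applies this idea to pre-trained DNN features; the
# aggregation across network stages is not reproduced.
import numpy as np

def distance_correlation(x: np.ndarray, y: np.ndarray) -> float:
    def centered_dist(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        # Double-center the pairwise distance matrix.
        return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()
    a, b = centered_dist(x), centered_dist(y)
    dcov2 = (a * b).mean()                    # squared distance covariance
    denom = np.sqrt((a * a).mean() * (b * b).mean())
    return float(np.sqrt(max(dcov2, 0.0) / denom)) if denom > 0 else 0.0

ref_feats = np.random.randn(100, 256)         # hypothetical DNN features
dist_feats = ref_feats + 0.05 * np.random.randn(100, 256)
print(distance_correlation(ref_feats, dist_feats))  # near 1 for similar sets
```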
- Flow-based Visual Quality Enhancer for Super-resolution Magnetic Resonance Spectroscopic Imaging [13.408365072149795]
We propose a flow-based enhancer network to improve the visual quality of super-resolution MRSI.
Our enhancer network incorporates anatomical information from additional image modalities (MRI) and uses a learnable base distribution.
Our method also allows visual quality adjustment and uncertainty estimation.
arXiv Detail & Related papers (2022-07-20T20:19:44Z)
- Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual-stream network, dubbed TSNet, that jointly explores textural and structural information for quality prediction.
By mimicking the human visual system (HVS), which pays more attention to the significant areas of an image, we develop a spatial attention mechanism that makes visually sensitive areas more distinguishable.
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z)
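A spatial-attention module of the kind described above can be sketched as a per-pixel gate computed from pooled channel statistics (CBAM-style); this is an assumed generic form, not TSNet's exact module.

```python
# Generic spatial-attention sketch (CBAM-style): a per-pixel gate
# computed from pooled channel statistics re-weights the feature map
# so visually significant areas stand out. Not TSNet's exact module.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                   # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)   # average over channels
        mx, _ = x.max(dim=1, keepdim=True)  # max over channels
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate                     # emphasize salient regions

feats = torch.randn(2, 64, 32, 32)
out = SpatialAttention()(feats)             # same shape, re-weighted
```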
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
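The contrastive pairwise objective mentioned above can be illustrated with a minimal NT-Xent-style loss on paired embeddings; CONTRIQUE's actual auxiliary task and augmentation scheme are not reproduced here.

```python
# Minimal NT-Xent-style contrastive loss on paired embeddings.
# Illustrates the pairwise objective only; CONTRIQUE's auxiliary
# distortion-prediction setup is not reproduced.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature: float = 0.1):
    """z1, z2: (n, d) embeddings of two views of the same n images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # cosine similarities
    targets = torch.arange(z1.shape[0])  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_loss(z1, z2))
```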
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.