Locally Adaptive Structure and Texture Similarity for Image Quality Assessment
- URL: http://arxiv.org/abs/2110.08521v1
- Date: Sat, 16 Oct 2021 09:19:56 GMT
- Title: Locally Adaptive Structure and Texture Similarity for Image Quality Assessment
- Authors: Keyan Ding, Yi Liu, Xueyi Zou, Shiqi Wang, Kede Ma
- Abstract summary: We describe a locally adaptive structure and texture similarity index for full-reference image quality assessment (IQA).
Specifically, we rely on a single statistical feature, namely the dispersion index, to localize texture regions at different scales.
The resulting A-DISTS is adapted to local image content, and is free of expensive human perceptual scores for supervised training.
- Score: 33.58928017067797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The latest advances in full-reference image quality assessment (IQA) involve
unifying structure and texture similarity based on deep representations. The
resulting Deep Image Structure and Texture Similarity (DISTS) metric, however,
makes rather global quality measurements, ignoring the fact that natural
photographic images are locally structured and textured across space and scale.
In this paper, we describe a locally adaptive structure and texture similarity
index for full-reference IQA, which we term A-DISTS. Specifically, we rely on a
single statistical feature, namely the dispersion index, to localize texture
regions at different scales. The estimated probability (of one patch being
texture) is in turn used to adaptively pool local structure and texture
measurements. The resulting A-DISTS is adapted to local image content, and is
free of expensive human perceptual scores for supervised training. We
demonstrate the advantages of A-DISTS in terms of correlation with human data
on ten IQA databases and optimization of single image super-resolution methods.
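The key mechanics described above are (i) a dispersion index (the variance-to-mean ratio of local responses) that estimates, at each location and scale, the probability that a patch is texture, and (ii) an adaptive pooling that weights local texture and structure similarity terms by that probability. The Python sketch below illustrates the idea on a single 2-D response map; the window size, the stabilizing constant c, the logistic mapping from dispersion to texture probability, and the SSIM-style similarity terms are illustrative assumptions rather than the paper's exact formulation, which operates on multi-scale deep-network features.

import numpy as np
from scipy.ndimage import uniform_filter

def local_stats(x, y, win=21):
    """Local means, variances, and covariance over a sliding window."""
    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    var_x = np.maximum(uniform_filter(x * x, win) - mu_x ** 2, 0.0)
    var_y = np.maximum(uniform_filter(y * y, win) - mu_y ** 2, 0.0)
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y
    return mu_x, mu_y, var_x, var_y, cov_xy

def adaptive_structure_texture_similarity(x, y, win=21, c=1e-6):
    """Toy single-channel analogue of a locally adaptive structure/texture index.

    x, y: reference and distorted response maps (2-D, non-negative), e.g. one
    channel of a deep feature map. Returns a score in [0, 1]; higher is better.
    """
    mu_x, mu_y, var_x, var_y, cov_xy = local_stats(x, y, win)

    # Dispersion index (variance-to-mean ratio) of the reference responses,
    # mapped through a logistic function to a per-location texture probability.
    # The exact mapping used in the paper may differ (assumption).
    dispersion = var_x / (mu_x + c)
    p_texture = 1.0 / (1.0 + np.exp(-(dispersion - 1.0)))

    # SSIM-style local terms: the mean-based term is comparatively tolerant to
    # resampled texture, while the covariance-based term tracks structural change.
    texture_sim = (2 * mu_x * mu_y + c) / (mu_x ** 2 + mu_y ** 2 + c)
    structure_sim = (2 * cov_xy + c) / (var_x + var_y + c)

    # Adaptive pooling: favour the texture term where texture is likely and the
    # structure term elsewhere, then average over space.
    local_quality = p_texture * texture_sim + (1.0 - p_texture) * structure_sim
    return float(local_quality.mean())

In the full method these statistics would be computed per channel and per scale of a pre-trained CNN and then pooled; deriving the weights from the dispersion index rather than from human-rated data is consistent with the abstract's claim that A-DISTS needs no perceptual scores for supervised training.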
Related papers
- Attention Down-Sampling Transformer, Relative Ranking and Self-Consistency for Blind Image Quality Assessment [17.04649536069553]
No-reference image quality assessment is the challenging problem of estimating image quality without the original reference.
We introduce an improved mechanism to extract local and non-local information from images via different transformer encoders and CNNs.
A self-consistency approach to self-supervision is presented, explicitly addressing the degradation of no-reference image quality assessment (NR-IQA) models.
arXiv Detail & Related papers (2024-09-11T09:08:43Z) - Deep Shape-Texture Statistics for Completely Blind Image Quality Evaluation [48.278380421089764]
Deep features as visual descriptors have advanced IQA in recent research, but they have been found to be highly texture-biased and lacking in shape bias.
We find that image shape and texture cues respond differently to distortions, and the absence of either one results in an incomplete image representation.
To formulate a well-rounded statistical description of images, we simultaneously utilize the shape-biased and texture-biased deep features produced by deep neural networks (DNNs).
arXiv Detail & Related papers (2024-01-16T04:28:09Z) - Spectral Normalization and Dual Contrastive Regularization for Image-to-Image Translation [9.029227024451506]
We propose a new unpaired I2I translation framework based on dual contrastive regularization and spectral normalization.
We conduct comprehensive experiments to evaluate the effectiveness of SN-DCR, and the results show that our method achieves state-of-the-art performance on multiple tasks.
arXiv Detail & Related papers (2023-04-22T05:22:24Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual-stream network, dubbed TSNet, to jointly explore textural and structural information for quality prediction.
By mimicking the human visual system (HVS), which pays more attention to the significant areas of an image, we develop a spatial attention mechanism to make visually sensitive areas more distinguishable.
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Global and Local Texture Randomization for Synthetic-to-Real Semantic
Segmentation [40.556020857447535]
We propose two simple yet effective texture randomization mechanisms, Global Texture Randomization (GTR) and Local Texture Randomization (LTR).
GTR is proposed to randomize the texture of source images into diverse texture styles.
LTR is proposed to generate diverse local regions for partially stylizing the source images.
arXiv Detail & Related papers (2021-08-05T05:14:49Z) - Retinal Image Segmentation with a Structure-Texture Demixing Network [62.69128827622726]
Complex structure and texture information are mixed in a retinal image, and distinguishing them is difficult.
Existing methods handle texture and structure jointly, which may bias models toward recognizing textures and thus result in inferior segmentation performance.
We propose a segmentation strategy that seeks to separate structure and texture components and significantly improve the performance.
arXiv Detail & Related papers (2020-07-15T12:19:03Z) - Image Quality Assessment: Unifying Structure and Texture Similarity [38.05659069533254]
We develop the first full-reference image quality model with explicit tolerance to texture resampling.
Using a convolutional neural network, we construct an injective and differentiable function that transforms images to overcomplete representations (a minimal sketch of this metric's global form appears after this list).
arXiv Detail & Related papers (2020-04-16T16:11:46Z) - Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed Scenes [54.836331922449666]
We propose a Semantic Guidance and Evaluation Network (SGE-Net) to update the structural priors and the inpainted image.
It utilizes a semantic segmentation map as guidance at each scale of inpainting, under which location-dependent inferences are re-evaluated.
Experiments on real-world images of mixed scenes demonstrate the superiority of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-15T17:49:20Z)
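For contrast with the locally adaptive sketch given after the abstract, the DISTS entry above ("Image Quality Assessment: Unifying Structure and Texture Similarity") makes a single global measurement per feature map. A minimal single-channel analogue is shown below; the fixed weight alpha stands in for DISTS's learned per-channel weights and the constant c is an illustrative choice, so this is a sketch of the general form rather than the published metric.

import numpy as np

def global_structure_texture_similarity(x, y, alpha=0.5, c=1e-6):
    """Global analogue: one mean/variance/covariance triple per response map.

    Contrasts with the windowed version above, which keeps spatial maps of the
    same statistics and pools them with a texture-probability weight.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()

    texture_sim = (2 * mu_x * mu_y + c) / (mu_x ** 2 + mu_y ** 2 + c)
    structure_sim = (2 * cov_xy + c) / (var_x + var_y + c)

    # A fixed blend; DISTS learns per-channel weights, and A-DISTS replaces them
    # with the locally adaptive, dispersion-based weights sketched earlier.
    return float(alpha * texture_sim + (1.0 - alpha) * structure_sim)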
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.