QMRNet: Quality Metric Regression for EO Image Quality Assessment and
Super-Resolution
- URL: http://arxiv.org/abs/2210.06618v2
- Date: Fri, 14 Oct 2022 13:46:08 GMT
- Title: QMRNet: Quality Metric Regression for EO Image Quality Assessment and
Super-Resolution
- Authors: David Berga, Pau Gallés, Katalin Takáts, Eva Mohedano, Laura Riordan-Chen, Clara Garcia-Moll, David Vilaseca, Javier Marín
- Abstract summary: We benchmark state-of-the-art Super-Resolution (SR) algorithms for distinct Earth Observation (EO) datasets.
We also propose a novel Quality Metric Regression Network (QMRNet) that is able to predict quality (as a No-Reference metric) by training on any property of the image.
The overall benchmark shows promising results for LIIF, CAR and MSRN, as well as the potential use of QMRNet as a loss for optimizing SR predictions.
- Score: 2.425299069769717
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The latest advances in Super-Resolution (SR) have been tested on general-purpose images such as faces, landscapes and objects, but have rarely been applied to the task of super-resolving Earth Observation (EO) images. In this research paper, we benchmark state-of-the-art SR algorithms on distinct EO datasets using both Full-Reference and No-Reference Image Quality Assessment (IQA) metrics. We also propose a novel Quality Metric Regression Network (QMRNet) that can predict quality (as a No-Reference metric) by training on any property of the image (e.g. its resolution, its distortions, ...) and can also optimize SR algorithms for a specific metric objective. This work is part of the implementation of the IQUAFLOW framework, which has been developed for evaluating image quality, detection and classification of objects, and image compression in EO use cases. We integrated our experimentation and tested our QMRNet algorithm on predicting properties such as blur, sharpness, SNR, RER and ground sampling distance (GSD), obtaining validation medRs below 1.0 (out of N=50) and recall rates above 95%. The overall benchmark shows promising results for LIIF, CAR and MSRN, as well as the potential use of QMRNet as a loss for optimizing SR predictions. Due to its simplicity, QMRNet could also be used for other use cases and image domains, as its architecture and data processing are fully scalable.
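A minimal illustration of the two roles QMRNet plays in the abstract, as a no-reference quality regressor and as a loss term steering an SR model toward a metric objective, is sketched below in PyTorch. The backbone choice, the names QualityRegressor and quality_loss, and the single-scalar output are assumptions made for illustration, not the architecture used in the paper or in IQUAFLOW.

# Sketch only (not the authors' implementation): a QMRNet-style regressor that maps
# an image to one predicted quality parameter (e.g. blur level or GSD) and its use
# as an auxiliary loss when fine-tuning an SR model. Assumes torchvision >= 0.13.
import torch
import torch.nn as nn
import torchvision.models as models

class QualityRegressor(nn.Module):
    """Predicts a single no-reference quality parameter from an RGB image."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)              # any CNN encoder would do
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # regression head
        self.net = backbone

    def forward(self, x):                                     # x: (B, 3, H, W)
        return self.net(x).squeeze(1)                         # (B,) predicted parameter

def quality_loss(regressor, sr_output, target_value):
    """Penalize SR outputs whose predicted quality parameter deviates from a target."""
    pred = regressor(sr_output)
    target = torch.full_like(pred, target_value)
    return nn.functional.mse_loss(pred, target)

# Usage (hypothetical names): add the term to a standard reconstruction loss.
# sr = sr_model(lr_batch)
# loss = nn.functional.l1_loss(sr, hr_batch) + 0.1 * quality_loss(qmr, sr, 0.0)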
Related papers
- Rethinking Image Super-Resolution from Training Data Perspectives [54.28824316574355]
We investigate the understudied effect of the training data used for image super-resolution (SR).
With this, we propose an automated image evaluation pipeline.
We find that datasets with (i) low compression artifacts, (ii) high within-image diversity as judged by the number of different objects, and (iii) a large number of images from ImageNet or PASS all positively affect SR performance.
arXiv Detail & Related papers (2024-09-01T16:25:04Z)
- Perception- and Fidelity-aware Reduced-Reference Super-Resolution Image Quality Assessment [25.88845910499606]
We propose a novel dual-branch reduced-reference SR-IQA network, i.e., Perception- and Fidelity-aware SR-IQA (PFIQA).
PFIQA outperforms current state-of-the-art models across three widely-used SR-IQA benchmarks.
arXiv Detail & Related papers (2024-05-15T16:09:22Z)
- CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution [55.50793823060282]
We propose a novel Content-Aware Dynamic Quantization (CADyQ) method for image super-resolution (SR) networks.
CADyQ allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.
The pipeline has been tested on various SR networks and evaluated on several standard benchmarks.
arXiv Detail & Related papers (2022-07-21T07:50:50Z)
- A No-Reference Deep Learning Quality Assessment Method for Super-resolution Images Based on Frequency Maps [39.58198651685851]
We propose a no-reference deep-learning image quality assessment method based on frequency maps.
We first obtain the high-frequency map (HM) and low-frequency map (LM) of the SR image using the Sobel operator and a piecewise smooth image approximation (see the sketch after this list).
Our method outperforms all compared IQA models on the selected three super-resolution quality assessment (SRQA) databases.
arXiv Detail & Related papers (2022-06-09T05:43:37Z)
- Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual stream network to jointly explore the textural and structural information for quality prediction, dubbed TSNet.
By mimicking the human vision system (HVS), which pays more attention to the significant areas of an image, we develop a spatial attention mechanism that makes visually sensitive areas more distinguishable.
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z)
- SPQE: Structure-and-Perception-Based Quality Evaluation for Image Super-Resolution [24.584839578742237]
Super-Resolution techniques have greatly improved the visual quality of images by enhancing their resolution.
This also calls for efficient SR Image Quality Assessment (SR-IQA) to evaluate those algorithms and their generated images.
In emerging deep-learning-based SR, a generated high-quality, visually pleasing image may have different structures from its corresponding low-quality image.
arXiv Detail & Related papers (2022-05-07T07:52:55Z)
- Can No-reference features help in Full-reference image quality estimation? [20.491565297561912]
We study the utilization of no-reference features in the full-reference IQA task.
Our model achieves higher SRCC and KRCC scores than a number of state-of-the-art algorithms.
arXiv Detail & Related papers (2022-03-02T03:39:28Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of IR models, degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
- Learning-Based Quality Assessment for Image Super-Resolution [25.76907513611358]
We build a large-scale SR image database using a novel semi-automatic labeling approach.
The resulting SR image quality database contains 8,400 images of 100 natural scenes.
We train an end-to-end Deep Image SR Quality (DISQ) model by employing two-stream Deep Neural Networks (DNNs) for feature extraction, followed by a feature fusion network for quality prediction.
arXiv Detail & Related papers (2020-12-16T04:06:27Z)
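For the frequency-map entry flagged above ("see the sketch after this list"), the following sketch shows one plausible way to extract high- and low-frequency maps. The Sobel gradient magnitude is the standard high-frequency cue; substituting a Gaussian blur for that paper's piecewise smooth image approximation is a simplification of ours, and frequency_maps is a hypothetical name.

# Illustrative only: high-/low-frequency maps for a 2-D grayscale image in [0, 1].
import numpy as np
from scipy import ndimage

def frequency_maps(gray: np.ndarray, sigma: float = 3.0):
    """Return (high_frequency_map, low_frequency_map) of a grayscale image."""
    gx = ndimage.sobel(gray, axis=1, mode="reflect")    # horizontal gradient
    gy = ndimage.sobel(gray, axis=0, mode="reflect")    # vertical gradient
    high = np.hypot(gx, gy)                             # edge/detail strength per pixel
    low = ndimage.gaussian_filter(gray, sigma)          # smooth base structure
    return high, low

# Usage: pool statistics from both maps and feed them to a quality predictor.
# high, low = frequency_maps(img)   # img: 2-D float array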