Multi-pooled Inception features for no-reference image quality assessment
- URL: http://arxiv.org/abs/2011.05139v1
- Date: Tue, 10 Nov 2020 15:09:49 GMT
- Title: Multi-pooled Inception features for no-reference image quality assessment
- Authors: Domonkos Varga
- Abstract summary: We propose a new approach for image quality assessment using convolutional neural networks (CNNs).
In contrast to previous methods, we do not take patches from the input image. Instead, the input image is treated as a whole and is run through a pretrained CNN body to extract resolution-independent, multi-level deep features.
We demonstrate that our best proposal - called MultiGAP-NRIQA - is able to provide state-of-the-art results on three benchmark IQA databases.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image quality assessment (IQA) is an important element of a broad spectrum of
applications ranging from automatic video streaming to display technology.
Furthermore, the measurement of image quality requires a balanced investigation
of image content and features. Our proposed approach extracts visual features
by attaching global average pooling (GAP) layers to multiple Inception modules
of a convolutional neural network (CNN) pretrained on the ImageNet database. In
contrast to previous methods, we do not take patches from the input image.
Instead, the input image is treated as a whole and is run through a pretrained
CNN body to extract resolution-independent, multi-level deep features. As a
consequence, our method can be easily generalized to any input image size and
pretrained CNNs. Thus, we present a detailed parameter study with respect to
the CNN base architectures and the effectiveness of different deep features. We
demonstrate that our best proposal - called MultiGAP-NRIQA - is able to provide
state-of-the-art results on three benchmark IQA databases. Furthermore, these
results were also confirmed in a cross-database test using the LIVE In the Wild
Image Quality Challenge database.
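A minimal sketch of the multi-GAP feature extraction described above, assuming a Keras InceptionV3 backbone (the paper studies several base CNNs; the 'mixed0' through 'mixed10' layer names are Keras conventions, and the regressor that maps the pooled features to a quality score is omitted):

```python
# Sketch of multi-level deep feature extraction: one GAP head is attached to
# every Inception module of an ImageNet-pretrained InceptionV3 body.
# Assumptions: Keras layer names 'mixed0'..'mixed10'; the quality regressor
# trained on top of the features is omitted.
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Concatenate, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Fully convolutional body: no top classifier, arbitrary input resolution.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(None, None, 3))
base.trainable = False

# One global average pooling head per Inception module ('mixed' block).
gap_heads = [GlobalAveragePooling2D()(base.get_layer(f"mixed{i}").output)
             for i in range(11)]
features = Concatenate()(gap_heads)  # fixed-length multi-level feature vector
extractor = Model(inputs=base.input, outputs=features)

# The whole image is processed at once; no patch extraction is needed.
image = np.random.rand(1, 384, 512, 3).astype("float32")  # placeholder input
deep_features = extractor.predict(image)
print(deep_features.shape)  # (1, total channels across all mixed blocks)
```

Because GAP collapses each Inception module's output to a fixed-length vector regardless of spatial size, the concatenated feature is resolution-independent, which is what lets the whole image be processed without patching.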
Related papers
- Large Multi-modality Model Assisted AI-Generated Image Quality Assessment [53.182136445844904]
We introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) model.
It uses semantically informed guidance to sense semantic information, extracting semantic vectors through carefully designed text prompts.
It achieves state-of-the-art performance, and demonstrates its superior generalization capabilities on assessing the quality of AI-generated images.
arXiv Detail & Related papers (2024-04-27T02:40:36Z)
- Transformer-based No-Reference Image Quality Assessment via Supervised Contrastive Learning [36.695247860715874]
We propose SaTQA, a novel Supervised Contrastive Learning (SCL) and Transformer-based NR-IQA model.
We first train a model on a large-scale synthetic dataset by SCL to extract degradation features of images with various distortion types and levels.
To further extract distortion information from images, we propose a backbone network incorporating the Multi-Stream Block (MSB) by combining the CNN inductive bias and Transformer long-term dependence modeling capability.
Experimental results on seven standard IQA datasets show that SaTQA outperforms state-of-the-art methods on both synthetic and authentic datasets.
arXiv Detail & Related papers (2023-12-12T06:01:41Z)
- DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
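For illustration, the NumPy sketch below computes empirical distance correlation between two feature sets; it is a generic implementation of the textbook statistic, not the authors' released code, and the feature-map usage at the end is a hypothetical example:

```python
# Generic empirical distance correlation (the statistic DeepDC applies to
# pre-trained DNN features); not the authors' released implementation.
import numpy as np

def distance_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Empirical distance correlation between paired samples x: (n, p), y: (n, q)."""
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # (n, n) distances
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # Double-center both distance matrices.
    A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
    dcov2 = (A * B).mean()                         # squared distance covariance
    dvar2_x, dvar2_y = (A * A).mean(), (B * B).mean()
    denom = np.sqrt(np.sqrt(dvar2_x * dvar2_y))    # (dVar2_x * dVar2_y)^(1/4)
    return float(np.sqrt(max(dcov2, 0.0)) / denom) if denom > 0 else 0.0

# Hypothetical usage: rows index spatial positions, columns index channels of
# a DNN feature map from the reference and the distorted image, respectively.
ref_feat = np.random.rand(64, 128)
dst_feat = ref_feat + 0.05 * np.random.rand(64, 128)
print(distance_correlation(ref_feat, dst_feat))  # near 1 for similar maps
```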
arXiv Detail & Related papers (2022-11-09T14:57:27Z)
- Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network [20.835800149919145]
Image quality assessment (IQA) algorithms aim to quantify the human perception of image quality.
Performance drops when assessing distorted images generated by generative adversarial networks (GANs) with seemingly realistic textures.
We propose an Attention-based Hybrid Image Quality Assessment Network (AHIQ) to deal with the challenge and get better performance on the GAN-based IQA task.
arXiv Detail & Related papers (2022-04-22T03:59:18Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
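For illustration, here is a minimal PyTorch sketch of a contrastive pairwise objective in the NT-Xent style; this is an assumed generic form, and CONTRIQUE's actual auxiliary task and loss details may differ:

```python
# Generic NT-Xent-style contrastive pairwise loss: embeddings of two views of
# the same image are pulled together; other images in the batch are negatives.
import torch
import torch.nn.functional as F

def contrastive_pairwise_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1):
    """z1, z2: (B, d) embeddings of two augmented views of the same B images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, d), unit norm
    sim = z @ z.t() / tau                               # scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    b = z1.shape[0]
    # The positive for sample i is its other view, at index i+B (or i-B).
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)])
    return F.cross_entropy(sim, targets)

# Hypothetical usage with random embeddings standing in for CNN outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_pairwise_loss(z1, z2).item())
```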
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- MUSIQ: Multi-scale Image Quality Transformer [22.908901641767688]
Current state-of-the-art IQA methods are based on convolutional neural networks (CNNs).
We design a multi-scale image quality Transformer (MUSIQ) to process native resolution images with varying sizes and aspect ratios.
With a multi-scale image representation, our proposed method can capture image quality at different granularities.
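As a rough sketch of such a multi-scale representation (an illustration of the general idea under assumed bilinear rescaling and non-overlapping patches, not the released MUSIQ code):

```python
# Generic multi-scale, aspect-ratio-preserving image representation: the
# native-resolution image is resized to several scales and each scale is cut
# into fixed-size patch tokens that a Transformer encoder could consume.
import torch
import torch.nn.functional as F

def multiscale_patch_tokens(img: torch.Tensor, scales=(1.0, 0.5, 0.25), patch=32):
    """img: (C, H, W) at native resolution; returns (num_tokens, C*patch*patch)."""
    tokens = []
    for s in scales:
        h = max(patch, int(round(img.shape[1] * s)))
        w = max(patch, int(round(img.shape[2] * s)))
        scaled = F.interpolate(img[None], size=(h, w), mode="bilinear",
                               align_corners=False)[0]
        # Unfold into non-overlapping patch x patch tokens (ragged edges dropped).
        p = scaled.unfold(1, patch, patch).unfold(2, patch, patch)
        tokens.append(p.permute(1, 2, 0, 3, 4)
                       .reshape(-1, img.shape[0] * patch * patch))
    return torch.cat(tokens, dim=0)  # token count varies with input resolution

tokens = multiscale_patch_tokens(torch.rand(3, 480, 640))
print(tokens.shape)  # (token_count, 3*32*32); works for any size/aspect ratio
```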
arXiv Detail & Related papers (2021-08-12T23:36:22Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Combining pretrained CNN feature extractors to enhance clustering of complex natural images [27.784346095205358]
This paper aims at providing insight into the use of pretrained CNN features for image clustering (IC).
Since a single pretrained feature extractor may not suffice, we propose to rephrase the IC problem as a multi-view clustering (MVC) problem.
We then propose a multi-input neural network architecture that is trained end-to-end to solve the MVC problem effectively.
arXiv Detail & Related papers (2021-01-07T21:23:04Z)
- Deep Multi-Scale Features Learning for Distorted Image Quality Assessment [20.7146855562825]
Existing deep neural networks (DNNs) have shown significant effectiveness for tackling the IQA problem.
We propose to use pyramid features learning to build a DNN with hierarchical multi-scale features for distorted image quality prediction.
Our proposed network is optimized end-to-end in a deeply supervised manner.
arXiv Detail & Related papers (2020-12-01T23:39:01Z)
- Learning Deep Interleaved Networks with Asymmetric Co-Attention for Image Restoration [65.11022516031463]
We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
In this paper, we propose asymmetric co-attention (AsyCA), which is attached at each interleaved node to model the feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
arXiv Detail & Related papers (2020-10-29T15:32:00Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the goal of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.