Progressive Feature Fusion Network for Enhancing Image Quality Assessment
- URL: http://arxiv.org/abs/2401.06992v1
- Date: Sat, 13 Jan 2024 06:34:32 GMT
- Title: Progressive Feature Fusion Network for Enhancing Image Quality Assessment
- Authors: Kaiqun Wu, Xiaoling Jiang, Rui Yu, Yonggang Luo, Tian Jiang, Xi Wu, Peng Wei
- Abstract summary: We propose a new image quality assessment framework to decide which image is better in an image group.
To capture the subtle differences, a fine-grained network is adopted to acquire multi-scale features.
Experimental results show that compared with the current mainstream image quality assessment methods, the proposed network can achieve more accurate image quality assessment.
- Score: 8.06731856250435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image compression has been applied in the fields of image storage
and video broadcasting. However, it is formidably difficult to distinguish the
subtle quality differences between distorted images generated by different
algorithms. In this paper, we propose a new image quality assessment framework
to decide which image is better within an image group. To capture the subtle
differences, a fine-grained network is adopted to acquire multi-scale features.
Subsequently, we design a cross subtract block that separates and gathers the
information within positive and negative image pairs, enabling image comparison
in feature space. After that, a progressive feature fusion block is designed,
which fuses multi-scale features in a novel progressive way, so that
hierarchical spatial 2D features can be processed gradually. Experimental
results show that, compared with current mainstream image quality assessment
methods, the proposed network achieves more accurate image quality assessment
and ranks second in the image perceptual model track of the CLIC benchmark.
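The abstract's two core components can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction from the abstract alone, not the authors' implementation: the exact operations inside the cross subtract and progressive fusion blocks are assumptions, and nearest-neighbour upsampling stands in for whatever the paper actually uses.

```python
import numpy as np

def cross_subtract(feat_a, feat_b):
    """Hypothetical cross subtract block: compare two images in feature
    space by taking signed differences in both directions and stacking
    them along the channel axis (separating, then gathering)."""
    return np.concatenate([feat_a - feat_b, feat_b - feat_a], axis=0)

def progressive_fuse(features):
    """Hypothetical progressive fusion: starting from the coarsest scale,
    upsample (here by nearest-neighbour repetition) and add into the next
    finer scale, so hierarchical 2D features are processed gradually."""
    fused = features[0]  # coarsest map, shape (C, H, W)
    for finer in features[1:]:
        scale = finer.shape[1] // fused.shape[1]
        up = fused.repeat(scale, axis=1).repeat(scale, axis=2)
        fused = finer + up
    return fused

# Toy multi-scale features (channels-first) for one positive/negative pair.
scales_a = [np.random.rand(4, s, s) for s in (8, 16, 32)]
scales_b = [np.random.rand(4, s, s) for s in (8, 16, 32)]
pair_feats = [cross_subtract(a, b) for a, b in zip(scales_a, scales_b)]
fused = progressive_fuse(pair_feats)
print(fused.shape)  # (8, 32, 32)
```

A real system would extract the multi-scale features with a fine-grained backbone and regress a preference score from the fused map; the sketch only shows how the pairwise differences could flow through the fusion hierarchy.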
Related papers
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy to bolster image classification performance is through augmenting the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Beyond Learned Metadata-based Raw Image Reconstruction [86.1667769209103]
Raw images have distinct advantages over sRGB images, e.g., linearity and fine-grained quantization levels.
However, they are not widely adopted by general users due to their substantial storage requirements.
We propose a novel framework that learns a compact representation in the latent space, serving as metadata.
arXiv Detail & Related papers (2023-06-21T06:59:07Z)
- Multi-cropping Contrastive Learning and Domain Consistency for Unsupervised Image-to-Image Translation [5.562419999563734]
We propose a novel unsupervised image-to-image translation framework based on multi-cropping contrastive learning and domain consistency, called MCDUT.
In many image-to-image translation tasks, our method achieves state-of-the-art results, and the advantages of our method have been proven through comparison experiments and ablation research.
arXiv Detail & Related papers (2023-04-24T16:20:28Z)
- Test your samples jointly: Pseudo-reference for image quality evaluation [3.2634122554914]
We propose to jointly model different images depicting the same content to improve the precision of quality estimation.
Our experiments show that at test-time, our method successfully combines the features from multiple images depicting the same new content, improving estimation quality.
arXiv Detail & Related papers (2023-04-07T17:59:27Z)
- Image Quality Assessment with Gradient Siamese Network [8.958447396656581]
We introduce Gradient Siamese Network (GSN) for image quality assessment.
We utilize Central Differential Convolution to obtain both the semantic features and the detail differences hidden in image pairs.
For the low-level, mid-level and high-level features extracted by the network, we innovatively design a multi-level fusion method.
arXiv Detail & Related papers (2022-08-08T12:10:38Z)
- Multi-Scale Features and Parallel Transformers Based Image Quality Assessment [0.6554326244334866]
We propose a new architecture for image quality assessment using transformer networks and multi-scale feature extraction.
Our experimentation on various datasets, including the PIPAL dataset, demonstrates that the proposed integration technique outperforms existing algorithms.
arXiv Detail & Related papers (2022-04-20T20:38:23Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- High-Quality Pluralistic Image Completion via Code Shared VQGAN [51.7805154545948]
We present a novel framework for pluralistic image completion that can achieve both high quality and diversity at much faster inference speed.
Our framework is able to learn semantically-rich discrete codes efficiently and robustly, resulting in much better image reconstruction quality.
arXiv Detail & Related papers (2022-04-05T01:47:35Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Blind Quality Assessment for Image Superresolution Using Deep Two-Stream Convolutional Networks [41.558981828761574]
We propose a no-reference/blind deep neural network-based SR image quality assessor (DeepSRQ).
To learn more discriminative feature representations of various distorted SR images, the proposed DeepSRQ is a two-stream convolutional network.
Experimental results on three publicly available SR image quality databases demonstrate the effectiveness and generalization ability of our proposed DeepSRQ.
arXiv Detail & Related papers (2020-04-13T19:14:28Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.