Deep Decomposition and Bilinear Pooling Network for Blind Night-Time
Image Quality Evaluation
- URL: http://arxiv.org/abs/2205.05880v1
- Date: Thu, 12 May 2022 05:16:24 GMT
- Title: Deep Decomposition and Bilinear Pooling Network for Blind Night-Time
Image Quality Evaluation
- Authors: Qiuping Jiang, Jiawu Xu, Wei Zhou, Xiongkuo Min, Guangtao Zhai
- Abstract summary: We propose a novel deep decomposition and bilinear pooling network (DDB-Net) to better address this issue.
The DDB-Net contains three modules, i.e., an image decomposition module, a feature encoding module, and a bilinear pooling module.
The superiority of the proposed DDB-Net is well validated by extensive experiments on two publicly available night-time image databases.
- Score: 46.828620017822644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Blind image quality assessment (BIQA), which aims to accurately predict
image quality without any pristine reference information, has attracted
considerable attention over the past decades. In particular, great progress has
been made with the help of deep neural networks. However, BIQA for night-time
images (NTIs), which typically suffer from complicated authentic distortions
such as reduced visibility, low contrast, additive noise, and color distortion,
remains under-investigated. These diverse authentic degradations pose a
particular challenge to the design of effective deep neural networks for blind
NTI quality evaluation (NTIQE). In this paper, we propose a novel deep
decomposition and bilinear pooling network (DDB-Net) to better address this
issue. The DDB-Net contains three modules, i.e., an image decomposition module,
a feature encoding module, and a bilinear pooling module. The image
decomposition module is inspired by the Retinex theory and involves decoupling
the input NTI into an illumination layer component responsible for illumination
information and a reflectance layer component responsible for content
information. The feature encoding module then learns multi-scale feature
representations of the degradations rooted in each of the two decoupled
components separately. Finally, by modeling illumination-related and
content-related degradations as two-factor variations, the two multi-scale
feature sets are bilinearly pooled and concatenated together to form a unified
representation for quality prediction. The superiority of the proposed DDB-Net
is well validated by extensive experiments on two publicly available night-time
image databases.
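
The abstract does not include code, but its two core ideas (a Retinex-style split into illumination and reflectance layers, plus two-factor bilinear pooling of the resulting features) can be illustrated with a minimal PyTorch sketch. The layer shapes, the sigmoid illumination head, and the signed-square-root normalization below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetinexDecomposition(nn.Module):
    """Retinex-inspired split: image ~= reflectance * illumination."""
    def __init__(self):
        super().__init__()
        self.illu = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        L = self.illu(x).clamp_min(1e-4)   # illumination layer, (B, 1, H, W)
        R = (x / L).clamp(0, 1)            # reflectance layer, so x ~= R * L
        return L, R

class BilinearQualityHead(nn.Module):
    """Treats illumination- and content-related degradations as two-factor
    variations: outer product of the two pooled feature vectors."""
    def __init__(self, dim_illu=128, dim_refl=128):
        super().__init__()
        self.fc = nn.Linear(dim_illu * dim_refl, 1)

    def forward(self, f_illu, f_refl):
        # f_illu: (B, C1), f_refl: (B, C2): pooled multi-scale features
        b = torch.bmm(f_illu.unsqueeze(2), f_refl.unsqueeze(1))  # (B, C1, C2)
        b = b.flatten(1)
        b = torch.sign(b) * torch.sqrt(b.abs() + 1e-8)  # signed square root
        b = F.normalize(b, dim=1)                       # L2 normalization
        return self.fc(b)                               # scalar quality score
```

The outer product captures every pairwise interaction between an illumination-related and a content-related feature, which is what makes the pooled representation genuinely two-factor before the final regression.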
Related papers
- VQCNIR: Clearer Night Image Restoration with Vector-Quantized Codebook [16.20461368096512]
Night photography often struggles with challenges like low light and blurring, stemming from dark environments and prolonged exposures.
We believe in the strength of data-driven high-quality priors and strive to offer a reliable and consistent prior, circumventing the restrictions of manual priors.
We propose Clearer Night Image Restoration with Vector-Quantized Codebook (VQCNIR) to achieve remarkable and consistent restoration outcomes on real-world and synthetic benchmarks.
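The codebook prior at the heart of this entry builds on vector quantization. Below is a generic nearest-neighbour codebook lookup in the VQ-VAE style; the codebook size, feature dimension, and straight-through gradient trick are standard choices assumed for illustration, not details taken from VQCNIR.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Replaces each feature vector with its nearest codebook entry."""
    def __init__(self, num_codes=1024, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):
        # z: (B, N, D) continuous encoder features
        book = self.codebook.weight                      # (K, D)
        dists = torch.cdist(z, book.unsqueeze(0).expand(z.size(0), -1, -1))
        idx = dists.argmin(dim=-1)                       # nearest code per vector
        z_q = self.codebook(idx)                         # (B, N, D) quantized
        # straight-through estimator: gradients flow back to the encoder
        return z + (z_q - z).detach(), idx
```

Snapping degraded features onto a codebook learned from high-quality images is what supplies the data-driven prior, replacing hand-crafted assumptions.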
arXiv Detail & Related papers (2023-12-14T02:16:27Z)
- Transformer-based No-Reference Image Quality Assessment via Supervised Contrastive Learning [36.695247860715874]
We propose SaTQA, a novel NR-IQA model based on Supervised Contrastive Learning (SCL) and Transformers.
We first train a model on a large-scale synthetic dataset via SCL to extract degradation features of images with various distortion types and levels.
To further extract distortion information from images, we propose a backbone network incorporating the Multi-Stream Block (MSB), which combines the inductive bias of CNNs with the long-range dependency modeling capability of Transformers.
Experimental results on seven standard IQA datasets show that SaTQA outperforms state-of-the-art methods on both synthetic and authentic datasets.
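The SCL pretraining step corresponds to a supervised contrastive objective. A compact sketch of the standard formulation (each anchor pulled toward samples sharing a label, such as a distortion type, and pushed from the rest) follows; the batch layout and temperature are assumptions rather than SaTQA's exact recipe.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """features: (B, D) embeddings; labels: (B,) distortion labels."""
    z = F.normalize(features, dim=1)                  # unit-length embeddings
    sim = z @ z.t() / temperature                     # (B, B) similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))   # exclude self-pairs
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos, 0.0)        # keep positive pairs only
    per_anchor = -log_prob.sum(1) / pos.sum(1).clamp_min(1)
    return per_anchor[pos.any(1)].mean()              # anchors with >=1 positive
```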
arXiv Detail & Related papers (2023-12-12T06:01:41Z)
- Multi-task Image Restoration Guided By Robust DINO Features [88.74005987908443]
We propose DINO-IR, a multi-task image restoration approach leveraging robust features extracted from DINOv2.
We first propose a pixel-semantic fusion (PSF) module to dynamically fuse DINOv2's shallow features.
By formulating these modules into a unified deep model, we propose a DINO perception contrastive loss to constrain the model training.
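The summary gives no internals for the PSF module; one plausible reading of "dynamically fuse" is a learned spatial gate between pixel-level restoration features and semantic features. The sketch below follows that reading, with channel widths and the gating design chosen for illustration and sem standing in for features from a frozen DINOv2 encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelSemanticFusion(nn.Module):
    """Gated fusion of pixel-level features with semantic features."""
    def __init__(self, pix_ch=64, sem_ch=384):
        super().__init__()
        self.proj = nn.Conv2d(sem_ch, pix_ch, 1)     # align channel widths
        self.gate = nn.Conv2d(pix_ch * 2, pix_ch, 1)

    def forward(self, pix, sem):
        # pix: (B, 64, H, W) restoration features; sem: (B, 384, h, w)
        sem = F.interpolate(self.proj(sem), size=pix.shape[-2:],
                            mode='bilinear', align_corners=False)
        g = torch.sigmoid(self.gate(torch.cat([pix, sem], dim=1)))
        return pix + g * sem                         # spatially varying mix
```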
arXiv Detail & Related papers (2023-12-04T06:59:55Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
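The unfolding loop described above (re-estimating an illumination map from the current enhanced result, then using it to refine that result) can be written generically as below; the stage count and both sub-networks are placeholders, since the summary does not specify DCUNet's actual blocks.

```python
import torch
import torch.nn as nn

def small_cnn(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, cout, 3, padding=1))

class UnfoldingEnhancer(nn.Module):
    """K unrolled stages alternating illumination estimation and refinement."""
    def __init__(self, stages=4):
        super().__init__()
        self.illum_nets = nn.ModuleList(small_cnn(3, 1) for _ in range(stages))
        self.refine_nets = nn.ModuleList(small_cnn(4, 3) for _ in range(stages))

    def forward(self, x):
        out = x
        for est, refine in zip(self.illum_nets, self.refine_nets):
            illum = torch.sigmoid(est(out))                     # illumination map
            out = out + refine(torch.cat([out, illum], dim=1))  # refinement step
        return out
```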
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations throughout the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
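A minimal version of that stated goal (a full-resolution stream enriched with context from downsampled branches) might look like the sketch below; the branch scales, pooling, and fusion layer are illustrative assumptions, not the paper's actual blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Keeps a full-resolution stream while mixing in multi-scale context."""
    def __init__(self, ch=64, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=1) for _ in scales)
        self.fuse = nn.Conv2d(ch * len(scales), ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        outs = []
        for s, conv in zip(self.scales, self.branches):
            y = F.avg_pool2d(x, s) if s > 1 else x    # contextual branch
            y = torch.relu(conv(y))
            if s > 1:                                 # back to full resolution
                y = F.interpolate(y, size=(h, w), mode='bilinear',
                                  align_corners=False)
            outs.append(y)
        return x + self.fuse(torch.cat(outs, dim=1))  # high-res details kept
```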
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
Typical frameworks simultaneously estimate the illumination and reflectance but disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit scene-level contextual dependencies across spatial scales.
We also develop a lightweight CSDNet, named LiteCSDNet, by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Deep Attentive Generative Adversarial Network for Photo-Realistic Image De-Quantization [25.805568996596783]
De-quantization can improve the visual quality of low bit-depth images displayed on high bit-depth screens.
This paper proposes the DAGAN algorithm to perform super-resolution on image intensity resolution.
The DenseResAtt module consists of dense residual blocks equipped with a self-attention mechanism.
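The DenseResAtt description suggests dense connectivity followed by self-attention. A sketch under those assumptions follows, using SAGAN-style non-local attention and channel counts chosen purely for illustration.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Non-local self-attention over spatial positions (SAGAN-style)."""
    def __init__(self, ch):
        super().__init__()
        self.q, self.k = nn.Conv2d(ch, ch // 8, 1), nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # start as identity map

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)    # (B, HW, C/8)
        k = self.k(x).flatten(2)                    # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)         # (B, HW, HW)
        v = self.v(x).flatten(2)                    # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out

class DenseResAtt(nn.Module):
    """Dense residual block: each conv sees all previous feature maps,
    followed by self-attention and an outer residual connection."""
    def __init__(self, ch=64, growth=32, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth, 3, padding=1)
            for i in range(layers))
        self.fuse = nn.Conv2d(ch + layers * growth, ch, 1)
        self.attn = SelfAttention2d(ch)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.attn(self.fuse(torch.cat(feats, dim=1)))
```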
arXiv Detail & Related papers (2020-04-07T06:45:01Z)