Low-light Enhancement Method Based on Attention Map Net
- URL: http://arxiv.org/abs/2208.09330v1
- Date: Fri, 19 Aug 2022 13:18:35 GMT
- Title: Low-light Enhancement Method Based on Attention Map Net
- Authors: Mengfei Wu, Xucheng Xue, Taiji Lan, Xinwei Xu
- Abstract summary: Low-light image enhancement is a crucial preprocessing task for some complex vision tasks.
The outcomes of target detection, image segmentation, and image recognition are all directly affected by the quality of image enhancement.
To address this, we propose an improved network called BrightenNet that uses U-Net as its primary structure and incorporates several attention mechanisms.
- Score: 1.2158275183241178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement is a crucial preprocessing task for some complex
vision tasks. The outcomes of target detection, image segmentation, and image
recognition are all directly affected by the quality of image enhancement. However,
the majority of current image enhancement techniques do not produce
satisfactory results, and these enhancement networks have relatively weak
robustness. To address this issue, we propose an improved network called
BrightenNet that uses U-Net as its primary structure and incorporates several
attention mechanisms. In a specific application, we employ the network as the
generator and LSGAN as the training framework to achieve better enhancement
results. The experiments in this paper demonstrate the validity of the proposed
BrightenNet. The results it produces both preserve image details and conform to
human visual standards.
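The abstract names LSGAN as the training framework for the BrightenNet generator. As a minimal sketch of what that framework optimizes, the least-squares GAN objectives can be written as below. This is an illustration only, assuming the common 0/1 label convention (a=0 for fake, b=c=1 for real); the function names are ours, the discriminator outputs are stand-in scalars, and BrightenNet itself is not reproduced here.

```python
# Sketch of the least-squares GAN (LSGAN) objectives, assuming the
# common label convention a=0 (fake), b=1 (real), c=1 (generator target).
# `d_real` / `d_fake` stand in for discriminator outputs on real images
# and on generator (enhanced) outputs, respectively.

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Discriminator loss: push D(real) toward b and D(fake) toward a."""
    real_term = sum((x - b) ** 2 for x in d_real) / len(d_real)
    fake_term = sum((x - a) ** 2 for x in d_fake) / len(d_fake)
    return 0.5 * (real_term + fake_term)

def lsgan_g_loss(d_fake, c=1.0):
    """Generator loss: push D(G(low_light_image)) toward the real label c."""
    return 0.5 * sum((x - c) ** 2 for x in d_fake) / len(d_fake)

# A discriminator that is perfectly fooled yields zero generator loss:
print(lsgan_g_loss([1.0, 1.0]))               # 0.0
print(lsgan_d_loss([1.0, 1.0], [0.0, 0.0]))   # 0.0
```

Compared with the original sigmoid cross-entropy GAN loss, the quadratic penalty keeps gradients non-vanishing for samples the discriminator already classifies correctly, which is one reason LSGAN training tends to be more stable for image-generation tasks.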
Related papers
- HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z)
- SCRNet: a Retinex Structure-based Low-light Enhancement Model Guided by Spatial Consistency [22.54951703413469]
We present a novel low-light image enhancement model, termed Spatial Consistency Retinex Network (SCRNet).
Our proposed model incorporates three levels of consistency: channel level, semantic level, and texture level, inspired by the principle of spatial consistency.
Extensive evaluations on various low-light image datasets demonstrate that our proposed SCRNet outshines existing state-of-the-art methods.
arXiv Detail & Related papers (2023-05-14T03:32:19Z)
- Rethinking Performance Gains in Image Dehazing Networks [25.371802581339576]
We make minimal modifications to popular U-Net to obtain a compact dehazing network.
Specifically, we swap out the convolutional blocks in U-Net for residual blocks with the gating mechanism.
With a significantly reduced overhead, gUNet is superior to state-of-the-art methods on multiple image dehazing datasets.
arXiv Detail & Related papers (2022-09-23T07:14:48Z) - DEANet: Decomposition Enhancement and Adjustment Network for Low-Light
Image Enhancement [8.328470427768695]
This paper proposes a DEANet based on Retinex for low-light image enhancement.
It combines the frequency information and content information of the image into three sub-networks.
Our model achieves robust results across all low-light images.
arXiv Detail & Related papers (2022-09-14T03:01:55Z) - Impact of Scaled Image on Robustness of Deep Neural Networks [0.0]
Scaling the raw images creates out-of-distribution data, which makes it a possible adversarial attack to fool the networks.
In this work, we propose a Scaling-distortion dataset ImageNet-CS by Scaling a subset of the ImageNet Challenge dataset by different multiples.
arXiv Detail & Related papers (2022-09-02T08:06:58Z)
- Attention based Broadly Self-guided Network for Low light Image Enhancement [0.0]
We propose an Attention-based Broadly Self-guided Network (ABSGN) for real-world low-light image enhancement.
The proposed network is validated on several mainstream benchmarks.
Additional experimental results show that the proposed network outperforms most state-of-the-art low-light image enhancement solutions.
arXiv Detail & Related papers (2021-12-12T13:11:29Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework simultaneously estimates the illumination and reflectance, but disregards the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- Enhancing Photorealism Enhancement [83.88433283714461]
We present an approach to enhancing the realism of synthetic images using a convolutional network.
We analyze scene layout distributions in commonly used datasets and find that they differ in important ways.
We report substantial gains in stability and realism in comparison to recent image-to-image translation methods.
arXiv Detail & Related papers (2021-05-10T19:00:49Z)
- AttendNets: Tiny Deep Image Recognition Neural Networks for the Edge via Visual Attention Condensers [81.17461895644003]
We introduce AttendNets, low-precision, highly compact deep neural networks tailored for on-device image recognition.
AttendNets possess deep self-attention architectures based on visual attention condensers.
Results show AttendNets have significantly lower architectural and computational complexity when compared to several deep neural networks.
arXiv Detail & Related papers (2020-09-30T01:53:17Z)
- Flexible Example-based Image Enhancement with Task Adaptive Global Feature Self-Guided Network [162.14579019053804]
We show that our model outperforms the current state of the art in learning a single enhancement mapping.
The model achieves even higher performance on learning multiple mappings simultaneously.
arXiv Detail & Related papers (2020-05-13T22:45:07Z)
- A U-Net Based Discriminator for Generative Adversarial Networks [86.67102929147592]
We propose an alternative U-Net based discriminator architecture for generative adversarial networks (GANs).
The proposed architecture provides detailed per-pixel feedback to the generator while maintaining the global coherence of synthesized images.
The novel discriminator improves over the state of the art in terms of the standard distribution and image quality metrics.
arXiv Detail & Related papers (2020-02-28T11:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.