AoSRNet: All-in-One Scene Recovery Networks via Multi-knowledge
Integration
- URL: http://arxiv.org/abs/2402.03738v1
- Date: Tue, 6 Feb 2024 06:12:03 GMT
- Title: AoSRNet: All-in-One Scene Recovery Networks via Multi-knowledge
Integration
- Authors: Yuxu Lu, Dong Yang, Yuan Gao, Ryan Wen Liu, Jun Liu, Yu Guo
- Abstract summary: We propose an all-in-one scene recovery network via multi-knowledge integration (termed AoSRNet)
It combines gamma correction (GC) and optimized linear stretching (OLS) to create the detail enhancement module (DEM) and color restoration module (CRM)
Comprehensive experimental results demonstrate the effectiveness and stability of AoSRNet compared to other state-of-the-art methods.
- Score: 17.070755601209136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scattering and attenuation of light in non-homogeneous imaging media,
or inconsistent light intensity, cause insufficient contrast and color
distortion in the collected images, which limits developments such as
vision-driven smart cities, autonomous vehicles, and intelligent robots. In this
paper, we propose an all-in-one scene recovery network via multi-knowledge
integration (termed AoSRNet) to improve the visibility of imaging devices in
typical low-visibility imaging scenes (e.g., haze, sand dust, and low light).
It combines gamma correction (GC) and optimized linear stretching (OLS) to
create the detail enhancement module (DEM) and color restoration module (CRM).
Additionally, we suggest a multi-receptive field extraction module (MEM) to
attenuate the loss of image texture details caused by the nonlinear GC and
linear OLS transformations. Finally, we refine the coarse features generated by
DEM, CRM, and MEM through Encoder-Decoder to generate the final restored image.
Comprehensive experimental results demonstrate the effectiveness and stability
of AoSRNet compared to other state-of-the-art methods. The source code is
available at \url{https://github.com/LouisYuxuLu/AoSRNet}.
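The detail enhancement and color restoration modules build on classical gamma correction (GC) and linear stretching. A minimal NumPy sketch of these two pixel-level operations, with assumed parameter values; the paper's OLS optimizes the stretching limits, which is not reproduced here:

```python
import numpy as np

def gamma_correction(img, gamma=0.7):
    """Nonlinear gamma correction; gamma < 1 brightens dark regions.
    `img` is a float array scaled to [0, 1]."""
    return np.power(img, gamma)

def linear_stretch(img, low_pct=1.0, high_pct=99.0):
    """Per-channel linear stretching between chosen percentiles,
    a simplified stand-in for the paper's optimized linear stretching (OLS)."""
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

# Example: expand the contrast of a synthetic low-contrast hazy image
# whose pixel values are crowded into [0.4, 0.6].
hazy = 0.4 + 0.2 * np.random.rand(64, 64, 3)
enhanced = linear_stretch(gamma_correction(hazy))
```

In AoSRNet these operations only produce coarse intermediate features; the learned modules (DEM, CRM, MEM) and the encoder-decoder then compensate for the texture detail lost by these global transforms.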
Related papers
- Reconstructive Visual Instruction Tuning [64.91373889600136]
Reconstructive visual instruction tuning (ROSS) is a family of Large Multimodal Models (LMMs) that exploit vision-centric supervision signals.
It reconstructs latent representations of input images, avoiding directly regressing exact raw RGB values.
Empirically, ROSS consistently brings significant improvements across different visual encoders and language models.
arXiv: 2024-10-12T15:54:29Z
- CRNet: A Detail-Preserving Network for Unified Image Restoration and Enhancement Task [44.14681936953848]
Composite Refinement Network (CRNet) can perform unified image restoration and enhancement.
CRNet explicitly separates and strengthens high and low-frequency information through pooling layers.
Our model secured third place in the first track of the Bracketing Image Restoration and Enhancement Challenge.
arXiv: 2024-04-22T12:33:18Z
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv: 2024-04-08T07:34:39Z
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv: 2023-12-12T06:07:21Z
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv: 2023-08-10T07:53:06Z
- Ghost-free High Dynamic Range Imaging via Hybrid CNN-Transformer and Structure Tensor [12.167049432063132]
We present a hybrid model consisting of a convolutional encoder and a Transformer decoder to generate ghost-free HDR images.
In the encoder, a context aggregation network and non-local attention block are adopted to optimize multi-scale features.
The decoder based on Swin Transformer is utilized to improve the reconstruction capability of the proposed model.
arXiv: 2022-12-01T15:43:32Z
- Modular Degradation Simulation and Restoration for Under-Display Camera [21.048590332029995]
Under-display camera (UDC) provides an elegant solution for full-screen smartphones.
UDC captured images suffer from severe degradation since sensors lie under the display.
We propose a modular network dubbed MPGNet trained using the generative adversarial network (GAN) framework for simulating UDC imaging.
arXiv: 2022-09-23T07:36:07Z
- Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Network [80.67717076541956]
Under-Display Camera (UDC) systems provide a true bezel-less and notch-free viewing experience on smartphones.
In a typical UDC system, the pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation.
In this work, we aim to analyze and tackle the aforementioned degradation problems.
arXiv: 2021-04-19T18:41:45Z
- Contrastive Learning for Compact Single Image Dehazing [41.83007400559068]
We propose a novel contrastive regularization (CR) built upon contrastive learning to exploit both the information of hazy images and clear images as negative and positive samples.
CR ensures that the restored image is pulled closer to the clear image and pushed far away from the hazy image in the representation space.
Considering the trade-off between performance and memory storage, we develop a compact dehazing network based on an autoencoder-like framework.
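The pull/push idea behind contrastive regularization can be sketched with a toy distance-ratio loss. The flattened-pixel "features" and the specific ratio form are illustrative assumptions; the paper computes distances in a pretrained network's feature space:

```python
import numpy as np

def contrastive_regularization(restored, clear, hazy, eps=1e-8):
    """Toy contrastive regularization: pull the restored image toward the
    clear (positive) sample and push it away from the hazy (negative) sample.
    Raw pixels stand in for learned features here."""
    d_pos = np.linalg.norm(restored - clear)  # distance to positive sample
    d_neg = np.linalg.norm(restored - hazy)   # distance to negative sample
    return d_pos / (d_neg + eps)              # smaller is better

clear = np.zeros((8, 8, 3))
hazy = clear + 0.5
good = clear + 0.05   # restoration close to the clear image
bad = clear + 0.4     # restoration still close to the hazy image

# The loss favors restorations nearer the clear image than the hazy one.
loss_good = contrastive_regularization(good, clear, hazy)
loss_bad = contrastive_regularization(bad, clear, hazy)
```

Minimizing such a ratio simultaneously decreases the distance to the positive sample and increases the distance to the negative one, which is the stated effect of CR.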
arXiv: 2021-04-19T14:56:21Z
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv: 2021-04-05T13:05:22Z
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.