Wavelet-based Decoupling Framework for low-light Stereo Image Enhancement
- URL: http://arxiv.org/abs/2507.12188v1
- Date: Wed, 16 Jul 2025 12:42:27 GMT
- Title: Wavelet-based Decoupling Framework for low-light Stereo Image Enhancement
- Authors: Shuangli Du, Siming Yan, Zhenghao Shi, Zhenzhen You, Lu Sun,
- Abstract summary: This paper proposes a wavelet-based low-light stereo image enhancement method with feature space decoupling. Using the wavelet transform, the feature space is decomposed into a low-frequency branch for illumination adjustment and multiple high-frequency branches for texture enhancement. To enhance the high-frequency information, a detail and texture enhancement module (DTEM) is proposed based on a cross-attention mechanism.
- Score: 4.645709745803925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light images suffer from complex degradation, and existing enhancement methods often encode all degradation factors within a single latent space. This leads to highly entangled features and strong black-box characteristics, making the model prone to shortcut learning. To mitigate these issues, this paper proposes a wavelet-based low-light stereo image enhancement method with feature space decoupling. Our insight comes from the following findings: (1) The wavelet transform enables independent processing of low-frequency and high-frequency information. (2) Illumination adjustment can be achieved by adjusting the low-frequency component of a low-light image, extracted through multi-level wavelet decomposition. Thus, by using the wavelet transform, the feature space is decomposed into a low-frequency branch for illumination adjustment and multiple high-frequency branches for texture enhancement. Additionally, stereo low-light image enhancement can extract useful cues from the other view to improve the result. To this end, we propose a novel high-frequency guided cross-view interaction module (HF-CIM) that operates within the high-frequency branches rather than across the entire feature space, effectively extracting valuable image details from the other view. Furthermore, to enhance the high-frequency information, a detail and texture enhancement module (DTEM) is proposed based on a cross-attention mechanism. The model is trained on a dataset consisting of images with uniform illumination and images with non-uniform illumination. Experimental results on both real and synthetic images indicate that our algorithm offers significant advantages in light adjustment while effectively recovering high-frequency information. The code and dataset are publicly available at: https://github.com/Cherisherr/WDCI-Net.git.
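The abstract's second insight, that illumination can be adjusted by modifying only the low-frequency component obtained from a multi-level wavelet decomposition, can be illustrated with a minimal stand-alone sketch. The snippet below is not the authors' model (their adjustment is learned, operates in feature space, and is trained jointly with high-frequency enhancement); it only demonstrates the decoupling idea using PyWavelets, and the function name, gain value, wavelet choice, and number of levels are illustrative assumptions.

```python
import numpy as np
import pywt

def adjust_illumination_wavelet(img, gain=2.5, wavelet="haar", levels=2):
    """Brighten a low-light image by scaling only the low-frequency
    (approximation) band of a multi-level 2D wavelet decomposition.
    `img` is a single-channel float array in [0, 1]."""
    # coeffs[0] is the coarsest approximation (low-frequency branch);
    # the remaining entries are (horizontal, vertical, diagonal) detail
    # tuples per level (high-frequency branches).
    coeffs = pywt.wavedec2(img, wavelet, level=levels)

    # Illumination adjustment touches the low-frequency band only; the
    # detail bands are left untouched here (the paper enhances them
    # separately with its HF-CIM and DTEM modules).
    coeffs[0] = coeffs[0] * gain

    # Reconstruct and clip back to a valid intensity range.
    return np.clip(pywt.waverec2(coeffs, wavelet), 0.0, 1.0)

# Toy usage on a synthetic dark image: mean intensity increases after adjustment.
dark = np.random.rand(128, 128).astype(np.float32) * 0.2
bright = adjust_illumination_wavelet(dark)
print(dark.mean(), bright.mean())
```

For the stereo side, the abstract states that cross-view interaction happens only within the high-frequency branches (HF-CIM) and that details are further refined with a cross-attention based DTEM. The generic PyTorch sketch below shows what cross-attention between the high-frequency features of two views could look like; it is a hypothetical stand-in for those modules rather than their published architecture, and the channel and head counts are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class CrossViewDetailAttention(nn.Module):
    """Illustrative cross-attention between high-frequency features of the
    left and right views (conceptually in the spirit of HF-CIM/DTEM,
    not the paper's actual layers)."""

    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, hf_left, hf_right):
        # hf_left, hf_right: (B, C, H, W) high-frequency feature maps.
        b, c, h, w = hf_left.shape
        q = hf_left.flatten(2).transpose(1, 2)    # queries from the left view
        kv = hf_right.flatten(2).transpose(1, 2)  # keys/values from the other view
        fused, _ = self.attn(q, kv, kv)           # borrow details across views
        fused = self.norm(q + fused)              # residual connection + norm
        return fused.transpose(1, 2).reshape(b, c, h, w)

# Toy usage with random feature maps.
left = torch.randn(1, 64, 32, 32)
right = torch.randn(1, 64, 32, 32)
print(CrossViewDetailAttention()(left, right).shape)  # torch.Size([1, 64, 32, 32])
```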
Related papers
- Learning Multi-scale Spatial-frequency Features for Image Denoising [58.883244886588336]
We propose a novel multi-scale adaptive dual-domain network (MADNet) for image denoising. We use image pyramid inputs to restore noise-free results from low-resolution images. In order to realize the interaction of high-frequency and low-frequency information, we design an adaptive spatial-frequency learning unit.
arXiv Detail & Related papers (2025-06-19T13:28:09Z) - Freqformer: Image-Demoiréing Transformer via Efficient Frequency Decomposition [83.40450475728792]
We present Freqformer, a Transformer-based framework specifically designed for image demoiréing through targeted frequency separation. Our method performs an effective frequency decomposition that explicitly splits moiré patterns into high-frequency spatially-localized textures and low-frequency scale-robust color distortions. Experiments on various demoiréing benchmarks demonstrate that Freqformer achieves state-of-the-art performance with a compact model size.
arXiv Detail & Related papers (2025-05-25T12:23:10Z) - DLEN: Dual Branch of Transformer for Low-Light Image Enhancement in Dual Domains [0.0]
Low-light image enhancement (LLE) aims to improve the visual quality of images captured in poorly lit conditions. The resulting degradations hinder the performance of computer vision tasks such as object detection, facial recognition, and autonomous driving. We propose the Dual Light Enhance Network (DLEN), a novel architecture that incorporates two distinct attention mechanisms.
arXiv Detail & Related papers (2025-01-21T15:58:16Z) - DMFourLLIE: Dual-Stage and Multi-Branch Fourier Network for Low-Light Image Enhancement [3.2286595119663266]
We propose a Dual-Stage Multi-Branch Fourier Low-Light Image Enhancement (DMFourLLIE) framework to address these limitations. The first stage integrates structural information from infrared images to enhance the phase component. The second stage combines multi-scale and Fourier convolutional branches for robust image reconstruction.
arXiv Detail & Related papers (2024-12-01T05:44:50Z) - Wave-Mamba: Wavelet State Space Model for Ultra-High-Definition Low-Light Image Enhancement [7.891750065129094]
We propose Wave-Mamba, a novel approach based on two pivotal insights derived from the wavelet domain.
Our method has demonstrated superior performance, significantly outshining current leading techniques.
arXiv Detail & Related papers (2024-08-02T14:01:34Z) - CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images. Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information. We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z) - Low-light Stereo Image Enhancement and De-noising in the Low-frequency Information Enhanced Image Space [5.1569866461097185]
Methods are proposed to perform enhancement and de-noising simultaneously.
A low-frequency information enhanced module (IEM) is proposed to suppress noise and produce a new image space.
A cross-channel and spatial context information mining module (CSM) is proposed to encode long-range spatial dependencies.
An encoder-decoder structure is constructed, incorporating cross-view and cross-scale feature interactions.
arXiv Detail & Related papers (2024-01-15T15:03:32Z) - Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from lower visibility and heavier noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z) - Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feed-forwards the features of low-light images from the generator of enhancement GAN into the generator of degradation GAN.
arXiv Detail & Related papers (2022-07-03T06:37:46Z)