LLEMamba: Low-Light Enhancement via Relighting-Guided Mamba with Deep Unfolding Network
- URL: http://arxiv.org/abs/2406.01028v1
- Date: Mon, 3 Jun 2024 06:23:28 GMT
- Title: LLEMamba: Low-Light Enhancement via Relighting-Guided Mamba with Deep Unfolding Network
- Authors: Xuanqi Zhang, Haijin Zeng, Jinwang Pan, Qiangqiang Shen, Yongyong Chen
- Abstract summary: We propose a novel Low-Light Enhancement method via relighting-guided Mamba with a deep unfolding network (LLEMamba).
Our LLEMamba first constructs a Retinex model with deep priors, embedding the iterative optimization process based on the Alternating Direction Method of Multipliers (ADMM) within a deep unfolding network.
Unlike Transformers, the proposed LLEMamba introduces a novel Mamba architecture with lower computational complexity to support the multiple iterations of the deep unfolding framework.
- Score: 9.987504237289832
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformer-based low-light enhancement methods have yielded promising performance by effectively capturing long-range dependencies in a global context. However, their elevated computational demand limits the scalability of multiple iterations in deep unfolding networks, and hence they have difficulty in flexibly balancing interpretability and distortion. To address this issue, we propose a novel Low-Light Enhancement method via relighting-guided Mamba with a deep unfolding network (LLEMamba), whose theoretical interpretability and fidelity are guaranteed by Retinex optimization and Mamba deep priors, respectively. Specifically, our LLEMamba first constructs a Retinex model with deep priors, embedding the iterative optimization process based on the Alternating Direction Method of Multipliers (ADMM) within a deep unfolding network. Unlike Transformers, to support the deep unfolding framework across multiple iterations, the proposed LLEMamba introduces a novel Mamba architecture with lower computational complexity, which not only achieves light-dependent global visual context for dark images during reflectance relighting but also optimizes to obtain more stable closed-form solutions. Experiments on the benchmarks show that LLEMamba achieves superior quantitative evaluations and lower-distortion visual results compared to existing state-of-the-art methods.
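The ADMM-within-unfolding idea in the abstract can be made concrete with a toy sketch. Below is a purely illustrative single ADMM iteration for the element-wise Retinex model I ≈ R ⊙ L: the R and L updates are standard closed-form least-squares steps, and a [0, 1] clamp stands in for the learned deep prior that LLEMamba would substitute with a Mamba network. All variable names and the `rho`/`eps` values are assumptions for this sketch, not taken from the paper.

```python
import numpy as np

def retinex_admm_step(I, R, L, Z, U, rho=1.0, eps=1e-6):
    """One illustrative ADMM iteration for I ~ R * L (element-wise Retinex).

    I: observed low-light image; R: reflectance; L: illumination;
    Z: auxiliary variable coupled to R; U: scaled dual variable.
    """
    # R-update: closed-form minimizer of ||I - R*L||^2 + rho*||R - Z + U||^2
    R = (I * L + rho * (Z - U)) / (L * L + rho)
    # L-update: closed-form minimizer of ||I - R*L||^2 (smoothness prior omitted)
    L = (I * R) / (R * R + eps)
    # Z-update: proximal operator of the prior; a clamp stands in for the deep prior
    Z = np.clip(R + U, 0.0, 1.0)
    # Dual ascent on the multiplier
    U = U + R - Z
    return R, L, Z, U
```

In a deep unfolding network, each such iteration becomes one stage with learnable parameters, and the proximal Z-update is replaced by a trained network; here it is only a fixed clamp.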
Related papers
- Transformer-Progressive Mamba Network for Lightweight Image Super-Resolution [45.74812546007778]
Mamba-based super-resolution (SR) methods have demonstrated the ability to capture global receptive fields with linear complexity. We propose T-PMambaSR, a lightweight SR framework that integrates window-based self-attention with Progressive Mamba.
arXiv Detail & Related papers (2025-11-05T06:46:17Z) - Larger Hausdorff Dimension in Scanning Pattern Facilitates Mamba-Based Methods in Low-Light Image Enhancement [2.9138744171708115]
We propose an innovative enhancement to the Mamba framework by increasing the Hausdorff dimension of its scanning pattern through a novel Hilbert Selective Scan mechanism. This mechanism explores the feature space more effectively, capturing intricate fine-scale details and improving overall coverage. We believe that this refined strategy not only advances the state-of-the-art in low-light image enhancement but also holds promise for broader applications in fields that leverage Mamba-based techniques.
arXiv Detail & Related papers (2025-10-29T22:25:48Z) - Rethinking Efficient Hierarchical Mixing Architecture for Low-light RAW Image Enhancement [70.94252289772685]
We introduce a Hierarchical Mixing Architecture (HiMA) for efficient low-light image signal processing (ISP). HiMA leverages the complementary strengths of Transformer and Mamba modules to handle features at large and small scales. To address uneven illumination with strong local variations, we propose Local Distribution Adjustment (LoDA). In addition, to fully exploit the denoised outputs from the first stage, we design a Multi-prior Fusion (MPF) module.
arXiv Detail & Related papers (2025-10-17T10:09:38Z) - Trained Mamba Emulates Online Gradient Descent in In-Context Linear Regression [90.93281146423378]
Mamba is an efficient Transformer alternative with linear complexity for long-sequence modeling. Recent empirical works demonstrate that Mamba's in-context learning (ICL) is competitive with Transformers. This paper studies the training dynamics of Mamba on the linear regression ICL task.
arXiv Detail & Related papers (2025-09-28T09:48:49Z) - Differential Mamba [16.613266337054267]
Sequence models like Transformers and RNNs often overallocate attention to irrelevant context, leading to noisy intermediate representations. Recent work has shown that differential design can mitigate this issue in Transformers, improving their effectiveness across various applications. We show that a naive adaptation of differential design to Mamba is insufficient and requires careful architectural modifications.
arXiv Detail & Related papers (2025-07-08T17:30:14Z) - FUDOKI: Discrete Flow-based Unified Understanding and Generation via Kinetic-Optimal Velocities [76.46448367752944]
Multimodal large language models (MLLMs) unify visual understanding and image generation within a single framework. Most existing MLLMs rely on autoregressive (AR) architectures, which impose inherent limitations on future development. We introduce FUDOKI, a unified multimodal model purely based on discrete flow matching.
arXiv Detail & Related papers (2025-05-26T15:46:53Z) - MambaStyle: Efficient StyleGAN Inversion for Real Image Editing with State-Space Models [60.110274007388135]
MambaStyle is an efficient single-stage encoder-based approach for GAN inversion and editing. We show that MambaStyle achieves a superior balance among inversion accuracy, editing quality, and computational efficiency.
arXiv Detail & Related papers (2025-05-06T20:03:47Z) - Binarized Mamba-Transformer for Lightweight Quad Bayer HybridEVS Demosaicing [21.15110217419682]
We propose a lightweight Mamba-based binary neural network for efficient demosaicing of HybridEVS RAW images.
Bi-Mamba binarizes all projections while retaining the core Selective Scan in full precision.
We conduct quantitative and qualitative experiments to demonstrate the effectiveness of BMTNet in both performance and computational efficiency.
arXiv Detail & Related papers (2025-03-20T13:32:27Z) - Detail Matters: Mamba-Inspired Joint Unfolding Network for Snapshot Spectral Compressive Imaging [40.80197280147993]
We propose a Mamba-inspired Joint Unfolding Network (MiJUN) to overcome the inherent nonlinear and ill-posed characteristics of HSI reconstruction.
We introduce an accelerated unfolding network scheme, which reduces the reliance on initial optimization stages.
We refine the scanning strategy with Mamba by integrating the tensor mode-$k$ unfolding into the Mamba network.
arXiv Detail & Related papers (2025-01-02T13:56:23Z) - MambaVO: Deep Visual Odometry Based on Sequential Matching Refinement and Training Smoothing [13.827464353174182]
MambaVO conducts robust, Mamba-based matching and training to enhance matching quality and improve pose estimation.
On public benchmarks, MambaVO and MambaVO++ demonstrate SOTA performance, while ensuring real-time running.
arXiv Detail & Related papers (2024-12-28T08:42:48Z) - Mamba-SEUNet: Mamba UNet for Monaural Speech Enhancement [54.427965535613886]
Mamba, as a novel state-space model (SSM), has gained widespread application in natural language processing and computer vision.
In this work, we introduce Mamba-SEUNet, an innovative architecture that integrates Mamba with U-Net for SE tasks.
arXiv Detail & Related papers (2024-12-21T13:43:51Z) - MobileMamba: Lightweight Multi-Receptive Visual Mamba Network [51.33486891724516]
Previous research on lightweight models has primarily focused on CNNs and Transformer-based designs.
We propose the MobileMamba framework, which balances efficiency and performance.
MobileMamba achieves up to 83.6% Top-1 accuracy, surpassing existing state-of-the-art methods.
arXiv Detail & Related papers (2024-11-24T18:01:05Z) - ECMamba: Consolidating Selective State Space Model with Retinex Guidance for Efficient Multiple Exposure Correction [48.77198487543991]
We introduce a novel framework based on Mamba for Exposure Correction (ECMamba) with dual pathways, each dedicated to the restoration of reflectance and illumination map.
Specifically, building on Retinex theory, we train a Retinex estimator capable of mapping inputs into two intermediary spaces.
We develop a novel 2D Selective State-space layer guided by Retinex information (Retinex-SS2D) as the core operator of ECMM.
arXiv Detail & Related papers (2024-10-28T21:02:46Z) - Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement [71.13353154514418]
Low-light image enhancement, particularly in cross-domain tasks such as mapping from the raw domain to the sRGB domain, remains a significant challenge. We propose a novel Mamba-based method customized for low-light RAW images, called RAWMamba, to effectively handle raw images with different CFAs. By bridging demosaicing and denoising, better enhancement for low-light RAW images is achieved.
arXiv Detail & Related papers (2024-09-11T06:12:03Z) - ReMamba: Equip Mamba with Effective Long-Sequence Modeling [50.530839868893786]
We propose ReMamba, which enhances Mamba's ability to comprehend long contexts.
ReMamba incorporates selective compression and adaptation techniques within a two-stage re-forward process.
arXiv Detail & Related papers (2024-08-28T02:47:27Z) - Cross-Scan Mamba with Masked Training for Robust Spectral Imaging [51.557804095896174]
We propose the Cross-Scanning Mamba, named CS-Mamba, that employs a Spatial-Spectral SSM for global-local balanced context encoding.
Experiment results show that our CS-Mamba achieves state-of-the-art performance and the masked training method can better reconstruct smooth features to improve the visual quality.
arXiv Detail & Related papers (2024-08-01T15:14:10Z) - ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models [70.45441031021291]
Large Vision-Language Models (LVLMs) can understand the world comprehensively by integrating rich information from different modalities.
However, LVLMs are often problematic due to their massive computational/energy costs and carbon footprint.
We propose Efficient Coarse-to-Fine LayerWise Pruning (ECoFLaP), a two-stage coarse-to-fine weight pruning approach for LVLMs.
arXiv Detail & Related papers (2023-10-04T17:34:00Z) - Dual Degradation-Inspired Deep Unfolding Network for Low-Light Image Enhancement [3.4929041108486185]
We propose a Dual degrAdation-inSpired deep Unfolding network, termed DASUNet, for low-light image enhancement.
It learns two distinct image priors via considering degradation specificity between luminance and chrominance spaces.
Our source code and pretrained model will be publicly available.
arXiv Detail & Related papers (2023-08-05T03:07:11Z) - Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z) - NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z) - ReLLIE: Deep Reinforcement Learning for Customized Low-Light Image Enhancement [21.680891925479195]
Low-light image enhancement (LLIE) is a pervasive yet challenging problem.
This paper presents a novel deep reinforcement learning based method, dubbed ReLLIE, for customized low-light enhancement.
arXiv Detail & Related papers (2021-07-13T03:36:30Z)
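Many of the papers above build on Mamba's selective state-space scan, which models long sequences in linear time. The sketch below shows a minimal sequential form of that recurrence under the common S6 formulation (zero-order-hold discretization with input-dependent step sizes); all shapes and names are assumptions for illustration, and real implementations rely on fused parallel-scan kernels rather than a Python loop.

```python
import numpy as np

def selective_scan(u, delta, A, B, C):
    """Illustrative sequential selective-scan (S6-style) recurrence.

    u:     (T, D) input sequence
    delta: (T, D) input-dependent step sizes
    A:     (D, N) state transition parameters
    B, C:  (T, N) input-dependent input/output projections
    Returns y: (T, D).
    """
    T, D = u.shape
    N = A.shape[1]
    x = np.zeros((D, N))          # hidden state per channel
    y = np.empty((T, D))
    for t in range(T):
        # Zero-order-hold discretization, per step and per channel
        dA = np.exp(delta[t][:, None] * A)       # (D, N)
        dB = delta[t][:, None] * B[t][None, :]   # (D, N)
        # Linear state update, then readout through C
        x = dA * x + dB * u[t][:, None]
        y[t] = x @ C[t]
    return y
```

Because `delta`, `B`, and `C` depend on the input at each step, the recurrence is "selective": it can modulate what the state retains, which is the property the low-light and SR papers above exploit for global context at linear cost.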
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.