KAN See In the Dark
- URL: http://arxiv.org/abs/2409.03404v1
- Date: Thu, 5 Sep 2024 10:41:17 GMT
- Title: KAN See In the Dark
- Authors: Aoxiang Ning, Minglong Xue, Jinhong He, Chengyun Song
- Abstract summary: Existing low-light image enhancement methods struggle to fit the complex nonlinear relationship between normal-light and low-light images caused by uneven illumination and noise.
The recently proposed Kolmogorov-Arnold networks (KANs) feature spline-based convolutional layers and learnable activation functions, which can effectively capture nonlinear dependencies.
In this paper, we design a KAN-Block based on KANs and innovatively apply it to low-light image enhancement. This method effectively alleviates the limitations of current methods, which are constrained by linear network structures and a lack of interpretability.
- Score: 2.9873893715462185
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Existing low-light image enhancement methods struggle to fit the complex nonlinear relationship between normal-light and low-light images caused by uneven illumination and noise. The recently proposed Kolmogorov-Arnold networks (KANs) feature spline-based convolutional layers and learnable activation functions, which can effectively capture nonlinear dependencies. In this paper, we design a KAN-Block based on KANs and innovatively apply it to low-light image enhancement. This method effectively alleviates the limitations of current methods, which are constrained by linear network structures and a lack of interpretability, further demonstrating the potential of KANs in low-level vision tasks. Given the poor perceptual quality produced by current low-light image enhancement methods and the stochastic nature of the inverse diffusion process, we further introduce frequency-domain perception for visually oriented enhancement. Extensive experiments demonstrate the competitive performance of our method on benchmark datasets. The code will be available at: https://github.com/AXNing/KSID.
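To make the core KAN mechanism concrete, here is a minimal, hedged sketch of a KAN edge and layer: each edge carries a learnable univariate spline activation, and each output sums those activations over its inputs. For simplicity this sketch uses an order-1 (piecewise-linear) spline via np.interp; the paper's KAN-Block uses higher-order B-spline bases, and all names here are illustrative, not the paper's code.

```python
import numpy as np

class KANEdge:
    """One edge of a KAN: a learnable 1-D activation phi(x).

    Sketch only: a piecewise-linear spline (learnable values at fixed
    knots). Real KAN layers use higher-order B-splines plus a residual
    base activation such as SiLU.
    """
    def __init__(self, num_knots=8, x_min=-1.0, x_max=1.0, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.knots = np.linspace(x_min, x_max, num_knots)
        self.values = rng.normal(scale=0.1, size=num_knots)  # learnable

    def __call__(self, x):
        return np.interp(x, self.knots, self.values)

class KANLayer:
    """Maps d_in -> d_out; output j sums phi_ij(x_i) over inputs i."""
    def __init__(self, d_in, d_out):
        self.edges = [[KANEdge(rng=np.random.default_rng(i * d_out + j))
                       for i in range(d_in)] for j in range(d_out)]

    def __call__(self, x):  # x: shape (d_in,)
        return np.array([sum(self.edges[j][i](x[i]) for i in range(len(x)))
                         for j in range(len(self.edges))])

layer = KANLayer(d_in=4, d_out=2)
y = layer(np.array([0.2, -0.5, 0.9, 0.0]))
print(y.shape)  # (2,)
```

Because the spline values are ordinary parameters, training reduces to fitting these per-edge functions, which is what gives KANs their nonlinear capacity and per-edge interpretability.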
Related papers
- Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors [38.96909959677438]
Low-light image enhancement (LIE) aims at precisely and efficiently recovering an image degraded in poor illumination environments.
Recent advanced LIE techniques use deep neural networks, which require large numbers of low-/normal-light image pairs, many network parameters, and substantial computational resources.
We devise a novel unsupervised LIE framework based on diffusion priors and lookup tables to achieve efficient low-light image recovery.
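As an illustration of the lookup-table mechanism this entry relies on (not the paper's learned 3-D LUTs or diffusion priors), the following sketch applies a 1-D LUT as a tone curve to a grayscale image; the gamma curve stands in for a learned table.

```python
import numpy as np

def apply_lut(image, lut):
    """image: float array in [0, 1]; lut: (256,) array of output levels."""
    idx = np.clip((image * 255).astype(np.int64), 0, 255)
    return lut[idx]

# A gamma-style brightening curve as a stand-in for a learned LUT.
levels = np.linspace(0.0, 1.0, 256)
lut = levels ** 0.45  # gamma < 1 lifts shadows

dark = np.full((4, 4), 0.1)
bright = apply_lut(dark, lut)
print(float(bright[0, 0]) > 0.1)  # True: shadows are lifted
```

The appeal of LUT-based enhancement is efficiency: once the table is learned, inference is a single indexing operation per pixel.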
arXiv Detail & Related papers (2024-09-27T16:37:27Z)
- Zero-LED: Zero-Reference Lighting Estimation Diffusion Model for Low-Light Image Enhancement [2.9873893715462185]
We propose a novel zero-reference lighting estimation diffusion model for low-light image enhancement called Zero-LED.
It utilizes the stable convergence ability of diffusion models to bridge the gap between low-light domains and real normal-light domains.
It successfully alleviates the dependence on pairwise training data via zero-reference learning.
arXiv Detail & Related papers (2024-03-05T11:39:17Z)
- Global Structure-Aware Diffusion Process for Low-Light Image Enhancement [64.69154776202694]
This paper studies a diffusion-based framework to address the low-light image enhancement problem.
We advocate for the regularization of its inherent ODE-trajectory.
Experimental evaluations reveal that the proposed framework attains distinguished performance in low-light enhancement.
arXiv Detail & Related papers (2023-10-26T17:01:52Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
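The "light up, then restore" pipeline above builds on the classic Retinex model, which factors an image into reflectance times illumination (I = R · L). A hedged sketch of that decomposition follows; the per-pixel channel-max illumination estimate is a textbook heuristic, not Retinexformer's learned estimator.

```python
import numpy as np

def light_up(image, eps=1e-4):
    """Retinex-style brightening sketch.

    image: (H, W, 3) float array in [0, 1].
    Estimates a coarse illumination map L as the per-pixel channel max,
    then divides it out to approximate reflectance R = I / L.
    """
    illumination = image.max(axis=2, keepdims=True)  # coarse L
    reflectance = image / (illumination + eps)       # R = I / L
    return np.clip(reflectance, 0.0, 1.0)

dark = np.full((2, 2, 3), 0.05)
dark[..., 0] = 0.1  # slightly stronger red channel
out = light_up(dark)
print(out.shape)  # (2, 2, 3)
```

Dividing out the illumination amplifies noise along with the signal, which is why one-stage frameworks like ORF follow the lighting-up step with a learned restoration stage.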
arXiv Detail & Related papers (2023-03-12T16:54:08Z)
- Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feed-forwards the features of low-light images from the generator of enhancement GAN into the generator of degradation GAN.
arXiv Detail & Related papers (2022-07-03T06:37:46Z)
- SurroundNet: Towards Effective Low-Light Image Enhancement [43.99545410176845]
We present SurroundNet, a novel network that involves fewer than 150K parameters yet achieves very competitive performance.
The proposed network comprises several Adaptive Retinex Blocks (ARBlock), which can be viewed as a novel extension of Single Scale Retinex in feature space.
We also introduce a Low-Exposure Denoiser (LED) to smooth the low-light image before the enhancement.
arXiv Detail & Related papers (2021-10-11T09:10:19Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We propose a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
- Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement [156.18634427704583]
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
arXiv Detail & Related papers (2020-01-19T13:49:15Z)
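The curve estimation idea behind Zero-DCE can be sketched directly: each iteration applies the light-enhancement curve LE(x) = x + α·x·(1 − x), where the per-pixel parameter α ∈ [−1, 1] would be predicted by DCE-Net. In this hedged sketch, α is a constant stand-in for the network output.

```python
import numpy as np

def enhance(image, alpha, iterations=8):
    """Iterated Zero-DCE-style light-enhancement curve.

    image: float array in [0, 1]; alpha: scalar or same-shape array
    in [-1, 1] standing in for DCE-Net's per-pixel prediction.
    """
    x = image
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)  # LE(x) = x + alpha * x * (1 - x)
    return x

dark = np.full((2, 2), 0.2)
out = enhance(dark, alpha=0.8)
print(float(out[0, 0]) > 0.2)  # True: brightened
```

The quadratic form is chosen so that 0 and 1 are fixed points: the curve brightens mid-tones while keeping outputs in [0, 1] for α ∈ [0, 1], which is what lets the method train without paired reference images.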
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.