DCA-LUT: Deep Chromatic Alignment with 5D LUT for Purple Fringing Removal
- URL: http://arxiv.org/abs/2511.12066v1
- Date: Sat, 15 Nov 2025 07:11:49 GMT
- Title: DCA-LUT: Deep Chromatic Alignment with 5D LUT for Purple Fringing Removal
- Authors: Jialang Lu, Shuning Sun, Pu Wang, Chen Wu, Feng Gao, Lina Gong, Dianjie Lu, Guijuan Zhang, Zhuoran Zheng,
- Abstract summary: We introduce DCA-LUT, the first deep learning framework for purple fringing removal. Inspired by the physical root of the problem, we introduce a novel Chromatic-Aware Coordinate Transformation (CA-CT) module. The final color correction is performed by a learned 5D Look-Up Table (5D LUT).
- Score: 18.657059101949887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purple fringing, a persistent artifact caused by Longitudinal Chromatic Aberration (LCA) in camera lenses, has long degraded the clarity and realism of digital imaging. Traditional solutions rely on complex and expensive apochromatic (APO) lens hardware and handcrafted feature extraction, overlooking data-driven approaches. To fill this gap, we introduce DCA-LUT, the first deep learning framework for purple fringing removal. Inspired by the physical root of the problem, the spatial misalignment of the RGB color channels due to lens dispersion, we introduce a novel Chromatic-Aware Coordinate Transformation (CA-CT) module that learns an image-adaptive color space to decouple and isolate fringing into a dedicated dimension. This targeted separation allows the network to learn a precise "purple fringe channel", which then guides the accurate restoration of the luminance channel. The final color correction is performed by a learned 5D Look-Up Table (5D LUT), enabling efficient and powerful non-linear color mapping. To enable robust training and fair evaluation, we constructed a large-scale synthetic purple fringing dataset (PF-Synth). Extensive experiments on synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance in purple fringing removal.
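To make the 5D LUT idea above concrete, here is a minimal sketch of applying a 5D look-up table per pixel. The grid size, the choice of the five input coordinates, and nearest-neighbor sampling are all assumptions for illustration; the paper's actual LUT is learned and would use interpolation.

```python
import numpy as np

# Hypothetical illustration of a 5D LUT, not the paper's actual design.
N = 9  # grid points per dimension (assumed)
rng = np.random.default_rng(0)
# The LUT maps a 5D coordinate to a corrected RGB triple.
lut = rng.random((N, N, N, N, N, 3)).astype(np.float32)

def apply_5d_lut(features: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """features: (H, W, 5) array in [0, 1], e.g. RGB plus two extra learned
    channels (an assumption); returns (H, W, 3) corrected RGB.
    Uses nearest-neighbor sampling; a real implementation would interpolate."""
    n = lut.shape[0]
    idx = np.clip(np.rint(features * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2], idx[..., 3], idx[..., 4]]

img_features = rng.random((4, 4, 5)).astype(np.float32)
out = apply_5d_lut(img_features, lut)
print(out.shape)  # (4, 4, 3)
```

The appeal of a LUT here is that, once learned, the per-pixel correction is a cheap table lookup rather than a full network forward pass.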
Related papers
- CAST-LUT: Tokenizer-Guided HSV Look-Up Tables for Purple Flare Removal [23.950152572091543]
We propose a novel network built upon decoupled HSV Look-Up Tables (LUTs). The method aims to simplify color correction by adjusting the Hue (H), Saturation (S), and Value (V) components independently. Our model not only significantly outperforms existing methods in visual effects but also achieves state-of-the-art performance on all quantitative metrics.
arXiv Detail & Related papers (2025-11-10T06:45:03Z) - HVI-CIDNet+: Beyond Extreme Darkness for Low-Light Image Enhancement [56.02740727422916]
Low-Light Image Enhancement (LLIE) aims to restore vivid content and details from corrupted low-light images. Existing standard RGB (sRGB) color space-based LLIE methods often produce color bias and brightness artifacts. We propose a new color space for LLIE, defined by the HV color map and learnable intensity. HVI-CIDNet+ is built upon the HVI color space to restore damaged content and mitigate color distortion in extremely dark regions.
arXiv Detail & Related papers (2025-07-09T13:03:34Z) - DSDNet: Raw Domain Demoiréing via Dual Color-Space Synergy [33.10273685997384]
We propose a single-stage raw domain demoiréing framework, Dual-Stream Demoiréing Network (DSDNet). To guide luminance correction and moiré removal, we design a raw-to-YCbCr mapping pipeline. We also develop a Luminance-Chrominance Adaptive Transformer (LCAT) to better guide color fidelity.
arXiv Detail & Related papers (2025-04-22T10:09:33Z) - An Adaptive Underwater Image Enhancement Framework via Multi-Domain Fusion and Color Compensation [0.6144680854063939]
Underwater optical imaging is severely degraded by light absorption, scattering, and color distortion. This paper presents an adaptive enhancement framework integrating illumination compensation, multi-domain filtering, and dynamic color correction. Experimental results on benchmark datasets demonstrate superior performance over state-of-the-art methods in contrast enhancement, color correction, and structural preservation.
arXiv Detail & Related papers (2025-03-05T16:19:56Z) - Discovering an Image-Adaptive Coordinate System for Photography Processing [51.164345878060956]
We propose a novel algorithm, IAC, to learn an image-adaptive coordinate system in the RGB color space before performing curve operations. This end-to-end trainable approach enables us to efficiently adjust images with a jointly learned image-adaptive coordinate system and curves.
arXiv Detail & Related papers (2025-01-11T06:20:07Z) - Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement [71.13353154514418]
Low-light image enhancement, particularly in cross-domain tasks such as mapping from the raw domain to the sRGB domain, remains a significant challenge. We propose a novel Mamba-based method customized for low-light RAW images, called RAWMamba, to effectively handle raw images with different CFAs. By bridging demosaicing and denoising, better enhancement of low-light RAW images is achieved.
arXiv Detail & Related papers (2024-09-11T06:12:03Z) - FDCE-Net: Underwater Image Enhancement with Embedding Frequency and Dual Color Encoder [49.79611204954311]
Underwater images often suffer from various issues such as low brightness, color shift, blurred details, and noise due to light absorption and scattering caused by water and suspended particles.
Previous underwater image enhancement (UIE) methods have primarily focused on spatial domain enhancement, neglecting the frequency domain information inherent in the images.
arXiv Detail & Related papers (2024-04-27T15:16:34Z) - CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z) - FloatingFusion: Depth from ToF and Image-stabilized Stereo Cameras [37.812681878193914]
Smartphones now have multimodal camera systems with time-of-flight (ToF) depth sensors and multiple color cameras.
Producing accurate high-resolution depth is still challenging due to the low resolution and limited active illumination power of ToF sensors.
We propose an automatic calibration technique based on dense 2D/3D matching that can estimate camera parameters from a single snapshot.
arXiv Detail & Related papers (2022-10-06T09:57:09Z) - Cross-Camera Deep Colorization [10.254243409261898]
We propose an end-to-end convolutional neural network to align and fuse images from a color-plus-mono dual-camera system.
Our method consistently achieves substantial improvements, i.e., around 10dB PSNR gain.
arXiv Detail & Related papers (2022-08-26T11:02:14Z) - Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) a non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
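The three-stage HDR-to-LDR formation model in the entry above can be sketched in a few lines. This is a minimal illustration: the gamma curve stands in for a learned camera response function, and the exposure and bit-depth choices are assumptions, not the paper's parameters.

```python
import numpy as np

def hdr_to_ldr(hdr: np.ndarray, exposure: float = 1.0, gamma: float = 2.2) -> np.ndarray:
    """Simulate LDR capture from linear HDR radiance (values may exceed 1.0)."""
    clipped = np.clip(hdr * exposure, 0.0, 1.0)      # (1) dynamic range clipping
    responded = clipped ** (1.0 / gamma)             # (2) camera response (gamma assumed)
    quantized = np.round(responded * 255.0) / 255.0  # (3) 8-bit quantization
    return quantized

hdr = np.array([0.0, 0.25, 1.0, 4.0])  # linear radiance samples
print(hdr_to_ldr(hdr))
```

Reversing this pipeline, as the paper proposes, amounts to learning to undo each stage in turn: dequantization, inverting the response curve, and hallucinating the clipped highlights.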
This list is automatically generated from the titles and abstracts of the papers in this site.