CAST-LUT: Tokenizer-Guided HSV Look-Up Tables for Purple Flare Removal
- URL: http://arxiv.org/abs/2511.06764v1
- Date: Mon, 10 Nov 2025 06:45:03 GMT
- Title: CAST-LUT: Tokenizer-Guided HSV Look-Up Tables for Purple Flare Removal
- Authors: Pu Wang, Shuning Sun, Jialang Lu, Chen Wu, Zhihua Zhang, Youshan Zhang, Chenggang Shan, Dianjie Lu, Guijuan Zhang, Zhuoran Zheng
- Abstract summary: We propose a novel network built upon decoupled HSV Look-Up Tables (LUTs). The method aims to simplify color correction by adjusting the Hue (H), Saturation (S), and Value (V) components independently. Our model not only significantly outperforms existing methods in visual effects but also achieves state-of-the-art performance on all quantitative metrics.
- Score: 23.950152572091543
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purple flare, a diffuse chromatic aberration artifact commonly found around highlight areas, severely degrades the tone transition and color of the image. Existing traditional methods are based on hand-crafted features, which lack flexibility and rely entirely on fixed priors, while the scarcity of paired training data critically hampers deep learning. To address these issues, we propose a novel network built upon decoupled HSV Look-Up Tables (LUTs). The method aims to simplify color correction by adjusting the Hue (H), Saturation (S), and Value (V) components independently. This approach resolves the inherent color coupling problems in traditional methods. Our model adopts a two-stage architecture: first, a Chroma-Aware Spectral Tokenizer (CAST) converts the input image from RGB space to HSV space and independently encodes the Hue (H) and Value (V) channels into a set of semantic tokens describing the purple flare status; second, the HSV-LUT module takes these tokens as input and dynamically generates independent correction curves (1D-LUTs) for the three channels H, S, and V. To effectively train and validate our model, we built the first large-scale purple flare dataset with diverse scenes. We also propose new metrics and a loss function specifically designed for this task. Extensive experiments demonstrate that our model not only significantly outperforms existing methods in visual effects but also achieves state-of-the-art performance on all quantitative metrics.
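The core idea of the second stage, correcting Hue, Saturation, and Value with three independent 1-D curves, can be sketched in a few lines. This is a minimal illustration with identity-initialized LUTs, not the authors' implementation; the function and variable names are ours, and in the paper a network predicts the curves from CAST tokens.

```python
# Sketch of decoupled HSV 1D-LUT correction (illustrative only; in the
# paper, a network predicts the three curves from CAST tokens).
import colorsys
import numpy as np

def apply_hsv_luts(rgb, lut_h, lut_s, lut_v):
    """rgb: (H, W, 3) floats in [0, 1]; each LUT is a 1-D array of output
    values sampled at evenly spaced inputs in [0, 1]."""
    flat = rgb.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    # Each channel is corrected independently by linear interpolation
    # into its own curve -- no cross-channel coupling.
    for c, lut in enumerate((lut_h, lut_s, lut_v)):
        grid = np.linspace(0.0, 1.0, len(lut))
        hsv[:, c] = np.interp(hsv[:, c], grid, lut)
    out = np.array([colorsys.hsv_to_rgb(*px) for px in hsv])
    return out.reshape(rgb.shape)

# Identity curves leave the image unchanged; bending lut_s downward at
# the high end would, for example, soften an oversaturated flare halo.
identity = np.linspace(0.0, 1.0, 33)
```

With identity LUTs the round trip through HSV is lossless; a learned model would instead emit per-image curves conditioned on the tokenizer's description of the flare.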
Related papers
- VIRGi: View-dependent Instant Recoloring of 3D Gaussian Splats [53.602701067430075]
We introduce VIRGi, a novel approach for rapidly editing the color of scenes modeled by 3DGS. By fine-tuning the weights of a single user, the color edits are seamlessly propagated to the entire scene in just two seconds. An exhaustive validation on diverse datasets demonstrates significant quantitative and qualitative advancements over competitors.
arXiv Detail & Related papers (2026-03-03T13:41:17Z)
- DCA-LUT: Deep Chromatic Alignment with 5D LUT for Purple Fringing Removal [18.657059101949887]
We introduce DCA-LUT, the first deep learning framework for purple fringing removal. Inspired by the physical root of the problem, we introduce a novel Chromatic-Aware Coordinate Transformation (CA-CT) module. The final color correction is performed by a learned 5D Look-Up Table (5D LUT).
arXiv Detail & Related papers (2025-11-15T07:11:49Z)
- FlowLUT: Efficient Image Enhancement via Differentiable LUTs and Iterative Flow Matching [10.213645938731338]
FlowLUT is a novel end-to-end model that integrates the efficiency of LUTs, multiple priors, and the parameter-independent characteristic of flow-matched reconstructed images. A lightweight fusion prediction network runs on multiple 3D LUTs, with $\mathcal{O}(1)$ complexity for scene-adaptive color correction. The entire model is jointly optimized under a composite loss function enforcing perceptual and structural fidelity.
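The constant per-pixel cost that makes 3D-LUT methods like this fast comes from trilinear interpolation into a fixed RGB lattice. The following is a generic sketch of that lookup under our own naming, not FlowLUT's fusion network:

```python
# Generic 3D-LUT application via trilinear interpolation: each pixel
# reads only the 8 lattice corners of its cell, i.e. O(1) per pixel.
import numpy as np

def apply_3d_lut(rgb, lut):
    """rgb: (..., 3) floats in [0, 1]; lut: (N, N, N, 3) RGB -> RGB map."""
    n = lut.shape[0]
    x = np.clip(rgb, 0.0, 1.0) * (n - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    f = x - i0                      # fractional position inside the cell
    out = np.zeros_like(rgb)
    # Accumulate the 8 corner contributions of the surrounding cell.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                idx_r = np.where(dr, i1[..., 0], i0[..., 0])
                idx_g = np.where(dg, i1[..., 1], i0[..., 1])
                idx_b = np.where(db, i1[..., 2], i0[..., 2])
                w = (np.where(dr, f[..., 0], 1 - f[..., 0])
                     * np.where(dg, f[..., 1], 1 - f[..., 1])
                     * np.where(db, f[..., 2], 1 - f[..., 2]))
                out += w[..., None] * lut[idx_r, idx_g, idx_b]
    return out
```

An identity lattice (`lut[i, j, k] = (i, j, k) / (N - 1)`) reproduces the input exactly, which is the usual initialization before a network learns to bend the table.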
arXiv Detail & Related papers (2025-09-28T03:22:01Z)
- HVI-CIDNet+: Beyond Extreme Darkness for Low-Light Image Enhancement [56.02740727422916]
Low-Light Image Enhancement (LLIE) aims to restore vivid content and details from corrupted low-light images. Existing standard RGB (sRGB) color space-based LLIE methods often produce color bias and brightness artifacts. We propose a new color space for LLIE, defined by the HV color map and learnable intensity. HVI-CIDNet+ is built upon the HVI color space to restore damaged content and mitigate color distortion in extremely dark regions.
arXiv Detail & Related papers (2025-07-09T13:03:34Z)
- Leveraging Semantic Attribute Binding for Free-Lunch Color Control in Diffusion Models [53.73253164099701]
We introduce ColorWave, a training-free approach that achieves exact RGB-level color control in diffusion models without fine-tuning. We demonstrate that ColorWave establishes a new paradigm for structured, color-consistent diffusion-based image synthesis.
arXiv Detail & Related papers (2025-03-12T21:49:52Z)
- Adaptive H&E-IHC information fusion staining framework based on feature extractors [0.5242869847419834]
Immunohistochemistry (IHC) staining plays a significant role in the evaluation of diseases such as breast cancer. H&E-to-IHC transformation based on generative models provides a simple and cost-effective method for obtaining IHC images. The lack of pixel-perfect H&E-IHC ground-truth pairs poses a challenge to the classical L1 loss. We propose an adaptive information enhanced coloring framework based on feature extractors.
arXiv Detail & Related papers (2025-02-27T14:55:34Z)
- Visual Prompt Tuning in Null Space for Continual Learning [51.96411454304625]
Existing prompt-tuning methods have demonstrated impressive performance in continual learning (CL).
This paper aims to learn each task by tuning the prompts in the direction orthogonal to the subspace spanned by previous tasks' features.
In practice, an effective null-space-based approximation solution has been proposed to implement the prompt gradient projection.
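The orthogonal-update idea can be made concrete with a small linear-algebra sketch. This is our illustration of generic null-space gradient projection with hypothetical names, not the paper's exact approximation scheme:

```python
# Project an update direction onto the null space of a feature matrix,
# so the update is (to first order) invisible to those stored features.
import numpy as np

def project_to_null_space(grad, feats, tol=1e-10):
    """grad: (d,) update direction; feats: (m, d) rows = old-task features."""
    # Rows of vt with (numerically) zero singular values span the
    # null space of feats.
    _, s, vt = np.linalg.svd(feats, full_matrices=True)
    rank = int(np.sum(s > tol * s.max()))
    null_basis = vt[rank:]                       # (d - rank, d), orthonormal
    return null_basis.T @ (null_basis @ grad)    # component in the null space
```

After projection, `feats @ projected` is zero, so the tuned prompt moves only along directions orthogonal to the subspace spanned by previous tasks' features.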
arXiv Detail & Related papers (2024-06-09T05:57:40Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Haze Removal via Regional Saturation-Value Translation and Soft Segmentation [0.0]
This paper proposes a single-image dehazing prior, called Regional Saturation-Value Translation (RSVT).
The RSVT prior is developed based on two key observations regarding the relationship between hazy and haze-free points in the HSV color space.
Experimental results on various synthetic and realistic hazy image datasets demonstrate that the proposed scheme successfully addresses color distortion issues.
arXiv Detail & Related papers (2024-01-07T07:52:50Z)
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle if there is data imbalance between color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- SFANet: A Spectrum-aware Feature Augmentation Network for Visible-Infrared Person Re-Identification [12.566284647658053]
We propose a novel spectrum-aware feature augmentation network named SFANet for the cross-modality matching problem.
Learning with grayscale-spectrum images, our model can markedly reduce modality discrepancy and detect inner structure relations.
At the feature level, we improve the conventional two-stream network by balancing the number of specific and sharable convolutional blocks.
arXiv Detail & Related papers (2021-02-24T08:57:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.