Video Color Grading via Look-Up Table Generation
- URL: http://arxiv.org/abs/2508.00548v1
- Date: Fri, 01 Aug 2025 11:43:30 GMT
- Title: Video Color Grading via Look-Up Table Generation
- Authors: Seunghyun Shin, Dongmin Shin, Jisu Shin, Hae-Gon Jeon, Joon-Young Lee,
- Abstract summary: In this paper, we present a reference-based video color grading framework. Our key idea is explicitly generating a look-up table (LUT) for color attribute alignment between reference scenes and an input video. As a training objective, we enforce that high-level features of the reference scenes, such as look, mood, and emotion, are similar to those of the input video.
- Score: 38.14578948732577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Different from color correction and transfer, color grading involves adjusting colors in a video for artistic or storytelling purposes, and is used to establish a specific look or mood. However, due to the complexity of the process and the need for specialized editing skills, video color grading remains primarily the domain of professional colorists. In this paper, we present a reference-based video color grading framework. Our key idea is explicitly generating a look-up table (LUT) for color attribute alignment between reference scenes and an input video via a diffusion model. As a training objective, we enforce that high-level features of the reference scenes, such as look, mood, and emotion, are similar to those of the input video. Our LUT-based approach performs color grading without any loss of structural detail across entire video frames while achieving fast inference. We further build a pipeline to incorporate user preferences via text prompts for low-level feature enhancement such as contrast and brightness. Experimental results, including extensive user studies, demonstrate the effectiveness of our approach for video color grading. Codes are publicly available at https://github.com/seunghyuns98/VideoColorGrading.
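To make the LUT mechanism concrete: a 3D LUT maps each input RGB color to a graded output color by looking up (and interpolating within) a small color cube, which is why applying it cannot alter structural detail and runs fast. The sketch below is illustrative only, not the paper's implementation; the LUT size of 17 and the per-pixel trilinear interpolation are standard conventions, not details taken from the source.

```python
def make_identity_lut(n):
    """Build an n x n x n identity 3D LUT: lut[r][g][b] maps back to
    the normalized color (r, g, b). A grading LUT would instead store
    the desired output color at each grid point."""
    step = 1.0 / (n - 1)
    return [[[(r * step, g * step, b * step) for b in range(n)]
             for g in range(n)] for r in range(n)]

def apply_lut(pixel, lut):
    """Map one RGB pixel (floats in [0, 1]) through a 3D LUT using
    trilinear interpolation between the 8 surrounding grid points."""
    n = len(lut)
    idx, frac = [], []
    for c in pixel:
        x = c * (n - 1)
        i = min(int(x), n - 2)   # LUT cell index along this axis
        idx.append(i)
        frac.append(x - i)       # interpolation weight within the cell
    (ri, gi, bi), (rf, gf, bf) = idx, frac
    out = [0.0, 0.0, 0.0]
    # Blend the 8 corners of the enclosing LUT cell.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))
                corner = lut[ri + dr][gi + dg][bi + db]
                for k in range(3):
                    out[k] += w * corner[k]
    return tuple(out)
```

Applying the LUT is a pure per-pixel color mapping, so every frame keeps its original geometry and edges; grading a whole video is just this lookup repeated over all pixels, which is what makes LUT-based inference fast.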
Related papers
- AnimeColor: Reference-based Animation Colorization with Diffusion Transformers [9.64847784171945]
Animation colorization plays a vital role in animation production, yet existing methods struggle to achieve color accuracy and temporal consistency. We propose AnimeColor, a novel reference-based animation colorization framework leveraging Diffusion Transformers (DiT). Our approach integrates sketch sequences into a DiT-based video diffusion model, enabling sketch-controlled animation generation.
arXiv Detail & Related papers (2025-07-27T07:25:08Z)
- VanGogh: A Unified Multimodal Diffusion-based Framework for Video Colorization [53.35016574938809]
Video colorization aims to transform grayscale videos into vivid color representations while maintaining temporal consistency and structural integrity. Existing video colorization methods often suffer from color bleeding and lack comprehensive control. We introduce VanGogh, a unified multimodal diffusion-based framework for video colorization.
arXiv Detail & Related papers (2025-01-16T12:20:40Z)
- DreamColour: Controllable Video Colour Editing without Training [80.90808879991182]
We present a training-free framework that makes precise video colour editing accessible through an intuitive interface. By decoupling spatial and temporal aspects of colour editing, we can better align with users' natural workflow. Our approach matches or exceeds state-of-the-art methods while eliminating the need for training or specialized hardware.
arXiv Detail & Related papers (2024-12-06T16:57:54Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Improving Video Colorization by Test-Time Tuning [79.67548221384202]
We propose an effective method, which aims to enhance video colorization through test-time tuning.
By exploiting the reference to construct additional training samples during testing, our approach achieves a performance boost of 13 dB in PSNR on average.
arXiv Detail & Related papers (2023-06-25T05:36:40Z)
- Video Colorization with Pre-trained Text-to-Image Diffusion Models [19.807766482434563]
We present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video colorization.
We propose two novel techniques to enhance the temporal coherence and maintain the vividness of colorization across frames.
arXiv Detail & Related papers (2023-06-02T17:58:00Z)
- RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes [21.284044381058575]
We present RecolorNeRF, a novel user-friendly color editing approach for neural radiance fields.
Our key idea is to decompose the scene into a set of pure-colored layers, forming a palette.
To support efficient palette-based editing, the color of each layer needs to be as representative as possible.
arXiv Detail & Related papers (2023-01-19T09:18:06Z)
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present an effective BiSTNet to explore colors of reference exemplars and utilize them to help video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
arXiv Detail & Related papers (2022-12-05T13:47:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.