Video Colorization with Pre-trained Text-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2306.01732v1
- Date: Fri, 2 Jun 2023 17:58:00 GMT
- Title: Video Colorization with Pre-trained Text-to-Image Diffusion Models
- Authors: Hanyuan Liu, Minshan Xie, Jinbo Xing, Chengze Li, Tien-Tsin Wong
- Abstract summary: We present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video colorization.
We propose two novel techniques to enhance the temporal coherence and maintain the vividness of colorization across frames.
- Score: 19.807766482434563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video colorization is a challenging task that involves inferring plausible
and temporally consistent colors for grayscale frames. In this paper, we
present ColorDiffuser, an adaptation of a pre-trained text-to-image latent
diffusion model for video colorization. With the proposed adapter-based
approach, we repurpose the pre-trained text-to-image model to accept input
grayscale video frames, with an optional text description, for video
colorization. To enhance the temporal coherence and maintain the vividness of
colorization across frames, we propose two novel techniques: the Color
Propagation Attention and Alternated Sampling Strategy. Color Propagation
Attention enables the model to refine its colorization decision based on a
reference latent frame, while Alternated Sampling Strategy captures
spatiotemporal dependencies by using the next and previous adjacent latent
frames alternately as the reference during the generative diffusion sampling
steps. This encourages bidirectional color information propagation between
adjacent video frames, leading to improved color consistency across frames. We
conduct extensive experiments on benchmark datasets, and the results
demonstrate the effectiveness of our proposed framework. The evaluations show
that ColorDiffuser achieves state-of-the-art performance in video colorization,
surpassing existing methods in terms of color fidelity, temporal consistency,
and visual quality.
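To make the sampling schedule concrete, here is a minimal sketch of the Alternated Sampling Strategy as the abstract describes it. This is illustrative pseudocode rather than the authors' implementation: `denoise_step` is a hypothetical stand-in for one reverse step of the adapted latent diffusion model, and its `reference` argument stands in for the latent frame consumed by Color Propagation Attention.

```python
import torch

def alternated_sampling(latents, denoise_step, num_steps):
    """latents: list of per-frame latent tensors for a video clip."""
    n = len(latents)
    for t in reversed(range(num_steps)):
        # Choose references from the current latents before updating any frame,
        # flipping the reference direction with the step parity.
        if t % 2 == 0:
            refs = [latents[min(i + 1, n - 1)] for i in range(n)]  # next neighbor
        else:
            refs = [latents[max(i - 1, 0)] for i in range(n)]      # previous neighbor
        # One reverse diffusion step per frame, conditioned on its reference.
        latents = [denoise_step(z, reference=r, timestep=t)
                   for z, r in zip(latents, refs)]
    return latents

# Toy run: five random latents and an identity "denoiser".
frames = [torch.randn(4, 8, 8) for _ in range(5)]
out = alternated_sampling(frames, lambda z, reference, timestep: z, num_steps=4)
```

Because the reference direction flips at every step, color decisions made for one frame can reach both of its neighbors over successive steps, which yields the bidirectional propagation described above.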
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- LatentColorization: Latent Diffusion-Based Speaker Video Colorization [1.2641141743223379]
We introduce a novel solution for achieving temporal consistency in video colorization.
We demonstrate strong improvements on established image quality metrics compared to other existing methods.
Our dataset encompasses a combination of conventional datasets and videos from television/movies.
arXiv Detail & Related papers (2024-05-09T12:06:06Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- DiffColor: Toward High Fidelity Text-Guided Image Colorization with Diffusion Models [12.897939032560537]
We propose a new method called DiffColor to recover vivid colors conditioned on a text prompt.
We first fine-tune a pre-trained text-to-image model to generate colorized images using a CLIP-based contrastive loss.
Then we obtain an optimized text embedding that aligns the colorized image with the text prompt, together with a fine-tuned diffusion model that enables high-quality image reconstruction.
Our method can produce vivid and diverse colors within a few iterations, keeping the structure and background intact while aligning colors with the target language guidance.
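As background for the first stage, the snippet below sketches a generic CLIP-style symmetric contrastive (InfoNCE) loss of the kind the summary mentions; the paper's exact objective may differ, and all names here are illustrative.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (batch, dim) CLIP embeddings of colorized images
    and their prompts; matching pairs share a row index."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # pairwise similarities
    targets = torch.arange(len(logits), device=logits.device)
    # Symmetric InfoNCE: each image should match its own prompt and vice versa.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```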
arXiv Detail & Related papers (2023-08-03T09:38:35Z)
- Improving Video Colorization by Test-Time Tuning [79.67548221384202]
We propose an effective method, which aims to enhance video colorization through test-time tuning.
By exploiting the reference to construct additional training samples during testing, our approach achieves an average performance boost of 1~3 dB in PSNR.
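The general idea can be sketched as follows; the loss, the grayscale proxy, and the hyperparameters are assumptions for illustration, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def test_time_tune(model, reference_rgb, steps=50, lr=1e-5):
    """Briefly fine-tune a colorization model on the user-provided reference
    frame (reference_rgb: (B, 3, H, W)) before colorizing the clip."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    gray = reference_rgb.mean(dim=1, keepdim=True)  # crude grayscale proxy
    for _ in range(steps):
        optimizer.zero_grad()
        pred = model(gray)                      # recolor the grayscale reference
        loss = F.l1_loss(pred, reference_rgb)   # supervise with the known colors
        loss.backward()
        optimizer.step()
    return model
```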
arXiv Detail & Related papers (2023-06-25T05:36:40Z)
- Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z)
- FlowChroma -- A Deep Recurrent Neural Network for Video Colorization [1.0499611180329804]
We develop an automated video colorization framework that minimizes the flickering of colors across frames.
We show that recurrent neural networks can be successfully used to improve color consistency in video colorization.
arXiv Detail & Related papers (2023-05-23T05:41:53Z)
- Temporal Consistent Automatic Video Colorization via Semantic Correspondence [12.107878178519128]
We propose a novel video colorization framework that incorporates semantic correspondence into automatic video colorization.
In the NTIRE 2023 Video Colorization Challenge, our method ranked 3rd in the Color Distribution Consistency (CDC) Optimization track.
arXiv Detail & Related papers (2023-05-13T12:06:09Z)
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present BiSTNet, an effective network that explores the colors of reference exemplars and utilizes them to guide video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
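The correspondence step can be illustrated with a simple cosine-similarity attention that warps the exemplar's chrominance onto a frame; the names and the softmax-warping formulation here are assumptions, not BiSTNet's exact design.

```python
import torch
import torch.nn.functional as F

def warp_exemplar_colors(frame_feat, exemplar_feat, exemplar_ab, tau=0.01):
    """frame_feat, exemplar_feat: (C, H, W) deep features;
    exemplar_ab: (2, H, W) chrominance channels of the reference exemplar."""
    C, H, W = frame_feat.shape
    f = F.normalize(frame_feat.reshape(C, -1), dim=0)      # (C, HW)
    e = F.normalize(exemplar_feat.reshape(C, -1), dim=0)   # (C, HW)
    attn = F.softmax(f.t() @ e / tau, dim=-1)              # soft matches, (HW, HW)
    warped = attn @ exemplar_ab.reshape(2, -1).t()         # gather colors, (HW, 2)
    return warped.t().reshape(2, H, W)
```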
arXiv Detail & Related papers (2022-12-05T13:47:15Z)
- Temporally Consistent Video Colorization with Deep Feature Propagation and Self-regularization Learning [90.38674162878496]
We propose a novel temporally consistent video colorization framework (TCVC).
TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization.
Experiments demonstrate that our method can not only obtain visually pleasing colorized video, but also achieve clearly better temporal consistency than state-of-the-art methods.
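As a toy illustration of bidirectional frame-level feature propagation in this spirit (the blending scheme and the averaging fusion are assumptions, not TCVC's architecture):

```python
import torch

def propagate_bidirectional(features, blend=0.5):
    """features: list of (C, H, W) per-frame deep features."""
    n = len(features)
    fwd = [features[0]]
    for i in range(1, n):            # forward pass carries earlier context
        fwd.append(blend * fwd[-1] + (1 - blend) * features[i])
    bwd = [features[-1]]
    for i in range(n - 2, -1, -1):   # backward pass carries later context
        bwd.insert(0, blend * bwd[0] + (1 - blend) * features[i])
    # Fuse both directions so every frame sees past and future context.
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

feats = [torch.full((2, 4, 4), float(i)) for i in range(3)]
fused = propagate_bidirectional(feats)
```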
arXiv Detail & Related papers (2021-10-09T13:00:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.