BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature
Fusion for Deep Exemplar-based Video Colorization
- URL: http://arxiv.org/abs/2212.02268v1
- Date: Mon, 5 Dec 2022 13:47:15 GMT
- Title: BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature
Fusion for Deep Exemplar-based Video Colorization
- Authors: Yixin Yang, Zhongzheng Peng, Xiaoyu Du, Zhulin Tao, Jinhui Tang,
Jinshan Pan
- Abstract summary: We present an effective BiSTNet that explores the colors of reference exemplars and uses them to aid video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
- Score: 70.14893481468525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How to effectively explore the colors of reference exemplars and propagate
them to colorize each frame is vital for exemplar-based video colorization. In
this paper, we present an effective BiSTNet that explores the colors of reference
exemplars and uses them to aid video colorization via bidirectional temporal
feature fusion guided by a semantic image prior. We first
establish the semantic correspondence between each frame and the reference
exemplars in deep feature space to explore color information from reference
exemplars. Then, to better propagate the colors of the reference exemplars into
each frame and to avoid inaccurately matched colors from the exemplars, we
develop a simple yet effective bidirectional temporal feature fusion module to
better colorize each frame. We note that color-bleeding artifacts usually
appear around the boundaries of important objects in videos. To overcome this
problem, we further develop a mixed expert block that extracts semantic
information for modeling the object boundaries of frames, so that the semantic
image prior can better guide the colorization process.
In addition, we develop a multi-scale recurrent block to progressively colorize
frames in a coarse-to-fine manner. Extensive experimental results demonstrate
that the proposed BiSTNet performs favorably against state-of-the-art methods
on the benchmark datasets. Our code will be made available at
https://yyang181.github.io/BiSTNet/.
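To make the bidirectional temporal feature fusion idea concrete, the following is a minimal PyTorch-style sketch of one way per-frame features could be propagated forward and backward in time and then fused. It is an illustration only, not the authors' implementation; the module name BidirectionalTemporalFusion, the convolutional update cells, and the tensor layout (T, B, C, H, W) are all assumptions.

# Hypothetical sketch of bidirectional temporal feature fusion over per-frame
# features; module names and design choices are assumptions for illustration,
# not the BiSTNet code.
import torch
import torch.nn as nn


class BidirectionalTemporalFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Simple convolutional state-update cells, one per temporal direction.
        self.forward_cell = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.backward_cell = nn.Conv2d(2 * channels, channels, 3, padding=1)
        # Fuse the two directional states back to the original channel width.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (T, B, C, H, W) per-frame features, e.g. after exemplar matching.
        T, B, C, H, W = feats.shape
        state = feats.new_zeros(B, C, H, W)
        fwd = []
        for t in range(T):  # forward pass: propagate information from past frames
            state = torch.relu(self.forward_cell(torch.cat([feats[t], state], dim=1)))
            fwd.append(state)
        state = feats.new_zeros(B, C, H, W)
        bwd = [None] * T
        for t in reversed(range(T)):  # backward pass: propagate from future frames
            state = torch.relu(self.backward_cell(torch.cat([feats[t], state], dim=1)))
            bwd[t] = state
        # Combine both temporal directions for each frame.
        fused = [self.fuse(torch.cat([f, b], dim=1)) for f, b in zip(fwd, bwd)]
        return torch.stack(fused, dim=0)


if __name__ == "__main__":
    frames = torch.randn(5, 1, 64, 32, 32)  # 5 frames of toy features
    out = BidirectionalTemporalFusion(64)(frames)
    print(out.shape)  # torch.Size([5, 1, 64, 32, 32])

The point of the two-direction pass is that each frame receives color evidence from both earlier and later frames, which is what lets a single reference exemplar influence frames on either side of it.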
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z) - Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z) - Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z) - Improving Video Colorization by Test-Time Tuning [79.67548221384202]
We propose an effective method, which aims to enhance video colorization through test-time tuning.
By exploiting the reference to construct additional training samples during testing, our approach achieves a performance boost of 13 dB in PSNR on average.
arXiv Detail & Related papers (2023-06-25T05:36:40Z) - Video Colorization with Pre-trained Text-to-Image Diffusion Models [19.807766482434563]
We present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video colorization.
We propose two novel techniques to enhance the temporal coherence and maintain the vividness of colorization across frames.
arXiv Detail & Related papers (2023-06-02T17:58:00Z) - Temporal Consistent Automatic Video Colorization via Semantic
Correspondence [12.107878178519128]
We propose a novel video colorization framework, which combines semantic correspondence into automatic video colorization.
In the NTIRE 2023 Video Colorization Challenge, our method ranks 3rd in the Color Distribution Consistency (CDC) Optimization track.
arXiv Detail & Related papers (2023-05-13T12:06:09Z) - Reference-Based Video Colorization with Spatiotemporal Correspondence [8.472559058510205]
We propose a reference-based video colorization framework with temporal correspondence.
By restricting temporally-related regions for referencing colors, our approach propagates faithful colors throughout the video.
arXiv Detail & Related papers (2020-11-25T05:47:38Z) - Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)