Color-aware Deep Temporal Backdrop Duplex Matting System
- URL: http://arxiv.org/abs/2306.02954v1
- Date: Mon, 5 Jun 2023 15:20:44 GMT
- Title: Color-aware Deep Temporal Backdrop Duplex Matting System
- Authors: Hendrik Hachmann and Bodo Rosenhahn
- Abstract summary: We propose a temporal multi-backdrop production system that combines beneficial features from chroma keying and alpha matting.
The proposed studio set is actor-friendly and produces high-quality, temporally consistent alpha and color estimations.
- Score: 26.114550071165628
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning-based alpha matting has shown tremendous improvements in recent years, yet feature film production studios still rely on classical chroma keying, including costly post-production steps. This perceived discrepancy can be explained by missing links necessary for production that are currently not adequately addressed in the alpha matting community, in particular foreground color estimation and color spill compensation. We propose a neural network-based temporal multi-backdrop production system that combines beneficial features from chroma keying and alpha matting. Given two consecutive frames with different background colors, our one-encoder-dual-decoder network predicts foreground colors and alpha values using a patch-based overlap-blend approach. The system is able to handle imprecise backdrops, dynamic cameras, and dynamic foregrounds, and places no restrictions on foreground colors. We compare our method to state-of-the-art algorithms using benchmark datasets and a video sequence captured by a demonstrator setup. We verify that a dual-backdrop input is superior to the usually applied trimap-based approach. In addition, the proposed studio set is actor-friendly and produces high-quality, temporally consistent alpha and color estimations that include superior color spill compensation.
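For intuition, the dual-backdrop input has a classical closed form: given two registered shots of the same foreground over two known, differently colored backdrops, the compositing equation I_k = alpha * F + (1 - alpha) * B_k (k = 1, 2) can be solved per pixel for both alpha and the foreground color F (Smith-Blinn triangulation matting). The NumPy sketch below shows this closed form under the assumption of static, perfectly registered frames and clean backdrop plates; it is the baseline the paper's learned network generalizes, not the paper's method itself.

```python
import numpy as np

def triangulation_matte(i1, i2, b1, b2, eps=1e-6):
    """Closed-form alpha and foreground from two known backdrops.

    i1, i2: (H, W, 3) float frames of the foreground over backdrops b1, b2.
    b1, b2: (H, W, 3) float clean plates of the two differently colored backdrops.
    Solves I_k = alpha * F + (1 - alpha) * B_k (k = 1, 2) per pixel,
    least-squares over the three color channels.
    """
    di = i1 - i2                     # equals (1 - alpha) * (b1 - b2)
    db = b1 - b2
    one_minus_a = np.sum(di * db, axis=-1) / (np.sum(db * db, axis=-1) + eps)
    alpha = np.clip(1.0 - one_minus_a, 0.0, 1.0)
    a = alpha[..., None]
    # alpha * F = I_k - (1 - alpha) * B_k; average the two estimates of F
    af = 0.5 * ((i1 - (1.0 - a) * b1) + (i2 - (1.0 - a) * b2))
    fg = np.clip(af / np.maximum(a, eps), 0.0, 1.0)
    return alpha, fg
```

Because F is estimated directly instead of being keyed against a single background color, backdrop color mixed into foreground pixels (color spill) is compensated by construction; the learned system extends this behavior to the imprecise backdrops, dynamic cameras, and dynamic foregrounds that break the closed form.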
Related papers
- LatentColorization: Latent Diffusion-Based Speaker Video Colorization [1.2641141743223379]
We introduce a novel solution for achieving temporal consistency in video colorization.
We demonstrate strong improvements on established image quality metrics compared to other existing methods.
Our dataset encompasses a combination of conventional datasets and videos from television/movies.
arXiv Detail & Related papers (2024-05-09T12:06:06Z)
- Improving Video Colorization by Test-Time Tuning [79.67548221384202]
We propose an effective method, which aims to enhance video colorization through test-time tuning.
By exploiting the reference to construct additional training samples during testing, our approach achieves a performance boost of 13 dB in PSNR on average.
arXiv Detail & Related papers (2023-06-25T05:36:40Z)
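For reference, PSNR (peak signal-to-noise ratio) is a logarithmic function of the mean squared error between colorized and ground-truth frames, so the 13 dB gain quoted above corresponds to roughly a 20x reduction in MSE. A minimal sketch of the standard metric, not code from the paper:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio between two images with values in [0, max_val].
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```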
- Video Colorization with Pre-trained Text-to-Image Diffusion Models [19.807766482434563]
We present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video colorization.
We propose two novel techniques to enhance the temporal coherence and maintain the vividness of colorization across frames.
arXiv Detail & Related papers (2023-06-02T17:58:00Z)
- FlowChroma -- A Deep Recurrent Neural Network for Video Colorization [1.0499611180329804]
We develop an automated video colorization framework that minimizes the flickering of colors across frames.
We show that recurrent neural networks can be successfully used to improve color consistency in video colorization.
arXiv Detail & Related papers (2023-05-23T05:41:53Z)
- Adaptive Human Matting for Dynamic Videos [62.026375402656754]
Adaptive Matting for Dynamic Videos, termed AdaM, is a framework for differentiating foregrounds from backgrounds in dynamic videos.
Two interconnected network designs are employed to achieve this goal.
We benchmark and study our methods on recently introduced datasets, showing that our matting achieves new best-in-class generalizability.
arXiv Detail & Related papers (2023-04-12T17:55:59Z)
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present an effective BiSTNet to explore colors of reference exemplars and utilize them to help video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
arXiv Detail & Related papers (2022-12-05T13:47:15Z)
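Semantic correspondence in deep feature space, as used by BiSTNet above, is commonly computed as nearest-neighbor matching between L2-normalized features; the sketch below illustrates that generic step with hypothetical names, not BiSTNet's actual implementation:

```python
import numpy as np

def match_exemplar_colors(frame_feats, ref_feats, ref_colors):
    """Transfer exemplar colors via deep-feature correspondence.

    frame_feats: (N, C) features at N frame locations.
    ref_feats:   (M, C) features at M exemplar locations.
    ref_colors:  (M, 3) colors at the exemplar locations.
    """
    f = frame_feats / (np.linalg.norm(frame_feats, axis=1, keepdims=True) + 1e-8)
    r = ref_feats / (np.linalg.norm(ref_feats, axis=1, keepdims=True) + 1e-8)
    sim = f @ r.T                 # (N, M) cosine similarities
    nearest = sim.argmax(axis=1)  # best exemplar match per frame location
    return ref_colors[nearest]    # (N, 3) transferred colors
```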
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via neural rendering.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
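Generating a point cloud from an RGBD frame, the first step of the pipeline above, is typically done by back-projecting each pixel through the pinhole camera intrinsics; a minimal sketch of that standard step, not code from the paper:

```python
import numpy as np

def rgbd_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project an RGBD frame into a colored 3D point cloud.

    depth: (H, W) depth in meters; rgb: (H, W, 3); fx, fy, cx, cy: intrinsics.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                     # skip missing depth values
    z = depth[valid]
    x = (u[valid] - cx) * z / fx          # pinhole model: x = (u - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)  # (N, 3) camera-space coordinates
    colors = rgb[valid]                   # (N, 3) per-point colors
    return points, colors
```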
- Temporally Consistent Video Colorization with Deep Feature Propagation and Self-regularization Learning [90.38674162878496]
We propose a novel temporally consistent video colorization framework (TCVC).
TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization.
Experiments demonstrate that our method can not only obtain visually pleasing colorized video, but also achieve clearly better temporal consistency than state-of-the-art methods.
arXiv Detail & Related papers (2021-10-09T13:00:14Z)
- Attention-guided Temporal Coherent Video Object Matting [78.82835351423383]
We propose a novel deep learning-based object matting method that can achieve temporally coherent matting results.
Its key component is an attention-based temporal aggregation module that maximizes image matting networks' strength.
We show how to effectively solve the trimap generation problem by fine-tuning a state-of-the-art video object segmentation network.
arXiv Detail & Related papers (2021-05-24T17:34:57Z)
- VCGAN: Video Colorization with Hybrid Generative Adversarial Network [22.45196398040388]
Video Colorization with Hybrid Generative Adversarial Network (VCGAN) is an improved approach to colorization using end-to-end learning.
Experimental results demonstrate that VCGAN produces higher-quality and temporally more consistent colorful videos than existing approaches.
arXiv Detail & Related papers (2021-04-26T05:50:53Z)
- $F$, $B$, Alpha Matting [0.0]
We propose a low-cost modification to alpha matting networks to also predict the foreground and background colours.
Our method achieves state-of-the-art performance on the Adobe Composition-1k dataset for alpha matte and composite colour quality.
arXiv Detail & Related papers (2020-03-17T13:27:51Z)
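The $F$, $B$, Alpha entry above predicts foreground and background colours jointly with the matte, which makes the standard compositing equation directly checkable against the input frame. A minimal sketch of that recomposition check, with illustrative function names:

```python
import numpy as np

def composite(fg, bg, alpha):
    # Standard matting equation: I = alpha * F + (1 - alpha) * B
    a = alpha[..., None]                  # (H, W, 1) broadcast over RGB
    return a * fg + (1.0 - a) * bg

def reconstruction_error(image, fg, bg, alpha):
    # MSE between the input frame and the recomposited prediction
    return float(np.mean((composite(fg, bg, alpha) - image) ** 2))
```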
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.