NCST: Neural-based Color Style Transfer for Video Retouching
- URL: http://arxiv.org/abs/2411.00335v1
- Date: Fri, 01 Nov 2024 03:25:15 GMT
- Title: NCST: Neural-based Color Style Transfer for Video Retouching
- Authors: Xintao Jiang, Yaosen Chen, Siqin Zhang, Wei Wang, Xuming Wen
- Abstract summary: Video color style transfer aims to transform the color style of an original video by using a reference style image.
Most existing methods employ neural networks, which come with challenges like opaque transfer processes.
We introduce a method that predicts specific parameters for color style transfer using two images.
- Score: 3.2050418539021774
- Abstract: Video color style transfer aims to transform the color style of an original video by using a reference style image. Most existing methods employ neural networks, which come with challenges like opaque transfer processes and limited user control over the outcomes. Typically, users cannot fine-tune the resulting images or videos. To tackle this issue, we introduce a method that predicts specific parameters for color style transfer using two images. Initially, we train a neural network to learn the corresponding color adjustment parameters. When applying style transfer to a video, we fine-tune the network with key frames from the video and the chosen style image, generating precise transformation parameters. These are then applied to convert the color style of both images and videos. Our experimental results demonstrate that our algorithm surpasses current methods in color style transfer quality. Moreover, each parameter in our method has a specific, interpretable meaning, enabling users to understand the color style transfer process and allowing them to perform manual fine-tuning if desired.
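As a rough illustration of the workflow described in the abstract, the sketch below shows a small network predicting a vector of interpretable color adjustment parameters from a key frame and the style image, fine-tuning on key frames, and then applying the predicted parameters to every frame. This is a minimal sketch only: the parameter set (brightness, contrast, saturation, per-channel gains, gamma), the architecture, the loss, and all names are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch of a parameter-prediction color style transfer workflow.
# All parameter choices, architectures, and names here are assumptions made for
# illustration; they are not taken from the NCST paper.
import torch
import torch.nn as nn

class ColorParamPredictor(nn.Module):
    """Predicts a small set of interpretable color adjustment parameters
    from a content frame and a reference style image (assumed same size)."""
    def __init__(self, num_params: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_params)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        x = torch.cat([content, style], dim=1)   # (B, 6, H, W)
        feats = self.encoder(x).flatten(1)       # (B, 64)
        return self.head(feats)                  # (B, num_params)

def apply_params(frame: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """Applies interpretable adjustments to a (3, H, W) frame in [0, 1].
    p = [brightness, contrast, saturation, r_gain, g_gain, b_gain, gamma]
    (a hypothetical parameterization, chosen only for this sketch)."""
    brightness, contrast, saturation = p[0], p[1], p[2]
    gains, gamma = p[3:6].view(3, 1, 1), p[6]
    out = frame * (1.0 + gains)                            # per-channel gain
    gray = out.mean(dim=0, keepdim=True)
    out = gray + (out - gray) * (1.0 + saturation)         # saturation
    out = (out - 0.5) * (1.0 + contrast) + 0.5 + brightness
    out = out.clamp(1e-6, 1.0) ** (1.0 + gamma)            # gamma curve
    return out.clamp(0.0, 1.0)

def stylize_video(frames, key_frames, style_img, model, loss_fn, steps=100):
    """Fine-tunes the predictor on key frames against the style image, then
    reuses the predicted parameters for every frame. `loss_fn` stands in for
    whatever color/style matching objective is used."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(steps):
        for kf in key_frames:
            params = model(kf.unsqueeze(0), style_img.unsqueeze(0))[0]
            loss = loss_fn(apply_params(kf, params), style_img)
            opt.zero_grad()
            loss.backward()
            opt.step()
    with torch.no_grad():
        params = model(key_frames[0].unsqueeze(0), style_img.unsqueeze(0))[0]
        return [apply_params(f, params) for f in frames]
```

Because the predicted parameters are ordinary retouching controls rather than an opaque per-pixel mapping, a user could inspect or manually adjust the vector `params` after prediction, which is the interpretability property the abstract emphasizes.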
Related papers
- Training-free Color-Style Disentanglement for Constrained Text-to-Image Synthesis [16.634138745034733]
We present the first training-free, test-time-only method to disentangle and condition text-to-image models on color and style attributes from a reference image.
arXiv Detail & Related papers (2024-09-04T04:16:58Z)
- Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given reference image onto another given content image.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
arXiv Detail & Related papers (2023-04-19T15:15:36Z)
- Neural Preset for Color Style Transfer [46.66925849502683]
We present a Neural Preset technique to address the limitations of existing color style transfer methods.
Our method is based on two core designs. First, we propose Deterministic Neural Color Mapping (DNCM) to consistently operate on each pixel.
Second, we develop a two-stage pipeline by dividing the task into color normalization and stylization.
arXiv Detail & Related papers (2023-03-23T17:59:10Z)
- Learning Diverse Tone Styles for Image Retouching [73.60013618215328]
We propose to learn diverse image retouching with normalizing flow-based architectures.
A joint-training pipeline is composed of a style encoder, a conditional RetouchNet, and the image tone style normalizing flow (TSFlow) module.
Our proposed method performs favorably against state-of-the-art methods and is effective in generating diverse results.
arXiv Detail & Related papers (2022-07-12T09:49:21Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- CAMS: Color-Aware Multi-Style Transfer [46.550390398057985]
Style transfer aims to manipulate the appearance of a source ("content") image so that it shares the texture and colors of a target "style" image.
A commonly used approach to assist in transferring styles is based on Gram matrix optimization.
We propose a color-aware multi-style transfer method that generates aesthetically pleasing results while preserving the style-color correlation between style and generated images.
arXiv Detail & Related papers (2021-06-26T01:15:09Z)
- StyTr^2: Unbiased Image Style Transfer with Transformers [59.34108877969477]
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.
Traditional neural style transfer methods are usually biased, and content leakage can be observed by running the style transfer process several times with the same reference image.
We propose a transformer-based approach, namely StyTr2, to address this critical issue.
arXiv Detail & Related papers (2021-05-30T15:57:09Z)
- Deep Preset: Blending and Retouching Photos with Color Style Transfer [15.95010869939508]
We focus on learning low-level image transformation, especially color-shifting methods, then present a novel scheme to train color style transfer with ground-truth.
It is designed to generalize the features representing the color transformation from content with natural colors to a retouched reference, then blend them into the contextual features of the content.
We script Lightroom, a powerful photo-editing tool, to generate 600,000 training samples using 1,200 images from the Flickr2K dataset and 500 user-generated presets with 69 settings.
arXiv Detail & Related papers (2020-07-21T10:41:03Z)
- Deep Line Art Video Colorization with a Few References [49.7139016311314]
We propose a deep architecture to automatically color line art videos with the same color style as the given reference images.
Our framework consists of a color transform network and a temporal constraint network.
Our model can achieve even better coloring results by fine-tuning the parameters with only a small number of samples.
arXiv Detail & Related papers (2020-03-24T06:57:40Z)