Joint Bilateral Learning for Real-time Universal Photorealistic Style
Transfer
- URL: http://arxiv.org/abs/2004.10955v2
- Date: Mon, 27 Apr 2020 13:02:17 GMT
- Title: Joint Bilateral Learning for Real-time Universal Photorealistic Style
Transfer
- Authors: Xide Xia, Meng Zhang, Tianfan Xue, Zheng Sun, Hui Fang, Brian Kulis,
and Jiawen Chen
- Abstract summary: Photorealistic style transfer is the task of transferring the artistic style of an image onto a content target, producing a result that is plausibly taken with a camera.
Recent approaches, based on deep neural networks, produce impressive results but are either too slow to run at practical resolutions, or still contain objectionable artifacts.
We propose a new end-to-end model for photorealistic style transfer that is both fast and inherently generates photorealistic results.
- Score: 18.455002563426262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photorealistic style transfer is the task of transferring the artistic style
of an image onto a content target, producing a result that is plausibly taken
with a camera. Recent approaches, based on deep neural networks, produce
impressive results but are either too slow to run at practical resolutions, or
still contain objectionable artifacts. We propose a new end-to-end model for
photorealistic style transfer that is both fast and inherently generates
photorealistic results. The core of our approach is a feed-forward neural
network that learns local edge-aware affine transforms that automatically obey
the photorealism constraint. When trained on a diverse set of images and a
variety of styles, our model can robustly apply style transfer to an arbitrary
pair of input images. Compared to the state of the art, our method produces
visually superior results and is three orders of magnitude faster, enabling
real-time performance at 4K on a mobile phone. We validate our method with
ablation and user studies.
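To make the output stage of such a model concrete, below is a minimal NumPy sketch of the general idea behind local edge-aware affine color transforms in the spirit of bilateral-grid slicing: a coarse grid of per-cell 3x4 affine transforms is sampled at full resolution using a guidance map, then applied per pixel. The grid shape, the guidance choice, and the function names (slice_affine_grid, apply_affine) are illustrative assumptions for this sketch, not the paper's exact architecture; in the paper the transforms are predicted by the feed-forward network from the content and style images.

```python
import numpy as np

def slice_affine_grid(grid, guide):
    """Trilinearly sample a coarse grid of 3x4 affine color transforms at
    every pixel, using a scalar guidance map (e.g. luminance) as the depth axis.

    grid:  (gh, gw, gd, 3, 4) affine coefficients (hypothetical shape).
    guide: (H, W) values in [0, 1].
    Returns (H, W, 3, 4) per-pixel affine transforms.
    """
    gh, gw, gd = grid.shape[:3]
    H, W = guide.shape

    # Continuous grid coordinates for each output pixel.
    gy, gx = np.meshgrid(np.linspace(0, gh - 1, H),
                         np.linspace(0, gw - 1, W), indexing="ij")
    gz = guide * (gd - 1)

    # Integer corners and trilinear interpolation weights.
    y0 = np.floor(gy).astype(int); y1 = np.minimum(y0 + 1, gh - 1)
    x0 = np.floor(gx).astype(int); x1 = np.minimum(x0 + 1, gw - 1)
    z0 = np.floor(gz).astype(int); z1 = np.minimum(z0 + 1, gd - 1)
    wy, wx, wz = gy - y0, gx - x0, gz - z0

    # Accumulate the eight corner contributions.
    out = np.zeros((H, W, 3, 4))
    for yy, py in ((y0, 1 - wy), (y1, wy)):
        for xx, px in ((x0, 1 - wx), (x1, wx)):
            for zz, pz in ((z0, 1 - wz), (z1, wz)):
                w = (py * px * pz)[..., None, None]
                out += w * grid[yy, xx, zz]
    return out

def apply_affine(transforms, image):
    """Apply per-pixel 3x4 affine transforms to an (H, W, 3) RGB image."""
    rgb1 = np.concatenate([image, np.ones(image.shape[:2] + (1,))], axis=-1)
    return np.einsum("hwij,hwj->hwi", transforms, rgb1)

# Hypothetical usage: a learned grid would come from the network; here it is random.
grid = np.random.rand(16, 16, 8, 3, 4)
content = np.random.rand(256, 256, 3)
guide = content.mean(axis=-1)  # simple luminance proxy as the guidance map
styled = apply_affine(slice_affine_grid(grid, guide), content)
```

Because the stylized result is an affine function of the local input colors, edges present in the content image are preserved, which is one way to keep the output photorealistic.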
Related papers
- Towards Highly Realistic Artistic Style Transfer via Stable Diffusion with Step-aware and Layer-aware Prompt [12.27693060663517]
Artistic style transfer aims to transfer the learned artistic style onto an arbitrary content image, generating artistic stylized images.
We propose a novel pre-trained diffusion-based artistic style transfer method, called LSAST.
Our proposed method generates more realistic artistic stylized images than state-of-the-art artistic style transfer methods.
arXiv Detail & Related papers (2024-04-17T15:28:53Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Generative AI Model for Artistic Style Transfer Using Convolutional Neural Networks [0.0]
Artistic style transfer involves fusing the content of one image with the artistic style of another to create unique visual compositions.
This paper presents a comprehensive overview of a novel technique for style transfer using Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2023-10-27T16:21:17Z)
- AdaCM: Adaptive ColorMLP for Real-Time Universal Photo-realistic Style Transfer [53.41350013698697]
Photo-realistic style transfer aims at migrating the artistic style from an exemplar style image to a content image, producing a result image without spatial distortions or unrealistic artifacts.
We propose the Adaptive ColorMLP (AdaCM), an effective and efficient framework for universal photo-realistic style transfer.
arXiv Detail & Related papers (2022-12-03T07:56:08Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- AesUST: Towards Aesthetic-Enhanced Universal Style Transfer [15.078430702469886]
AesUST is a novel Aesthetic-enhanced Universal Style Transfer approach.
We introduce an aesthetic discriminator to learn the universal human-delightful aesthetic features from a large corpus of artist-created paintings.
We also develop a new two-stage transfer training strategy with two aesthetic regularizations to train our model more effectively.
arXiv Detail & Related papers (2022-08-27T13:51:11Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning of pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
- Real-time Localized Photorealistic Video Style Transfer [25.91181753178577]
We present a novel algorithm for transferring artistic styles of semantically meaningful local regions of an image onto local regions of a target video.
Our method, based on a deep neural network architecture inspired by recent work in photorealistic style transfer, is real-time and works on arbitrary inputs.
We demonstrate our method on a variety of style images and target videos, including the ability to transfer different styles onto multiple objects simultaneously.
arXiv Detail & Related papers (2020-10-20T06:21:09Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.