StyleCariGAN: Caricature Generation via StyleGAN Feature Map Modulation
- URL: http://arxiv.org/abs/2107.04331v1
- Date: Fri, 9 Jul 2021 09:49:31 GMT
- Title: StyleCariGAN: Caricature Generation via StyleGAN Feature Map Modulation
- Authors: Wonjong Jang, Gwangjin Ju, Yucheol Jung, Jiaolong Yang, Xin Tong,
Seungyong Lee
- Abstract summary: We present a caricature generation framework based on shape and style manipulation using StyleGAN.
Our framework, dubbed StyleCariGAN, automatically creates a realistic and detailed caricature from an input photo.
- Score: 20.14727435894964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a caricature generation framework based on shape and style
manipulation using StyleGAN. Our framework, dubbed StyleCariGAN, automatically
creates a realistic and detailed caricature from an input photo with optional
controls on shape exaggeration degree and color stylization type. The key
component of our method is shape exaggeration blocks that are used for
modulating coarse layer feature maps of StyleGAN to produce desirable
caricature shape exaggerations. We first build a layer-mixed StyleGAN for
photo-to-caricature style conversion by swapping the fine layers of a StyleGAN
trained on photos with the corresponding layers of a StyleGAN trained to generate
caricatures. Given an input photo, the layer-mixed model produces detailed
color stylization for a caricature but without shape exaggerations. We then
append shape exaggeration blocks to the coarse layers of the layer-mixed model
and train the blocks to create shape exaggerations while preserving the
characteristic appearances of the input. Experimental results show that
StyleCariGAN generates more realistic and detailed caricatures than current
state-of-the-art methods. We also demonstrate that StyleCariGAN supports other
StyleGAN-based image manipulations, such as facial expression control.
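
The abstract describes two mechanisms: mixing coarse photo-StyleGAN layers with fine caricature-StyleGAN layers, and inserting shape exaggeration blocks that modulate the coarse feature maps. The sketch below is a minimal PyTorch-style illustration of that structure only; all module names, block counts, and tensor shapes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the layer-mixing + shape-exaggeration idea from the abstract.
# SynthesisBlock and ShapeExaggerationBlock are hypothetical stand-ins, not
# StyleCariGAN's actual modules.
import torch
import torch.nn as nn


class SynthesisBlock(nn.Module):
    """Stand-in for one StyleGAN synthesis block at a given resolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.conv(x))


class ShapeExaggerationBlock(nn.Module):
    """Residual modulation of a coarse-layer feature map (assumed form)."""

    def __init__(self, channels: int):
        super().__init__()
        self.mod = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Add a learned exaggeration signal on top of the photo features.
        return feat + self.mod(feat)


class LayerMixedGenerator(nn.Module):
    """Coarse blocks from the photo StyleGAN, fine blocks swapped in from the
    caricature StyleGAN, with exaggeration blocks after each coarse layer."""

    def __init__(self, photo_blocks, cari_blocks, num_coarse: int, channels: int):
        super().__init__()
        # Coarse layers (shape) come from the photo model; fine layers
        # (color/texture style) come from the caricature model.
        self.coarse = nn.ModuleList(photo_blocks[:num_coarse])
        self.fine = nn.ModuleList(cari_blocks[num_coarse:])
        self.exaggerate = nn.ModuleList(
            ShapeExaggerationBlock(channels) for _ in range(num_coarse)
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        for block, exag in zip(self.coarse, self.exaggerate):
            feat = exag(block(feat))   # modulate coarse feature maps
        for block in self.fine:
            feat = block(feat)         # caricature-style fine details
        return feat


if __name__ == "__main__":
    channels, num_blocks = 64, 6
    photo_blocks = [SynthesisBlock(channels) for _ in range(num_blocks)]
    cari_blocks = [SynthesisBlock(channels) for _ in range(num_blocks)]
    gen = LayerMixedGenerator(photo_blocks, cari_blocks, num_coarse=3, channels=channels)
    out = gen(torch.randn(1, channels, 4, 4))
    print(out.shape)
```

In this reading, only the exaggeration blocks would be trained, while the swapped StyleGAN blocks stay frozen, matching the abstract's statement that the blocks are appended and trained to create shape exaggerations while preserving the input's characteristic appearance.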
Related papers
- PS-StyleGAN: Illustrative Portrait Sketching using Attention-Based Style Adaptation [0.0]
Portrait sketching involves capturing identity-specific attributes of a real face with abstract lines and shades.
This paper introduces Portrait Sketching StyleGAN (PS-StyleGAN), a style transfer approach tailored for portrait sketch synthesis.
We leverage the semantic $W+$ latent space of StyleGAN to generate portrait sketches, allowing us to make meaningful edits, like pose and expression alterations, without compromising identity.
arXiv Detail & Related papers (2024-08-31T04:22:45Z) - StyleShot: A Snapshot on Any Style [20.41380860802149]
We show that a good style representation is crucial and sufficient for generalized style transfer without test-time tuning.
We achieve this by constructing a style-aware encoder and a well-organized style dataset called StyleGallery.
We highlight that our approach, named StyleShot, is simple yet effective in mimicking various desired styles without test-time tuning.
arXiv Detail & Related papers (2024-07-01T16:05:18Z) - Portrait Diffusion: Training-free Face Stylization with
Chain-of-Painting [64.43760427752532]
Face stylization refers to the transformation of a face into a specific portrait style.
Current methods require the use of example-based adaptation approaches to fine-tune pre-trained generative models.
This paper proposes a training-free face stylization framework, named Portrait Diffusion.
arXiv Detail & Related papers (2023-12-03T06:48:35Z) - Face Cartoonisation For Various Poses Using StyleGAN [0.7673339435080445]
This paper presents an innovative approach to achieve face cartoonisation while preserving the original identity and accommodating various poses.
We achieve this by introducing an encoder that captures both pose and identity information from images and generates a corresponding embedding within the StyleGAN latent space.
We show by extensive experimentation how our encoder adapts the StyleGAN output to better preserve identity when the objective is cartoonisation.
arXiv Detail & Related papers (2023-09-26T13:10:25Z) - Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given reference image onto another image that provides the content.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
arXiv Detail & Related papers (2023-04-19T15:15:36Z) - Learning Diverse Tone Styles for Image Retouching [73.60013618215328]
We propose to learn diverse image retouching with normalizing flow-based architectures.
A joint-training pipeline is composed of a style encoder, a conditional RetouchNet, and the image tone style normalizing flow (TSFlow) module.
Our proposed method performs favorably against state-of-the-art methods and is effective in generating diverse results.
arXiv Detail & Related papers (2022-07-12T09:49:21Z) - Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z) - Interactive Style Transfer: All is Your Palette [74.06681967115594]
We propose a drawing-like interactive style transfer (IST) method, by which users can interactively create a harmonious-style image.
Our IST method can serve as a brush that dips style from anywhere and then paints it onto any region of the target content image.
arXiv Detail & Related papers (2022-03-25T06:38:46Z) - StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation [10.357474047610172]
We present an approach for generating styled drawings for a given text description where a user can specify a desired drawing style.
Inspired by a theory in art that style and content are generally inseparable during the creative process, we propose a coupled approach, known here as StyleCLIPDraw.
Based on human evaluation, the styles of images generated by StyleCLIPDraw are strongly preferred to those by the sequential approach.
arXiv Detail & Related papers (2022-02-24T21:03:51Z) - Fine-Grained Control of Artistic Styles in Image Generation [24.524863555822837]
Generative models and adversarial training have enabled the artificial generation of artworks in various artistic styles.
We propose to capture the continuous spectrum of styles and apply it to a style generation task.
Our method can be used with common generative adversarial networks (such as StyleGAN).
arXiv Detail & Related papers (2021-10-19T21:51:52Z) - Unsupervised Contrastive Photo-to-Caricature Translation based on
Auto-distortion [49.93278173824292]
Photo-to-caricature translation aims to synthesize a caricature, a rendered image that exaggerates facial features through sketching, pencil strokes, or other artistic drawings.
Style rendering and geometry deformation are the most important aspects of the photo-to-caricature translation task.
We propose an unsupervised contrastive photo-to-caricature translation architecture.
arXiv Detail & Related papers (2020-11-10T08:14:36Z)