Controlling Geometric Abstraction and Texture for Artistic Images
- URL: http://arxiv.org/abs/2308.00148v1
- Date: Mon, 31 Jul 2023 20:37:43 GMT
- Title: Controlling Geometric Abstraction and Texture for Artistic Images
- Authors: Martin Büßemeyer, Max Reimann, Benito Buchheim, Amir Semmo,
Jürgen Döllner, Matthias Trapp
- Abstract summary: We present a novel method for the interactive control of geometric abstraction and texture in artistic images.
Previous example-based stylization methods often entangle shape, texture, and color, while generative methods for image synthesis generally make assumptions about the input image.
By contrast, our holistic approach spatially decomposes the input into shapes and a parametric representation of high-frequency details comprising the image's texture, thus enabling independent control of color and texture.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a novel method for the interactive control of geometric
abstraction and texture in artistic images. Previous example-based stylization
methods often entangle shape, texture, and color, while generative methods for
image synthesis generally either make assumptions about the input image, such
as only allowing faces, or do not offer precise editing controls. By contrast,
our holistic approach spatially decomposes the input into shapes and a
parametric representation of high-frequency details comprising the image's
texture, thus enabling independent control of color and texture. Each parameter
in this representation controls painterly attributes of a pipeline of
differentiable stylization filters. The proposed decoupling of shape and
texture enables various options for stylistic editing, including interactive
global and local adjustments of shape, stroke, and painterly attributes such as
surface relief and contours. Additionally, we demonstrate optimization-based
texture style-transfer in the parametric space using reference images and text
prompts, as well as the training of single- and arbitrary-style parameter
prediction networks for real-time texture decomposition.
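To make the optimization-based texture style transfer in parameter space concrete, the sketch below shows the general pattern under stated assumptions: a toy differentiable filter pipeline maps a small parameter vector to a stylized image, and the parameters (rather than the pixels) are optimized against a style loss. The `pipeline` function, the Gram loss on raw pixels, and all shapes are illustrative stand-ins, not the paper's actual stylization filters or objectives.

```python
import torch
import torch.nn.functional as F

def pipeline(content, params):
    """Toy differentiable 'stylization pipeline': smoothing + contrast + brightness.

    params[0]: smoothing strength (geometric-abstraction proxy),
    params[1]: contrast, params[2]: brightness offset.
    """
    k = torch.sigmoid(params[0])                         # blend weight in (0, 1)
    blurred = F.avg_pool2d(content, kernel_size=5, stride=1, padding=2)
    smoothed = (1 - k) * content + k * blurred
    contrast = torch.sigmoid(params[1]) * 2.0            # contrast gain in (0, 2)
    return contrast * (smoothed - 0.5) + 0.5 + params[2]

def gram(x):
    # Gram matrix over spatial positions; a real setup would use deep
    # (e.g. VGG) features rather than raw pixels.
    b, c, h, w = x.shape
    f = x.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

content = torch.rand(1, 3, 64, 64)    # placeholder content image
style_ref = torch.rand(1, 3, 64, 64)  # placeholder style reference
params = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)

for step in range(200):
    opt.zero_grad()
    stylized = pipeline(content, params)
    loss = F.mse_loss(gram(stylized), gram(style_ref))   # style-loss stand-in
    loss.backward()
    opt.step()
```

In this pattern, a text-prompt objective would replace the Gram loss with, e.g., a CLIP-style similarity term, and a parameter prediction network would amortize the loop by regressing the parameters directly from the input, as the abstract's mention of real-time texture decomposition suggests.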
Related papers
- Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis
We present Zero-Painter, a framework for layout-conditional text-to-image synthesis.
Our method utilizes object masks and individual descriptions, coupled with a global text prompt, to generate images with high fidelity.
arXiv Detail & Related papers (2024-06-06T13:02:00Z)
- Compositional Neural Textures
This work introduces a fully unsupervised approach for representing textures using a compositional neural model.
We represent each texton as a 2D Gaussian function whose spatial support approximates its shape, and an associated feature that encodes its detailed appearance.
This approach enables a wide range of applications, including transferring appearance from an image texture to another image, diversifying textures, revealing/modifying texture variations, edit propagation, texture animation, and direct texton manipulation (see the texton sketch after this list).
arXiv Detail & Related papers (2024-04-18T21:09:34Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- Curved Diffusion: A Generative Model With Optical Geometry Control
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z)
- Locally Stylized Neural Radiance Fields
We propose a stylization framework for neural radiance fields (NeRF) based on local style transfer.
In particular, we use a hash-grid encoding to learn the embedding of the appearance and geometry components.
We show that our method yields plausible stylization results with novel view synthesis.
arXiv Detail & Related papers (2023-09-19T15:08:10Z)
- Realtime Fewshot Portrait Stylization Based On Geometric Alignment
This paper presents a portrait stylization method designed for real-time mobile applications with limited style examples available.
Previous learning-based stylization methods suffer from the geometric and semantic gaps between the portrait domain and the style domain.
Based on the geometric prior of human facial attributes, we propose to utilize geometric alignment to tackle this issue.
arXiv Detail & Related papers (2022-11-28T16:53:19Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Stylized Neural Painting
This paper proposes an image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles.
Experiments show that the paintings generated by our method have a high degree of fidelity in both global appearance and local textures.
arXiv Detail & Related papers (2020-11-16T17:24:21Z)
- Learning to Caricature via Semantic Shape Transform
We propose an algorithm based on a semantic shape transform to produce shape exaggerations.
We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures.
arXiv Detail & Related papers (2020-08-12T03:41:49Z)
- Image Morphing with Perceptual Constraints and STN Alignment
We propose a conditional GAN morphing framework operating on a pair of input images.
A special training protocol produces sequences of frames that, combined with a perceptual similarity loss, promote smooth transformation over time.
We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self-supervision, our network learns to generate visually pleasing morphing effects.
arXiv Detail & Related papers (2020-04-29T10:49:10Z)
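As referenced in the Compositional Neural Textures entry above, the following is a minimal sketch of the 2D-Gaussian texton idea: each texton is a Gaussian spatial footprint (center and covariance) paired with an appearance feature, and a feature map is formed by splatting the features weighted by the Gaussian support. All names, shapes, and values here are illustrative assumptions, not that paper's implementation.

```python
import torch

def texton_feature_map(centers, inv_covs, features, h, w):
    """Splat n Gaussian textons into a (c, h, w) feature map.

    centers: (n, 2) in [0, 1]^2, inv_covs: (n, 2, 2), features: (n, c).
    """
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, h), torch.linspace(0, 1, w), indexing="ij"
    )
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)  # (h*w, 2) pixel coords
    d = grid[None, :, :] - centers[:, None, :]           # (n, h*w, 2) offsets
    # Squared Mahalanobis distance under each texton's inverse covariance.
    m = torch.einsum("npi,nij,npj->np", d, inv_covs, d)
    weights = torch.exp(-0.5 * m)                        # Gaussian spatial support
    fmap = torch.einsum("np,nc->cp", weights, features)  # weighted feature splat
    return fmap.reshape(features.shape[1], h, w)

# Two hypothetical textons with isotropic footprints and 8-D appearance codes.
centers = torch.tensor([[0.3, 0.3], [0.7, 0.6]])
inv_covs = torch.eye(2).repeat(2, 1, 1) * 200.0
features = torch.randn(2, 8)
fmap = texton_feature_map(centers, inv_covs, features, 64, 64)
```

Editing then amounts to moving centers, reshaping covariances, or swapping appearance features, which loosely mirrors the direct texton manipulation that entry describes; a learned decoder would map the feature map back to pixels.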