DeepObjStyle: Deep Object-based Photo Style Transfer
- URL: http://arxiv.org/abs/2012.06498v1
- Date: Fri, 11 Dec 2020 17:02:01 GMT
- Title: DeepObjStyle: Deep Object-based Photo Style Transfer
- Authors: Indra Deep Mastan and Shanmuganathan Raman
- Abstract summary: One of the major challenges of style transfer is providing appropriate image-feature supervision between the output image and the input (style and content) images.
We propose an object-based style transfer approach, called DeepObjStyle, for style supervision in a training-data-independent framework.
- Score: 31.75300124593133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the major challenges of style transfer is providing appropriate
image-feature supervision between the output image and the input (style and
content) images. An efficient strategy is to define an object map between the
objects of the style and the content images. However, such a mapping is not
well established when the style and content images contain semantic objects of
different types and numbers. This also leads to content mismatch in the style
transfer output, which can reduce the visual quality of the results. We propose
an object-based style transfer approach, called DeepObjStyle, for style
supervision in a training-data-independent framework. DeepObjStyle preserves
the semantics of the objects and achieves better style transfer in the
challenging scenario where the style and the content images have a mismatch of
image features. We also perform style transfer on images containing a word
cloud to demonstrate that DeepObjStyle enables appropriate image-feature
supervision. We validate the results using quantitative comparisons and user
studies.
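Neither the abstract nor this page includes reference code, but the core idea of object-based style supervision can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch fragment: style statistics (Gram matrices) are matched per object region rather than over the whole image, using an object map that pairs content-image objects with style-image objects. The function names, binary-mask format, and `object_pairs` structure are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of a (C, H, W) feature map."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)

def masked_features(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Zero out features outside a binary (H, W) object mask, resized to the feature grid."""
    m = F.interpolate(mask[None, None].float(), size=feat.shape[-2:], mode="nearest")
    return feat * m[0]

def object_style_loss(out_feat, style_feat, object_pairs):
    """Sum Gram-matrix mismatches over paired object regions.

    object_pairs encodes the object map between the content and style images
    as a list of (output_mask, style_mask) binary tensors.
    """
    loss = out_feat.new_zeros(())
    for out_mask, style_mask in object_pairs:
        g_out = gram_matrix(masked_features(out_feat, out_mask))
        g_sty = gram_matrix(masked_features(style_feat, style_mask))
        loss = loss + F.mse_loss(g_out, g_sty)
    return loss
```

In a training-data-independent setting such as the one the abstract describes, a loss of this kind would typically be minimized directly over the output image's pixels, together with a content loss, rather than used to train a feed-forward network.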
Related papers
- FAGStyle: Feature Augmentation on Geodesic Surface for Zero-shot Text-guided Diffusion Image Style Transfer [2.3293561091456283]
The goal of image style transfer is to render an image guided by a style reference while maintaining the original content.
We introduce FAGStyle, a zero-shot text-guided diffusion image style transfer method.
Our approach enhances inter-patch information interaction by incorporating the Sliding Window Crop technique.
arXiv Detail & Related papers (2024-08-20T04:20:11Z) - Soulstyler: Using Large Language Model to Guide Image Style Transfer for
Target Object [9.759321877363258]
"Soulstyler" allows users to guide the stylization of specific objects in an image through simple textual descriptions.
We introduce a large language model to parse the text and identify stylization goals and specific styles.
We also introduce a novel localized text-image block matching loss that ensures that style transfer is performed only on specified target objects.
arXiv Detail & Related papers (2023-11-22T18:15:43Z) - MOSAIC: Multi-Object Segmented Arbitrary Stylization Using CLIP [0.0]
Style transfer driven by text prompts has paved a new path for creatively stylizing images without collecting an actual style image.
We propose a new method Multi-Object Segmented Arbitrary Stylization Using CLIP (MOSAIC) that can apply styles to different objects in the image based on the context extracted from the input prompt.
Our method extends to arbitrary objects and styles and produces higher-quality images than current state-of-the-art methods.
arXiv Detail & Related papers (2023-09-24T18:24:55Z) - StyleAdapter: A Unified Stylized Image Generation Model [97.24936247688824]
StyleAdapter is a unified stylized image generation model capable of producing a variety of stylized images.
It can be integrated with existing controllable synthesis methods, such as T2I-adapter and ControlNet.
arXiv Detail & Related papers (2023-09-04T19:16:46Z) - Sem-CS: Semantic CLIPStyler for Text-Based Image Style Transfer [4.588028371034406]
We propose Semantic CLIPStyler (Sem-CS) that performs semantic style transfer.
Sem-CS first segments the content image into salient and non-salient objects and then transfers artistic style based on a given style text description.
Our empirical results, including DISTS, NIMA and user study scores, show that our proposed framework yields superior qualitative and quantitative performance.
arXiv Detail & Related papers (2023-07-12T05:59:42Z) - Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given reference image onto another given image that provides the content.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
arXiv Detail & Related papers (2023-04-19T15:15:36Z) - SEM-CS: Semantic CLIPStyler for Text-Based Image Style Transfer [4.588028371034406]
We propose Semantic CLIPStyler (Sem-CS) that performs semantic style transfer.
Sem-CS first segments the content image into salient and non-salient objects and then transfers artistic style based on a given style text description.
Our empirical results, including DISTS, NIMA and user study scores, show that our proposed framework yields superior qualitative and quantitative performance.
arXiv Detail & Related papers (2023-03-11T07:33:06Z) - DSI2I: Dense Style for Unpaired Image-to-Image Translation [70.93865212275412]
Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target image domain with the style of a target image exemplar.
We propose to represent style as a dense feature map, allowing for a finer-grained transfer to the source image without requiring any external semantic information.
Our results show that the translations produced by our approach are more diverse, preserve the source content better, and are closer to the exemplars when compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-12-26T18:45:25Z) - Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z) - Manifold Alignment for Semantically Aligned Style Transfer [61.1274057338588]
We make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.
Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions.
The proposed framework allows semantically similar regions of the output and style images to share similar style patterns.
arXiv Detail & Related papers (2020-05-21T16:52:37Z) - Parameter-Free Style Projection for Arbitrary Style Transfer [64.06126075460722]
This paper proposes a new feature-level style transformation technique, named Style Projection, for parameter-free, fast, and effective content-style transformation; a generic parameter-free transformation of this kind is sketched after this list.
This paper further presents a real-time feed-forward model that leverages Style Projection for arbitrary image style transfer.
arXiv Detail & Related papers (2020-03-17T13:07:41Z)