Inharmonious Region Localization with Auxiliary Style Feature
- URL: http://arxiv.org/abs/2210.02029v1
- Date: Wed, 5 Oct 2022 05:37:35 GMT
- Title: Inharmonious Region Localization with Auxiliary Style Feature
- Authors: Penghao Wu, Li Niu, Liqing Zhang
- Abstract summary: Inharmonious region localization aims to localize the inharmonious region in a synthetic image.
We propose a novel color mapping module and a style feature loss to extract discriminative style features.
Based on the extracted style features, we also propose a novel style voting module to guide the localization of the inharmonious region.
- Score: 19.146209624835322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the prevalence of image editing techniques, users can create fantastic
synthetic images, but the image quality may be compromised by the
color/illumination discrepancy between the manipulated region and background.
Inharmonious region localization aims to localize the inharmonious region in a
synthetic image. In this work, we attempt to leverage auxiliary style features
to facilitate this task. Specifically, we propose a novel color mapping module
and a style feature loss to extract discriminative style features containing
task-relevant color/illumination information. Based on the extracted style
features, we also propose a novel style voting module to guide the localization
of the inharmonious region. Moreover, we introduce semantic information into the
style voting module to achieve further improvement. Our method surpasses the
existing methods by a large margin on the benchmark dataset.
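To make the localization task concrete, here is a deliberately simplified sketch, not the paper's color mapping or style voting method: it flags the image patch whose mean intensity deviates most from the global statistics, a crude stand-in for detecting a color/illumination-inconsistent region. All names here are hypothetical illustrations.

```python
# Toy illustration (NOT the proposed method): localize the patch whose mean
# intensity is most discrepant from the global mean, as a minimal stand-in
# for inharmonious region localization.
from statistics import mean

def locate_inharmonious_patch(image, patch_size):
    """image: 2D grid (list of rows) of grayscale intensities in [0, 1].
    Returns the (row, col) of the top-left corner of the most discrepant patch."""
    h, w = len(image), len(image[0])
    global_mean = mean(v for row in image for v in row)
    best_score, best_pos = -1.0, (0, 0)
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = [image[y + dy][x + dx]
                     for dy in range(patch_size)
                     for dx in range(patch_size)]
            # discrepancy of this patch's color statistics vs. the whole image
            score = abs(mean(patch) - global_mean)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos
```

A real system would of course learn discriminative style features rather than compare raw means, but the sketch shows the underlying intuition: the inharmonious region is the one whose color/illumination statistics stand out from the background.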
Related papers
- Region-controlled Style Transfer [3.588126599266807]
We propose a training method that uses a loss function to constrain the style intensity in different regions.
This method guides the transfer strength of style features in different regions based on the gradient relationship between style and content images.
We also introduce a novel feature fusion method that linearly transforms content features to resemble style features while preserving their semantic relationships.
arXiv Detail & Related papers (2023-10-24T09:11:34Z)
- Locally Stylized Neural Radiance Fields [30.037649804991315]
We propose a stylization framework for neural radiance fields (NeRF) based on local style transfer.
In particular, we use a hash-grid encoding to learn the embedding of the appearance and geometry components.
We show that our method yields plausible stylization results with novel view synthesis.
arXiv Detail & Related papers (2023-09-19T15:08:10Z)
- Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given reference image onto another given content image.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
arXiv Detail & Related papers (2023-04-19T15:15:36Z)
- Inharmonious Region Localization by Magnifying Domain Discrepancy [18.661683923953085]
Inharmonious region localization aims to localize the region in a synthetic image which is incompatible with the surrounding background.
In this work, we transform the input image into another color space to magnify the domain discrepancy between the inharmonious region and the background.
We present a novel framework consisting of a color mapping module and an inharmonious region localization network.
arXiv Detail & Related papers (2022-09-30T10:41:16Z)
- Image Harmonization with Region-wise Contrastive Learning [51.309905690367835]
We propose a novel image harmonization framework with external style fusion and region-wise contrastive learning scheme.
Our method attempts to pull corresponding positive samples together and push negative samples apart by maximizing the mutual information between the foreground and background styles.
arXiv Detail & Related papers (2022-05-27T15:46:55Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Style Transfer with Target Feature Palette and Attention Coloring [15.775618544581885]
A novel artistic stylization method with target feature palettes is proposed, which can transfer key features accurately.
Our stylized images exhibit state-of-the-art performance, with strength in preserving core structures and details of the content image.
arXiv Detail & Related papers (2021-11-07T08:09:20Z)
- Towards Controllable and Photorealistic Region-wise Image Manipulation [11.601157452472714]
We present a generative model with auto-encoder architecture for per-region style manipulation.
We apply a code consistency loss to enforce an explicit disentanglement between content and style latent representations.
The model is constrained by a content alignment loss to ensure that foreground editing does not interfere with the background content.
arXiv Detail & Related papers (2021-08-19T13:29:45Z)
- Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer [115.13853805292679]
Artistic style transfer aims at migrating the style from an example image to a content image.
Inspired by the common painting process of drawing a draft and revising the details, we introduce a novel feed-forward method named Laplacian Pyramid Network (LapStyle)
Our method can synthesize high quality stylized images in real time, where holistic style patterns are properly transferred.
arXiv Detail & Related papers (2021-04-12T11:53:53Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments conducted on the DeepFashion benchmark dataset have demonstrated the superiority of our framework compared with existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
- Manifold Alignment for Semantically Aligned Style Transfer [61.1274057338588]
We make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.
Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions.
The proposed framework allows semantically similar regions between the output and the style image to share similar style patterns.
arXiv Detail & Related papers (2020-05-21T16:52:37Z)
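Several of the related papers above (e.g., the contrastive and manifold-alignment methods) revolve around aligning style statistics between regions or images. As a minimal, hedged illustration of that shared idea, and not a reduction of any specific paper's method, the following sketch matches the mean and standard deviation of one feature channel to those of a style channel (in the spirit of AdaIN-style statistic matching):

```python
# Illustrative sketch only: channel-wise statistic matching. Shift and scale
# a content feature channel so its mean and standard deviation match those of
# a style channel. This shows the "align style statistics" idea in its
# simplest form; the actual papers above use learned, richer alignments.
from statistics import mean, pstdev

def match_statistics(content, style, eps=1e-5):
    """content, style: 1D lists of feature activations for one channel."""
    c_mu, c_sigma = mean(content), pstdev(content)
    s_mu, s_sigma = mean(style), pstdev(style)
    # normalize the content channel, then re-scale/shift to style statistics
    return [(v - c_mu) / (c_sigma + eps) * s_sigma + s_mu for v in content]
```

After matching, the output channel has exactly the style channel's mean, so semantically corresponding regions can be made to share the same first- and second-order style statistics.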
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.