HyperNST: Hyper-Networks for Neural Style Transfer
- URL: http://arxiv.org/abs/2208.04807v1
- Date: Tue, 9 Aug 2022 14:34:07 GMT
- Title: HyperNST: Hyper-Networks for Neural Style Transfer
- Authors: Dan Ruta, Andrew Gilbert, Saeid Motiian, Baldo Faieta, Zhe Lin, and
John Collomosse
- Abstract summary: We present a technique for the artistic stylization of images, based on Hyper-networks and the StyleGAN2 architecture.
Our contribution is a novel method for inducing style transfer parameterized by a metric space, pre-trained for style-based visual search (SBVS).
The technical contribution is a hyper-network that predicts weight updates to a StyleGAN2 pre-trained over a diverse gamut of artistic content.
- Score: 19.71337532582559
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present HyperNST; a neural style transfer (NST) technique for the artistic
stylization of images, based on Hyper-networks and the StyleGAN2 architecture.
Our contribution is a novel method for inducing style transfer parameterized by
a metric space, pre-trained for style-based visual search (SBVS). We show for
the first time that such space may be used to drive NST, enabling the
application and interpolation of styles from an SBVS system. The technical
contribution is a hyper-network that predicts weight updates to a StyleGAN2
pre-trained over a diverse gamut of artistic content (portraits), tailoring the
style parameterization on a per-region basis using a semantic map of the facial
regions. We show HyperNST to exceed state of the art in content preservation
for our stylized content while retaining good style transfer performance.
Related papers
- DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer [13.588643982359413]
Style transfer aims to fuse the artistic representation of a style image with the structural information of a content image.
Existing methods train specific networks or utilize pre-trained models to learn content and style features.
We propose a novel and training-free approach for style transfer, combining textual embedding with spatial features.
arXiv Detail & Related papers (2024-10-19T06:42:43Z)
- ArtWeaver: Advanced Dynamic Style Integration via Diffusion Model [73.95608242322949]
Stylized Text-to-Image Generation (STIG) aims to generate images from text prompts and style reference images.
We present ArtWeaver, a novel framework that leverages pretrained Stable Diffusion to address challenges such as misinterpreted styles and inconsistent semantics.
arXiv Detail & Related papers (2024-05-24T07:19:40Z)
- Diffusion-based Human Motion Style Transfer with Semantic Guidance [23.600154466988073]
We propose a novel framework for few-shot style transfer learning based on the diffusion model.
In the first stage, we pre-train a diffusion-based text-to-motion model as a generative prior.
In the second stage, based on the single style example, we fine-tune the pre-trained diffusion model in a few-shot manner to make it capable of style transfer.
arXiv Detail & Related papers (2024-03-20T05:52:11Z)
- DIFF-NST: Diffusion Interleaving For deFormable Neural Style Transfer [27.39248034592382]
We propose using a new class of models to perform style transfer, additionally enabling deformable style transfer.
We show how leveraging the priors of these models can expose new artistic controls at inference time.
arXiv Detail & Related papers (2023-07-09T12:13:43Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components: a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
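The input-dependent temperature mentioned above can be illustrated with a minimal InfoNCE-style contrastive loss. This is a hedged sketch, not UCAST's actual formulation; the temperature heuristic, dimensions, and names are assumptions for illustration only.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature):
    """InfoNCE-style loss with a supplied (possibly input-dependent) temperature."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Logit 0 is the positive pair; the rest are negatives.
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = logits / temperature
    # Negative log-softmax of the positive entry.
    return -(logits[0] - np.log(np.exp(logits).sum()))

rng = np.random.default_rng(1)
anchor = rng.normal(size=16)                   # style feature of an input
positive = anchor + 0.1 * rng.normal(size=16)  # same style, perturbed
negatives = [rng.normal(size=16) for _ in range(8)]

# Hypothetical input-dependent temperature: here derived from the anchor's
# norm purely for illustration, standing in for a learned per-input value.
tau = 0.1 + 0.5 / (1.0 + np.linalg.norm(anchor))

loss = info_nce(anchor, positive, negatives, tau)
```

A smaller temperature sharpens the softmax, penalizing hard negatives more; making it input-dependent lets the sharpness adapt per style example.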
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning [89.86971464234533]
Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains.
We propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method.
Our method is gradually robust to the visual styles, thus boosting the generalization ability for novel target datasets.
arXiv Detail & Related papers (2023-02-18T11:54:37Z)
- Learning Graph Neural Networks for Image Style Transfer [131.73237185888215]
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching.
In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiency of both parametric and non-parametric stylization.
arXiv Detail & Related papers (2022-07-24T07:41:31Z)
- Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer [103.54337984566877]
Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data.
We introduce a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain.
Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control.
arXiv Detail & Related papers (2022-03-24T17:57:11Z)
- Neural Neighbor Style Transfer [31.746423262728598]
We propose a pipeline that offers state-of-the-art quality, generalization, and competitive efficiency for artistic style transfer.
Our approach is based on explicitly replacing neural features extracted from the content input with those from a style exemplar, then synthesizing the final output.
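The explicit feature-replacement step described above can be sketched as a cosine nearest-neighbor swap over feature vectors. This is an illustrative sketch of the matching idea only, not the authors' full pipeline; all shapes and names are assumptions.

```python
import numpy as np

def nn_feature_swap(content_feats, style_feats):
    """Replace each content feature vector with its nearest (by cosine
    similarity) style feature — the core matching step of neighbor-based
    style transfer sketches."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = normalize(content_feats) @ normalize(style_feats).T  # (Nc, Ns)
    nearest = sim.argmax(axis=1)        # best style match per content vector
    return style_feats[nearest]

rng = np.random.default_rng(2)
content = rng.normal(size=(6, 16))  # e.g. 6 spatial locations, 16 channels
style = rng.normal(size=(10, 16))   # style exemplar features

swapped = nn_feature_swap(content, style)
print(swapped.shape)  # (6, 16)
```

The swapped feature map keeps the content's spatial layout while carrying only style-exemplar statistics, which a decoder would then synthesize into the final image.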
arXiv Detail & Related papers (2022-03-24T17:11:31Z)
- DRB-GAN: A Dynamic ResBlock Generative Adversarial Network for Artistic Style Transfer [29.20616177457981]
The paper proposes a Dynamic ResBlock Generative Adversarial Network (DRB-GAN) for artistic style transfer.
Our proposed DRB-GAN outperforms state-of-the-art methods and exhibits its superior performance in terms of visual quality and efficiency.
arXiv Detail & Related papers (2021-08-17T00:02:19Z)
- Geometric Style Transfer [74.58782301514053]
We introduce a neural architecture that supports transfer of geometric style.
The new architecture runs prior to a network that transfers texture style.
Users can input a content/style pair as is common, or they can choose to input a content/texture-style/geometry-style triple.
arXiv Detail & Related papers (2020-07-10T16:33:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.