What Can Style Transfer and Paintings Do For Model Robustness?
- URL: http://arxiv.org/abs/2011.14477v2
- Date: Thu, 27 May 2021 11:31:36 GMT
- Title: What Can Style Transfer and Paintings Do For Model Robustness?
- Authors: Hubert Lin, Mitchell van Zuijlen, Sylvia C. Pont, Maarten W.A. Wijntjes, Kavita Bala
- Abstract summary: A common strategy for improving model robustness is data augmentation.
Recent work has shown that arbitrary style transfer can be used as a form of data augmentation.
We show that learning from paintings as a form of perceptual data augmentation can improve model robustness.
- Score: 12.543035508615896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common strategy for improving model robustness is data
augmentation. Data augmentations encourage models to learn desired
invariances, such as invariance to horizontal flipping or small changes in
color. Recent work has shown that arbitrary style transfer can be used as a
form of data augmentation to encourage invariance to textures by creating
painting-like images from photographs. However, a stylized photograph is not
quite the same as an artist-created painting. Artists depict perceptually
meaningful cues in paintings so that humans can recognize salient components in
scenes, an emphasis which is not enforced in style transfer. Therefore, we
study how style transfer and paintings differ in their impact on model
robustness. First, we investigate the role of paintings as style images for
stylization-based data augmentation. We find that style transfer functions well
even without paintings as style images. Second, we show that learning from
paintings as a form of perceptual data augmentation can improve model
robustness. Finally, we investigate the invariances learned from stylization
and from paintings, and show that models learn different invariances from these
differing forms of data. Our results provide insights into how stylization
improves model robustness, and provide evidence that artist-created paintings
can be a valuable source of data for model robustness.
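To make the augmentation recipe concrete, below is a minimal PyTorch sketch of a stylization-based augmentation pipeline. The `adain_pixel` helper is a hypothetical pixel-space stand-in for a real stylizer (AdaIN-style methods actually match statistics in VGG feature space), and `style_bank` would hold paintings or arbitrary style images; neither name comes from the paper.

```python
import torch
import torchvision.transforms as T

def adain_pixel(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Crude pixel-space stand-in for AdaIN: match each channel's mean/std
    of `content` to those of `style`. Real stylization operates on deep
    VGG features; this is only an illustrative approximation."""
    c_mean = content.mean(dim=(1, 2), keepdim=True)
    c_std = content.std(dim=(1, 2), keepdim=True)
    s_mean = style.mean(dim=(1, 2), keepdim=True)
    s_std = style.std(dim=(1, 2), keepdim=True)
    out = (content - c_mean) / (c_std + eps) * s_std + s_mean
    return out.clamp(0.0, 1.0)

class StylizeAugment:
    """With probability `p`, replace a photo with a stylized version,
    encouraging texture invariance alongside the usual flips and jitter."""

    def __init__(self, style_bank: torch.Tensor, p: float = 0.5):
        self.style_bank = style_bank  # [N, 3, H, W] style images (e.g., paintings)
        self.p = p

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        if torch.rand(()) < self.p:
            style = self.style_bank[torch.randint(len(self.style_bank), ())]
            img = adain_pixel(img, style)
        return img

# The invariances named in the abstract (horizontal flips, small color
# changes), plus the stylization step above.
transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.1, contrast=0.1),
    StylizeAugment(style_bank=torch.rand(8, 3, 224, 224), p=0.5),
])
```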
Related papers
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- DIFF-NST: Diffusion Interleaving For deFormable Neural Style Transfer [27.39248034592382]
We propose using a new class of models to perform style transfer while enabling deformable style transfer, where stylization can alter image geometry.
We show how leveraging the priors of these models can expose new artistic controls at inference time.
arXiv Detail & Related papers (2023-07-09T12:13:43Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components: a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of the style distribution, and a generative network for style transfer.
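The UCAST objective itself is not reproduced here; the following is a generic, hypothetical sketch of an InfoNCE-style contrastive loss where the temperature is predicted per input (by an assumed small head such as `torch.nn.Linear(dim, 1)`) instead of being a fixed constant.

```python
import torch
import torch.nn.functional as F

def adaptive_contrastive_loss(anchor, positive, negatives, temp_net):
    """Contrastive loss with an input-dependent temperature (sketch only,
    not the exact UCAST formulation).
    anchor, positive: [B, D]; negatives: [B, K, D];
    temp_net: maps a [B, D] embedding to a [B, 1] raw temperature."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Per-sample temperature, kept positive and bounded away from zero.
    tau = F.softplus(temp_net(anchor)) + 1e-2                   # [B, 1]

    pos_logits = (anchor * positive).sum(-1, keepdim=True)      # [B, 1]
    neg_logits = torch.einsum("bd,bkd->bk", anchor, negatives)  # [B, K]
    logits = torch.cat([pos_logits, neg_logits], dim=1) / tau   # [B, 1+K]

    # The positive pair sits at index 0 of every row.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

A lower temperature sharpens the softmax and emphasizes hard negatives; predicting it per sample lets easy and hard inputs be weighted differently.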
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Neural Artistic Style Transfer with Conditional Adversaria [0.0]
A neural artistic style transformation model can modify the appearance of a simple image by adding the style of a famous image.
In this paper, we present two methods that step toward a style-image-independent neural style transfer model.
Our novel contribution is a unidirectional GAN model whose architecture ensures cyclic consistency.
arXiv Detail & Related papers (2023-02-08T04:34:20Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation [120.96012935286913]
We propose a novel adversarial style augmentation approach, which can generate hard stylized images during training.
Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve the model performance on unseen real domains.
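As a rough, hypothetical illustration of the idea (not the paper's exact procedure): treat the per-channel statistics of an image as its style, and take one gradient-ascent step on the task loss so the restyled image is harder for the current model.

```python
import torch

def adv_style_augment(x, y, model, loss_fn, step_size=0.1):
    """One-step adversarial style augmentation (sketch only).
    x: [B, 3, H, W] images, y: labels for the task loss."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + 1e-5
    x_norm = ((x - mu) / sigma).detach()

    # Style parameters: per-channel mean/std, made differentiable.
    mu_adv = mu.detach().clone().requires_grad_(True)
    sigma_adv = sigma.detach().clone().requires_grad_(True)

    loss = loss_fn(model(x_norm * sigma_adv + mu_adv), y)
    g_mu, g_sigma = torch.autograd.grad(loss, [mu_adv, sigma_adv])

    # Ascend: shift the style statistics in the loss-increasing direction.
    mu_hard = mu_adv + step_size * g_mu.sign()
    sigma_hard = (sigma_adv + step_size * g_sigma.sign()).clamp_min(1e-3)
    return (x_norm * sigma_hard + mu_hard).detach()
```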
arXiv Detail & Related papers (2022-07-11T14:01:25Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components: a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of the style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures [7.81768535871051]
Recent powerful vision classifiers are biased towards textures, while shape information is largely overlooked by the models.
A simple remedy, Stylized ImageNet, augments training images with artistic style transfer and can reduce this texture bias.
However, the Stylized ImageNet approach has drawbacks in both fidelity and diversity.
We propose StyleAugment, which augments images with styles drawn from within the mini-batch.
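A hypothetical sketch of the mini-batch idea, in the spirit of MixStyle-like statistic mixing (the actual StyleAugment procedure may differ): each sample borrows style statistics from a random batch partner, so no pre-defined external texture set is needed.

```python
import torch

def batch_style_mix(feats: torch.Tensor, alpha: float = 0.3) -> torch.Tensor:
    """Mix each sample's per-channel feature statistics with those of a
    randomly permuted partner from the same mini-batch (sketch only).
    feats: [B, C, H, W] activations from some network layer."""
    b = feats.size(0)
    mu = feats.mean(dim=(2, 3), keepdim=True)
    sigma = feats.std(dim=(2, 3), keepdim=True) + 1e-5
    normed = (feats - mu) / sigma

    perm = torch.randperm(b)
    lam = torch.distributions.Beta(alpha, alpha).sample((b, 1, 1, 1))

    # Interpolated style statistics: partly own, partly the partner's.
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sigma_mix = lam * sigma + (1 - lam) * sigma[perm]
    return normed * sigma_mix + mu_mix
```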
arXiv Detail & Related papers (2021-08-24T07:17:02Z)
- Enhancing Human Pose Estimation in Ancient Vase Paintings via Perceptually-grounded Style Transfer Learning [15.888271913164969]
We show how to adapt a dataset of natural images with known person and pose annotations to the style of Greek vase paintings by means of image style transfer.
We show that using style-transfer learning significantly improves the SOTA performance on unlabelled data by more than 6% in mean average precision (mAP) and mean average recall (mAR).
In a thorough ablation study, we give a targeted analysis of the influence of style intensities, revealing that the model learns generic domain styles.
arXiv Detail & Related papers (2020-12-10T12:08:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.