WSAM: Visual Explanations from Style Augmentation as Adversarial
Attacker and Their Influence in Image Classification
- URL: http://arxiv.org/abs/2308.14995v1
- Date: Tue, 29 Aug 2023 02:50:36 GMT
- Title: WSAM: Visual Explanations from Style Augmentation as Adversarial
Attacker and Their Influence in Image Classification
- Authors: Felipe Moreno-Vera and Edgar Medina and Jorge Poco
- Abstract summary: This paper outlines a style augmentation algorithm that uses stochastic sampling with added noise to improve randomization of a general linear transformation for style transfer.
All models trained with this augmentation not only show strong robustness to image stylization but also outperform previous methods and surpass state-of-the-art performance on the STL-10 dataset.
- Score: 2.282270386262498
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Currently, style augmentation is capturing attention because convolutional
neural networks (CNNs) are strongly biased toward recognizing textures rather
than shapes. Most existing styling methods either perform a low-fidelity style
transfer or produce a weak style representation in the embedding vector. This
paper outlines a style augmentation algorithm that uses stochastic sampling
with added noise to improve randomization of a general linear transformation
for style transfer. With our augmentation strategy, all models not only show
strong robustness to image stylization but also outperform all previous
methods and surpass state-of-the-art performance on the STL-10 dataset. In
addition, we present an analysis of the model interpretations under different
style variations, together with comprehensive experiments comparing the
performance of deep neural architectures across training settings.
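The augmentation step described in the abstract lends itself to a short illustration. The following is a minimal, hypothetical sketch (not the authors' implementation): a style embedding is produced as a general linear transformation of standard Gaussian noise, extra noise is added to strengthen the randomization, and the result is blended with the image's own style before an arbitrary style-transfer network re-renders the image. All names (style_encoder, transfer_net, mean, cov_sqrt, alpha, noise_std) are assumptions.

```python
import torch

def sample_style_embedding(mean, cov_sqrt, noise_std=0.1):
    """Draw a randomized style embedding (hypothetical sketch).

    The style vector is a general linear transformation of standard
    Gaussian noise (mean + cov_sqrt @ z), with additional noise added to
    increase randomization, as the abstract describes. `mean` (D,) and
    `cov_sqrt` (D, D) are assumed to be precomputed from style images.
    """
    z = torch.randn(mean.shape[0])
    style = mean + cov_sqrt @ z                          # linear transformation of noise
    return style + noise_std * torch.randn_like(style)   # extra noise for randomization

def style_augment(image, style_encoder, transfer_net, mean, cov_sqrt, alpha=0.5):
    """Re-render an image with a randomly sampled style (hypothetical sketch)."""
    sampled = sample_style_embedding(mean, cov_sqrt)
    own = style_encoder(image)                        # embedding of the image's own style
    mixed = alpha * sampled + (1.0 - alpha) * own     # alpha controls augmentation strength
    return transfer_net(image, mixed)                 # stylized training sample
```

In such a scheme, alpha and noise_std would govern how far the stylized sample drifts from the original image, which is the knob the robustness analysis above varies.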
Related papers
- HiCAST: Highly Customized Arbitrary Style Transfer with Adapter Enhanced
Diffusion Models [84.12784265734238]
The goal of Arbitrary Style Transfer (AST) is to inject the artistic features of a style reference into a given image/video.
We propose HiCAST, which is capable of explicitly customizing the stylization results according to various sources of semantic clues.
A novel learning objective is leveraged for video diffusion model training, which significantly improves cross-frame temporal consistency.
arXiv Detail & Related papers (2024-01-11T12:26:23Z)
- AesFA: An Aesthetic Feature-Aware Arbitrary Neural Style Transfer [6.518925259025401]
This work proposes a lightweight but effective model, AesFA -- Aesthetic Feature-Aware NST.
The primary idea is to decompose the image via its frequencies to better disentangle aesthetic styles from the reference image.
To improve the network's ability to extract more distinct representations, this work introduces a new aesthetic feature: contrastive loss.
arXiv Detail & Related papers (2023-12-10T16:29:54Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning [89.86971464234533]
Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains.
We propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method.
Our method becomes progressively robust to visual styles, thus boosting the generalization ability for novel target datasets.
arXiv Detail & Related papers (2023-02-18T11:54:37Z)
- Style-Agnostic Reinforcement Learning [9.338454092492901]
We present a novel method of learning style-agnostic representation using both style transfer and adversarial learning.
Our method trains the actor with diverse image styles generated from an inherent adversarial style generator.
We verify that our method achieves competitive or better performance than the state-of-the-art approaches on the Procgen and Distracting Control Suite benchmarks.
arXiv Detail & Related papers (2022-08-31T13:45:00Z)
- Learning Graph Neural Networks for Image Style Transfer [131.73237185888215]
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching.
In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiency of both parametric and non-parametric stylization.
arXiv Detail & Related papers (2022-07-24T07:41:31Z)
- Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation [120.96012935286913]
We propose a novel adversarial style augmentation approach, which can generate hard stylized images during training (see the sketch after this list).
Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve the model performance on unseen real domains.
arXiv Detail & Related papers (2022-07-11T14:01:25Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
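Several of the entries above (StyleAdv, AdvStyle) treat style itself as an adversarial attack surface, which is also the framing in the main paper's title. A rough, hypothetical sketch of that general idea (not any specific paper's implementation): perturb the per-channel mean and standard deviation of intermediate features, i.e. the AdaIN-style statistics commonly taken to encode style, by gradient ascent on the task loss. All names and hyperparameters below are assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_style_stats(features, labels, classifier_head, step=0.1, iters=1):
    """Shift per-channel feature statistics (the 'style') by gradient ascent.

    features: (N, C, H, W) intermediate activations; labels: (N,) class ids;
    classifier_head: assumed callable mapping re-styled features to logits.
    Returns adversarially perturbed (mean, std) that make the batch harder
    for the current classifier when used to re-normalize the features.
    """
    base_mu = features.mean(dim=(2, 3), keepdim=True)
    base_sigma = features.std(dim=(2, 3), keepdim=True) + 1e-6
    normalized = (features.detach() - base_mu.detach()) / base_sigma.detach()

    mu = base_mu.detach().clone().requires_grad_(True)
    sigma = base_sigma.detach().clone().requires_grad_(True)
    for _ in range(iters):
        restyled = normalized * sigma + mu                  # AdaIN-style re-normalization
        loss = F.cross_entropy(classifier_head(restyled), labels)
        g_mu, g_sigma = torch.autograd.grad(loss, [mu, sigma])
        # Ascend the loss: signed-gradient steps toward harder styles.
        mu = (mu + step * g_mu.sign()).detach().requires_grad_(True)
        sigma = (sigma + step * g_sigma.sign()).detach().requires_grad_(True)

    return mu.detach(), sigma.detach()
```

Training would then mix features re-styled with these perturbed statistics into the usual batches, which is roughly how such methods push classifiers to become robust to style shifts.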