Style-Agnostic Reinforcement Learning
- URL: http://arxiv.org/abs/2208.14863v1
- Date: Wed, 31 Aug 2022 13:45:00 GMT
- Title: Style-Agnostic Reinforcement Learning
- Authors: Juyong Lee, Seokjun Ahn, Jaesik Park
- Abstract summary: We present a novel method of learning style-agnostic representation using both style transfer and adversarial learning.
Our method trains the actor with diverse image styles generated from an inherent adversarial style generator.
We verify that our method achieves performance competitive with or better than state-of-the-art approaches on the Procgen and Distracting Control Suite benchmarks.
- Score: 9.338454092492901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel method of learning style-agnostic representation using
both style transfer and adversarial learning in the reinforcement learning
framework. The style, here, refers to task-irrelevant details such as the color
of the background in the images, where generalizing the learned policy across
environments with different styles is still a challenge. Focusing on learning
style-agnostic representations, our method trains the actor with diverse image
styles generated from an inherent adversarial style perturbation generator,
which plays a min-max game between the actor and the generator, without
demanding expert knowledge for data augmentation or additional class labels for
adversarial training. We verify that our method achieves performance
competitive with or better than state-of-the-art approaches on the Procgen and
Distracting Control Suite benchmarks, and we further investigate the features
extracted from our model, showing that it better captures the invariants and is
less distracted by style shifts. The code is available at
https://github.com/POSTECH-CVLab/style-agnostic-RL.
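The abstract sketches a min-max game in which a style perturbation generator produces image styles that are hard for the actor, while the actor learns features that stay invariant to them. Below is a minimal illustrative sketch of such an alternating adversarial loop, assuming PyTorch and dummy rollout data; the Actor and StyleGenerator classes and the simplified policy-gradient loss are hypothetical placeholders, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of the adversarial min-max training described in the abstract,
# assuming PyTorch; Actor, StyleGenerator, and the simplified policy-gradient
# loss are illustrative placeholders, not the authors' released implementation.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Encodes a 3x64x64 observation and outputs action logits."""
    def __init__(self, n_actions=15):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 15 * 15, n_actions)  # matches 64x64 inputs

    def forward(self, obs):
        return self.head(self.encoder(obs))

class StyleGenerator(nn.Module):
    """Maps noise to per-channel scale/shift perturbations (a simple 'style')."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2 * channels))

    def forward(self, noise, obs):
        gamma, beta = self.net(noise).chunk(2, dim=-1)
        gamma = 1.0 + 0.5 * torch.tanh(gamma)[..., None, None]  # bounded scale
        beta = 0.5 * torch.tanh(beta)[..., None, None]          # bounded shift
        return (obs * gamma + beta).clamp(0.0, 1.0)

def policy_loss(logits, actions, advantages):
    # Simplified policy-gradient surrogate; the paper builds on a full RL objective.
    logp = torch.log_softmax(logits, dim=-1).gather(1, actions[:, None]).squeeze(1)
    return -(logp * advantages).mean()

actor, generator = Actor(), StyleGenerator()
opt_actor = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_gen = torch.optim.Adam(generator.parameters(), lr=3e-4)

for step in range(1000):
    obs = torch.rand(16, 3, 64, 64)        # stand-in for a rollout batch of images
    actions = torch.randint(0, 15, (16,))  # stand-in for actions from the rollout
    advantages = torch.randn(16)           # stand-in for advantage estimates
    noise = torch.randn(16, 8)

    # Generator step: maximize the actor's loss on stylized observations.
    gen_loss = -policy_loss(actor(generator(noise, obs)), actions, advantages)
    opt_gen.zero_grad()
    gen_loss.backward()
    opt_gen.step()

    # Actor step: minimize the loss on both clean and (detached) stylized inputs.
    styled = generator(noise, obs).detach()
    actor_loss = (policy_loss(actor(obs), actions, advantages)
                  + policy_loss(actor(styled), actions, advantages))
    opt_actor.zero_grad()
    actor_loss.backward()
    opt_actor.step()
```

Alternating the two updates keeps the sketch simple; the released code may realize the min-max objective differently, for example via feature-level style mixing rather than pixel-space perturbations.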
Related papers
- StyleShot: A Snapshot on Any Style [20.41380860802149]
We show that a good style representation is crucial and sufficient for generalized style transfer without test-time tuning.
We achieve this by constructing a style-aware encoder and a well-organized style dataset called StyleGallery.
We highlight that our approach, named StyleShot, is simple yet effective in mimicking various desired styles without test-time tuning.
arXiv Detail & Related papers (2024-07-01T16:05:18Z)
- MuseumMaker: Continual Style Customization without Catastrophic Forgetting [50.12727620780213]
We propose MuseumMaker, a method that enables the synthesis of images following a set of customized styles in a never-ending manner.
When facing a new customization style, we develop a style distillation loss module to extract and learn the styles of the training data for new image generation.
It can minimize the learning biases caused by the content of new training images and address the catastrophic overfitting issue induced by few-shot images.
arXiv Detail & Related papers (2024-04-25T13:51:38Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning [89.86971464234533]
Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains.
We propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method.
Our method gradually becomes robust to visual styles, thus boosting generalization to novel target datasets.
arXiv Detail & Related papers (2023-02-18T11:54:37Z)
- Learning Diverse Tone Styles for Image Retouching [73.60013618215328]
We propose to learn diverse image retouching with normalizing flow-based architectures.
A joint-training pipeline is composed of a style encoder, a conditional RetouchNet, and the image tone style normalizing flow (TSFlow) module.
Our proposed method performs favorably against state-of-the-art methods and is effective in generating diverse results.
arXiv Detail & Related papers (2022-07-12T09:49:21Z)
- Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation [120.96012935286913]
We propose a novel adversarial style augmentation (AdvStyle) approach, which can generate hard stylized images during training.
Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve the model performance on unseen real domains.
arXiv Detail & Related papers (2022-07-11T14:01:25Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.