Tackling Data Bias in Painting Classification with Style Transfer
- URL: http://arxiv.org/abs/2301.02524v1
- Date: Fri, 6 Jan 2023 14:33:53 GMT
- Title: Tackling Data Bias in Painting Classification with Style Transfer
- Authors: Mridula Vijendran, Frederick W. B. Li, Hubert P. H. Shum
- Abstract summary: We propose a system to handle data bias in small painting datasets such as the Kaokore dataset.
Our system consists of two stages: style transfer and classification.
- Score: 12.88476464580968
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is difficult to train classifiers on painting collections due to model bias from domain gaps and data bias from the uneven distribution of artistic styles. Previous techniques such as data distillation, traditional data augmentation and style transfer improve classifier training using task-specific training datasets or domain adaptation. We propose a system that handles data bias in small painting datasets such as the Kaokore dataset while simultaneously accounting for domain adaptation when fine-tuning a model trained on real-world images. Our system consists of two stages: style transfer and classification. In the style transfer stage, we generate stylized training samples per class from uniformly sampled content and style images, and train the style transformation network per domain. In the classification stage, we interpret the effectiveness of the style and content layers at the attention layers when training on the original dataset and on the stylized images. We trade off model performance against convergence speed by dynamically varying the proportion of augmented samples in the majority and minority classes. We achieve results comparable to the SOTA with fewer training epochs and a classifier with fewer training parameters.
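To make the two-stage pipeline concrete, below is a minimal sketch of the style transfer stage's augmentation loop. It assumes a trained style transformation network behind a hypothetical stylize(content, style) callable; the ratio values and the median-based split between majority and minority classes are illustrative assumptions, not the paper's exact settings.

```python
import random

def stylize_per_class(images_by_class, stylize,
                      majority_ratio=0.2, minority_ratio=0.8):
    """Generate stylized training samples per class (hypothetical sketch).

    `images_by_class` maps each class label to its list of content images;
    `stylize(content, style)` stands in for a trained style transformation
    network (e.g. an AdaIN-based generator). The ratios and the median
    split between majority and minority classes are illustrative only.
    """
    sizes = sorted(len(v) for v in images_by_class.values())
    median_size = sizes[len(sizes) // 2]
    style_pool = [img for imgs in images_by_class.values() for img in imgs]
    augmented = {}
    for cls, imgs in images_by_class.items():
        # Minority classes receive a larger share of stylized samples,
        # trading convergence speed for better balance across classes.
        ratio = minority_ratio if len(imgs) < median_size else majority_ratio
        # Uniformly sample content/style pairs, as the abstract describes.
        augmented[cls] = [
            stylize(random.choice(imgs), random.choice(style_pool))
            for _ in range(int(ratio * len(imgs)))
        ]
    return augmented
```

The stylized samples would then be mixed with the original images when fine-tuning the classifier in the second stage.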
Related papers
- ST-SACLF: Style Transfer Informed Self-Attention Classifier for Bias-Aware Painting Classification [9.534646914709018]
Painting classification plays a vital role in organizing, finding, and suggesting artwork for digital and classic art galleries.
Existing methods struggle with adapting knowledge from the real world to artistic images during training, leading to poor performance when dealing with different datasets.
We generate more data using Style Transfer with Adaptive Instance Normalization (AdaIN), bridging the gap between diverse styles (see the AdaIN sketch after this list).
We achieve an impressive 87.24% accuracy using the ResNet-50 backbone over 40 training epochs.
arXiv Detail & Related papers (2024-08-03T17:31:58Z)
- Diversify Your Vision Datasets with Automatic Diffusion-Based Augmentation [66.6546668043249]
ALIA (Automated Language-guided Image Augmentation) is a method which utilizes large vision and language models to automatically generate natural language descriptions of a dataset's domains.
To maintain data integrity, a model trained on the original dataset filters out minimal image edits and those which corrupt class-relevant information.
We show that ALIA surpasses traditional data augmentation and text-to-image generated data on fine-grained classification tasks.
arXiv Detail & Related papers (2023-05-25T17:43:05Z)
- Balancing Effect of Training Dataset Distribution of Multiple Styles for Multi-Style Text Transfer [8.305622604531074]
This paper explores the impact of training data input diversity on the quality of the generated text from the multi-style transfer model.
We construct a pseudo-parallel dataset by devising heuristics to adjust the style distribution in the training samples.
We observe that a balanced dataset yields more effective control over multiple styles than an imbalanced or skewed one.
arXiv Detail & Related papers (2023-05-24T21:36:15Z)
- Training on Thin Air: Improve Image Classification with Generated Data [28.96941414724037]
Diffusion Inversion is a simple yet effective method to generate diverse, high-quality training data for image classification.
Our approach captures the original data distribution and ensures data coverage by inverting images to the latent space of Stable Diffusion.
We identify three key components that allow our generated images to successfully supplant the original dataset.
arXiv Detail & Related papers (2023-05-24T16:33:02Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Beyond Transfer Learning: Co-finetuning for Action Localisation [64.07196901012153]
We propose co-finetuning: simultaneously training a single model on multiple "upstream" and "downstream" tasks.
We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data.
We also show how we can easily extend our approach to multiple "upstream" datasets to further improve performance.
arXiv Detail & Related papers (2022-07-08T10:25:47Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Improving filling level classification with adversarial training [90.01594595780928]
We investigate the problem of classifying - from a single image - the level of content in a cup or a drinking glass.
We use adversarial training in a generic source dataset and then refine the training with a task-specific dataset.
We show that transfer learning with adversarial training in the source domain consistently improves the classification accuracy on the test set.
arXiv Detail & Related papers (2021-02-08T08:32:56Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
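Several entries above, ST-SACLF in particular, rely on Adaptive Instance Normalization. As a reference point, here is a minimal PyTorch sketch of the AdaIN operation itself (Huang & Belongie, 2017): content features are re-normalized so that their channel-wise statistics match those of the style features. This is a sketch of the published operation, not of any single paper's full pipeline.

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization over (N, C, H, W) feature maps.

    Normalizes the content features per channel, then rescales and
    shifts them using the style features' per-channel std and mean.
    In a full style transfer network this sits between a frozen
    encoder and a learned decoder that maps the result back to pixels.
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True)
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```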