MixAugment & Mixup: Augmentation Methods for Facial Expression
Recognition
- URL: http://arxiv.org/abs/2205.04442v1
- Date: Mon, 9 May 2022 17:43:08 GMT
- Title: MixAugment & Mixup: Augmentation Methods for Facial Expression
Recognition
- Authors: Andreas Psaroudakis and Dimitrios Kollias
- Abstract summary: We propose a new data augmentation strategy which is based on Mixup, called MixAugment.
We conduct an extensive experimental study that proves the effectiveness of MixAugment over Mixup and various state-of-the-art methods.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automatic Facial Expression Recognition (FER) has attracted increasing
attention in the last 20 years since facial expressions play a central role in
human communication. Most FER methodologies utilize Deep Neural Networks
(DNNs), which are powerful tools for data analysis. However, despite their
power, these networks are prone to overfitting, as they often tend to memorize
the training data. Moreover, few large in-the-wild (i.e., captured in
unconstrained environments) databases currently exist for FER. To alleviate this
issue, a number of data augmentation techniques have been proposed. Data
augmentation is a way to increase the diversity of available data by applying
constrained transformations to the original data. One such technique, which has
positively contributed to various classification tasks, is Mixup, under which a
DNN is trained on convex combinations of pairs of examples and their
corresponding labels. In this paper, we examine the effectiveness of Mixup for
in-the-wild FER, in which data have large variations in head poses, illumination
conditions, backgrounds and contexts. We then propose a new data augmentation
strategy based on Mixup, called MixAugment, under which the network is trained
concurrently on a combination of virtual and real examples; all of these
examples contribute to the overall loss function. We
conduct an extensive experimental study that proves the effectiveness of
MixAugment over Mixup and various state-of-the-art methods. We further
investigate the combination of dropout with Mixup and MixAugment, as well as
the combination of other data augmentation techniques with MixAugment.
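
To make the two recipes concrete, the sketch below (a minimal PyTorch-style
illustration, not the authors' code) implements Mixup's virtual examples,
x̃ = λ·x_i + (1 − λ)·x_j with labels mixed by the same λ, and the MixAugment
idea of letting the real examples of each pair contribute to the loss as well.
Drawing λ from a Beta(α, α) distribution follows common Mixup practice; the
function name, the α default, and the equal weighting of the virtual and real
terms are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mixaugment_loss(model, x, y, alpha=0.2):
    """Sketch of the MixAugment objective: the network trains on virtual
    (Mixup) examples and on the real examples that formed them, and every
    term contributes to the overall loss. The function name, the alpha
    default, and the equal weighting of terms are assumptions."""
    # Draw the mixing coefficient from Beta(alpha, alpha), as in standard Mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Pair each example with a random partner from the same batch.
    perm = torch.randperm(x.size(0), device=x.device)
    x2, y2 = x[perm], y[perm]

    # Mixup: virtual examples are convex combinations of real pairs, and the
    # classification loss is interpolated with the same coefficient.
    x_virtual = lam * x + (1.0 - lam) * x2
    logits_v = model(x_virtual)
    loss_virtual = (lam * F.cross_entropy(logits_v, y)
                    + (1.0 - lam) * F.cross_entropy(logits_v, y2))

    # MixAugment additionally trains on both real members of each pair. With
    # batch-level pairing the two terms range over the same real images; they
    # are written separately here to mirror the description in the abstract.
    logits_r = model(x)
    loss_real = F.cross_entropy(logits_r, y) + F.cross_entropy(logits_r[perm], y2)

    return loss_virtual + loss_real
```

In a training loop this would replace the usual cross-entropy call; dropping
the loss_real term recovers plain Mixup.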
Related papers
- A Survey on Mixup Augmentations and Beyond [59.578288906956736] (2024-09-08)
  Mixup and relevant data-mixing methods that convexly combine selected samples and the corresponding labels are widely adopted.
  This survey presents a comprehensive review of foundational mixup methods and their applications.
- TransformMix: Learning Transformation and Mixing Strategies from Data [20.79680733590554] (2024-03-19)
  We propose an automated approach, TransformMix, to learn better transformation and mixing augmentation strategies from data.
  We demonstrate the effectiveness of TransformMix on multiple datasets in transfer learning, classification, object detection, and knowledge distillation settings.
- MixupE: Understanding and Improving Mixup from Directional Derivative Perspective [86.06981860668424] (2022-12-27)
  We propose an improved version of Mixup, theoretically justified to deliver better generalization performance than the vanilla Mixup.
  Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures.
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486] (2022-10-27)
  Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
  We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
- Harnessing Hard Mixed Samples with Decoupled Regularizer [69.98746081734441] (2022-03-21)
  Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data.
  In this paper, we propose an efficient mixup objective function with a decoupled regularizer, named Decoupled Mixup (DM).
  DM can adaptively utilize hard mixed samples to mine discriminative features without losing the original smoothness of mixup.
- Contrastive-mixup learning for improved speaker verification [17.93491404662201] (2022-02-22)
  This paper proposes a novel formulation of prototypical loss with mixup for speaker verification.
  Mixup is a simple yet efficient data augmentation technique that fabricates a weighted combination of random data point and label pairs.
- Feature transforms for image data augmentation [74.12025519234153] (2022-01-24)
  In image classification, many augmentation approaches utilize simple image manipulation algorithms.
  In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
  Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
- Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks [75.69896269357005] (2020-10-05)
  Mixup is a recent data augmentation technique that linearly interpolates input examples and the corresponding labels.
  In this paper, we explore how to apply mixup to natural language processing tasks.
  We incorporate mixup into a transformer-based pre-trained architecture, named "mixup-transformer", for a wide range of NLP tasks.
- Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup [19.680580983094323] (2020-09-15)
  Puzzle Mix is a mixup method that explicitly utilizes the saliency information and the underlying statistics of the natural examples.
  Our experiments show Puzzle Mix achieves state-of-the-art generalization and adversarial robustness results.