Deep Metric Structured Learning For Facial Expression Recognition
- URL: http://arxiv.org/abs/2001.06612v2
- Date: Thu, 6 Jan 2022 03:31:32 GMT
- Title: Deep Metric Structured Learning For Facial Expression Recognition
- Authors: Pedro D. Marrero Fernandez, Tsang Ing Ren, Tsang Ing Jyh, Fidel A.
Guerrero Peña, Alexandre Cunha
- Abstract summary: We propose a deep metric learning model to create embedded sub-spaces with a well-defined structure.
A new loss function that imposes Gaussian structures on the output space is introduced to create these sub-spaces.
We experimentally demonstrate that the learned embedding can be successfully used for various applications including expression retrieval and emotion recognition.
- Score: 58.7528672474537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a deep metric learning model to create embedded sub-spaces with a
well-defined structure. A new loss function that imposes Gaussian structures on
the output space is introduced to create these sub-spaces, thus shaping the
distribution of the data. Having a mixture-of-Gaussians solution space is
advantageous given its simple and well-established structure. It allows fast
discovery of classes within classes and the identification of mean
representatives at the centroids of individual classes. We also propose a new
semi-supervised method to create sub-classes. We illustrate our methods on the
facial expression recognition problem and validate results on the FER+,
AffectNet, Extended Cohn-Kanade (CK+), BU-3DFE, and JAFFE datasets. We
experimentally demonstrate that the learned embedding can be successfully used
for various applications, including expression retrieval and emotion
recognition.
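The abstract does not spell out the loss itself, so the following is only a minimal sketch of one way to impose a mixture-of-Gaussians structure on an embedding space: each class gets a learnable centroid (the Gaussian mean) and a shared isotropic variance, and samples are scored by their mixture responsibilities. The class name GaussianStructuredLoss, the sigma parameter, and the placeholder backbone in the usage comment are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: an isotropic-Gaussian, centroid-based metric loss.
# It is not the paper's exact loss; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianStructuredLoss(nn.Module):
    """Encourages embeddings of each class to cluster as an isotropic Gaussian
    around a learnable class centroid, yielding a mixture-of-Gaussians embedding."""

    def __init__(self, num_classes: int, embed_dim: int, sigma: float = 1.0):
        super().__init__()
        # One learnable centroid (Gaussian mean) per expression class.
        self.centroids = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.sigma = sigma  # shared isotropic standard deviation (assumed)

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance from every embedding to every centroid: (B, C).
        d2 = torch.cdist(embeddings, self.centroids).pow(2)
        # Negative scaled distances act as unnormalized Gaussian log-densities;
        # the softmax over them gives per-class mixture responsibilities.
        logits = -d2 / (2.0 * self.sigma ** 2)
        return F.cross_entropy(logits, labels)

# Usage (hypothetical backbone producing 128-d embeddings of face crops):
# loss_fn = GaussianStructuredLoss(num_classes=8, embed_dim=128)
# loss = loss_fn(backbone(face_batch), expression_labels)
```

Under this kind of objective, the learned centroids can play the role of the mean representatives mentioned in the abstract, and nearest-centroid lookup in the embedding space is one way to support expression retrieval; the paper's semi-supervised sub-class discovery is a separate mechanism not sketched here.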
Related papers
- Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization than state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Class-Specific Semantic Reconstruction for Open Set Recognition [101.24781422480406]
Open set recognition enables deep neural networks (DNNs) to identify samples of unknown classes.
We propose a novel method, called Class-Specific Semantic Reconstruction (CSSR), that integrates the power of auto-encoder (AE) and prototype learning.
Results of experiments conducted on multiple datasets show that the proposed method achieves outstanding performance in both closed and open set recognition.
arXiv Detail & Related papers (2022-07-05T16:25:34Z)
- Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction [27.020835928724775]
This work proposes a new computational framework for learning an explicit generative model for real-world datasets.
In particular, we propose to learn a closed-loop transcription between a multi-class multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space.
Our experiments on many benchmark imagery datasets demonstrate the tremendous potential of this new closed-loop formulation.
arXiv Detail & Related papers (2021-11-12T10:06:08Z)
- Structure-Aware Feature Generation for Zero-Shot Learning [108.76968151682621]
We introduce a novel structure-aware feature generation scheme, termed as SA-GAN, to account for the topological structure in learning both the latent space and the generative networks.
Our method significantly enhances the generalization capability on unseen classes and consequently improves the classification performance.
arXiv Detail & Related papers (2021-08-16T11:52:08Z)
- Learning optimally separated class-specific subspace representations using convolutional autoencoder [0.0]
We propose a novel convolutional autoencoder based architecture to generate subspace specific feature representations.
To demonstrate the effectiveness of the proposed approach, several experiments have been carried out on state-of-the-art machine learning datasets.
arXiv Detail & Related papers (2021-05-19T00:45:34Z)
- Switch Spaces: Learning Product Spaces with Sparse Gating [48.591045282317424]
We propose Switch Spaces, a data-driven approach for learning representations in product space.
We introduce sparse gating mechanisms that learn to choose, combine and switch spaces.
Experiments on knowledge graph completion and item recommendation show that the proposed switch spaces achieve new state-of-the-art performance; a generic sketch of this kind of sparse gating follows this entry.
arXiv Detail & Related papers (2021-02-17T11:06:59Z)
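The Switch Spaces summary above only names the mechanism, so here is a generic, heavily simplified sketch of top-k sparse gating over several sub-space encoders; the module name SparseGatedProductSpace, the linear encoders, and all dimensions are assumptions for illustration and do not reproduce the cited paper's geometry-aware construction.

```python
# Generic top-k sparse gating over K candidate sub-space encoders (illustration only).
import torch
import torch.nn as nn

class SparseGatedProductSpace(nn.Module):
    def __init__(self, in_dim: int, sub_dim: int, num_spaces: int = 4, k: int = 2):
        super().__init__()
        # One simple encoder per candidate sub-space (linear maps stand in for
        # the geometry-specific encoders a real product-space model would use).
        self.encoders = nn.ModuleList([nn.Linear(in_dim, sub_dim) for _ in range(num_spaces)])
        self.gate = nn.Linear(in_dim, num_spaces)  # scores each sub-space per input
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.gate(x)                               # (B, K)
        top_val, top_idx = scores.topk(self.k, dim=-1)      # keep k sub-spaces per sample
        # Sparse weights: zero everywhere except the selected sub-spaces.
        weights = torch.zeros_like(scores).scatter_(-1, top_idx, top_val.softmax(dim=-1))
        subs = torch.stack([enc(x) for enc in self.encoders], dim=1)  # (B, K, sub_dim)
        # Weighted concatenation of the sub-space embeddings.
        return (weights.unsqueeze(-1) * subs).reshape(x.size(0), -1)  # (B, K * sub_dim)

# Usage (hypothetical dimensions):
# model = SparseGatedProductSpace(in_dim=64, sub_dim=16, num_spaces=4, k=2)
# z = model(torch.randn(32, 64))  # (32, 64) product-space embedding
```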
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.