An Ensemble with Shared Representations Based on Convolutional Networks
for Continually Learning Facial Expressions
- URL: http://arxiv.org/abs/2103.03934v1
- Date: Fri, 5 Mar 2021 20:40:52 GMT
- Authors: Henrique Siqueira, Pablo Barros, Sven Magg and Stefan Wermter
- Abstract summary: Semi-supervised learning through ensemble predictions is an efficient strategy to leverage the high exposure of unlabelled facial expressions during human-robot interactions.
Traditional ensemble-based systems are composed of several independent classifiers, leading to a high degree of redundancy.
We show that our approach is able to continually learn facial expressions through ensemble predictions using unlabelled samples from different data distributions.
- Score: 19.72032908764253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social robots able to continually learn facial expressions could
progressively improve their emotion recognition capability towards people
interacting with them. Semi-supervised learning through ensemble predictions is
an efficient strategy to leverage the high exposure of unlabelled facial
expressions during human-robot interactions. Traditional ensemble-based
systems, however, are composed of several independent classifiers, leading to a
high degree of redundancy and an unnecessary allocation of computational
resources. In this paper, we propose an ensemble based on convolutional
networks in which the early layers act as strong low-level feature extractors
and their representations are shared with an ensemble of convolutional
branches. This results in a significant drop in the redundancy of low-level
feature processing.
Training in a semi-supervised setting, we show that our approach is able to
continually learn facial expressions through ensemble predictions using
unlabelled samples from different data distributions.
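The shared-representation ensemble described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: dense layers stand in for the shared early convolutional layers and the convolutional branches, all dimensions and weights are arbitrary, and the 0.5 confidence threshold for accepting ensemble pseudo-labels is an assumed hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def trunk(x, W):
    """Shared low-level feature extractor (a dense layer stands in
    for the ensemble's shared early convolutional layers)."""
    return np.maximum(x @ W, 0.0)  # ReLU

def branch(h, W):
    """One ensemble branch: maps the shared features to class probabilities."""
    z = h @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # row-wise softmax

n_classes, n_branches, d_in, d_hid = 3, 4, 8, 16
W_trunk = rng.normal(size=(d_in, d_hid))
W_branches = [rng.normal(size=(d_hid, n_classes)) for _ in range(n_branches)]

x = rng.normal(size=(5, d_in))    # 5 unlabelled samples
h = trunk(x, W_trunk)             # computed once, reused by every branch
probs = np.mean([branch(h, W) for W in W_branches], axis=0)

# Ensemble prediction supplies pseudo-labels for confident unlabelled samples
pseudo = probs.argmax(axis=1)
confident = probs.max(axis=1) > 0.5   # assumed confidence threshold
```

Computing `h` once and reusing it in every branch is what removes the redundant low-level feature processing that independent ensemble members would otherwise repeat.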
Related papers
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
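The alternation FEC is described as performing can be illustrated with a k-means-style loop over toy pixel features. This is an assumption-laden sketch (raw 3-d vectors stand in for learned deep features, and the number of representatives is fixed), not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.normal(size=(200, 3))  # toy "pixel features" (e.g. RGB values)
reps = pixels[rng.choice(200, 4, replace=False)].copy()  # 4 representatives

for _ in range(10):
    # Grouping step: assign each pixel to its nearest representative
    d = ((pixels[:, None, :] - reps[None, :, :]) ** 2).sum(axis=-1)
    assign = d.argmin(axis=1)
    # Update step: refresh each representative from its current cluster
    for k in range(4):
        mask = assign == k
        if mask.any():
            reps[k] = pixels[mask].mean(axis=0)
```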
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
- Improving Compositional Generalization Using Iterated Learning and Simplicial Embeddings [19.667133565610087]
Compositional generalization is easy for humans but hard for deep neural networks.
We propose to improve this ability by using iterated learning on models with simplicial embeddings.
We show that this combination of changes improves compositional generalization over other approaches.
arXiv Detail & Related papers (2023-10-28T18:30:30Z) - Decentralized Adversarial Training over Graphs [55.28669771020857]
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to varied strength perturbation space.
arXiv Detail & Related papers (2023-03-23T15:05:16Z)
- Self-Supervised Visual Representation Learning with Semantic Grouping [50.14703605659837]
We tackle the problem of learning visual representations from unlabeled scene-centric data.
We propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning.
arXiv Detail & Related papers (2022-05-30T17:50:59Z)
- Learning from Heterogeneous Data Based on Social Interactions over Graphs [58.34060409467834]
This work proposes a decentralized architecture, where individual agents aim at solving a classification problem while observing streaming features of different dimensions.
We show that the proposed strategy enables the agents to learn consistently under this highly heterogeneous setting.
arXiv Detail & Related papers (2021-12-17T12:47:18Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
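The idea of pulling together representations that share a class label can be sketched with a supervised-contrastive-style loss. The temperature, unit-normalisation, and exact loss form below are common conventions assumed for illustration, not necessarily the paper's formulation.

```python
import numpy as np

def same_class_contrastive_loss(reps, labels, tau=0.1):
    """For each anchor, instances with the same label are positives and all
    other instances are negatives (a simplified supervised-contrastive loss)."""
    z = reps / np.linalg.norm(reps, axis=1, keepdims=True)  # unit-normalise
    sim = z @ z.T / tau                                     # scaled similarities
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        denom = np.exp(sim[i][np.arange(n) != i]).sum()     # all non-anchor terms
        for j in pos:
            loss += -np.log(np.exp(sim[i, j]) / denom)      # pull positives close
            count += 1
    return loss / count

rng = np.random.default_rng(2)
reps = rng.normal(size=(6, 4))
labels = np.array([0, 0, 1, 1, 2, 2])
loss_val = same_class_contrastive_loss(reps, labels)
```

Minimising this loss increases the similarity of same-label pairs relative to all other pairs, which is the "similar representations for shared labels" behaviour the summary describes.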
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Information Maximization Clustering via Multi-View Self-Labelling [9.947717243638289]
We propose a novel single-phase clustering method that simultaneously learns meaningful representations and assigns the corresponding annotations.
This is achieved by integrating a discrete representation into the self-supervised paradigm through a network.
Our empirical results show that the proposed framework outperforms state-of-the-art techniques, achieving average accuracies of 89.1% and 49.0% on the two benchmarks evaluated.
arXiv Detail & Related papers (2021-03-12T16:04:41Z)
- Network Classifiers Based on Social Learning [71.86764107527812]
We propose a new way of combining independently trained classifiers over space and time.
The proposed architecture is able to improve prediction performance over time with unlabeled data.
We show that this strategy results in consistent learning with high probability, and it yields a robust structure against poorly trained classifiers.
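One simple way such a combination can behave is sketched below: independently trained classifiers are re-weighted over time by their agreement with the current weighted consensus on unlabelled data, so a poorly trained classifier loses influence. The exponential re-weighting rule and the toy prediction tables are assumptions for illustration, not the paper's social-learning strategy.

```python
import numpy as np

# Three independently trained "classifiers": fixed class-probability tables
# over 4 unlabelled samples (the third classifier is deliberately poor).
preds = np.stack([
    np.array([[0.8, 0.2], [0.7, 0.3], [0.2, 0.8], [0.1, 0.9]]),
    np.array([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]),
    np.array([[0.4, 0.6], [0.5, 0.5], [0.6, 0.4], [0.5, 0.5]]),  # poor
])
weights = np.ones(3) / 3  # start from a uniform combination

for _ in range(20):
    consensus = np.einsum('c,csk->sk', weights, preds)  # weighted vote
    labels = consensus.argmax(axis=1)
    # Up-weight classifiers that agree with the current consensus
    agree = (preds.argmax(axis=2) == labels).mean(axis=1)
    weights = weights * np.exp(agree)
    weights /= weights.sum()
```

After a few rounds the poorly trained classifier's weight shrinks, which is the kind of robustness against weak ensemble members the summary claims.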
arXiv Detail & Related papers (2020-10-23T11:18:20Z)
- Efficient Facial Feature Learning with Wide Ensemble-based Convolutional Neural Networks [20.09586211332088]
We present experiments on Ensembles with Shared Representations based on convolutional networks.
We show that redundancy and computational load can be dramatically reduced by varying the branching level of the ESR.
Experiments on large-scale datasets suggest that ESRs reduce the residual generalization error.
arXiv Detail & Related papers (2020-01-17T14:32:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.