Continuous Learning Based Novelty Aware Emotion Recognition System
- URL: http://arxiv.org/abs/2306.08733v1
- Date: Wed, 14 Jun 2023 20:34:07 GMT
- Title: Continuous Learning Based Novelty Aware Emotion Recognition System
- Authors: Mijanur Palash, Bharat Bhargava
- Abstract summary: Current work in human emotion recognition follows the traditional closed learning approach, governed by rigid rules and without any consideration of novelty.
In this work, we propose a continuous learning based approach to deal with novelty in the automatic emotion recognition task.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Current work in human emotion recognition follows the traditional closed
learning approach, governed by rigid rules and without any consideration of novelty.
Classification models are trained on collected datasets and are expected to encounter
the same data distribution in real-world deployment. Because of the fluid and
constantly changing nature of the world we live in, unexpected and novel sample
distributions can arise and cause the model to fail.
Hence, in this work, we propose a continuous learning based approach to deal
with novelty in the automatic emotion recognition task.
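As a rough illustration of what such a novelty-aware pipeline could look like (not the paper's actual architecture), the Python sketch below flags low-confidence predictions as novel, buffers them, and fine-tunes the classifier once labels are available. The classifier, the confidence threshold, the 128-dimensional features, and the 7 emotion classes are all illustrative assumptions.

```python
# Minimal sketch of a novelty-aware continual learning loop.
# Everything here (model, threshold, dimensions) is an assumption for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionClassifier(nn.Module):
    """Toy stand-in for an emotion classifier over 7 basic emotions."""
    def __init__(self, in_dim=128, n_classes=7):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x):
        return self.net(x)

def is_novel(model, x, threshold=0.5):
    """Flag samples as novel when the max softmax probability is low."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
    return probs.max(dim=-1).values < threshold

def continual_update(model, buffer_x, buffer_y, epochs=3, lr=1e-3):
    """Fine-tune the deployed model on labelled novel samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(buffer_x), buffer_y)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    model = EmotionClassifier()
    stream = torch.randn(32, 128)        # stand-in for incoming feature vectors
    novel_mask = is_novel(model, stream)  # low-confidence samples treated as novel
    novel_x = stream[novel_mask]
    # In practice the labels would come from an oracle or annotation step.
    novel_y = torch.randint(0, 7, (novel_x.shape[0],))
    if novel_x.shape[0] > 0:
        continual_update(model, novel_x, novel_y)
```

A real system would presumably use a stronger backbone and a more principled novelty score, but this detect, buffer, and retrain loop is the general shape of a continual, novelty-aware pipeline.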
Related papers
- Towards Open-World Gesture Recognition [19.019579924491847]
In real-world applications involving gesture recognition, such as gesture recognition based on wrist-worn devices, the data distribution may change over time.
We propose the use of continual learning to enable machine learning models to be adaptive to new tasks.
We provide design guidelines to enhance the development of an open-world wrist-worn gesture recognition process.
arXiv Detail & Related papers (2024-01-20T06:45:16Z)
- Continual Learning with Deep Streaming Regularized Discriminant Analysis [0.0]
We propose a streaming version of regularized discriminant analysis as a solution to this challenge.
We combine our algorithm with a convolutional neural network and demonstrate that it outperforms both batch learning and existing streaming learning algorithms (a minimal streaming-update sketch in this spirit appears after this list).
arXiv Detail & Related papers (2023-09-15T12:25:42Z)
- A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z)
- Uncertainty-aware Label Distribution Learning for Facial Expression Recognition [13.321770808076398]
We propose a new uncertainty-aware label distribution learning method to improve the robustness of deep models against uncertainty and ambiguity.
Our method can be easily integrated into a deep network to obtain more training supervision and improve recognition accuracy.
arXiv Detail & Related papers (2022-09-21T15:48:41Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Task-agnostic Continual Learning with Hybrid Probabilistic Models [75.01205414507243]
We propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification.
A normalizing flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting.
We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST.
arXiv Detail & Related papers (2021-06-24T05:19:26Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes [25.513074215377696]
This paper proposes a continual online model-based reinforcement learning approach.
It does not require pre-training to solve task-agnostic problems with unknown task boundaries.
In experiments, our approach outperforms alternative methods in non-stationary tasks.
arXiv Detail & Related papers (2020-06-19T23:52:45Z)
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but they are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
- A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning [48.87397222244402]
We propose an expansion-based approach for task-free continual learning.
Our model successfully performs task-free continual learning for both discriminative and generative tasks.
arXiv Detail & Related papers (2020-01-03T02:07:31Z)
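The streaming regularized discriminant analysis entry above describes incrementally updating a discriminant classifier on a data stream. The sketch below shows one plausible form of such an update, with running class means and a shrinkage-regularized shared covariance; it is an illustrative assumption, not the cited paper's exact algorithm, and the class count, feature dimension, and shrinkage value are made up.

```python
# Hedged sketch of a streaming, shrinkage-regularized discriminant classifier.
import numpy as np

class StreamingRDA:
    def __init__(self, n_classes, dim, shrinkage=0.5):
        self.counts = np.zeros(n_classes)
        self.means = np.zeros((n_classes, dim))
        self.cov = np.eye(dim)      # running shared covariance estimate
        self.total = 0
        self.shrinkage = shrinkage

    def partial_fit(self, x, y):
        """Update the class mean and shared covariance with one (x, y) pair."""
        self.counts[y] += 1
        self.total += 1
        delta = x - self.means[y]
        self.means[y] += delta / self.counts[y]
        # Welford-style running covariance around the (updated) class mean.
        self.cov += (np.outer(delta, x - self.means[y]) - self.cov) / self.total

    def predict(self, x):
        """Gaussian discriminant score per class, with the covariance
        shrunk toward the identity for regularization."""
        cov = (1 - self.shrinkage) * self.cov + self.shrinkage * np.eye(len(x))
        prec = np.linalg.inv(cov)
        scores = [
            -0.5 * (x - m) @ prec @ (x - m)
            + np.log(max(c, 1) / max(self.total, 1))
            for m, c in zip(self.means, self.counts)
        ]
        return int(np.argmax(scores))
```

In a deployment loop one would call partial_fit on each labelled feature vector produced by the frozen CNN backbone and predict on incoming samples; shrinking the covariance toward the identity keeps it well-conditioned early in the stream.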
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.