Multi-view Multi-label Fine-grained Emotion Decoding from Human Brain Activity
- URL: http://arxiv.org/abs/2211.02629v1
- Date: Wed, 26 Oct 2022 05:56:54 GMT
- Title: Multi-view Multi-label Fine-grained Emotion Decoding from Human Brain Activity
- Authors: Kaicheng Fu, Changde Du, Shengpei Wang and Huiguang He
- Abstract summary: Decoding emotional states from human brain activity plays an important role in brain-computer interfaces.
Existing emotion decoding methods still have two main limitations.
We propose a novel multi-view multi-label hybrid model for fine-grained emotion decoding.
- Score: 9.446422699647625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decoding emotional states from human brain activity plays an important role
in brain-computer interfaces. Existing emotion decoding methods still have two
main limitations: one is that they decode only a single, coarse-grained emotion
category from a brain activity pattern, which is inconsistent with the complexity
of human emotional expression; the other is that they ignore the discrepancy in
emotion expression between the left and right hemispheres of the human brain. In
this paper, we propose a novel multi-view multi-label hybrid model for fine-grained
emotion decoding (up to 80 emotion categories) which can learn expressive neural
representations and predict multiple emotional states simultaneously. Specifically,
the generative component of our hybrid model is parametrized by a multi-view
variational auto-encoder, in which we regard the brain activity of the left and
right hemispheres and their difference as three distinct views and use a
product-of-experts mechanism in its inference network. The discriminative
component of our hybrid model is implemented by a multi-label classification
network with an asymmetric focal loss. For more accurate emotion decoding, we
first adopt a label-aware module to learn emotion-specific neural representations
and then model the dependencies among emotional states with a masked
self-attention mechanism. Extensive experiments on two visually evoked
emotional datasets show the superiority of our method.
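The inference network fuses the three views (left hemisphere, right hemisphere, and their difference) with a product of experts. Below is a minimal sketch of that fusion step, assuming each view is encoded as a diagonal Gaussian; the function and variable names are illustrative and not taken from the authors' code.

```python
# A minimal sketch (not the authors' implementation) of product-of-experts
# fusion for a multi-view VAE: each view's encoder outputs a diagonal
# Gaussian posterior, and the product of those Gaussians is again Gaussian.
import torch

def poe_fusion(mus, logvars, eps=1e-8):
    """Fuse per-view Gaussians N(mu_i, sigma_i^2) into a single Gaussian.

    The joint precision is the sum of per-view precisions and the joint mean
    is the precision-weighted average of the per-view means.
    """
    precisions = torch.exp(-torch.stack(logvars))        # 1 / sigma_i^2 per view
    total_precision = precisions.sum(dim=0) + eps
    joint_var = 1.0 / total_precision
    joint_mu = joint_var * (precisions * torch.stack(mus)).sum(dim=0)
    return joint_mu, torch.log(joint_var)

# Toy usage: three views (left, right, difference), batch of 2, latent dim 16.
mus = [torch.randn(2, 16) for _ in range(3)]
logvars = [torch.randn(2, 16) for _ in range(3)]
mu, logvar = poe_fusion(mus, logvars)
print(mu.shape, logvar.shape)
```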
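The discriminative component is trained with an asymmetric focal loss over the 80 emotion categories. The sketch below shows one common asymmetric focal loss variant with separate focusing parameters for positive and negative labels; the exact formulation and hyper-parameters used in the paper may differ.

```python
# Hedged sketch of a multi-label asymmetric focal loss: gamma_pos and
# gamma_neg down-weight easy examples with different strengths for positive
# and negative labels. Values and details are illustrative assumptions.
import torch

def asymmetric_focal_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, eps=1e-8):
    """logits, targets: (batch, num_labels); targets are 0/1 multi-hot."""
    p = torch.sigmoid(logits)
    # positive-label term, focused by (1 - p)^gamma_pos
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=eps))
    # negative-label term, focused by p^gamma_neg
    loss_neg = (1 - targets) * p.pow(gamma_neg) * torch.log((1 - p).clamp(min=eps))
    return -(loss_pos + loss_neg).mean()

# Toy usage with 80 fine-grained emotion categories.
logits = torch.randn(2, 80)
targets = torch.randint(0, 2, (2, 80)).float()
print(asymmetric_focal_loss(logits, targets))
```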
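Dependencies among emotional states are modelled with masked self-attention. A minimal sketch follows, assuming self-attention over learned label embeddings with each label's attention to itself masked out; the actual inputs and masking scheme in the paper may differ.

```python
# Hedged sketch: masked self-attention over emotion-label embeddings to model
# label dependencies. The self-masking scheme here is an assumption.
import torch
import torch.nn as nn

num_labels, dim = 80, 64
label_emb = nn.Parameter(torch.randn(num_labels, dim))   # one embedding per emotion
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

x = label_emb.unsqueeze(0)                       # (1, num_labels, dim)
mask = torch.eye(num_labels, dtype=torch.bool)   # True = position is masked out
out, _ = attn(x, x, x, attn_mask=mask)           # each label attends to the others
print(out.shape)                                 # (1, 80, 64)
```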
Related papers
- EmoLLM: Multimodal Emotional Understanding Meets Large Language Models [61.179731667080326]
Multi-modal large language models (MLLMs) have achieved remarkable performance on objective multimodal perception tasks.
But their ability to interpret subjective, emotionally nuanced multimodal content remains largely unexplored.
EmoLLM is a novel model for multimodal emotional understanding, incorporating two core techniques.
arXiv Detail & Related papers (2024-06-24T08:33:02Z) - Multi-label Class Incremental Emotion Decoding with Augmented Emotional Semantics Learning [20.609772647273374]
We propose an augmented emotional semantics learning framework for incremental emotion decoding.
Specifically, we design an emotional relation graph module with label disambiguation to handle the past-missing partial label problem.
An emotional semantics learning module is constructed with a graph autoencoder to obtain emotion embeddings.
arXiv Detail & Related papers (2024-05-31T03:16:54Z) - Multi-Cue Adaptive Emotion Recognition Network [4.570705738465714]
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z) - Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z) - Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z) - A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z) - Basic and Depression Specific Emotion Identification in Tweets: Multi-label Classification Experiments [1.7699344561127386]
We present empirical analysis on basic and depression specific multi-emotion mining in Tweets.
We choose our basic emotions from a hybrid emotion model consisting of the common emotions from four highly regarded psychological models of emotions.
We augment that emotion model with new emotion categories because of their importance in the analysis of depression.
arXiv Detail & Related papers (2021-05-26T07:13:50Z) - Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z) - Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)