A Genetic Feature Selection Based Two-stream Neural Network for Anger
Veracity Recognition
- URL: http://arxiv.org/abs/2009.02650v3
- Date: Sat, 12 Sep 2020 03:03:43 GMT
- Title: A Genetic Feature Selection Based Two-stream Neural Network for Anger
Veracity Recognition
- Authors: Chaoxing Huang, Xuanying Zhu, Tom Gedeon
- Abstract summary: We use Genetic-based Feature Selection (GFS) methods to select time-series pupillary features of observers who viewed acted and genuine anger in video stimuli.
We then use the selected features to train a simple fully connected neural network and a two-stream neural network.
Our results show that the two-stream architecture achieves a promising recognition accuracy of 93.58% when the pupillary responses from both eyes are available.
- Score: 3.885779089924737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People can manipulate emotion expressions when interacting with others. For
example, acted anger can be expressed when the expresser is not genuinely angry, with
the aim of manipulating the observer. In this paper, we examine whether the
veracity of anger can be recognized from observers' pupillary data with
computational approaches. We use Genetic-based Feature Selection (GFS) methods
to select time-series pupillary features of observers who viewed acted and
genuine anger in video stimuli. We then use the selected features to train
a simple fully connected neural network and a two-stream neural network. Our
results show that the two-stream architecture achieves a promising
recognition result with an accuracy of 93.58% when the pupillary responses from
both eyes are available. They also show that the genetic algorithm based feature
selection method effectively improves classification accuracy by 3.07%.
We hope our work can benefit research areas such as human-machine interaction
and psychology studies that require emotion recognition.
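The GFS step described above can be sketched as a standard genetic algorithm over binary feature masks. This is a minimal illustrative sketch, not the authors' implementation: the fitness function (a nearest-centroid classifier), the truncation selection, one-point crossover, bit-flip mutation, and all parameter values are assumptions chosen for exposition.

```python
# Hypothetical sketch of genetic-based feature selection (GFS) over a
# binary feature mask. All operators and parameters are illustrative
# assumptions, not taken from the paper.
import random

def fitness(mask, X, y):
    """Score a feature subset with a simple nearest-centroid classifier."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return 0.0
    # Per-class centroids over the selected features only.
    centroids = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        centroids[label] = [sum(r[i] for r in rows) / len(rows) for i in idx]
    correct = 0
    for x, t in zip(X, y):
        pred = min(centroids,
                   key=lambda c: sum((x[i] - m) ** 2
                                     for i, m in zip(idx, centroids[c])))
        correct += pred == t
    return correct / len(y)

def gfs(X, y, n_features, pop=20, gens=30, p_mut=0.05, rng=None):
    """Evolve binary feature masks toward higher classification fitness."""
    rng = rng or random.Random(0)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda m: fitness(m, X, y), reverse=True)
        parents = scored[: pop // 2]                 # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut)    # bit-flip mutation
                     for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, X, y))
```

In the paper's setting, `X` would hold each observer's extracted time-series pupillary features and `y` the acted/genuine labels; the returned mask keeps the feature subset whose fitness is highest, and that subset is what would then be fed to the fully connected or two-stream network.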
Related papers
- Auto Detecting Cognitive Events Using Machine Learning on Pupillary Data [0.0]
Pupil size is a valuable indicator of cognitive workload, reflecting changes in attention and arousal governed by the autonomic nervous system.
This study explores the potential of using machine learning to automatically detect cognitive events experienced by individuals.
arXiv Detail & Related papers (2024-10-18T04:54:46Z) - BRACTIVE: A Brain Activation Approach to Human Visual Brain Learning [11.517021103782229]
We introduce Brain Activation Network (BRACTIVE), a transformer-based approach to studying the human visual brain.
The main objective of BRACTIVE is to align the visual features of subjects with corresponding brain representations via fMRI signals.
Our experiments demonstrate that BRACTIVE effectively identifies person-specific regions of interest, such as face and body-selective areas.
arXiv Detail & Related papers (2024-05-29T06:50:13Z) - Emotion Analysis on EEG Signal Using Machine Learning and Neural Network [0.0]
The main purpose of this study is to improve emotion recognition performance using brain signals.
Various approaches to human-machine interaction technologies have been ongoing for a long time, and in recent years, researchers have had great success in automatically understanding emotion using brain signals.
arXiv Detail & Related papers (2023-07-09T09:50:34Z) - Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z) - Preserving Privacy in Human-Motion Affect Recognition [4.753703852165805]
This work evaluates the effectiveness of existing methods at recognising emotions using both 3D temporal joint signals and manually extracted features.
We propose a cross-subject transfer learning technique for training a multi-encoder autoencoder deep neural network to learn disentangled latent representations of human motion features.
arXiv Detail & Related papers (2021-05-09T15:26:21Z) - Emotional EEG Classification using Connectivity Features and
Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via the emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
arXiv Detail & Related papers (2021-01-18T13:28:08Z) - Investigating Emotion-Color Association in Deep Neural Networks [6.85316573653194]
We show that representations learned by deep neural networks can indeed show an emotion-color association.
We also show that this method can help us in the emotion classification task, specifically when there are very few examples to train the model.
arXiv Detail & Related papers (2020-11-22T16:48:02Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.