Multimodal Affective States Recognition Based on Multiscale CNNs and
Biologically Inspired Decision Fusion Model
- URL: http://arxiv.org/abs/1911.12918v2
- Date: Fri, 28 Apr 2023 02:42:36 GMT
- Title: Multimodal Affective States Recognition Based on Multiscale CNNs and
Biologically Inspired Decision Fusion Model
- Authors: Yuxuan Zhao, Xinyan Cao, Jinlong Lin, Dunshan Yu, Xixin Cao
- Abstract summary: Affective states recognition methods based on multimodal physiological signals have not been thoroughly explored yet.
We propose Multiscale Convolutional Neural Networks (Multiscale CNNs) and a biologically inspired decision fusion model for affective states recognition.
The results show that the fusion model improves the accuracy of affective states recognition significantly compared with the results on single-modality signals.
- Score: 9.006757372508366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been encouraging progress in affective states recognition
models based on single-modality signals such as electroencephalogram (EEG)
signals or peripheral physiological signals in recent years. However,
affective states recognition methods based on multimodal physiological signals
have not been thoroughly explored yet. Here we propose Multiscale
Convolutional Neural Networks (Multiscale CNNs) and a biologically inspired
decision fusion model for multimodal affective states recognition. First, the
raw signals are pre-processed with baseline signals. Then, the High Scale CNN
and Low Scale CNN in the Multiscale CNNs are used to predict the probability of
affective states for EEG and each peripheral physiological signal,
respectively. Finally, the fusion model estimates the reliability of each
single-modality signal from the Euclidean distance between the class labels
and the classification probabilities output by the Multiscale CNNs; the
decision is made by the more reliable modality while the information from the
other modalities is retained. We use this model to classify four affective
states on the arousal-valence plane in the DEAP and AMIGOS datasets. The
results show that the fusion model improves the accuracy of affective states
recognition significantly compared with the results on single-modality
signals, and the recognition accuracy of the fused result reaches 98.52% and
99.89% on the DEAP and AMIGOS datasets, respectively.
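The abstract's fusion rule can be sketched in a few lines. The code below is a minimal illustration, not the paper's implementation: the reliability of each modality is approximated by the Euclidean distance between its predicted probability vector and the one-hot vector of its predicted class (smaller distance means a more confident, hence more reliable, prediction), and the final decision weights the more reliable modality more heavily while retaining the other. The blending weight `alpha` is a hypothetical parameter introduced here for illustration; the paper may combine the modalities differently.

```python
import numpy as np


def modality_reliability(probs: np.ndarray) -> float:
    """Euclidean distance from the probability vector to the one-hot
    vector of its predicted class. Smaller distance = more reliable.
    A sketch of the idea described in the abstract, not the paper's
    exact formulation."""
    one_hot = np.zeros_like(probs)
    one_hot[np.argmax(probs)] = 1.0
    return float(np.linalg.norm(probs - one_hot))


def fuse_decision(eeg_probs: np.ndarray, periph_probs: np.ndarray,
                  alpha: float = 0.7) -> int:
    """Decide with the more reliable modality while retaining the other
    modality's information via a weighted average of the probability
    vectors (`alpha` is a hypothetical weight, not from the paper)."""
    if modality_reliability(eeg_probs) <= modality_reliability(periph_probs):
        fused = alpha * eeg_probs + (1.0 - alpha) * periph_probs
    else:
        fused = alpha * periph_probs + (1.0 - alpha) * eeg_probs
    return int(np.argmax(fused))


# Example: a confident EEG prediction dominates an uncertain peripheral one.
eeg = np.array([0.90, 0.05, 0.03, 0.02])      # four arousal-valence classes
periph = np.array([0.40, 0.30, 0.20, 0.10])
print(fuse_decision(eeg, periph))             # class index of the fused decision
```

Under this sketch, the EEG branch has the smaller distance to its one-hot label, so it drives the decision while the peripheral probabilities still contribute through the weighted average.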
Related papers
- Multi-scale Quaternion CNN and BiGRU with Cross Self-attention Feature Fusion for Fault Diagnosis of Bearing [5.3598912592106345]
Deep learning has led to significant advances in bearing fault diagnosis (FD)
We propose a novel FD model by integrating a multiscale quaternion convolutional neural network (MQCNN), a bidirectional gated recurrent unit (BiGRU), and cross self-attention feature fusion (CSAFF)
arXiv Detail & Related papers (2024-05-25T07:55:02Z) - Transformer-based Self-supervised Multimodal Representation Learning for
Wearable Emotion Recognition [2.4364387374267427]
We propose a novel self-supervised learning (SSL) framework for wearable emotion recognition.
Our method achieved state-of-the-art results in various emotion classification tasks.
arXiv Detail & Related papers (2023-03-29T19:45:55Z) - Subject-Independent Drowsiness Recognition from Single-Channel EEG with
an Interpretable CNN-LSTM model [0.8250892979520543]
We propose a novel Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) model for subject-independent drowsiness recognition from single-channel EEG signals.
Results show that the model achieves an average accuracy of 72.97% on 11 subjects for leave-one-out subject-independent drowsiness recognition on a public dataset.
arXiv Detail & Related papers (2021-11-21T10:37:35Z) - Learning to Perform Downlink Channel Estimation in Massive MIMO Systems [72.76968022465469]
We study downlink (DL) channel estimation in a Massive multiple-input multiple-output (MIMO) system.
A common approach is to use the mean value as the estimate, motivated by channel hardening.
We propose two novel estimation methods.
arXiv Detail & Related papers (2021-09-06T13:42:32Z) - Ensemble of Convolution Neural Networks on Heterogeneous Signals for
Sleep Stage Scoring [63.30661835412352]
This paper explores and compares the convenience of using additional signals apart from electroencephalograms.
The best overall model, an ensemble of depthwise separable convolutional neural networks, achieved an accuracy of 86.06%.
arXiv Detail & Related papers (2021-07-23T06:37:38Z) - EEG-based Cross-Subject Driver Drowsiness Recognition with an
Interpretable Convolutional Neural Network [0.0]
We develop a novel convolutional neural network combined with an interpretation technique that allows sample-wise analysis of important features for classification.
Results show that the model achieves an average accuracy of 78.35% on 11 subjects for leave-one-out cross-subject recognition.
arXiv Detail & Related papers (2021-05-30T14:47:20Z) - Convolutional Neural Networks for Sleep Stage Scoring on a Two-Channel
EEG Signal [63.18666008322476]
Sleep disorders are among the most common health problems worldwide.
The basic tool used by specialists is the polysomnogram, a collection of different signals recorded during sleep.
Specialists score these signals according to one of the standard guidelines.
arXiv Detail & Related papers (2021-03-30T09:59:56Z) - On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Interpreting Deep Learning Models for Epileptic Seizure Detection on EEG
signals [4.748221780751802]
Deep Learning (DL) is often considered the state of the art for Artificial Intelligence-based medical decision support.
It remains sparsely implemented in clinical practice and poorly trusted by clinicians due to the insufficient interpretability of neural network models.
We tackle this issue by developing interpretable DL models for online detection of epileptic seizures from EEG signals.
arXiv Detail & Related papers (2020-12-22T11:10:23Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.