Human-Machine Cooperative Multimodal Learning Method for Cross-subject
Olfactory Preference Recognition
- URL: http://arxiv.org/abs/2311.14426v1
- Date: Fri, 24 Nov 2023 11:59:11 GMT
- Title: Human-Machine Cooperative Multimodal Learning Method for Cross-subject
Olfactory Preference Recognition
- Authors: Xiuxin Xia, Yuchen Guo, Yanwei Wang, Yuchao Yang, Yan Shi and Hong Men
- Abstract summary: Olfactory electroencephalogram (EEG) contains odor and individual features associated with human olfactory preference.
An E-nose and olfactory EEG multimodal learning method is proposed for cross-subject olfactory preference recognition.
- Score: 11.566318118981453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Odor sensory evaluation has a broad application in food, clothing, cosmetics,
and other fields. Traditional artificial sensory evaluation has poor
repeatability, and machine olfaction, represented by the electronic nose
(E-nose), struggles to reflect human feelings. Olfactory electroencephalogram
(EEG) contains odor and individual features associated with human olfactory
preference, which has unique advantages in odor sensory evaluation. However,
the difficulty of cross-subject olfactory EEG recognition greatly limits its
application. It is worth noting that E-nose and olfactory EEG are more
advantageous in representing odor information and individual emotions,
respectively. In this paper, an E-nose and olfactory EEG multimodal learning
method is proposed for cross-subject olfactory preference recognition. Firstly,
the olfactory EEG and E-nose multimodal data acquisition and preprocessing
paradigms are established. Secondly, a complementary multimodal data mining
strategy is proposed to effectively mine the common features of multimodal data
representing odor information and the individual features in olfactory EEG
representing individual emotional information. Finally, the cross-subject
olfactory preference recognition is achieved in 24 subjects by fusing the
extracted common and individual features, and the recognition performance is
superior to that of state-of-the-art methods. Furthermore, the
advantages of the proposed method in cross-subject olfactory preference
recognition indicate its potential for practical odor evaluation applications.
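The paper itself does not ship code, but the fusion step described in the abstract can be illustrated with a minimal sketch: project E-nose and EEG features into a shared "common" (odor) space, keep a separate "individual" (subject/emotion) branch from EEG, then classify the concatenation. The module names, feature dimensions, and the simple concatenation-plus-MLP classifier below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: fuse "common" odor features (shared by E-nose and EEG)
# with "individual" EEG features, then classify olfactory preference.
# Dimensions and module names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class PreferenceFusionNet(nn.Module):
    def __init__(self, enose_dim=64, eeg_dim=128, common_dim=32, indiv_dim=32, n_classes=2):
        super().__init__()
        # Project each modality into a shared "common" space (odor information).
        self.enose_common = nn.Linear(enose_dim, common_dim)
        self.eeg_common = nn.Linear(eeg_dim, common_dim)
        # Separate branch for subject-specific ("individual") EEG information.
        self.eeg_individual = nn.Linear(eeg_dim, indiv_dim)
        # Classifier over the concatenated common + individual features.
        self.classifier = nn.Sequential(
            nn.Linear(common_dim + indiv_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, enose_x, eeg_x):
        common = 0.5 * (self.enose_common(enose_x) + self.eeg_common(eeg_x))
        individual = self.eeg_individual(eeg_x)
        return self.classifier(torch.cat([common, individual], dim=-1))

model = PreferenceFusionNet()
logits = model(torch.randn(8, 64), torch.randn(8, 128))  # batch of 8 trials
print(logits.shape)  # torch.Size([8, 2])
```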
Related papers
- Smile upon the Face but Sadness in the Eyes: Emotion Recognition based on Facial Expressions and Eye Behaviors [63.194053817609024]
We introduce eye behaviors as important emotional cues for the creation of a new Eye-behavior-aided Multimodal Emotion Recognition (EMER) dataset.
For the first time, we provide annotations for both Emotion Recognition (ER) and Facial Expression Recognition (FER) in the EMER dataset.
We specifically design a new EMERT architecture to concurrently enhance performance in both ER and FER.
arXiv Detail & Related papers (2024-11-08T04:53:55Z)
- Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
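The summary does not spell out the contrastive objective; one plausible reading is a standard NT-Xent-style loss over two views (e.g., time- and frequency-domain embeddings) of the same EEG trial, sketched below under that assumption rather than as JCFA's exact loss.

```python
# Hypothetical NT-Xent-style contrastive loss over two "views" of the same EEG
# segment (e.g., time-domain and frequency-domain embeddings). The pairing and
# temperature are illustrative assumptions, not JCFA's exact objective.
import torch
import torch.nn.functional as F

def ntxent_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same trials."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2B, dim)
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    batch = z1.size(0)
    # The positive for sample i is its other view at index (i + batch) % 2B.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)

loss = ntxent_loss(torch.randn(16, 64), torch.randn(16, 64))
```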
arXiv Detail & Related papers (2024-04-15T08:21:17Z)
- Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning [3.2536246345549538]
We propose ME-MHACL, a self-supervised contrastive learning-based multimodal emotion recognition method.
We apply the trained feature extractor to labeled electrophysiological signals and use multi-head attention mechanisms for feature fusion.
Our method outperformed existing benchmark methods in emotion recognition tasks and had good cross-individual generalization ability.
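A minimal sketch of multi-head attention feature fusion in the spirit of this summary, assuming each electrophysiological modality contributes one feature token; the token layout and dimensions are hypothetical, not the ME-MHACL configuration.

```python
# Hypothetical fusion of multimodal electrophysiological features with
# multi-head attention: treat each modality's feature vector as a token and
# let self-attention mix them. Dimensions and token layout are assumptions.
import torch
import torch.nn as nn

feat_dim, n_heads = 64, 4
attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=n_heads, batch_first=True)

batch = 8
eeg_feat = torch.randn(batch, 1, feat_dim)    # one token for the EEG features
other_feat = torch.randn(batch, 1, feat_dim)  # one token for a second modality
tokens = torch.cat([eeg_feat, other_feat], dim=1)   # (batch, 2, feat_dim)

fused, attn_weights = attn(tokens, tokens, tokens)  # self-attention over modalities
fused = fused.mean(dim=1)                           # (batch, feat_dim) pooled representation
```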
arXiv Detail & Related papers (2023-07-12T05:55:40Z)
- Contrastive Learning of Subject-Invariant EEG Representations for Cross-Subject Emotion Recognition [9.07006689672858]
We propose a Contrastive Learning method for Inter-Subject Alignment (ISA) for reliable cross-subject emotion recognition.
ISA maximizes the similarity of EEG signals across subjects when they receive the same stimuli, in contrast to different ones.
A convolutional neural network with depthwise spatial convolution and temporal convolution layers was applied to learn inter-subject representations from raw EEG signals.
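A minimal EEGNet-style sketch of the described encoder (a temporal convolution followed by a depthwise spatial convolution across electrodes); kernel sizes, channel counts, and pooling are assumptions rather than the paper's exact configuration.

```python
# Hypothetical temporal + depthwise-spatial convolutional encoder for raw EEG.
# Kernel sizes, channel counts, and pooling are illustrative assumptions.
import torch
import torch.nn as nn

n_electrodes, n_samples, n_classes = 32, 256, 2

net = nn.Sequential(
    # Temporal convolution along time, applied to the (1 x electrodes x time) input.
    nn.Conv2d(1, 8, kernel_size=(1, 65), padding=(0, 32), bias=False),
    nn.BatchNorm2d(8),
    # Depthwise spatial convolution: filters span all electrodes at once,
    # grouped per temporal feature map.
    nn.Conv2d(8, 16, kernel_size=(n_electrodes, 1), groups=8, bias=False),
    nn.BatchNorm2d(16),
    nn.ELU(),
    nn.AvgPool2d((1, 4)),
    nn.Flatten(),
    nn.Linear(16 * (n_samples // 4), n_classes),
)

x = torch.randn(4, 1, n_electrodes, n_samples)  # (batch, 1, electrodes, time)
print(net(x).shape)                             # torch.Size([4, 2])
```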
arXiv Detail & Related papers (2021-09-20T14:13:45Z)
- Attentive Cross-modal Connections for Deep Multimodal Wearable-based Emotion Recognition [7.559720049837459]
We present a novel attentive cross-modal connection to share information between convolutional neural networks.
Specifically, these connections improve emotion classification by sharing intermediate representations between EDA and ECG.
Our experiments show that the proposed approach is capable of learning strong multimodal representations and outperforms a number of baseline methods.
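One way such a connection could look is a simple learned gate that exchanges intermediate EDA and ECG feature maps between the two CNN streams; the sketch below is an assumption about the mechanism, not the authors' exact design.

```python
# Hypothetical attentive cross-modal connection: intermediate EDA and ECG
# feature maps exchange information through learned per-channel weights.
# The gating scheme and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalConnection(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Learn per-channel mixing weights from both modalities' pooled features.
        self.gate = nn.Sequential(nn.Linear(2 * channels, 2 * channels), nn.Sigmoid())

    def forward(self, eda_feat, ecg_feat):
        # eda_feat, ecg_feat: (batch, channels, time) intermediate CNN features.
        pooled = torch.cat([eda_feat.mean(-1), ecg_feat.mean(-1)], dim=1)
        w = self.gate(pooled).unsqueeze(-1)        # (batch, 2*channels, 1)
        w_eda, w_ecg = w.chunk(2, dim=1)
        # Each stream receives an attention-weighted copy of the other stream.
        eda_out = eda_feat + w_eda * ecg_feat
        ecg_out = ecg_feat + w_ecg * eda_feat
        return eda_out, ecg_out

conn = CrossModalConnection(channels=32)
eda, ecg = conn(torch.randn(8, 32, 100), torch.randn(8, 32, 100))
```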
arXiv Detail & Related papers (2021-08-04T18:40:32Z)
- Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
arXiv Detail & Related papers (2021-05-18T20:47:45Z)
- Detecting Human-Object Interaction via Fabricated Compositional Learning [106.37536031160282]
Human-Object Interaction (HOI) detection is a fundamental task for high-level scene understanding.
Humans have an extremely powerful compositional perception ability to recognize rare or unseen HOI samples.
We propose Fabricated Compositional Learning (FCL) to address the problem of open long-tailed HOI detection.
arXiv Detail & Related papers (2021-03-15T08:52:56Z)
- Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via the emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotionally discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
- Investigating EEG-Based Functional Connectivity Patterns for Multimodal Emotion Recognition [8.356765961526955]
We investigate three functional connectivity network features: strength, clustering coefficient, and eigenvector centrality.
The discrimination ability of the EEG connectivity features in emotion recognition is evaluated on three public EEG datasets.
We construct a multimodal emotion recognition model by combining the functional connectivity features from EEG and the features from eye movements or physiological signals.
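The three named graph features can be computed from a weighted EEG connectivity matrix as in the sketch below; the connectivity measure itself (e.g., correlation or phase locking) and the use of networkx are assumptions, with a random symmetric matrix standing in for real data.

```python
# Hypothetical extraction of node strength, clustering coefficient, and
# eigenvector centrality from a weighted EEG connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels = 32
conn = np.abs(rng.standard_normal((n_channels, n_channels)))
conn = (conn + conn.T) / 2          # symmetric weights between electrode pairs
np.fill_diagonal(conn, 0.0)

G = nx.from_numpy_array(conn)       # weighted, undirected graph over electrodes

strength = dict(G.degree(weight="weight"))                         # node strength
clustering = nx.clustering(G, weight="weight")                     # weighted clustering coefficient
centrality = nx.eigenvector_centrality_numpy(G, weight="weight")   # eigenvector centrality

# Stack into a per-electrode feature matrix usable by a downstream classifier.
features = np.array([[strength[i], clustering[i], centrality[i]] for i in range(n_channels)])
print(features.shape)   # (32, 3)
```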
arXiv Detail & Related papers (2020-04-04T16:51:56Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.