EEG-based Texture Roughness Classification in Active Tactile Exploration
with Invariant Representation Learning Networks
- URL: http://arxiv.org/abs/2102.08976v1
- Date: Wed, 17 Feb 2021 19:07:13 GMT
- Title: EEG-based Texture Roughness Classification in Active Tactile Exploration
with Invariant Representation Learning Networks
- Authors: Ozan Ozdenizci, Safaa Eldeeb, Andac Demir, Deniz Erdogmus, Murat
Akcakaya
- Abstract summary: Multiple cortical brain regions are responsible for sensory recognition, perception and motor execution during sensorimotor processing.
The main goal of our work is to discriminate textured surfaces varying in roughness during active tactile exploration.
We use an adversarial invariant representation learning neural network architecture that performs EEG-based classification of different textured surfaces.
- Score: 8.021411285905849
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: During daily activities, humans use their hands to grasp surrounding objects
and perceive sensory information, which is also employed for perceptual and
motor goals. Multiple cortical brain regions are known to be responsible for
sensory recognition, perception and motor execution during sensorimotor
processing. While many studies focus on human sensorimotor control, the
interplay between motor execution and sensory processing is not yet fully
understood. The main goal of our work is to
discriminate textured surfaces varying in their roughness levels during active
tactile exploration using simultaneously recorded electroencephalogram (EEG)
data, while minimizing the variance of distinct motor exploration movement
patterns. We perform an experimental study with eight healthy participants who
were instructed to use the tip of their dominant hand index finger while
rubbing or tapping three different textured surfaces with varying levels of
roughness. We use an adversarial invariant representation learning neural
network architecture that performs EEG-based classification of different
textured surfaces, while simultaneously minimizing the discriminability of
motor movement conditions (i.e., rub or tap). Results show that the proposed
approach can discriminate between three different textured surfaces with
accuracies up to 70%, while suppressing movement related variability from
learned representations.
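The adversarial scheme described in the abstract (a feature encoder trained to classify texture while the gradient from a movement-condition adversary is reversed) can be sketched as follows. This is a minimal linear sketch, not the authors' architecture: the linear encoder, the softmax heads, the `lam` weighting, and all function names are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def one_hot(y, k):
    return np.eye(k)[y]

def train_invariant_encoder(X, y_task, y_nuis, latent=4, lam=1.0,
                            lr=0.5, epochs=500, seed=0):
    """Adversarial invariant representation learning, minimal linear sketch.

    The encoder W is trained to minimize the task (texture) loss while
    ASCENDING the adversary's (movement-condition) loss -- i.e. gradient
    reversal scaled by lam -- so the latent features keep task information
    but suppress the nuisance variable."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    kc, ka = y_task.max() + 1, y_nuis.max() + 1
    W = rng.normal(0.0, 0.1, (latent, d))   # shared feature encoder
    C = rng.normal(0.0, 0.1, (kc, latent))  # task classifier head
    A = rng.normal(0.0, 0.1, (ka, latent))  # adversary head
    Yc, Ya = one_hot(y_task, kc), one_hot(y_nuis, ka)
    for _ in range(epochs):
        Z = X @ W.T                          # latent features, (n, latent)
        Gc = (softmax(Z @ C.T) - Yc) / n     # dL_task/dlogits (softmax-CE)
        Ga = (softmax(Z @ A.T) - Ya) / n     # dL_adv/dlogits
        C -= lr * Gc.T @ Z                   # classifier descends L_task
        A -= lr * Ga.T @ Z                   # adversary descends L_adv
        dZ = Gc @ C - lam * (Ga @ A)         # reversed adversary gradient
        W -= lr * dZ.T @ X                   # encoder keeps task, drops nuisance
    return W, C, A
```

On synthetic data where the task label and a nuisance label each occupy a separate input direction, the task head stays accurate while the adversary's accuracy drops toward chance, mirroring the paper's goal of suppressing movement-related variability from the learned representations.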
Related papers
- Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition [23.505616142198487]
We develop a Pre-trained model based Multimodal Mood Reader for cross-subject emotion recognition.
The model learns universal latent representations of EEG signals through pre-training on large scale dataset.
Extensive experiments on public datasets demonstrate Mood Reader's superior performance in cross-subject emotion recognition tasks.
arXiv Detail & Related papers (2024-05-28T14:31:11Z)
- I am Only Happy When There is Light: The Impact of Environmental Changes on Affective Facial Expressions Recognition [65.69256728493015]
We study the impact of different image conditions on the recognition of arousal from human facial expressions.
Our results show how the interpretation of human affective states can differ greatly in either the positive or negative direction.
arXiv Detail & Related papers (2022-10-28T16:28:26Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
- Progressive Graph Convolution Network for EEG Emotion Recognition [35.08010382523394]
Studies in the area of neuroscience have revealed the relationship between emotional patterns and brain functional regions.
In EEG emotion recognition, we can observe that clearer boundaries exist between coarse-grained emotions than those between fine-grained emotions.
We propose a progressive graph convolution network (PGCN) for capturing this inherent characteristic in EEG emotional signals.
arXiv Detail & Related papers (2021-12-14T03:30:13Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Contrastive Learning of Subject-Invariant EEG Representations for Cross-Subject Emotion Recognition [9.07006689672858]
We propose a contrastive learning method for Inter-Subject Alignment (ISA) for reliable cross-subject emotion recognition.
ISA maximizes the similarity of EEG representations across subjects who received the same stimuli, in contrast to different ones.
A convolutional neural network with depthwise spatial convolution and temporal convolution layers was applied to learn inter-subject representations from raw EEG signals.
arXiv Detail & Related papers (2021-09-20T14:13:45Z)
- Preserving Privacy in Human-Motion Affect Recognition [4.753703852165805]
This work evaluates the effectiveness of existing methods at recognising emotions using both 3D temporal joint signals and manually extracted features.
We propose a cross-subject transfer learning technique for training a multi-encoder autoencoder deep neural network to learn disentangled latent representations of human motion features.
arXiv Detail & Related papers (2021-05-09T15:26:21Z)
- A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns the emotional discriminative information by highlighting the transferable EEG brain regions data and samples adaptively.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
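Several of the related papers above align representations across subjects or domains with a contrastive objective (e.g. the Inter-Subject Alignment work). A minimal InfoNCE-style sketch of that idea follows; the row-wise pairing convention and the temperature `tau` are illustrative assumptions, since the abstracts do not specify the exact loss.

```python
import numpy as np

def inter_subject_infonce(z_a, z_b, tau=0.1):
    """Contrastive alignment loss: row i of z_a and z_b are two subjects'
    representations of the SAME stimulus (positive pair); every other row
    of z_b serves as a negative. Lower loss means better cross-subject
    alignment."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = (z_a @ z_b.T) / tau                  # cosine similarity matrix
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.diag(log_p).mean()              # positive pairs on the diagonal
```

Well-aligned pairs (matching rows) yield a lower loss than mismatched pairs, which is the property these cross-subject methods exploit.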
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.