SCVCNet: Sliding cross-vector convolution network for cross-task and
inter-individual-set EEG-based cognitive workload recognition
- URL: http://arxiv.org/abs/2310.03749v1
- Date: Thu, 21 Sep 2023 13:06:30 GMT
- Title: SCVCNet: Sliding cross-vector convolution network for cross-task and
inter-individual-set EEG-based cognitive workload recognition
- Authors: Qi Wang, Li Chen, Zhiyuan Zhan, Jianhua Zhang, Zhong Yin
- Abstract summary: This paper presents a generic approach for applying the cognitive workload recognizer by exploiting common electroencephalogram (EEG) patterns across different human-machine tasks and individual sets.
We propose a neural network called SCVCNet, which eliminates task- and individual-set-related interferences in EEGs by analyzing finer-grained frequency structures in the power spectral densities.
- Score: 15.537230343119875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a generic approach for applying a cognitive workload
recognizer by exploiting common electroencephalogram (EEG) patterns across
different human-machine tasks and individual sets. We propose a neural network
called SCVCNet, which eliminates task- and individual-set-related interferences
in EEGs by analyzing finer-grained frequency structures in the power spectral
densities. The SCVCNet utilizes a sliding cross-vector convolution (SCVC)
operation, where paired input layers representing the theta and alpha power are
employed. By extracting the weights from a kernel matrix's central row and
column, we compute the weighted sum of the two vectors around a specified scalp
location. Next, we introduce an inter-frequency-point feature integration
module to fuse the SCVC feature maps. Finally, we combine the two modules with
the output-channel pooling and classification layers to construct the model. To
train the SCVCNet, we employ the regularized least-squares method with ridge
regression and extreme learning machine theory. We validate its performance
using three databases, each consisting of distinct tasks performed by
independent participant groups. The average accuracy (0.6813 and 0.6229) and F1
score (0.6743 and 0.6076) achieved under two different validation paradigms are
partially higher than those reported in previous works. All features and
algorithms are available at: https://github.com/7ohnKeats/SCVCNet.
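As an illustration, below is a minimal NumPy sketch of the SCVC operation as we read it from the abstract: the kernel matrix's central row weights a horizontal vector from one power map and its central column weights a vertical vector from the other, and the two weighted sums are added at each scalp location. The pairing of row/column with the theta/alpha maps, the square odd-sized kernel, and the 'valid' sliding range are assumptions, not details confirmed by the paper.

```python
import numpy as np

def scvc(theta_map, alpha_map, kernel, loc):
    """One sliding cross-vector convolution (SCVC) step at scalp location `loc`."""
    k = kernel.shape[0] // 2               # kernel assumed square with odd size
    r, c = loc
    row_w = kernel[k, :]                   # central row of the kernel matrix
    col_w = kernel[:, k]                   # central column of the kernel matrix
    h_vec = theta_map[r, c - k:c + k + 1]  # horizontal vector around the location
    v_vec = alpha_map[r - k:r + k + 1, c]  # vertical vector around the location
    # Weighted sum of the two cross-shaped vectors (assumed combination rule).
    return float(row_w @ h_vec + col_w @ v_vec)

def scvc_feature_map(theta_map, alpha_map, kernel):
    """Slide the SCVC over all locations where the cross fits ('valid' padding assumed)."""
    k = kernel.shape[0] // 2
    H, W = theta_map.shape
    out = np.empty((H - 2 * k, W - 2 * k))
    for i in range(k, H - k):
        for j in range(k, W - k):
            out[i - k, j - k] = scvc(theta_map, alpha_map, kernel, (i, j))
    return out
```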
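The abstract also states that training uses regularized least squares with ridge regression and extreme learning machine (ELM) theory, under which the output weights have a closed form. A minimal sketch, assuming a hidden feature matrix H produced by the preceding layers, one-hot targets T, and a hypothetical regularization strength lam:

```python
import numpy as np

def elm_ridge_fit(H, T, lam=1e-2):
    """Closed-form output weights via ridge-regularized least squares:
    beta = (H^T H + lam * I)^{-1} H^T T, as in ELM theory.
    H: (n_samples, n_features) hidden-layer outputs;
    T: (n_samples, n_classes) one-hot targets.
    """
    n_feat = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n_feat), H.T @ T)

def elm_predict(H, beta):
    """Class decision: argmax over the linear read-out."""
    return np.argmax(H @ beta, axis=1)
```

In ELM-style training, only these output weights are fitted in closed form; the feature-extraction stages are fixed during this step.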
Related papers
- Y-CA-Net: A Convolutional Attention Based Network for Volumetric Medical Image Segmentation [47.12719953712902]
Discriminative local features are key components for the performance of attention-based volumetric segmentation (VS) methods.
We combine a convolutional encoder branch with a transformer backbone to extract local and global features in parallel.
Y-CT-Net achieves competitive performance on multiple medical segmentation tasks.
arXiv Detail & Related papers (2024-10-01T18:50:45Z) - 4D ASR: Joint Beam Search Integrating CTC, Attention, Transducer, and Mask Predict Decoders [53.297697898510194]
We propose a joint modeling scheme where four decoders share the same encoder -- we refer to this as 4D modeling.
To efficiently train the 4D model, we introduce a two-stage training strategy that stabilizes multitask learning.
In addition, we propose three novel one-pass beam search algorithms by combining three decoders.
arXiv Detail & Related papers (2024-06-05T05:18:20Z) - Community detection in complex networks via node similarity, graph
representation learning, and hierarchical clustering [4.264842058017711]
Community detection is a critical challenge in analysing real graphs.
This article proposes three new, general, hierarchical frameworks to deal with this task.
We compare over a hundred module combinations on the Block Model graphs and real-life datasets.
arXiv Detail & Related papers (2023-03-21T22:12:53Z) - 3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D
Point Clouds [95.54285993019843]
We propose a method for joint detection and tracking of multiple objects in 3D point clouds.
Our model exploits temporal information, employing multiple frames to detect objects and track them in a single network.
arXiv Detail & Related papers (2022-11-01T20:59:38Z) - SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud
Representation [65.4396959244269]
The paper tackles the challenge by designing a general framework to construct 3D learning architectures with SO(3) equivariance and network binarization.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a great trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Multi-scale and Cross-scale Contrastive Learning for Semantic
Segmentation [5.281694565226513]
We apply contrastive learning to enhance the discriminative power of the multi-scale features extracted by semantic segmentation networks.
By first mapping the encoder's multi-scale representations to a common feature space, we instantiate a novel form of supervised local-global constraint.
arXiv Detail & Related papers (2022-03-25T01:24:24Z) - Deep ensembles in bioimage segmentation [74.01883650587321]
In this work, we propose an ensemble of convolutional neural networks (CNNs).
In ensemble methods, many different models are trained and then used for classification; the ensemble aggregates the outputs of the individual classifiers.
The proposed ensemble is implemented by combining different backbone networks with the DeepLabV3+ and HarDNet architectures.
arXiv Detail & Related papers (2021-12-24T05:54:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.