On the utility of power spectral techniques with feature selection
techniques for effective mental task classification in noninvasive BCI
- URL: http://arxiv.org/abs/2111.08154v1
- Date: Tue, 16 Nov 2021 00:27:53 GMT
- Title: On the utility of power spectral techniques with feature selection
techniques for effective mental task classification in noninvasive BCI
- Authors: Akshansh Gupta, Ramesh Kumar Agrawal, Jyoti Singh Kirar, Javier
Andreu-Perez, Wei-Ping Ding, Chin-Teng Lin, Mukesh Prasad
- Abstract summary: This paper proposes an approach to select relevant and non-redundant spectral features for mental task classification.
The findings demonstrate substantial improvements in the performance of the learning model for mental task classification.
- Score: 19.19039983741124
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, the classification of mental tasks for Brain-Computer Interfaces
(BCI) is investigated, as this is a dominant area of research in noninvasive BCI and of
particular interest because such systems can augment the lives of people with severe
disabilities. The performance of a BCI model depends primarily on the size of the feature
vector, which is obtained from multiple channels. In mental task classification, the ratio
of available training samples to features is very small. Feature selection is therefore
often used to improve this ratio by discarding irrelevant and redundant features. This
paper proposes an approach to selecting relevant and non-redundant spectral features for
mental task classification, using four well-known multivariate feature selection methods,
namely Bhattacharyya's Distance, Ratio of Scatter Matrices, Linear Regression, and Minimum
Redundancy Maximum Relevance (mRMR). This work also presents a comparative analysis of
multivariate and univariate feature selection for mental task classification. The findings
demonstrate substantial improvements in the performance of the learning model for mental
task classification. Moreover, the efficacy of the proposed approach is supported by a
robust ranking algorithm and Friedman's statistical test, which are used to find and
compare the best combinations of power spectral density estimation and feature selection
methods.
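The processing chain described in the abstract (power spectral density features from multichannel EEG, selection of a discriminative subset, and a Friedman test over method combinations) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the sampling rate, frequency band, data shapes, and the per-feature (univariate) Bhattacharyya criterion are assumptions made for brevity; the paper itself applies multivariate criteria such as scatter-matrix ratios and mRMR.
```python
import numpy as np
from scipy.signal import welch
from scipy.stats import friedmanchisquare

def psd_features(trials, fs=256, nperseg=256):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_features) log-PSD matrix."""
    feats = []
    for trial in trials:
        f, pxx = welch(trial, fs=fs, nperseg=nperseg, axis=-1)   # Welch PSD per channel
        band = (f >= 4) & (f <= 40)                              # illustrative 4-40 Hz band
        feats.append(np.log(pxx[:, band] + 1e-12).ravel())       # log-power, flattened over channels
    return np.asarray(feats)

def bhattacharyya_scores(X, y):
    """Per-feature Bhattacharyya distance between two classes under a Gaussian assumption."""
    a, b = X[y == 0], X[y == 1]
    m0, m1 = a.mean(axis=0), b.mean(axis=0)
    v0, v1 = a.var(axis=0) + 1e-12, b.var(axis=0) + 1e-12
    return (0.25 * (m0 - m1) ** 2 / (v0 + v1)
            + 0.5 * np.log((v0 + v1) / (2.0 * np.sqrt(v0 * v1))))

# Synthetic stand-in for EEG trials: 40 trials, 6 channels, 2 s at 256 Hz.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 6, 512))
labels = np.repeat([0, 1], 20)

X = psd_features(trials)
top = np.argsort(bhattacharyya_scores(X, labels))[::-1][:30]     # keep the 30 best-ranked features
X_selected = X[:, top]

# Friedman's test over the accuracies of several PSD/feature-selection combinations
# (one accuracy vector per combination, measured on the same subjects or folds).
acc_a, acc_b, acc_c = rng.random((3, 10))
stat, p_value = friedmanchisquare(acc_a, acc_b, acc_c)
print(f"selected {X_selected.shape[1]} features, Friedman chi2={stat:.2f}, p={p_value:.3f}")
```
In practice the selected features would feed a classifier evaluated per subject, and the Friedman test would be run over the per-subject accuracies of each power spectral density and feature selection combination.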
Related papers
- Enhancing Classification Performance via Reinforcement Learning for
Feature Selection [0.0]
This study investigates the importance of effective feature selection in enhancing the performance of classification models.
By employing reinforcement learning (RL) algorithms, specifically Q-learning (QL) and SARSA learning, this paper addresses the feature selection challenge.
arXiv Detail & Related papers (2024-03-09T18:34:59Z)
- Greedy feature selection: Classifier-dependent feature selection via
greedy methods [2.4374097382908477]
The purpose of this study is to introduce a new approach to feature ranking for classification tasks, referred to in what follows as greedy feature selection.
The benefits of such scheme are investigated theoretically in terms of model capacity indicators, such as the Vapnik-Chervonenkis (VC) dimension or the kernel alignment.
arXiv Detail & Related papers (2024-03-08T08:12:05Z)
- Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model trained with a joint objective of sequential reconstruction, variational, and performance-evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z)
- A Contrast Based Feature Selection Algorithm for High-dimensional Data
set in Machine Learning [9.596923373834093]
We propose a novel filter feature selection method, ContrastFS, which selects discriminative features based on the discrepancies that features exhibit between different classes.
We validate the effectiveness and efficiency of our approach on several widely studied benchmark datasets; results show that the new method performs favorably with negligible computation.
arXiv Detail & Related papers (2024-01-15T05:32:35Z)
- Graph-Based Automatic Feature Selection for Multi-Class Classification
via Mean Simplified Silhouette [4.786337974720721]
This paper introduces a novel graph-based filter method for automatic feature selection (abbreviated as GB-AFS).
The method determines the minimum combination of features required to sustain prediction performance.
It does not require any user-defined parameters such as the number of features to select.
arXiv Detail & Related papers (2023-09-05T14:37:31Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature
Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
Our proposed algorithm seems to be more accurate and efficient compared with existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- Multivariate feature ranking of gene expression data [62.997667081978825]
We propose two new multivariate feature ranking methods based on pairwise correlation and pairwise consistency.
We statistically show that the proposed methods outperform the state-of-the-art feature ranking methods Clustering Variation, Chi Squared, Correlation, Information Gain, ReliefF and Significance.
arXiv Detail & Related papers (2021-11-03T17:19:53Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
- Channel DropBlock: An Improved Regularization Method for Fine-Grained
Visual Classification [58.07257910065007]
Existing approaches mainly tackle this problem by introducing attention mechanisms to locate the discriminative parts or feature encoding approaches to extract the highly parameterized features in a weakly-supervised fashion.
In this work, we propose a lightweight yet effective regularization method named Channel DropBlock (CDB) in combination with two alternative correlation metrics, to address this problem.
arXiv Detail & Related papers (2021-06-07T09:03:02Z)
- Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
arXiv Detail & Related papers (2020-05-02T06:41:20Z)
- A novel embedded min-max approach for feature selection in nonlinear
support vector machine classification [0.0]
We propose an embedded feature selection method based on a min-max optimization problem.
By leveraging duality theory, we equivalently reformulate the min-max problem and solve it without further ado.
The efficiency and usefulness of our approach are tested on several benchmark data sets.
arXiv Detail & Related papers (2020-04-21T09:40:38Z)