Exploring the Relationship between Brain Hemisphere States and Frequency Bands through Deep Learning Optimization Techniques
- URL: http://arxiv.org/abs/2509.14078v1
- Date: Wed, 17 Sep 2025 15:26:45 GMT
- Title: Exploring the Relationship between Brain Hemisphere States and Frequency Bands through Deep Learning Optimization Techniques
- Authors: Robiul Islam, Dmitry I. Ignatov, Karl Kaberg, Roman Nabatchikov
- Abstract summary: This study investigates classifier performance across EEG frequency bands using various optimizers and evaluates efficient class prediction for the left and right hemispheres. Adagrad and RMSprop consistently perform well across different frequency bands, with Adadelta exhibiting robust performance in cross-model evaluations. The deep dense network shows competitive performance in learning complex patterns, whereas the shallow three-layer network, though sometimes less accurate, provides computational efficiency.
- Score: 3.966519779235704
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study investigates classifier performance across EEG frequency bands using various optimizers and evaluates efficient class prediction for the left and right hemispheres. Three neural network architectures - a deep dense network, a shallow three-layer network, and a convolutional neural network (CNN) - are implemented and compared using the TensorFlow and PyTorch frameworks. Results indicate that the Adagrad and RMSprop optimizers consistently perform well across different frequency bands, with Adadelta exhibiting robust performance in cross-model evaluations. Specifically, Adagrad excels in the beta band, while RMSprop achieves superior performance in the gamma band. Conversely, SGD and FTRL exhibit inconsistent performance. Among the models, the CNN demonstrates the second highest accuracy, particularly in capturing spatial features of EEG data. The deep dense network shows competitive performance in learning complex patterns, whereas the shallow three-layer network, though sometimes less accurate, provides computational efficiency. SHAP (Shapley Additive Explanations) plots are employed to identify efficient class prediction, revealing nuanced contributions of EEG frequency bands to model accuracy. Overall, the study highlights the importance of optimizer selection, model architecture, and EEG frequency band analysis in enhancing classifier performance and understanding feature importance in neuroimaging-based classification tasks.
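The optimizer comparison at the heart of the abstract can be illustrated with a small, self-contained sketch. This is not the paper's code: the sampling rate, band edges, synthetic two-class "EEG" trials, and the logistic-regression classifier are all illustrative assumptions. It implements the two update rules the study found strongest, Adagrad (accumulating squared gradients over all history) and RMSprop (exponentially decaying that history), applied to band-power features extracted with an FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 256  # sampling rate in Hz (assumed for this sketch)
# Canonical EEG frequency bands (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(signal, fs=FS):
    """Mean spectral power of `signal` in each canonical EEG band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

def make_trial(label, n=FS * 2):
    """Synthetic 2-second trial: class 1 carries extra beta-band power."""
    t = np.arange(n) / FS
    x = rng.normal(0.0, 1.0, n)                # broadband noise
    if label == 1:
        x += 2.0 * np.sin(2 * np.pi * 20 * t)  # 20 Hz sits in the beta band
    return band_powers(x)

y = np.array([0] * 100 + [1] * 100)
X = np.array([make_trial(lbl) for lbl in y])
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardise features

def train(optim, steps=300, lr=0.1, eps=1e-8, rho=0.9):
    """Logistic regression with a per-parameter adaptive step size."""
    w, b = np.zeros(X.shape[1]), 0.0
    G = np.zeros(X.shape[1] + 1)               # squared-gradient state
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))         # sigmoid
        g = np.append(X.T @ (p - y) / len(y), (p - y).mean())
        if optim == "adagrad":
            G += g ** 2                        # accumulate full history
        else:                                  # rmsprop: decayed history
            G = rho * G + (1 - rho) * g ** 2
        step = lr * g / (np.sqrt(G) + eps)
        w -= step[:-1]
        b -= step[-1]
    preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
    return np.mean(preds == y)

for name in ("adagrad", "rmsprop"):
    print(f"{name}: train accuracy = {train(name):.2f}")
```

Because the beta-band feature cleanly separates the two synthetic classes, both optimizers converge quickly here; the study's point is that on real EEG their relative behaviour varies by band, which is exactly what a loop like this makes easy to probe.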
Related papers
- Self-Supervised Learning via Flow-Guided Neural Operator on Time-Series Data [57.85958428020496]
Flow-Guided Neural Operator (FGNO) is a novel framework combining operator learning with flow matching for SSL training. FGNO learns mappings in functional spaces by using the Short-Time Fourier Transform to unify different time resolutions. Unlike prior generative SSL methods that use noisy inputs during inference, we propose using clean inputs for representation extraction while learning representations with noise.
arXiv Detail & Related papers (2026-02-12T18:54:57Z) - Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach [0.0]
Kernel size selection in Convolutional Neural Networks (CNNs) is a critical but often overlooked design decision. This paper proposes Best Kernel Size Estimation (BKSEF) for optimal, layer-wise kernel size determination. BKSEF balances information gain, computational efficiency, and accuracy improvements by integrating principles from information theory, signal processing, and learning theory.
arXiv Detail & Related papers (2025-06-16T15:15:30Z) - Training Graph Neural Networks Using Non-Robust Samples [2.1937382384136637]
Graph Neural Networks (GNNs) are highly effective neural networks for processing graph-structured data. GNNs leverage both the graph structure, which represents the relationships between data points, and the feature matrix of the data to optimize their feature representation. This paper proposes a novel method for selecting noise-sensitive training samples from the original training set to construct a smaller yet more effective training set for model training.
arXiv Detail & Related papers (2024-12-19T11:10:48Z) - Enhancing Fast Feed Forward Networks with Load Balancing and a Master Leaf Node [49.08777822540483]
Fast feedforward networks (FFFs) exploit the observation that different regions of the input space activate distinct subsets of neurons in wide networks.
We propose the incorporation of load balancing and Master Leaf techniques into the FFF architecture to improve performance and simplify the training process.
arXiv Detail & Related papers (2024-05-27T05:06:24Z) - Leveraging Frequency Domain Learning in 3D Vessel Segmentation [50.54833091336862]
In this study, we leverage Fourier domain learning as a substitute for multi-scale convolutional kernels in 3D hierarchical segmentation models.
We show that our novel network achieves remarkable dice performance (84.37% on ASACA500 and 80.32% on ImageCAS) in tubular vessel segmentation tasks.
arXiv Detail & Related papers (2024-01-11T19:07:58Z) - Optimizing Neural Network Scale for ECG Classification [1.8953148404648703]
We study scaling convolutional neural networks (CNNs), specifically targeting Residual neural networks (ResNet), for analyzing electrocardiograms (ECGs).
We explored and demonstrated an efficient approach to scale ResNet by examining the effects of crucial parameters, including layer depth, the number of channels, and the convolution kernel size.
Our findings provide insight into obtaining more efficient and accurate models with fewer computing resources or less time.
arXiv Detail & Related papers (2023-08-24T01:26:31Z) - Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate how TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z) - From Sound Representation to Model Robustness [82.21746840893658]
We investigate the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
Averaged over various experiments on three environmental sound datasets, we found the ResNet-18 model outperforms other deep learning architectures.
arXiv Detail & Related papers (2020-07-27T17:30:49Z) - FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 makes up a family of state-of-the-art compact neural networks that outperform both automatically and manually-designed competitors.
arXiv Detail & Related papers (2020-06-03T05:20:21Z) - Classification of Hand Gestures from Wearable IMUs using Deep Neural Network [0.0]
An Inertial Measurement Unit (IMU) consists of tri-axial accelerometers and gyroscopes which can together be used for formation analysis.
The paper presents a novel classification approach using a Deep Neural Network (DNN) for classifying hand gestures obtained from wearable IMU sensors.
arXiv Detail & Related papers (2020-04-27T01:08:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.