EEG-based Image Feature Extraction for Visual Classification using Deep
Learning
- URL: http://arxiv.org/abs/2209.13090v1
- Date: Tue, 27 Sep 2022 00:50:56 GMT
- Title: EEG-based Image Feature Extraction for Visual Classification using Deep
Learning
- Authors: Alankrit Mishra, Nikhil Raj and Garima Bajwa
- Abstract summary: We develop an efficient way of encoding EEG signals as images to facilitate a more nuanced understanding of brain signals with deep learning models.
Our image classification approach with combined EEG features achieved an accuracy of 82%, slightly below that of a pure deep learning approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While capable of segregating visual data, humans take time to examine
a single sample, let alone thousands or millions of them. Deep learning models,
by contrast, process sizeable volumes of information efficiently with the help
of modern-day computing. However, their questionable decision-making process has
raised considerable concerns. Recent studies have identified a new approach to
extract image features from EEG signals and combine them with standard image
features. These approaches make deep learning models more interpretable and also
enable faster convergence with fewer training samples. Inspired by recent
studies, we developed an efficient way of encoding EEG signals as images to
facilitate a more nuanced understanding of brain signals with deep learning
models. Using two variations of such encoding methods, we classified the encoded
EEG signals corresponding to 39 image classes with a benchmark accuracy of 70%
on the layered dataset of six subjects, which is significantly higher than that
of existing work. Our image classification approach with combined EEG features
achieved an accuracy of 82%, slightly below that of a pure deep learning
approach; nevertheless, it demonstrates the viability of the theory.
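To make the encoding step concrete, the sketch below shows one common way to turn a multichannel EEG trial into a grayscale image by stacking per-channel log-power spectrograms. This is an illustrative assumption rather than the paper's exact encoding; the channel count, sampling rate, and STFT settings are placeholders.

```python
import numpy as np
from scipy.signal import spectrogram

def eeg_to_image(eeg, fs=1000, nperseg=128, noverlap=64):
    """Encode a multichannel EEG trial (channels x samples) as a 2-D
    image by stacking per-channel log-power spectrograms."""
    panels = []
    for channel in eeg:                      # one spectrogram per channel
        _, _, sxx = spectrogram(channel, fs=fs,
                                nperseg=nperseg, noverlap=noverlap)
        panels.append(np.log1p(sxx))         # log power compresses dynamic range
    image = np.concatenate(panels, axis=0)   # (channels * freq_bins, time_bins)
    # Min-max normalise to [0, 1] so the array behaves like a grayscale image.
    return (image - image.min()) / (image.max() - image.min() + 1e-8)

# Example: a hypothetical 128-channel, 0.5 s trial sampled at 1 kHz.
trial = np.random.randn(128, 500)
img = eeg_to_image(trial)
print(img.shape)  # (8320, 6): 128 channels * 65 freq bins, 6 time bins
```

The resulting array can then be fed to any standard image classification network.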
Related papers
- Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation [53.95204595640208]
Data-Free Knowledge Distillation (DFKD) is an advanced technique that enables knowledge transfer from a teacher model to a student model without relying on original training data.
Previous approaches have generated synthetic images at high resolutions without leveraging information from real images.
The proposed method, MUSE, generates images at lower resolutions while using Class Activation Maps (CAMs) to ensure that the generated images retain critical, class-specific features.
arXiv Detail & Related papers (2024-11-26T02:23:31Z)
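As background for the CAM mechanism mentioned in the entry above, here is a minimal sketch of the standard class activation map computation (Zhou et al., 2016) for a CNN with global average pooling and a linear classification head; the shapes and names are illustrative and not taken from the MUSE paper.

```python
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weight, class_idx):
    """Standard CAM: weight the final conv feature maps by the
    classifier weights of one class (hypothetical shapes below).

    feature_maps: (C, H, W) activations from the last conv layer
    fc_weight:    (num_classes, C) weights of the GAP -> linear head
    """
    weights = fc_weight[class_idx]                     # (C,)
    cam = torch.einsum('c,chw->hw', weights, feature_maps)
    cam = F.relu(cam)                                  # keep positive evidence only
    return cam / (cam.max() + 1e-8)                    # normalise to [0, 1]

# Toy example: 512 feature maps of size 7x7 and a 39-class head.
fmap, w = torch.randn(512, 7, 7), torch.randn(39, 512)
cam = class_activation_map(fmap, w, class_idx=3)
print(cam.shape)  # torch.Size([7, 7]); upsample for an image-sized heatmap
```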
- Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping Contrastive Learning [2.087148326341881]
This paper introduces a MUltimodal Similarity-keeping contrastivE learning framework for zero-shot EEG-based image classification.
We develop a series of multivariate time-series encoders tailored for EEG signals and assess the efficacy of regularized contrastive EEG-Image pretraining.
Our method achieves state-of-the-art performance, with a top-1 accuracy of 19.3% and a top-5 accuracy of 48.8% in 200-way zero-shot image classification.
arXiv Detail & Related papers (2024-06-05T16:42:23Z)
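To illustrate the kind of objective such EEG-image pretraining builds on, the sketch below shows a generic CLIP-style symmetric InfoNCE loss over paired EEG and image embeddings. The paper's additional similarity-keeping regularization is not reproduced here; all dimensions are placeholders.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(eeg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE over paired EEG/image embeddings (batch, dim);
    row i of each tensor is assumed to be a matched EEG-image pair."""
    eeg = F.normalize(eeg_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)
    logits = eeg @ img.t() / temperature              # (batch, batch) similarities
    targets = torch.arange(logits.size(0))            # matched pairs on the diagonal
    loss_e2i = F.cross_entropy(logits, targets)       # EEG -> image retrieval
    loss_i2e = F.cross_entropy(logits.t(), targets)   # image -> EEG retrieval
    return 0.5 * (loss_e2i + loss_i2e)

# Zero-shot classification then reduces to nearest-neighbour search: embed
# each class's images once, embed a test EEG trial, pick the closest class.
eeg_batch, img_batch = torch.randn(32, 256), torch.randn(32, 256)
print(contrastive_loss(eeg_batch, img_batch))
```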
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Learning Robust Deep Visual Representations from EEG Brain Recordings [13.768240137063428]
This study proposes a two-stage method where the first step is to obtain EEG-derived features for robust learning of deep representations.
We demonstrate the generalizability of our feature extraction pipeline across three different datasets using deep-learning architectures.
We propose a novel framework to transform unseen images into the EEG space and reconstruct approximations of them.
arXiv Detail & Related papers (2023-10-25T10:26:07Z)
- Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of such an unbalanced distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the greedy data needs of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Genetic Programming-Based Evolutionary Deep Learning for Data-Efficient Image Classification [3.9310727060473476]
This paper proposes a new genetic programming-based evolutionary deep learning approach to data-efficient image classification.
The new approach can automatically evolve variable-length models using many important operators from both image and classification domains.
A flexible multi-layer representation enables the new approach to automatically construct shallow or deep models/trees for different tasks.
arXiv Detail & Related papers (2022-09-27T08:10:16Z)
- Learning Efficient Representations for Enhanced Object Detection on Large-scene SAR Images [16.602738933183865]
Detecting and recognizing targets in complex large-scene Synthetic Aperture Radar (SAR) images is a challenging problem.
Recently developed deep learning algorithms can automatically learn the intrinsic features of SAR images.
We propose an efficient and robust deep-learning-based target detection method.
arXiv Detail & Related papers (2022-01-22T03:25:24Z)
- AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method is able to represent images in a low-dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z)
- Contrastive Learning with Stronger Augmentations [63.42057690741711]
We propose a general framework called Contrastive Learning with Stronger Augmentations (CLSA) to complement current contrastive learning approaches.
Here, the distribution divergence between the weakly and strongly augmented images over the representation bank is adopted to supervise the retrieval of strongly augmented queries.
Experiments showed that the information from strongly augmented images can significantly boost performance.
arXiv Detail & Related papers (2021-04-15T18:40:04Z)
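As a rough sketch of the divergence-based supervision described in the entry above: the strongly augmented view is trained to match the weakly augmented view's similarity distribution over a representation bank, rather than a hard instance label. The implementation below is an approximation for illustration, not the paper's exact loss; all shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def divergence_loss(weak_emb, strong_emb, bank, temperature=0.2):
    """Cross-entropy between the weak view's and the strong view's
    similarity distributions over a representation bank.

    weak_emb, strong_emb: (batch, dim); bank: (bank_size, dim).
    """
    weak = F.normalize(weak_emb, dim=-1)
    strong = F.normalize(strong_emb, dim=-1)
    bank = F.normalize(bank, dim=-1)
    p_weak = F.softmax(weak @ bank.t() / temperature, dim=-1)         # target
    log_p_strong = F.log_softmax(strong @ bank.t() / temperature, dim=-1)
    # The weak view's distribution is treated as a fixed teacher signal.
    return -(p_weak.detach() * log_p_strong).sum(dim=-1).mean()

# Toy usage with random embeddings and a hypothetical 4096-entry bank.
w, s = torch.randn(32, 128), torch.randn(32, 128)
memory_bank = torch.randn(4096, 128)
print(divergence_loss(w, s, memory_bank))
```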
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)