Neural Brain Fields: A NeRF-Inspired Approach for Generating Nonexistent EEG Electrodes
- URL: http://arxiv.org/abs/2601.00012v1
- Date: Sat, 20 Dec 2025 21:20:18 GMT
- Title: Neural Brain Fields: A NeRF-Inspired Approach for Generating Nonexistent EEG Electrodes
- Authors: Shahar Ain Kedem, Itamar Zimerman, Eliya Nachmani
- Abstract summary: We show that a neural network can be trained on a single EEG sample in a NeRF-style manner to produce a fixed-size and informative weight vector. We demonstrate that this approach enables continuous visualization of brain activity at any desired resolution.
- Score: 17.593146780326034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electroencephalography (EEG) data present unique modeling challenges because recordings vary in length, exhibit very low signal-to-noise ratios, differ significantly across participants, drift over time within sessions, and are rarely available in large, clean datasets. Consequently, developing deep learning methods that can effectively process EEG signals remains an open and important research problem. To tackle this problem, this work presents a new method inspired by Neural Radiance Fields (NeRF). In computer vision, NeRF techniques train a neural network to memorize the appearance of a 3D scene and then use its learned parameters to render and edit the scene from any viewpoint. We draw an analogy between the discrete images captured from different viewpoints used to learn a continuous 3D scene in NeRF, and EEG electrodes positioned at different locations on the scalp, which are used to infer the underlying representation of continuous neural activity. Building on this connection, we show that a neural network can be trained on a single EEG sample in a NeRF-style manner to produce a fixed-size and informative weight vector that encodes the entire signal. Moreover, via this representation we can render the EEG signal at previously unseen time steps and spatial electrode positions. We demonstrate that this approach enables continuous visualization of brain activity at any desired resolution, including ultra-high resolution, and reconstruction of raw EEG signals. Finally, our empirical analysis shows that this method can effectively simulate nonexistent electrode data in EEG recordings, allowing the reconstructed signal to be fed into standard EEG processing networks to improve performance.
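The core idea above is a coordinate network: a small MLP maps a Fourier-encoded (electrode position, time) coordinate to an EEG amplitude, so the trained weights themselves become a fixed-size representation of the recording, and the field can be queried at scalp locations with no physical electrode. A minimal NumPy sketch of that idea (hypothetical names and dimensions, not the authors' implementation):

```python
import numpy as np

def positional_encoding(coords, num_freqs=4):
    """Map low-dimensional coordinates to Fourier features, as in NeRF."""
    feats = [coords]
    for k in range(num_freqs):
        for fn in (np.sin, np.cos):
            feats.append(fn((2.0 ** k) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

class EEGField:
    """Tiny MLP f(position, time) -> amplitude; after fitting one recording,
    its weight vector is the fixed-size representation of that signal."""
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, coords):
        h = np.tanh(coords @ self.w1 + self.b1)
        return h @ self.w2 + self.b2

# Query the field at an arbitrary (scalp position, time) pair -- including
# a location where no physical electrode exists.
xyzt = np.array([[0.1, -0.3, 0.9, 0.5]])  # hypothetical normalized x, y, z, t
enc = positional_encoding(xyzt)            # shape (1, 4 * (1 + 2 * num_freqs))
field = EEGField(in_dim=enc.shape[-1])
amp = field(enc)                           # predicted amplitude, shape (1, 1)
```

In practice the MLP would be fit to the recorded channels by regression before querying, and the encoding frequencies tuned to the EEG band of interest; the sketch only shows the coordinate-to-amplitude interface.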
Related papers
- NeuroRVQ: Multi-Scale EEG Tokenization for Generative Large Brainwave Models [66.91449452840318]
We introduce NeuroRVQ, a scalable Large Brainwave Model (LBM) centered on a codebook-based tokenizer. Our tokenizer integrates: (i) multi-scale feature extraction modules that capture the full frequency neural spectrum; (ii) hierarchical residual vector quantization (RVQ) codebooks for high-resolution encoding; and (iii) an EEG signal phase- and amplitude-aware loss function for efficient training. Our empirical results demonstrate that NeuroRVQ achieves lower reconstruction error and outperforms existing LBMs on a variety of downstream tasks.
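The hierarchical RVQ of item (ii) can be illustrated in a few lines of NumPy: each codebook quantizes the residual left by the previous level, so the token sequence refines the reconstruction coarsest-first. A toy sketch with made-up sizes, not NeuroRVQ's actual codebooks:

```python
import numpy as np

def residual_vq(x, codebooks):
    """Quantize x with a stack of codebooks: each level encodes the residual
    left by the previous one, yielding one token index per level."""
    residual = x.copy()
    indices, recon = [], np.zeros_like(x)
    for cb in codebooks:                             # cb: (codebook_size, dim)
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))                  # nearest codeword
        indices.append(idx)
        recon += cb[idx]
        residual = residual - cb[idx]
    return indices, recon

rng = np.random.default_rng(0)
x = rng.normal(size=8)                               # toy embedding vector
codebooks = [rng.normal(size=(16, 8)) * s for s in (1.0, 0.5, 0.25)]
tokens, x_hat = residual_vq(x, codebooks)
# with trained (not random) codebooks, each level typically shrinks the error
```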
arXiv Detail & Related papers (2025-10-15T01:26:52Z) - 3D-Telepathy: Reconstructing 3D Objects from EEG Signals [19.548597299697796]
Reconstructing 3D visual stimuli from Electroencephalography (EEG) data holds significant potential for applications in Brain-Computer Interfaces (BCIs). We propose an innovative EEG architecture that integrates a dual self-attention mechanism. We use a hybrid training strategy to train the EEG encoder, which includes cross-attention, contrastive learning, and self-supervised learning techniques.
arXiv Detail & Related papers (2025-06-27T01:26:52Z) - CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information [61.1904164368732]
We propose CognitionCapturer, a unified framework that fully leverages multimodal data to represent EEG signals. Specifically, CognitionCapturer trains Modality Experts for each modality to extract cross-modal information from the EEG modality. The framework does not require any fine-tuning of the generative models and can be extended to incorporate more modalities.
arXiv Detail & Related papers (2024-12-13T16:27:54Z) - Neuro-3D: Towards 3D Visual Decoding from EEG Signals [49.502364730056044]
We introduce a new neuroscience task: decoding 3D visual perception from EEG signals. We first present EEG-3D, a dataset featuring multimodal analysis data and EEG recordings from 12 subjects viewing 72 categories of 3D objects rendered in both videos and images. We propose Neuro-3D, a 3D visual decoding framework based on EEG signals.
arXiv Detail & Related papers (2024-11-19T05:52:17Z) - EEG-Driven 3D Object Reconstruction with Style Consistency and Diffusion Prior [1.7205106391379026]
This paper proposes an EEG-based 3D object reconstruction method with style consistency and diffusion priors.
Through experimental validation, we demonstrate that this method can effectively use EEG data to reconstruct 3D objects with style consistency.
arXiv Detail & Related papers (2024-10-28T12:59:24Z) - EEGDiR: Electroencephalogram denoising network for temporal information storage and global modeling through Retentive Network [11.491355463353731]
We introduce RetNet from natural language processing to EEG denoising.
Direct application of Retnet to EEG denoising is unfeasible due to the one-dimensional nature of EEG signals.
We propose the signal embedding method, transforming one-dimensional EEG signals into two dimensions for use as network inputs.
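The signal embedding described above, reshaping a one-dimensional trace into a two-dimensional array of patches so a sequence model can consume it, can be sketched as follows (a hypothetical illustration; the paper's exact embedding may differ):

```python
import numpy as np

def embed_1d_signal(signal, patch_len):
    """Reshape a 1-D EEG trace into a 2-D (num_patches, patch_len) array,
    dropping any ragged tail that does not fill a whole patch."""
    n = len(signal) - len(signal) % patch_len
    return signal[:n].reshape(-1, patch_len)

x = np.arange(10.0)                       # toy 1-D signal of 10 samples
patches = embed_1d_signal(x, patch_len=4) # shape (2, 4); last 2 samples dropped
```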
arXiv Detail & Related papers (2024-03-20T15:04:21Z) - Learning Robust Deep Visual Representations from EEG Brain Recordings [13.768240137063428]
This study proposes a two-stage method where the first step is to obtain EEG-derived features for robust learning of deep representations.
We demonstrate the generalizability of our feature extraction pipeline across three different datasets using deep-learning architectures.
We propose a novel framework to transform unseen images into the EEG space and reconstruct them with approximation.
arXiv Detail & Related papers (2023-10-25T10:26:07Z) - ResFields: Residual Neural Fields for Spatiotemporal Signals [61.44420761752655]
ResFields is a novel class of networks specifically designed to effectively represent complex temporal signals.
We conduct comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters.
We demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras.
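The parameter-reducing factorization mentioned above can be sketched as a low-rank, time-conditioned residual on a base weight matrix: instead of storing a full weight matrix per time step, a few shared basis matrices are mixed by per-timestep coefficients. The exact factorization in ResFields may differ; all sizes below are illustrative:

```python
import numpy as np

def resfield_weight(t, w_base, coeffs, bases):
    """Time-varying layer weight W(t) = W_base + sum_r coeffs[t, r] * bases[r].
    Parameter count: d_out*d_in + T*R + R*d_out*d_in, versus T*d_out*d_in
    for a separate full matrix at every time step."""
    # coeffs: (T, R) per-timestep coefficients; bases: (R, d_out, d_in)
    return w_base + np.tensordot(coeffs[t], bases, axes=1)

T, R, d_out, d_in = 5, 2, 4, 3
rng = np.random.default_rng(0)
w_base = rng.normal(size=(d_out, d_in))
coeffs = rng.normal(size=(T, R))
bases = rng.normal(size=(R, d_out, d_in))
w_t = resfield_weight(2, w_base, coeffs, bases)  # weight used at time step 2
```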
arXiv Detail & Related papers (2023-09-06T16:59:36Z) - fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z) - Neural networks for classification of strokes in electrical impedance tomography on a 3D head model [0.0]
We employ two neural network architectures -- a fully connected and a convolutional one -- for the classification of hemorrhagic and ischemic strokes.
The networks are trained on a dataset with 40,000 samples of synthetic electrode measurements.
We then test the networks on several datasets of unseen EIT data, with more complex stroke modeling.
arXiv Detail & Related papers (2020-11-05T14:22:05Z) - A Novel Transferability Attention Neural Network Model for EEG Emotion Recognition [51.203579838210885]
We propose a transferable attention neural network (TANN) for EEG emotion recognition.
TANN learns emotional discriminative information by adaptively highlighting transferable EEG brain-region data and samples.
This can be implemented by measuring the outputs of multiple brain-region-level discriminators and one single sample-level discriminator.
arXiv Detail & Related papers (2020-09-21T02:42:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.