Unidirectional brain-computer interface: Artificial neural network
encoding natural images to fMRI response in the visual cortex
- URL: http://arxiv.org/abs/2309.15018v1
- Date: Tue, 26 Sep 2023 15:38:26 GMT
- Title: Unidirectional brain-computer interface: Artificial neural network
encoding natural images to fMRI response in the visual cortex
- Authors: Ruixing Liang, Xiangyu Zhang, Qiong Li, Lai Wei, Hexin Liu, Avisha
Kumar, Kelley M. Kempski Leadingham, Joshua Punnoose, Leibny Paola Garcia,
Amir Manbachi
- Abstract summary: We propose an artificial neural network dubbed VISION to mimic the human brain and show how it can foster neuroscientific inquiries.
VISION successfully predicts human hemodynamic responses as fMRI voxel values to visual inputs with an accuracy exceeding state-of-the-art performance by 45%.
- Score: 12.1427193917406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While significant advancements in artificial intelligence (AI) have catalyzed
progress across various domains, its full potential in understanding visual
perception remains underexplored. We propose an artificial neural network
dubbed VISION, an acronym for "Visual Interface System for Imaging Output of
Neural activity," to mimic the human brain and show how it can foster
neuroscientific inquiries. Using visual and contextual inputs, this multimodal
model predicts the brain's functional magnetic resonance imaging (fMRI) scan
response to natural images. VISION successfully predicts human hemodynamic
responses as fMRI voxel values to visual inputs with an accuracy exceeding
state-of-the-art performance by 45%. We further probe the trained networks to
reveal representational biases in different visual areas, generate
experimentally testable hypotheses, and formulate an interpretable metric to
associate these hypotheses with cortical functions. With both a model and
evaluation metric, the cost and time burdens associated with designing and
implementing functional analysis on the visual cortex could be reduced. Our
work suggests that the evolution of computational models may shed light on our
fundamental understanding of the visual cortex and provide a viable approach
toward reliable brain-machine interfaces.
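The encoding approach the abstract describes, mapping image-derived features to fMRI voxel responses, can be sketched in miniature as a voxel-wise linear readout fit by ridge regression and scored with per-voxel correlation. This is a simplified stand-in for VISION's multimodal architecture, not the paper's method; all data and dimensions below are synthetic.

```python
import numpy as np

# Toy voxel-wise fMRI encoding model:
# image features X (n_images x n_features) -> voxel responses Y (n_images x n_voxels),
# fit with closed-form ridge regression, scored with Pearson r per voxel.

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 200, 50, 10

X = rng.standard_normal((n_images, n_features))       # stand-in for image features
W_true = rng.standard_normal((n_features, n_voxels))  # unknown feature-to-voxel weights
Y = X @ W_true + 0.1 * rng.standard_normal((n_images, n_voxels))  # noisy "voxel" data

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge: W = (X^T X + alpha I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def voxelwise_correlation(Y_true, Y_pred):
    """Pearson correlation between true and predicted responses, per voxel."""
    yt = Y_true - Y_true.mean(axis=0)
    yp = Y_pred - Y_pred.mean(axis=0)
    return (yt * yp).sum(axis=0) / (
        np.sqrt((yt**2).sum(axis=0)) * np.sqrt((yp**2).sum(axis=0))
    )

# Train/test split, fit, and evaluate held-out prediction accuracy per voxel.
W = fit_ridge(X[:150], Y[:150])
r = voxelwise_correlation(Y[150:], X[150:] @ W)
```

Per-voxel correlation on held-out images is a common evaluation for encoding models of this kind; VISION's reported gains are against such predictive baselines.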
Related papers
- Decoding Visual Experience and Mapping Semantics through Whole-Brain Analysis Using fMRI Foundation Models [10.615012396285337]
We develop algorithms to enhance our understanding of visual processes by incorporating whole-brain activation maps.
We first compare our method with state-of-the-art approaches to decoding visual processing and show improved predictive semantic accuracy by 43%.
arXiv Detail & Related papers (2024-11-11T16:51:17Z)
- Teaching CORnet Human fMRI Representations for Enhanced Model-Brain Alignment [2.035627332992055]
Functional magnetic resonance imaging (fMRI), a widely used technique in cognitive neuroscience, can record neural activation in the human visual cortex during visual perception.
This study proposed ReAlnet-fMRI, a model based on the state-of-the-art vision model CORnet but optimized using human fMRI data through a multi-layer encoding-based alignment framework.
The fMRI-optimized ReAlnet-fMRI exhibited higher similarity to the human brain than both CORnet and the control model in within- and across-subject as well as within- and across-modality (fMRI and EEG) model-brain comparisons.
arXiv Detail & Related papers (2024-07-15T03:31:42Z)
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of the human visual system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity [60.983327742457995]
Reconstructing viewed images from human brain activity bridges human and computer vision through brain-computer interfaces.
We devise Psychometry, an omnifit model for reconstructing images from functional Magnetic Resonance Imaging (fMRI) obtained from different subjects.
arXiv Detail & Related papers (2024-03-29T07:16:34Z)
- Decoding Realistic Images from Brain Activity with Contrastive Self-supervision and Latent Diffusion [29.335943994256052]
Reconstructing visual stimuli from human brain activities provides a promising opportunity to advance our understanding of the brain's visual system.
We propose a two-phase framework named Contrast and Diffuse (CnD) to decode realistic images from functional magnetic resonance imaging (fMRI) recordings.
arXiv Detail & Related papers (2023-09-30T09:15:22Z)
- Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities [31.448924808940284]
We introduce a two-phase fMRI representation learning framework.
The first phase pre-trains an fMRI feature learner with a proposed Double-contrastive Mask Auto-encoder to learn denoised representations.
The second phase tunes the feature learner to attend to neural activation patterns most informative for visual reconstruction with guidance from an image auto-encoder.
arXiv Detail & Related papers (2023-05-26T19:16:23Z)
- Controllable Mind Visual Diffusion Model [58.83896307930354]
Brain signal visualization has emerged as an active research area, serving as a critical interface between the human visual system and computer vision models.
We propose a novel approach, referred to as the Controllable Mind Visual Diffusion Model (CMVDM).
CMVDM extracts semantic and silhouette information from fMRI data using attribute alignment and assistant networks.
We then leverage a control model to fully exploit the extracted information for image synthesis, resulting in generated images that closely resemble the visual stimuli in terms of semantics and silhouette.
arXiv Detail & Related papers (2023-05-17T11:36:40Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- NeuroGen: activation optimized image synthesis for discovery neuroscience [9.621977197691747]
We propose a novel computational strategy, which we call NeuroGen, to overcome limitations and develop a powerful tool for human vision neuroscience discovery.
NeuroGen combines an fMRI-trained neural encoding model of human vision with a deep generative network to synthesize images predicted to achieve a target pattern of macro-scale brain activation.
By using only a small number of synthetic images created by NeuroGen, we demonstrate that we can detect and amplify differences in regional and individual human brain response patterns to visual stimuli.
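The NeuroGen strategy summarized above, searching a generator's latent space for images whose predicted brain activation matches a target pattern, can be caricatured with a toy linear "encoder" and a random-search loop. Every component here is a hypothetical stand-in for the paper's fMRI-trained encoding model and deep generative network.

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, image_dim, n_regions = 8, 32, 4

G = rng.standard_normal((image_dim, latent_dim))  # toy "generator": latent -> image
E = rng.standard_normal((n_regions, image_dim))   # toy "encoder": image -> activation
target = np.zeros(n_regions)
target[0] = 3.0                                   # desired pattern: drive region 0

def predicted_activation(z):
    """Generate an image from latent z, then predict regional activation."""
    return E @ np.tanh(G @ z)

def neurogen_search(target, z0, n_iters=2000, step=0.3):
    """Random search in latent space for a code whose predicted activation is
    closest to the target pattern (stand-in for gradient-based synthesis)."""
    z = z0.copy()
    best_err = np.linalg.norm(predicted_activation(z) - target)
    for _ in range(n_iters):
        cand = z + step * rng.standard_normal(latent_dim)
        err = np.linalg.norm(predicted_activation(cand) - target)
        if err < best_err:
            z, best_err = cand, err
    return z, best_err

z0 = rng.standard_normal(latent_dim)
init_err = np.linalg.norm(predicted_activation(z0) - target)
z_best, final_err = neurogen_search(target, z0)
# by construction, final_err never exceeds the starting error
```

In the actual paper the encoder is differentiable, so the latent code can be optimized by gradient ascent rather than random search; the control flow (generate, encode, compare to target, update) is the same.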
arXiv Detail & Related papers (2021-05-15T04:36:39Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.