Reconstruction of Perceived Images from fMRI Patterns and Semantic Brain
Exploration using Instance-Conditioned GANs
- URL: http://arxiv.org/abs/2202.12692v1
- Date: Fri, 25 Feb 2022 13:51:00 GMT
- Title: Reconstruction of Perceived Images from fMRI Patterns and Semantic Brain
Exploration using Instance-Conditioned GANs
- Authors: Furkan Ozcelik, Bhavin Choksi, Milad Mozafari, Leila Reddy, Rufin
VanRullen
- Abstract summary: We use an Instance-Conditioned GAN (IC-GAN) model to reconstruct images from fMRI patterns with both accurate semantic attributes and preserved low-level details.
We trained ridge regression models to predict instance features, noise vectors, and dense vectors of stimuli from corresponding fMRI patterns.
Then, we used the IC-GAN generator to reconstruct novel test images based on these fMRI-predicted variables.
- Score: 1.6904374000330984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing perceived natural images from fMRI signals is one of the most
engaging topics of neural decoding research. Prior studies had success in
reconstructing either the low-level image features or the semantic/high-level
aspects, but rarely both. In this study, we utilized an Instance-Conditioned
GAN (IC-GAN) model to reconstruct images from fMRI patterns with both accurate
semantic attributes and preserved low-level details. The IC-GAN model takes as
input a 119-dim noise vector and a 2048-dim instance feature vector extracted
from a target image via a self-supervised learning model (SwAV ResNet-50);
these instance features act as a conditioning for IC-GAN image generation,
while the noise vector introduces variability between samples. We trained ridge
regression models to predict instance features, noise vectors, and dense
vectors (the output of the first dense layer of the IC-GAN generator) of
stimuli from corresponding fMRI patterns. Then, we used the IC-GAN generator to
reconstruct novel test images based on these fMRI-predicted variables. The
generated images presented state-of-the-art results in terms of capturing the
semantic attributes of the original test images while remaining relatively
faithful to low-level image details. Finally, we use the learned regression
model and the IC-GAN generator to systematically explore and visualize the
semantic features that maximally drive each of several regions-of-interest in
the human brain.
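The decoding pipeline in the abstract (ridge regression from fMRI patterns to IC-GAN latent variables, followed by generation) can be sketched as below. This is an illustrative sketch, not the authors' code: the fMRI data are random placeholders, the regularization strength is arbitrary, and the IC-GAN generator itself is omitted. Only the latent dimensions (119-dim noise, 2048-dim SwAV instance features) come from the abstract.

```python
# Hypothetical sketch of the paper's decoding step: closed-form ridge
# regression maps fMRI voxel patterns to IC-GAN latent variables.
# All data here are random placeholders for illustration.
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha I)^-1 X^T Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

rng = np.random.default_rng(0)
n_train, n_voxels = 100, 500                       # assumed sizes
X_train = rng.standard_normal((n_train, n_voxels)) # fMRI patterns
Z_noise = rng.standard_normal((n_train, 119))      # IC-GAN noise vectors
F_inst  = rng.standard_normal((n_train, 2048))     # SwAV instance features

# One regression model per latent variable, as described in the abstract.
W_noise = fit_ridge(X_train, Z_noise, alpha=10.0)
W_inst  = fit_ridge(X_train, F_inst,  alpha=10.0)

# At test time, predict latents from held-out fMRI; these would then be
# passed to the IC-GAN generator (not shown): image = G(noise_hat, feat_hat)
X_test    = rng.standard_normal((1, n_voxels))
noise_hat = X_test @ W_noise   # shape (1, 119)
feat_hat  = X_test @ W_inst    # shape (1, 2048)
```

The same scheme would apply to the dense vectors (the output of the generator's first dense layer), with a third regression model.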
Related papers
- Contrastive Learning to Fine-Tune Feature Extraction Models for the Visual Cortex [1.2891210250935148]
We adapt contrastive learning to fine-tune a convolutional neural network, which was pretrained for image classification.
We show that CL fine-tuning creates feature extraction models that enable higher encoding accuracy in early visual ROIs.
arXiv Detail & Related papers (2024-10-08T14:14:23Z)
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method for multi-subject fMRI signals, called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Natural scene reconstruction from fMRI signals using generative latent diffusion [1.90365714903665]
We present a two-stage scene reconstruction framework called "Brain-Diffuser".
In the first stage, we reconstruct images that capture low-level properties and overall layout using a VDVAE (Very Deep Variational Autoencoder) model.
In the second stage, we use the image-to-image framework of a latent diffusion model conditioned on predicted multimodal (text and visual) features.
arXiv Detail & Related papers (2023-03-09T15:24:26Z)
- Facial Image Reconstruction from Functional Magnetic Resonance Imaging via GAN Inversion with Improved Attribute Consistency [5.705640492618758]
We propose a new framework to reconstruct facial images from fMRI data.
The proposed framework accomplishes two goals: (1) reconstructing clear facial images from fMRI data and (2) maintaining the consistency of semantic characteristics.
arXiv Detail & Related papers (2022-07-03T11:18:35Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- BigGAN-based Bayesian reconstruction of natural images from human brain activity [14.038605815510145]
We propose a new GAN-based visual reconstruction method (GAN-BVRM) that includes a classifier to decode categories from fMRI data.
GAN-BVRM employs the pre-trained generator of the prevailing BigGAN to generate masses of natural images.
Experimental results revealed that GAN-BVRM improves fidelity and naturalness: the reconstructions are natural and similar to the presented image stimuli.
arXiv Detail & Related papers (2020-03-13T04:32:11Z)
- Reconstructing Natural Scenes from fMRI Patterns using BigBiGAN [2.0754848504005583]
We employ a recently proposed large-scale bi-directional adversarial network, called BigBiGAN, to decode and reconstruct natural scenes from fMRI patterns.
We computed a linear mapping between fMRI data, acquired over images from 150 different categories of ImageNet, and their corresponding BigBiGAN latent vectors.
We applied this mapping to the fMRI activity patterns obtained from 50 new test images from 50 unseen categories in order to retrieve their latent vectors, and reconstruct the corresponding images.
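The linear mapping described in this entry (fMRI patterns to BigBiGAN latent vectors, fit on training images and applied to fMRI from unseen categories) can be sketched as below. This is an illustrative sketch, not the authors' code: the data are random placeholders, and the voxel count and 120-dim latent size are assumptions for illustration.

```python
# Illustrative sketch: a least-squares linear mapping from fMRI patterns
# to BigBiGAN latent vectors. All shapes and data are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_voxels, latent_dim = 150, 400, 120  # assumed sizes

X_train = rng.standard_normal((n_train, n_voxels))    # training fMRI
Z_train = rng.standard_normal((n_train, latent_dim))  # BigBiGAN latents

# Fit W so that X_train @ W approximates Z_train in the least-squares sense.
W, *_ = np.linalg.lstsq(X_train, Z_train, rcond=None)

# Apply the mapping to fMRI from 50 unseen test images to retrieve their
# latent vectors; these would then be decoded by the BigBiGAN generator
# (not shown) to reconstruct the images.
X_test = rng.standard_normal((50, n_voxels))
Z_hat = X_test @ W   # shape (50, 120)
```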
arXiv Detail & Related papers (2020-01-31T10:46:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.