Reconstructing Natural Scenes from fMRI Patterns using BigBiGAN
- URL: http://arxiv.org/abs/2001.11761v3
- Date: Mon, 7 Dec 2020 20:15:46 GMT
- Title: Reconstructing Natural Scenes from fMRI Patterns using BigBiGAN
- Authors: Milad Mozafari, Leila Reddy, Rufin VanRullen
- Abstract summary: We employ a recently proposed large-scale bi-directional generative adversarial network, called BigBiGAN, to decode and reconstruct natural scenes from fMRI patterns.
We computed a linear mapping between fMRI data, acquired over images from 150 different categories of ImageNet, and their corresponding BigBiGAN latent vectors.
We applied this mapping to the fMRI activity patterns obtained from 50 new test images from 50 unseen categories in order to retrieve their latent vectors, and reconstruct the corresponding images.
- Score: 2.0754848504005583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decoding and reconstructing images from brain imaging data is a research area
of high interest. Recent progress in deep generative neural networks has
introduced new opportunities to tackle this problem. Here, we employ a recently
proposed large-scale bi-directional generative adversarial network, called
BigBiGAN, to decode and reconstruct natural scenes from fMRI patterns. BigBiGAN
converts images into a 120-dimensional latent space which encodes class and
attribute information together, and can also reconstruct images based on their
latent vectors. We computed a linear mapping between fMRI data, acquired over
images from 150 different categories of ImageNet, and their corresponding
BigBiGAN latent vectors. Then, we applied this mapping to the fMRI activity
patterns obtained from 50 new test images from 50 unseen categories in order to
retrieve their latent vectors, and reconstruct the corresponding images.
Pairwise image decoding from the predicted latent vectors was highly accurate
(84%). Moreover, qualitative and quantitative assessments revealed that the
resulting image reconstructions were visually plausible, successfully captured
many attributes of the original images, and had high perceptual similarity with
the original content. This method establishes a new state-of-the-art for
fMRI-based natural image reconstruction, and can be flexibly updated to take
into account any future improvements in generative models of natural scene
images.
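To make the decoding pipeline concrete, the sketch below fits a linear mapping from fMRI voxel patterns to 120-dimensional BigBiGAN latent vectors and scores pairwise decoding on held-out items. The ridge penalty, the Pearson-correlation similarity, and the `bigbigan_generate` placeholder for the pretrained BigBiGAN generator are illustrative assumptions, not details taken from the paper.
```python
"""Minimal sketch of an fMRI -> BigBiGAN latent decoding pipeline.

Assumptions (not taken from the paper's code): ridge regression as the
linear mapping, Pearson correlation as the pairwise-decoding similarity,
and a placeholder standing in for the pretrained BigBiGAN generator.
"""
import numpy as np
from sklearn.linear_model import Ridge


def fit_linear_mapping(fmri_train, z_train, alpha=1.0):
    """Fit a ridge-regularized linear map from voxel patterns to 120-d latents.

    fmri_train: (n_train, n_voxels), z_train: (n_train, 120)
    """
    model = Ridge(alpha=alpha)
    model.fit(fmri_train, z_train)
    return model


def pairwise_decoding_accuracy(z_pred, z_true):
    """Fraction of test pairs where each predicted latent correlates more
    strongly with its own image's true latent than with the other image's
    latent (one common variant of the pairwise identification metric)."""
    n = len(z_true)
    correct, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            correct += int(np.corrcoef(z_pred[i], z_true[i])[0, 1]
                           > np.corrcoef(z_pred[i], z_true[j])[0, 1])
            correct += int(np.corrcoef(z_pred[j], z_true[j])[0, 1]
                           > np.corrcoef(z_pred[j], z_true[i])[0, 1])
            total += 2
    return correct / total


# Usage sketch (shapes are illustrative, not the paper's exact data):
# mapping = fit_linear_mapping(fmri_train, z_train)
# z_pred = mapping.predict(fmri_test)                # (50, 120) predicted latents
# print(pairwise_decoding_accuracy(z_pred, z_test))  # abstract reports 84%
# images = bigbigan_generate(z_pred)                 # placeholder for BigBiGAN's generator
```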
Related papers
- Natural scene reconstruction from fMRI signals using generative latent
diffusion [1.90365714903665]
We present a two-stage scene reconstruction framework called "Brain-Diffuser".
In the first stage, we reconstruct images that capture low-level properties and overall layout using a VDVAE (Very Deep Variational Autoencoder) model.
In the second stage, we use the image-to-image framework of a latent diffusion model conditioned on predicted multimodal (text and visual) features.
arXiv Detail & Related papers (2023-03-09T15:24:26Z)
- Re-Imagen: Retrieval-Augmented Text-to-Image Generator [58.60472701831404]
Retrieval-Augmented Text-to-Image Generator (Re-Imagen)
arXiv Detail & Related papers (2022-09-29T00:57:28Z)
- Facial Image Reconstruction from Functional Magnetic Resonance Imaging
via GAN Inversion with Improved Attribute Consistency [5.705640492618758]
We propose a new framework to reconstruct facial images from fMRI data.
The proposed framework accomplishes two goals: (1) reconstructing clear facial images from fMRI data and (2) maintaining the consistency of semantic characteristics.
arXiv Detail & Related papers (2022-07-03T11:18:35Z)
- Multiscale Voxel Based Decoding For Enhanced Natural Image
Reconstruction From Brain Activity [0.22940141855172028]
We present a novel approach for enhanced image reconstruction, in which existing methods for object decoding and image reconstruction are merged together.
This is achieved by conditioning the reconstructed image to its decoded image category using a class-conditional generative adversarial network and neural style transfer.
The results indicate that our approach improves the semantic similarity of the reconstructed images and can be used as a general framework for enhanced image reconstruction.
arXiv Detail & Related papers (2022-05-27T18:09:07Z)
- Reconstruction of Perceived Images from fMRI Patterns and Semantic Brain
Exploration using Instance-Conditioned GANs [1.6904374000330984]
We use an Instance-Conditioned GAN (IC-GAN) model to reconstruct images from fMRI patterns with both accurate semantic attributes and preserved low-level details.
We trained ridge regression models to predict instance features, noise vectors, and dense vectors of stimuli from corresponding fMRI patterns.
Then, we used the IC-GAN generator to reconstruct novel test images based on these fMRI-predicted variables.
arXiv Detail & Related papers (2022-02-25T13:51:00Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the frequency signal in the $k$-space domain and the image content in the image domain simultaneously.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and
Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Exploiting Deep Generative Prior for Versatile Image Restoration and
Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results for restoring missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
- BigGAN-based Bayesian reconstruction of natural images from human brain
activity [14.038605815510145]
We propose a new GAN-based visual reconstruction method (GAN-BVRM) that includes a classifier to decode categories from fMRI data.
GAN-BVRM employs the pre-trained generator of the widely used BigGAN to generate a large set of natural images.
Experimental results revealed that GAN-BVRM improves fidelity and naturalness; that is, the reconstructions look natural and resemble the presented image stimuli (a sketch of this category-decode-then-generate idea follows this entry).
arXiv Detail & Related papers (2020-03-13T04:32:11Z)
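For contrast with the latent-regression approach of the main paper, here is a minimal sketch of the category-decode-then-generate idea shared by the multiscale voxel decoding and GAN-BVRM entries above. The logistic-regression classifier, the encoding-model-based candidate selection, and the `class_conditional_generate` placeholder are illustrative assumptions, not components taken from those papers.
```python
"""Minimal sketch of a category-decode-then-generate reconstruction scheme.

The logistic-regression decoder, the encoding-model-based candidate
selection, and the `class_conditional_generate` placeholder are
illustrative assumptions, not components taken from the cited papers.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression


def decode_category(fmri_train, labels_train, fmri_test):
    """Decode an image-category label from each test fMRI pattern."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(fmri_train, labels_train)
    return clf.predict(fmri_test)


def select_best_candidate(fmri_pattern, candidates, encoding_model):
    """Keep the candidate image whose predicted brain response (from a
    voxel-wise encoding model) correlates best with the measured pattern."""
    scores = [np.corrcoef(encoding_model(img), fmri_pattern)[0, 1]
              for img in candidates]
    return candidates[int(np.argmax(scores))]


# Usage sketch (all names below are placeholders):
# category = decode_category(fmri_train, labels_train, fmri_test[:1])[0]
# candidates = class_conditional_generate(category, n_samples=100)  # e.g. a BigGAN generator
# reconstruction = select_best_candidate(fmri_test[0], candidates, encoding_model)
```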