Dream2Image : An Open Multimodal EEG Dataset for Decoding and Visualizing Dreams with Artificial Intelligence
- URL: http://arxiv.org/abs/2510.06252v1
- Date: Fri, 03 Oct 2025 22:43:27 GMT
- Title: Dream2Image : An Open Multimodal EEG Dataset for Decoding and Visualizing Dreams with Artificial Intelligence
- Authors: Yann Bellec
- Abstract summary: Dream2Image is the world's first dataset combining EEG signals, dream transcriptions, and AI-generated images. Based on 38 participants and more than 31 hours of dream EEG recordings, it contains 129 samples.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dream2Image is the world's first dataset combining EEG signals, dream transcriptions, and AI-generated images. Based on 38 participants and more than 31 hours of dream EEG recordings, it contains 129 samples, each offering: the final seconds of brain activity preceding awakening (T-15, T-30, T-60, T-120), a raw report of the dream experience, and an approximate visual reconstruction of the dream. The dataset provides a novel resource for dream research: a unique basis for studying the neural correlates of dreaming, developing models that decode dreams from brain activity, and exploring new approaches in neuroscience, psychology, and artificial intelligence. Available in open access on Hugging Face and GitHub, Dream2Image is a multimodal resource designed to support research at the interface of artificial intelligence and neuroscience, and to inspire researchers to extend current approaches to brain activity decoding. Limitations include the relatively small sample size and the variability of dream recall, which may affect generalizability.
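The per-sample structure described in the abstract (pre-awakening EEG windows, a raw dream report, and an AI-generated image) can be sketched as a simple record type. This is a minimal illustration only: the field names, types, and values below are assumptions for exposition, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class DreamSample:
    """One illustrative Dream2Image-style sample (field names are assumed, not official)."""
    participant_id: int
    # EEG segments from the final seconds before awakening, keyed by offset label.
    eeg_windows: dict = field(default_factory=dict)  # e.g. {"T-15": [...], "T-30": [...]}
    dream_report: str = ""   # raw transcription of the dream experience
    image_path: str = ""     # path to the approximate AI-generated visual reconstruction

# Hypothetical sample showing how the three modalities line up in one record.
sample = DreamSample(
    participant_id=1,
    eeg_windows={"T-15": [0.0] * 4, "T-30": [0.0] * 4},
    dream_report="I was walking along a beach at night.",
    image_path="images/sample_001.png",
)
print(sorted(sample.eeg_windows))  # available pre-awakening windows
```

In practice, an openly hosted dataset like this would typically be loaded via the Hugging Face `datasets` library rather than constructed by hand; the record type above only makes the multimodal alignment explicit.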
Related papers
- TRIBE: TRImodal Brain Encoder for whole-brain fMRI response prediction [7.864304771129752]
TRIBE is the first deep neural network trained to predict brain responses to stimuli across multiple modalities. Our model can precisely model the spatial and temporal fMRI responses to videos. Our approach paves the way towards building an integrative model of representations in the human brain.
arXiv Detail & Related papers (2025-07-29T20:52:31Z) - EgoBrain: Synergizing Minds and Eyes For Human Action Understanding [50.54007364637855]
EgoBrain is the world's first large-scale, temporally aligned multimodal dataset that synchronizes egocentric vision and EEG of the human brain over extended periods of time. This dataset comprises 61 hours of synchronized 32-channel EEG recordings and first-person video from 40 participants engaged in 29 categories of daily activities. All data, tools, and acquisition protocols are openly shared to foster open science in cognitive computing.
arXiv Detail & Related papers (2025-06-02T06:14:02Z) - DreamNet: A Multimodal Framework for Semantic and Emotional Analysis of Sleep Narratives [0.0]
We introduce DreamNet, a novel deep learning framework that decodes semantic themes and emotional states from dream reports. On a curated dataset of 1,500 anonymized dream narratives, DreamNet achieves 92.1% accuracy and 88.4% F1-score in text-only mode. Strong dream-emotion correlations highlight its potential for mental health diagnostics, cognitive science, and personalized therapy.
arXiv Detail & Related papers (2025-02-26T09:10:07Z) - Making Your Dreams A Reality: Decoding the Dreams into a Coherent Video Story from fMRI Signals [46.90535445975669]
This paper studies this brave new idea for the Multimedia community and proposes a novel framework to convert dreams into coherent video narratives. Recent advancements in brain imaging, particularly functional magnetic resonance imaging (fMRI), have provided new ways to explore the neural basis of dreaming. By combining subjective dream experiences with objective neurophysiological data, we aim to understand the visual aspects of dreams and create complete video narratives.
arXiv Detail & Related papers (2025-01-16T08:03:49Z) - Neuro-3D: Towards 3D Visual Decoding from EEG Signals [49.502364730056044]
We introduce a new neuroscience task: decoding 3D visual perception from EEG signals. We first present EEG-3D, a dataset featuring multimodal analysis data and EEG recordings from 12 subjects viewing 72 categories of 3D objects rendered in both videos and images. We then propose Neuro-3D, a 3D visual decoding framework based on EEG signals.
arXiv Detail & Related papers (2024-11-19T05:52:17Z) - Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject. We show that our model captures the distinct functionalities of each region of the human vision system. Preliminary evaluations indicate that Brain3D can successfully identify disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey) [9.14580723964253]
Can artificial intelligence unlock the secrets of the human brain? Is it possible to enhance AI by tapping into the power of brain recordings? Our survey focuses on human brain recording studies and cutting-edge cognitive neuroscience datasets.
arXiv Detail & Related papers (2023-07-17T06:54:36Z) - Memory semantization through perturbed and adversarial dreaming [0.7874708385247353]
We propose that rapid-eye-movement (REM) dreaming is essential for efficient memory semantization.
We implement a cortical architecture with hierarchically organized feedforward and feedback pathways, inspired by generative adversarial networks (GANs).
Our results suggest that adversarial dreaming during REM sleep is essential for extracting memory contents, while dreaming during NREM sleep improves the robustness of the latent representation to noisy sensory inputs.
arXiv Detail & Related papers (2021-09-09T13:31:13Z) - Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans.
We provide the voxel-wise 3D CNN interpretation comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.