Dreams Are More "Predictable" Than You Think
- URL: http://arxiv.org/abs/2305.05054v1
- Date: Mon, 8 May 2023 21:24:12 GMT
- Title: Dreams Are More "Predictable" Than You Think
- Authors: Lorenzo Bertolini
- Abstract summary: I will study if and how dream reports deviate from other human-generated text strings, such as Wikipedia.
On average, single dream reports are significantly more predictable than Wikipedia articles.
Preliminary evidence suggests that word count, gender, and visual impairment can significantly shape how predictable a dream report can appear to the model.
- Score: 2.094022863940315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A consistent body of evidence suggests that dream reports significantly vary
from other types of textual transcripts with respect to semantic content.
Furthermore, it appears to be a widespread belief in the dream/sleep research
community that dream reports constitute rather "unique" strings of text. This
might be a notable issue for the growing number of approaches using natural
language processing (NLP) tools to automatically analyse dream reports, as they
largely rely on neural models trained on non-dream corpora scraped from the
web. In this work, I will adopt state-of-the-art (SotA) large language models
(LLMs) to study if and how dream reports deviate from other human-generated
text strings, such as Wikipedia. Results show that, taken as a whole, DreamBank
does not deviate from Wikipedia. Moreover, on average, single dream reports are
significantly more predictable than Wikipedia articles. Preliminary evidence
suggests that word count, gender, and visual impairment can significantly shape
how predictable a dream report can appear to the model.
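The abstract frames "predictability" in terms of how well a large language model can anticipate the tokens of a report. As a minimal sketch of that idea only, and not the paper's actual pipeline, the snippet below scores a text by the perplexity a pretrained model assigns to it (lower perplexity means the text looks more predictable to the model). It assumes the Hugging Face transformers library and uses GPT-2 as a stand-in for the SotA LLMs; the `perplexity` helper and the example texts are purely illustrative.

```python
# Illustrative sketch (not the paper's exact setup): estimate how "predictable"
# a text is to a pretrained language model via its mean per-token loss.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # stand-in model; the paper uses state-of-the-art LLMs
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean cross-entropy
        # of its next-token predictions over the sequence.
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

dream_report = "I was walking through my old school, but every door opened onto the sea."
wiki_text = "Wikipedia is a free online encyclopedia written and maintained by volunteers."
print(perplexity(dream_report), perplexity(wiki_text))
```

Under a metric of this kind, a dream report receiving lower perplexity than a Wikipedia passage would correspond to the report appearing more predictable to the model.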
Related papers
- Making Your Dreams A Reality: Decoding the Dreams into a Coherent Video Story from fMRI Signals [46.90535445975669]
This paper studies a brave new idea for the multimedia community and proposes a novel framework for converting dreams into coherent video narratives.
Recent advancements in brain imaging, particularly functional magnetic resonance imaging (fMRI), have provided new ways to explore the neural basis of dreaming.
By combining subjective dream experiences with objective neurophysiological data, we aim to understand the visual aspects of dreams and create complete video narratives.
arXiv Detail & Related papers (2025-01-16T08:03:49Z)
- Sequence-to-Sequence Language Models for Character and Emotion Detection in Dream Narratives [0.0]
This paper presents the first study on character and emotion detection in the English portion of the open DreamBank corpus of dream narratives.
Our results show that language models can effectively address this complex task.
We evaluate the impact of model size, prediction order of characters, and the consideration of proper names and character traits.
arXiv Detail & Related papers (2024-03-21T08:27:49Z)
- Fluent dreaming for language models [0.0]
Feature visualization, also known as "dreaming", offers insights into vision models by optimizing the inputs to maximize the activation of a neuron or other internal component.
We extend Greedy Coordinate Gradient, a method from the language model adversarial attack literature, to design the Evolutionary Prompt Optimization (EPO) algorithm.
arXiv Detail & Related papers (2024-01-24T17:57:12Z)
- Dream Content Discovery from Reddit with an Unsupervised Mixed-Method Approach [0.8127745323109788]
We developed a new, data-driven mixed-method approach for identifying topics in free-form dream reports through natural language processing.
We tested this method on 44,213 dream reports from Reddit's r/Dreams subreddit.
Our method can find unique patterns in different dream types, understand topic importance and connections, and observe changes in collective dream experiences over time and around major events.
arXiv Detail & Related papers (2023-07-09T13:24:58Z)
- Automatic Scoring of Dream Reports' Emotional Content with Large Language Models [3.1761323820497656]
The study of dream content typically relies on the analysis of verbal reports provided by dreamers upon awakening from their sleep.
This task is classically performed through manual scoring by trained annotators, at considerable time expense.
While a consistent body of work suggests that natural language processing (NLP) tools can support the automatic analysis of dream reports, proposed methods lacked the ability to reason over a report's full context and required extensive data pre-processing.
In this work, we address these limitations by adopting large language models (LLMs) to study and replicate the manual annotation of dream reports, using a mixture of off-
arXiv Detail & Related papers (2023-02-28T18:23:17Z)
- Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models [57.08925810659545]
We conduct a comparative analysis of the visual representations in existing vision-and-language models and vision-only models.
Our empirical observations suggest that vision-and-language models are better at label prediction tasks.
We hope our study sheds light on the role of language in visual learning, and serves as an empirical guide for various pretrained models.
arXiv Detail & Related papers (2022-12-01T05:00:18Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Epidemic Dreams: Dreaming about health during the COVID-19 pandemic [1.0093662416275693]
The continuity hypothesis of dreams suggests that the content of dreams is continuous with the dreamer's waking experiences.
We implemented a deep-learning algorithm that can extract mentions of medical conditions from text and applied it to two datasets collected during the pandemic.
The health expressions common to both sets were typical COVID-19 symptoms, suggesting that dreams reflected people's real-world experiences.
arXiv Detail & Related papers (2022-02-02T18:09:06Z)
- The World of an Octopus: How Reporting Bias Influences a Language Model's Perception of Color [73.70233477125781]
We show that reporting bias negatively impacts and inherently limits text-only training.
We then demonstrate that multimodal models can leverage their visual training to mitigate these effects.
arXiv Detail & Related papers (2021-10-15T16:28:17Z)
- It's not Rocket Science: Interpreting Figurative Language in Narratives [48.84507467131819]
We study the interpretation of two non-compositional figurative languages (idioms and similes).
Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks.
We additionally propose knowledge-enhanced models, adopting human strategies for interpreting figurative language.
arXiv Detail & Related papers (2021-08-31T21:46:35Z)
- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World [86.21137454228848]
We factorize PIGLeT into a physical dynamics model, and a separate language model.
PIGLeT can read a sentence, simulate neurally what might happen next, and then communicate that result through a literal symbolic representation.
It is able to correctly forecast "what happens next" given an English sentence over 80% of the time, outperforming a 100x larger, text-to-text approach by over 10%.
arXiv Detail & Related papers (2021-06-01T02:32:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.