Reproducing sensory induced hallucinations via neural fields
- URL: http://arxiv.org/abs/2207.03901v1
- Date: Fri, 8 Jul 2022 13:41:02 GMT
- Title: Reproducing sensory induced hallucinations via neural fields
- Authors: Cyprien Tamekue, Dario Prandi, Yacine Chitour
- Abstract summary: We focus on pattern formation in the visual cortex when the activity is driven by a geometric visual hallucination-like stimulus.
We present a theoretical framework for sensory-induced hallucinations which allows one to reproduce novel psychophysical results.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding sensory-induced cortical patterns in the primary visual cortex
V1 is an important challenge both for physiological motivations and for
improving our understanding of human perception and visual organisation. In
this work, we focus on pattern formation in the visual cortex when the cortical
activity is driven by a geometric visual hallucination-like stimulus. In
particular, we present a theoretical framework for sensory-induced
hallucinations which allows one to reproduce novel psychophysical results such
as the MacKay effect (Nature, 1957) and the Billock and Tsou experiments (PNAS,
2007).
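For context, models in this line of work are typically built on an Amari-type neural field equation; the block below is a sketch of that generic form, with the kernel, nonlinearity, and input written as placeholders rather than the paper's exact choices.

```latex
% Generic Amari-type neural field equation on a cortical domain \Omega.
% a(x,t) : cortical activity at position x and time t
% \omega : lateral-connectivity kernel (e.g. a Mexican-hat profile)
% f      : sigmoidal firing-rate nonlinearity
% I(x)   : stationary sensory input (e.g. a MacKay-type stimulus)
\partial_t a(x,t) = -a(x,t)
  + \int_{\Omega} \omega(x-y)\, f\bigl(a(y,t)\bigr)\, \mathrm{d}y
  + I(x)
```

Stationary solutions of this equation, and how they are shaped by the input I, are the natural objects for studying input-driven (as opposed to spontaneous) pattern formation.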
Related papers
- Emergence of the Primacy Effect in Structured State-Space Models [1.4594704809280983]
Artificial neural network (ANN) models are typically designed with a memory that decays monotonically over time.
This study reveals a counterintuitive finding: a recently developed ANN architecture, called structured state-space models, exhibits the primacy effect when trained and evaluated.
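A minimal sketch of that monotonic-decay baseline, with hypothetical parameters rather than the paper's trained model:

```python
import numpy as np

# Diagonal linear state-space recurrence: h_t = A * h_{t-1} + B * x_t.
# With |A| < 1, each input's contribution decays monotonically with its age,
# which is why a primacy effect in trained SSMs is counterintuitive.
A, B = 0.9, 1.0
x = np.zeros(20)
x[0] = 1.0                    # impulse at the first position (primacy probe)

h, trace = 0.0, []
for x_t in x:
    h = A * h + B * x_t
    trace.append(h)

# The impulse response decays geometrically as A**t:
print([round(v, 3) for v in trace[:5]])   # [1.0, 0.9, 0.81, 0.729, 0.656]
```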
arXiv Detail & Related papers (2025-02-19T13:55:32Z)
- A Bioplausible Model for the Expanding Hole Illusion: Insights into Retinal Processing and Illusory Motion [1.6574413179773761]
The Expanding Hole Illusion challenges our understanding of how the brain processes visual information.
Recent psychophysical studies reveal that this illusion induces not only a perceptual effect but also physiological responses, such as pupil dilation.
This paper presents a computational model based on Difference of Gaussians (DoG) filtering and a classical receptive field (CRF) implementation to simulate early retinal processing.
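A minimal sketch of DoG filtering as a classical-receptive-field front end; kernel sizes and scales here are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.signal import convolve2d

def dog_kernel(size=21, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians kernel: excitatory centre minus inhibitory
    surround, a standard model of a retinal ganglion cell's classical
    receptive field."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    gauss = lambda s: np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2)
    return gauss(sigma_c) - gauss(sigma_s)

# CRF-style response: convolve the stimulus with the DoG kernel.
stimulus = np.random.rand(64, 64)   # placeholder for the illusion image
response = convolve2d(stimulus, dog_kernel(), mode="same", boundary="symm")
```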
arXiv Detail & Related papers (2025-01-15T07:03:44Z)
- Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence [69.86946427928511]
We investigate the internal mechanisms driving hallucination in large vision-language models (LVLMs).
We introduce Vision-aware Head Divergence (VHD), a metric that quantifies the sensitivity of attention head outputs to visual context.
We propose Vision-aware Head Reinforcement (VHR), a training-free approach to mitigate hallucination by enhancing the role of vision-aware attention heads.
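The abstract does not give VHD's formula; the sketch below is one plausible reading of such a head-sensitivity score (compare each head's output with and without the visual context) and should not be taken as the paper's definition:

```python
import torch

def head_sensitivity(out_with_image, out_text_only):
    """Illustrative VHD-style score: rank attention heads by how much their
    output changes when the visual context is removed. Inputs have shape
    (num_heads, seq_len, head_dim); the paper's metric may differ."""
    diff = out_with_image - out_text_only
    return torch.linalg.matrix_norm(diff)   # Frobenius norm -> one score per head

# Hypothetical shapes: 12 heads, 32 tokens, 64-dim head outputs.
scores = head_sensitivity(torch.randn(12, 32, 64), torch.randn(12, 32, 64))
```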
arXiv Detail & Related papers (2024-12-18T15:29:30Z)
- Decoding Visual Experience and Mapping Semantics through Whole-Brain Analysis Using fMRI Foundation Models [10.615012396285337]
We develop algorithms to enhance our understanding of visual processes by incorporating whole-brain activation maps.
We first compare our method with state-of-the-art approaches to decoding visual processing and show a 43% improvement in predictive semantic accuracy.
arXiv Detail & Related papers (2024-11-11T16:51:17Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with, and scalable within, deep learning frameworks.
We show that end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
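For background on the surrogate-gradient technique in the title: a spiking neuron's output is a step function with zero gradient almost everywhere, so training replaces that derivative with a smooth surrogate in the backward pass. A minimal PyTorch version, with a fast-sigmoid surrogate as an assumed (not paper-specified) choice:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, smooth surrogate derivative backward, so the
    spiking network remains trainable by end-to-end gradient descent."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()   # spike where membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: derivative of a saturating step approximation.
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

spike = SurrogateSpike.apply   # usable like any differentiable activation
```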
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Visual attention information can be traced on cortical response but not on the retina: evidence from electrophysiological mouse data using natural images as stimuli [0.0]
In primary visual cortex (V1), a subset of around 10% of the neurons responds differently to salient versus non-salient visual regions.
It appears that the retina remains naive with respect to visual attention; attention information is instead interpreted at the level of the cortical response.
arXiv Detail & Related papers (2023-08-01T13:09:48Z)
- BI AVAN: Brain inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate, in an unsupervised manner, the visual objects in a movie frame that the human brain focuses on.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
- Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training [66.0036211069513]
Large-scale vision-language pre-trained models are prone to hallucinate non-existent visual objects when generating text.
We show that models achieving better scores on standard metrics could hallucinate objects more frequently.
Surprisingly, we find that patch-based features perform the best and smaller patch resolution yields a non-trivial reduction in object hallucination.
arXiv Detail & Related papers (2022-10-14T10:27:22Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
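A schematic, runnable skeleton of the three-stage pipeline described above; every module choice (the 1x1-convolution selector, the small CNN backbone, eight emotion classes) is a placeholder assumption, not the paper's architecture:

```python
import torch
import torch.nn as nn

class StimuliAwareVEA(nn.Module):
    """Stage 1: select emotional stimuli via a soft spatial mask.
    Stage 2: extract features from the selected regions.
    Stage 3: predict the emotion class."""

    def __init__(self, num_emotions=8):
        super().__init__()
        self.stimuli_selector = nn.Conv2d(3, 1, kernel_size=1)
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_emotions)

    def forward(self, img):                               # img: (B, 3, H, W)
        mask = torch.sigmoid(self.stimuli_selector(img))  # (B, 1, H, W) saliency mask
        feats = self.feature_extractor(img * mask)        # features of masked image
        return self.classifier(feats)                     # emotion logits
```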
arXiv Detail & Related papers (2021-09-04T08:14:52Z)