Bayesian sense of time in biological and artificial brains
- URL: http://arxiv.org/abs/2201.05464v1
- Date: Fri, 14 Jan 2022 14:05:30 GMT
- Title: Bayesian sense of time in biological and artificial brains
- Authors: Zafeirios Fountas, Alexey Zakharov
- Abstract summary: The brain's ability to process the passage of time is one of the fundamental dimensions of our experience.
How can we explain empirical data on human time perception using the Bayesian brain hypothesis?
What insights can agent-based machine learning models provide for the study of this subject?
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enquiries concerning the underlying mechanisms and the emergent properties of
a biological brain have a long history of theoretical postulates and
experimental findings. Today, the scientific community tends to converge to a
single interpretation of the brain's cognitive underpinnings -- that it is a
Bayesian inference machine. This contemporary view has naturally been a strong
driving force in recent developments around computational and cognitive
neurosciences. Of particular interest is the brain's ability to process the
passage of time -- one of the fundamental dimensions of our experience. How can
we explain empirical data on human time perception using the Bayesian brain
hypothesis? Can we replicate human estimation biases using Bayesian models?
What insights can agent-based machine learning models provide for the study
of this subject? In this chapter, we review some of the recent advancements in
the field of time perception and discuss the role of Bayesian processing in the
construction of temporal models.
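As a minimal illustration of the abstract's question about replicating human estimation biases, the sketch below (not taken from the paper; prior and noise values are arbitrary assumptions) fuses a Gaussian prior over durations with a noisy measurement. The posterior mean reproduces the well-known central-tendency bias: short intervals are overestimated and long intervals underestimated.

```python
def bayes_estimate(measured, prior_mean, prior_var, noise_var):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood."""
    w = prior_var / (prior_var + noise_var)  # reliability weight on the measurement
    return w * measured + (1.0 - w) * prior_mean

# Hypothetical observer: prior centred on 1 s, with chosen prior/sensory variances.
prior_mean, prior_var, noise_var = 1.0, 0.09, 0.04
for true_t in (0.5, 1.0, 1.5):
    est = bayes_estimate(true_t, prior_mean, prior_var, noise_var)
    print(f"true={true_t:.2f}s  estimate={est:.3f}s")
```

Estimates are pulled toward the prior mean: the 0.5 s interval is overestimated and the 1.5 s interval underestimated, the regression-to-the-mean pattern seen in human timing data.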
Related papers
- Temporal Interception and Present Reconstruction: A Cognitive-Signal Model for Human and AI Decision Making [0.0]
This paper proposes a novel theoretical model to explain how the human mind and artificial intelligence can approach real-time awareness.
By investigating cosmic signal delay, neurological reaction times, and the ancient cognitive state of stillness, we explore how one may shift from reactive perception to a conscious interface with the near future.
arXiv Detail & Related papers (2025-05-11T15:38:27Z) - Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z) - Neural timescales from a computational perspective [5.390514665166601]
Neural activity fluctuates over a wide range of timescales within and across brain areas.
How timescales are defined and measured from brain recordings varies across the literature.
arXiv Detail & Related papers (2024-09-04T13:16:20Z) - Coin-Flipping In The Brain: Statistical Learning with Neuronal Assemblies [9.757971977909683]
We study the emergence of statistical learning in NEMO, a computational model of the brain.
We show that connections between assemblies record statistics, and ambient noise can be harnessed to make probabilistic choices.
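The idea that recorded statistics plus ambient noise can drive probabilistic choices can be sketched with the Gumbel-max trick (a toy illustration, not the NEMO model itself): adding Gumbel noise to log-counts and taking the argmax selects each option with probability proportional to its count.

```python
import math
import random

def gumbel_choice(counts, rng):
    # Gumbel-max trick: argmax of log(count) + Gumbel noise samples an
    # option with probability proportional to its recorded count.
    noisy = {k: math.log(c) - math.log(-math.log(rng.random()))
             for k, c in counts.items()}
    return max(noisy, key=noisy.get)

counts = {"heads": 70, "tails": 30}  # statistics recorded from past observations
rng = random.Random(0)
freq = sum(gumbel_choice(counts, rng) == "heads" for _ in range(10000)) / 10000
print(freq)  # close to 0.7
```

Here the "connections" are just counters, and the noise does the sampling: no explicit normalisation or random-number-to-probability conversion is needed.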
arXiv Detail & Related papers (2024-06-11T20:51:50Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that implements the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Brain-inspired bodily self-perception model for robot rubber hand
illusion [11.686402949452546]
We propose a Brain-inspired bodily self-perception model, by which perceptions of bodily self can be autonomously constructed without supervision signals.
We validate our model with six rubber hand illusion experiments and a disability experiment on platforms including an iCub humanoid robot and simulated environments.
arXiv Detail & Related papers (2023-03-22T02:00:09Z) - Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode information and hierarchical attention to selectively retrieve information about others.
This results in ToMMY, a theory-of-mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z) - Predictive Coding and Stochastic Resonance: Towards a Unified Theory of
Auditory (Phantom) Perception [6.416574036611064]
To gain a mechanistic understanding of brain function, hypothesis driven experiments should be accompanied by biologically plausible computational models.
With a special focus on tinnitus, we review recent work at the intersection of artificial intelligence, psychology, and neuroscience.
We conclude that two fundamental processing principles, both ubiquitous in the brain, best fit a vast number of experimental results.
arXiv Detail & Related papers (2022-04-07T10:47:58Z) - Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - The principles of adaptation in organisms and machines II:
Thermodynamics of the Bayesian brain [0.0]
The article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference.
We then introduce a thermodynamic view on this process based on the laws for the entropy of neural activity.
arXiv Detail & Related papers (2020-06-23T16:57:46Z) - Learning to infer in recurrent biological networks [4.56877715768796]
We argue that the cortex may learn with an adversarial algorithm.
We illustrate the idea on recurrent neural networks trained to model image and video datasets.
arXiv Detail & Related papers (2020-06-18T19:04:47Z) - Towards a Neural Model for Serial Order in Frontal Cortex: a Brain
Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure of temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.