Psychology of Artificial Intelligence: Epistemological Markers of the Cognitive Analysis of Neural Networks
- URL: http://arxiv.org/abs/2407.09563v1
- Date: Thu, 4 Jul 2024 12:53:05 GMT
- Title: Psychology of Artificial Intelligence: Epistemological Markers of the Cognitive Analysis of Neural Networks
- Authors: Michael Pichat
- Abstract summary: The psychology of artificial intelligence, as predicted by Asimov (1950), aims to study this AI probing and explainability-sensitive matter.
A prerequisite for examining the latter is to clarify some milestones regarding the cognitive status we can attribute to its phenomenology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What is the "nature" of the cognitive processes and contents of an artificial neural network? In other words, how does an artificial intelligence fundamentally "think," and in what form does its knowledge reside? The psychology of artificial intelligence, as predicted by Asimov (1950), aims to study this AI probing and explainability-sensitive matter. This study requires a neuronal level of cognitive granularity, so as not to be limited solely to the secondary macro-cognitive results (such as cognitive and cultural biases) of synthetic neural cognition. A prerequisite for examining the latter is to clarify some epistemological milestones regarding the cognitive status we can attribute to its phenomenology.
Related papers
- Neuropsychology of AI: Relationship Between Activation Proximity and Categorical Proximity Within Neural Categories of Synthetic Cognition [0.11235145048383502]
This study focuses on synthetic neural cognition as a new type of object of study within cognitive psychology.
The goal is to make artificial neural networks of language models more explainable.
This approach involves transposing concepts from cognitive psychology to the interpretive construction of artificial neural cognition.
arXiv Detail & Related papers (2024-10-08T12:34:13Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration with complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Metacognitive AI: Framework and the Case for a Neurosymbolic Approach [5.5441283041944]
We introduce a framework for understanding metacognitive artificial intelligence (AI) that we call TRAP: transparency, reasoning, adaptation, and perception.
We discuss each of these aspects in turn and explore how neurosymbolic AI (NSAI) can be leveraged to address challenges of metacognition.
arXiv Detail & Related papers (2024-06-17T23:30:46Z) - A Review of Findings from Neuroscience and Cognitive Psychology as
Possible Inspiration for the Path to Artificial General Intelligence [0.0]
This review aims to contribute to the quest for artificial general intelligence by examining neuroscience and cognitive psychology methods.
Despite the impressive advancements achieved by deep learning models, they still have shortcomings in abstract reasoning and causal understanding.
arXiv Detail & Related papers (2024-01-03T09:46:36Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Constraints on the design of neuromorphic circuits set by the properties
of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - An Introductory Review of Spiking Neural Network and Artificial Neural
Network: From Biological Intelligence to Artificial Intelligence [4.697611383288171]
Spiking neural networks, a kind of network with biological interpretability, are gradually receiving wide attention.
This review aims to attract researchers from different fields and to advance the development of brain-inspired intelligence and artificial intelligence.
arXiv Detail & Related papers (2022-04-09T09:34:34Z) - From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven
Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to learn novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Morphological Computation and Learning to Learn In Natural Intelligent
Systems And AI [2.487445341407889]
Deep learning algorithms have been inspired from the beginning by nature, specifically by the human brain, in spite of our incomplete knowledge of brain function.
The question is what inspiration from computational nature can contribute to deep learning at this stage of its development, and how far models and experiments in machine learning can motivate, justify, and lead research in neuroscience and cognitive science.
arXiv Detail & Related papers (2020-04-05T20:11:42Z)