Lateralization in Agents' Decision Making: Evidence of Benefits/Costs
from Artificial Intelligence
- URL: http://arxiv.org/abs/2302.01542v1
- Date: Fri, 3 Feb 2023 04:34:44 GMT
- Title: Lateralization in Agents' Decision Making: Evidence of Benefits/Costs
from Artificial Intelligence
- Authors: Abubakar Siddique, Will N. Browne, and Gina M. Grimshaw
- Abstract summary: We describe and test two novel lateralized artificial intelligent systems that simultaneously represent and address given problems.
The advantages arise from the abilities to represent an input signal at both the constituent level and holistic level simultaneously.
The computational costs of the lateralized AI systems are either lower than those of conventional AI systems or offset by the better solutions they provide.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lateralization is ubiquitous in vertebrate brains and, beyond its role
in locomotion, is considered an important factor in biological intelligence.
Lateralization has been associated with both poor and good performance. It has
been hypothesized that lateralization has benefits that may counterbalance its
costs. Given that lateralization is ubiquitous, it likely has advantages that
can benefit artificial intelligence. In turn, lateralized artificial
intelligent systems can be used as tools to advance the understanding of
lateralization in biological intelligence. Recently, lateralization has been
incorporated into artificially intelligent systems to solve complex problems in
computer vision and navigation domains. Here we describe and test two novel
lateralized artificial intelligent systems that simultaneously represent and
address given problems at constituent and holistic levels. The experimental
results demonstrate that the lateralized systems outperformed state-of-the-art
non-lateralized systems in resolving complex problems. The advantages arise
from the abilities (i) to represent an input signal at both the constituent
and holistic levels simultaneously, such that the most appropriate viewpoint
controls the system; and (ii) to avoid extraneous computations by generating
excite and inhibit signals. The computational costs of the lateralized AI
systems are either lower than those of conventional AI systems or offset by
the better solutions they provide.
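The two abilities described in the abstract can be illustrated with a minimal toy sketch. The scoring functions and gating rule below are illustrative assumptions, not the authors' actual architecture: one pathway scores the input at the constituent (local-feature) level, the other at the holistic (whole-pattern) level, and a simple excite/inhibit rule lets the more strongly responding viewpoint control the decision while the other pathway is suppressed.

```python
def constituent_score(signal):
    # Local view: respond to the strongest individual feature.
    return max(signal)

def holistic_score(signal):
    # Global view: respond to the coherence of the whole pattern
    # (1 minus the variance, so uniform signals score highest).
    mean = sum(signal) / len(signal)
    variance = sum((x - mean) ** 2 for x in signal) / len(signal)
    return 1.0 - variance

def lateralized_decide(signal):
    c, h = constituent_score(signal), holistic_score(signal)
    # Excite/inhibit gating: the viewpoint with the stronger response
    # controls the system; the other pathway is suppressed, avoiding
    # extraneous computation along it.
    return ("constituent", c) if c >= h else ("holistic", h)

# A single sharp feature lets the constituent view dominate,
# while a uniform pattern hands control to the holistic view.
print(lateralized_decide([0.1, 0.9, 0.1]))
print(lateralized_decide([0.6, 0.6, 0.6]))
```

In this sketch the "lateralization" is just a winner-take-all gate over two fixed viewpoints; the papers' systems learn both representations and the gating, but the cost argument is the same: only one pathway's downstream computation is exercised per input.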
Related papers
- Neurodivergent Influenceability as a Contingent Solution to the AI Alignment Problem [1.3905735045377272]
The AI alignment problem, which focusses on ensuring that artificial intelligence (AI) systems act according to human values, presents profound challenges. With the progression from narrow AI to Artificial General Intelligence (AGI) and Superintelligence, fears about control and existential risk have escalated. Here, we investigate whether embracing inevitable AI misalignment can be a contingent strategy to foster a dynamic ecosystem of competing agents.
arXiv Detail & Related papers (2025-05-05T11:33:18Z)
- A Study on Neuro-Symbolic Artificial Intelligence: Healthcare Perspectives [2.5782420501870296]
Symbolic AI excels in reasoning, explainability, and knowledge representation but faces challenges in processing complex real-world data with noise.
Deep learning (Black-Box systems) research breakthroughs in neural networks are notable, yet they lack reasoning and interpretability.
Neuro-symbolic AI (NeSy) attempts to bridge this gap by integrating logical reasoning into neural networks, enabling them to learn and reason with symbolic representations.
arXiv Detail & Related papers (2025-03-23T21:33:38Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Formal Explanations for Neuro-Symbolic AI [28.358183683756028]
This paper proposes a formal approach to explaining the decisions of neuro-symbolic systems.
It first computes a formal explanation for the symbolic component of the system, which serves to identify a subset of the individual parts of neural information that needs to be explained.
This is followed by explaining only those individual neural inputs, independently of each other, which facilitates succinctness of hierarchical formal explanations.
arXiv Detail & Related papers (2024-10-18T07:08:31Z)
- Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture [22.274696991107206]
Neuro-symbolic AI emerges as a promising paradigm, fusing neural and symbolic approaches to enhance interpretability, robustness, and trustworthiness.
Recent neuro-symbolic systems have demonstrated great potential in collaborative human-AI scenarios with reasoning and cognitive capabilities.
We first systematically categorize neuro-symbolic AI algorithms, and then experimentally evaluate and analyze them in terms of runtime, memory, computational operators, sparsity, and system characteristics.
arXiv Detail & Related papers (2024-09-20T01:32:14Z)
- Visual Agents as Fast and Slow Thinkers [88.6691504568041]
We introduce FaST, which incorporates the Fast and Slow Thinking mechanism into visual agents.
FaST employs a switch adapter to dynamically select between System 1/2 modes.
It tackles uncertain and unseen objects by adjusting model confidence and integrating new contextual data.
arXiv Detail & Related papers (2024-08-16T17:44:02Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Ontology in Hybrid Intelligence: a concise literature review [3.9160947065896803]
Hybrid Intelligence is gaining popularity to refer to a balanced coexistence between human and artificial intelligence.
Ontology improves quality and accuracy, and plays a specific role in enabling extended interoperability.
An application-oriented analysis has shown a significant role in present systems (70+% of cases) and, potentially, in future systems.
arXiv Detail & Related papers (2023-03-30T09:55:29Z)
- Cognitive Architecture for Co-Evolutionary Hybrid Intelligence [0.17767466724342065]
The paper questions the feasibility of a strong (general) data-centric artificial intelligence (AI).
As an alternative, the concept of co-evolutionary hybrid intelligence is proposed.
An architecture that seamlessly incorporates a human into the loop of intelligent problem solving is considered.
arXiv Detail & Related papers (2022-09-05T08:26:16Z)
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- Scope and Sense of Explainability for AI-Systems [0.0]
Emphasis will be given to difficulties related to the explainability of highly complex and efficient AI systems.
It elaborates on arguments supporting the notion that if AI solutions were to be discarded in advance because they are not thoroughly comprehensible, a great deal of the potential of intelligent systems would be wasted.
arXiv Detail & Related papers (2021-12-20T14:25:05Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.