Lateralization in Agents' Decision Making: Evidence of Benefits/Costs
from Artificial Intelligence
- URL: http://arxiv.org/abs/2302.01542v1
- Date: Fri, 3 Feb 2023 04:34:44 GMT
- Title: Lateralization in Agents' Decision Making: Evidence of Benefits/Costs from Artificial Intelligence
- Authors: Abubakar Siddique, Will N. Browne, and Gina M. Grimshaw
- Abstract summary: We describe and test two novel lateralized artificial intelligent systems that simultaneously represent and address given problems.
The advantages arise from the abilities to represent an input signal at both the constituent level and holistic level simultaneously.
The computational costs of the lateralized AI systems are either lower than those of conventional AI systems or are offset by better solutions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lateralization is ubiquitous in vertebrate brains and, beyond its role in locomotion, is considered an important factor in biological intelligence. Lateralization has been associated with both poor and good performance. It has been hypothesized that lateralization has benefits that may counterbalance its costs. Given that lateralization is ubiquitous, it likely has advantages that can benefit artificial intelligence. In turn, lateralized artificially intelligent systems can be used as tools to advance the understanding of lateralization in biological intelligence. Recently, lateralization has been incorporated into artificially intelligent systems to solve complex problems in computer vision and navigation domains. Here we describe and test two novel lateralized artificially intelligent systems that simultaneously represent and address given problems at constituent and holistic levels. The experimental results demonstrate that the lateralized systems outperformed state-of-the-art non-lateralized systems in resolving complex problems. The advantages arise from the abilities (i) to represent an input signal at both the constituent level and the holistic level simultaneously, such that the most appropriate viewpoint controls the system; and (ii) to avoid extraneous computations by generating excite and inhibit signals. The computational costs of the lateralized AI systems are either lower than those of conventional AI systems or are offset by better solutions.
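The abstract's two claimed advantages (a simultaneous constituent/holistic representation, with excite/inhibit signals deciding which viewpoint controls the system and sparing extraneous computation) can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the module functions, the salience threshold, and the gating rule are all assumptions introduced here for exposition.

```python
# Toy sketch of a lateralized decision step (hypothetical, for illustration):
# two modules view the same input at different granularities, and an
# excite/inhibit gate decides which viewpoint controls the output.

def constituent_view(signal):
    # Constituent-level module: scores each element of the input independently.
    return [abs(x) for x in signal]

def holistic_view(signal):
    # Holistic-level module: a single summary score for the whole pattern.
    return sum(signal) / len(signal)

def lateralized_decision(signal, threshold=0.5):
    global_score = holistic_view(signal)
    # "Excite" the holistic module when its summary is decisive, and
    # "inhibit" the constituent module so its per-element work is skipped.
    if abs(global_score) >= threshold:
        return ("holistic", global_score)
    # Otherwise the constituent view controls: act on the strongest element.
    local_scores = constituent_view(signal)
    best = max(range(len(local_scores)), key=lambda i: local_scores[i])
    return ("constituent", best)
```

A uniform input such as `[1.0, 1.0, 1.0]` is handled holistically without any per-element scoring, while a mixed input such as `[0.1, -0.9, 0.1]` falls through to the constituent module, which selects the most salient element.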
Related papers
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [54.247747237176625]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Ontology in Hybrid Intelligence: a concise literature review [3.9160947065896803]
Hybrid Intelligence is gaining popularity to refer to a balanced coexistence between human and artificial intelligence.
Ontology improves quality and accuracy, and plays a specific role in enabling extended interoperability.
An application-oriented analysis has shown that ontology plays a significant role in present systems (70+% of cases) and, potentially, in future systems.
arXiv Detail & Related papers (2023-03-30T09:55:29Z)
- Analysis of Explainable Artificial Intelligence Methods on Medical Image Classification [0.0]
The use of deep learning in computer vision tasks such as image classification has led to a rapid increase in the performance of such systems.
Medical image classification systems are being adopted due to their high accuracy and near parity with human physicians in many tasks.
The research techniques being used to gain insight into the black-box models are in the field of explainable artificial intelligence (XAI)
arXiv Detail & Related papers (2022-12-10T06:17:43Z)
- Cognitive Architecture for Co-Evolutionary Hybrid Intelligence [0.17767466724342065]
The paper questions the feasibility of a strong (general) data-centric artificial intelligence (AI).
As an alternative, the concept of co-evolutionary hybrid intelligence is proposed.
An architecture that seamlessly incorporates a human into the loop of intelligent problem solving is considered.
arXiv Detail & Related papers (2022-09-05T08:26:16Z)
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- Scope and Sense of Explainability for AI-Systems [0.0]
Emphasis will be given to difficulties related to the explainability of highly complex and efficient AI systems.
Arguments are elaborated supporting the notion that if AI solutions were discarded in advance because they are not thoroughly comprehensible, a great deal of the potential of intelligent systems would be wasted.
arXiv Detail & Related papers (2021-12-20T14:25:05Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Achilles Heels for AGI/ASI via Decision Theoretic Adversaries [0.9790236766474201]
It is important to know how advanced systems will make choices and in what ways they may fail.
One might suspect that artificial general intelligence (AGI) and artificial superintelligence (ASI) will be systems that humans cannot reliably outsmart.
This paper presents the Achilles Heel hypothesis which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions.
arXiv Detail & Related papers (2020-10-12T02:53:23Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.