Beyond Interpretable Benchmarks: Contextual Learning through Cognitive
and Multimodal Perception
- URL: http://arxiv.org/abs/2304.00002v2
- Date: Sat, 30 Sep 2023 03:19:16 GMT
- Title: Beyond Interpretable Benchmarks: Contextual Learning through Cognitive
and Multimodal Perception
- Authors: Nick DiSanto
- Abstract summary: This study contends that the Turing Test is misinterpreted as an attempt to anthropomorphize computer systems.
It emphasizes tacit learning as a cornerstone of general-purpose intelligence, despite its lack of overt interpretability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With state-of-the-art models achieving high performance on standard
benchmarks, contemporary research paradigms continue to emphasize general
intelligence as an enduring objective. However, this pursuit overlooks the
fundamental disparities between the high-level data perception abilities of
artificial and natural intelligence systems. This study questions the Turing
Test as a criterion of generally intelligent thought and contends that it is
misinterpreted as an attempt to anthropomorphize computer systems. Instead, it
emphasizes tacit learning as a cornerstone of general-purpose intelligence,
despite its lack of overt interpretability. This abstract form of intelligence
necessitates contextual cognitive attributes that are crucial for human-level
perception: generalizable experience, moral responsibility, and implicit
prioritization. The absence of these features yields undeniable perceptual
disparities and constrains the cognitive capacity of artificial systems to
effectively contextualize their environments. Additionally, this study
establishes that, despite extensive exploration of potential architectures for
future systems, little consideration has been given to how such models will
continuously absorb and adapt to contextual data. While conventional models may
continue to improve in benchmark performance, disregarding these contextual
considerations will lead to stagnation in human-like comprehension. Until
general intelligence can be abstracted from task-specific domains and systems
can learn implicitly from their environments, research standards should instead
prioritize the disciplines in which AI thrives.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence [0.0]
A traditional approach to assessing emerging intelligence in the theory of intelligent systems is based on similarity, i.e., "imitation" of human-like actions and behaviors.
We argue that under some natural assumptions, developing intelligent systems will be able to form their own intents and objectives.
arXiv Detail & Related papers (2024-10-14T13:39:58Z)
- Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning [0.0]
We review current and emerging knowledge-informed and brain-inspired cognitive systems for realizing adversarial defenses.
Brain-inspired cognition methods use computational models that mimic the human mind to enhance intelligent behavior in artificial agents and autonomous robots.
arXiv Detail & Related papers (2024-03-11T18:11:00Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models [83.63242931107638]
We propose four characteristics of generally intelligent agents.
We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations.
We conclude by outlining promising future research directions in the field of artificial general intelligence.
arXiv Detail & Related papers (2023-07-07T13:58:16Z)
- Assessment of cognitive characteristics in intelligent systems and predictive ability [0.0]
The scale considers the properties of intelligent systems within the environmental context, which develops over time.
The complexity (the 'weight') of a cognitive task, together with the ability to assess it critically beforehand, determines the actual set of cognitive tools.
The degree of 'correctness' and 'adequacy' is determined by matching a suitable solution to the temporal characteristics of the event, phenomenon, object, or subject under study.
arXiv Detail & Related papers (2022-09-16T23:01:27Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained on large-scale multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)