Testing System Intelligence
- URL: http://arxiv.org/abs/2305.11472v2
- Date: Sat, 12 Aug 2023 07:19:20 GMT
- Title: Testing System Intelligence
- Authors: Joseph Sifakis
- Abstract summary: We argue that building intelligent systems passing the replacement test involves a series of technical problems that are outside the scope of current AI.
We suggest that the replacement test, based on the complementarity of skills between human and machine, can lead to a multitude of intelligence concepts.
- Score: 0.902877390685954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We discuss the adequacy of tests for intelligent systems and practical
problems raised by their implementation. We propose the replacement test as the
ability of a system to successfully replace another system performing a task in
a given context. We show how it can characterize salient aspects of human
intelligence that cannot be taken into account by the Turing test. We argue
that building intelligent systems passing the replacement test involves a
series of technical problems that are outside the scope of current AI. We
present a framework for implementing the proposed test and validating the
properties of the intelligent systems. We discuss the inherent limitations of
intelligent system validation and advocate new theoretical foundations for
extending existing rigorous test methods. We suggest that the replacement test,
based on the complementarity of skills between human and machine, can lead to a
multitude of intelligence concepts reflecting the ability to combine data-based
and symbolic knowledge to varying degrees.
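The replacement test described above can be sketched as a simple harness: a candidate system stands in for an incumbent on the same tasks in the same context, and passes if its observed performance is at least comparable. This is an illustrative sketch only; the names (`score`, `replacement_test`), the equality-based scoring, and the tolerance margin are assumptions of the sketch, not constructs from the paper.

```python
# Minimal sketch of a "replacement test" harness (illustrative only).
from typing import Callable, Sequence

Task = dict                          # a task instance: inputs plus expected outcome
System = Callable[[Task], object]    # a system maps a task to an output

def score(system: System, tasks: Sequence[Task]) -> float:
    """Fraction of tasks the system completes correctly."""
    correct = sum(1 for t in tasks if system(t) == t["expected"])
    return correct / len(tasks)

def replacement_test(incumbent: System, candidate: System,
                     tasks: Sequence[Task], tolerance: float = 0.0) -> bool:
    """Candidate passes if it performs at least as well as the incumbent,
    up to a tolerance margin (an assumption of this sketch)."""
    return score(candidate, tasks) >= score(incumbent, tasks) - tolerance

# Toy context: adding two numbers.
tasks = [{"a": a, "b": b, "expected": a + b} for a in range(3) for b in range(3)]
human = lambda t: t["a"] + t["b"]     # incumbent: always correct
machine = lambda t: t["a"] + t["b"]   # candidate: also correct here
print(replacement_test(human, machine, tasks))  # True
```

In practice the hard part, as the abstract notes, is defining the task, the context, and an adequate notion of "successful" performance, none of which reduce to a single scalar score as in this toy.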
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence [0.0]
A traditional approach to assessing emerging intelligence in the theory of intelligent systems is based on similarity to, or 'imitation' of, human-like actions and behaviors.
We argue that under some natural assumptions, developing intelligent systems will be able to form their own intents and objectives.
arXiv Detail & Related papers (2024-10-14T13:39:58Z)
- OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI [73.75520820608232]
We introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities.
These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage.
Our evaluations reveal that even advanced models like GPT-4o only achieve a 39.97% overall accuracy, illustrating current AI limitations in complex reasoning and multimodal integration.
arXiv Detail & Related papers (2024-06-18T16:20:53Z)
- Integration of cognitive tasks into artificial general intelligence test for large models [54.72053150920186]
We advocate for a comprehensive framework of cognitive science-inspired artificial general intelligence (AGI) tests.
The cognitive science-inspired AGI tests encompass the full spectrum of intelligence facets, including crystallized intelligence, fluid intelligence, social intelligence, and embodied intelligence.
arXiv Detail & Related papers (2024-02-04T15:50:42Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Software Testing of Generative AI Systems: Challenges and Opportunities [5.634825161148484]
I will explore the challenges posed by generative AI systems and discuss potential opportunities for future research in the field of testing.
I will touch on the specific characteristics of GenAI systems that make traditional testing techniques inadequate or insufficient.
arXiv Detail & Related papers (2023-09-07T08:35:49Z)
- Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models [83.63242931107638]
We propose four characteristics of generally intelligent agents.
We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations.
We conclude by outlining promising future research directions in the field of artificial general intelligence.
arXiv Detail & Related papers (2023-07-07T13:58:16Z)
- Reasoning-Based Software Testing [9.341830361844337]
Reasoning-Based Software Testing (RBST) is a new way of thinking about the testing problem as a causal reasoning task.
We claim that causal reasoning more naturally emulates the process a human would follow to "smartly" search the space.
Preliminary results reported in this paper are promising.
arXiv Detail & Related papers (2023-03-02T14:27:21Z)
- Mathematics, word problems, common sense, and artificial intelligence [0.0]
We discuss the capacities and limitations of current artificial intelligence (AI) technology to solve word problems that combine elementary knowledge with commonsense reasoning.
We review three approaches that have been developed, using AI natural language technology.
We argue that it is not clear whether these kinds of limitations will be important in developing AI technology for pure mathematical research.
arXiv Detail & Related papers (2023-01-23T21:21:39Z)
- Test and Evaluation Framework for Multi-Agent Systems of Autonomous Intelligent Agents [0.0]
We consider the challenges of developing a unifying test and evaluation framework for complex ensembles of cyber-physical systems with embedded artificial intelligence.
We propose a framework that incorporates test and evaluation throughout not only the development life cycle, but continues into operation as the system learns and adapts.
arXiv Detail & Related papers (2021-01-25T21:42:27Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may impact on a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.