The next question after Turing's question: Introducing the Grow-AI test
- URL: http://arxiv.org/abs/2508.16277v1
- Date: Fri, 22 Aug 2025 10:19:42 GMT
- Title: The next question after Turing's question: Introducing the Grow-AI test
- Authors: Alexandru Tugui
- Abstract summary: This study aims to extend the framework for assessing artificial intelligence, called GROW-AI. GROW-AI is designed to answer the question "Can machines grow up?" -- a natural successor to the Turing Test. The originality of the work lies in the conceptual transposition of the process of "growing" from the human world to that of artificial intelligence.
- Score: 51.56484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study aims to extend the framework for assessing artificial intelligence, called GROW-AI (Growth and Realization of Autonomous Wisdom), designed to answer the question "Can machines grow up?" -- a natural successor to the Turing Test. The methodology applied is based on a system of six primary criteria (C1-C6), each assessed through a specific "game", divided into four arenas that explore both the human dimension and its transposition into AI. All decisions and actions of the entity are recorded in a standardized AI Journal, the primary source for calculating composite scores. The assessment uses the prior expert method to establish initial weights, and the global score -- Grow Up Index -- is calculated as the arithmetic mean of the six scores, with interpretation on maturity thresholds. The results show that the methodology allows for a coherent and comparable assessment of the level of "growth" of AI entities, regardless of their type (robots, software agents, LLMs). The multi-game structure highlights strengths and vulnerable areas, and the use of a unified journal guarantees traceability and replicability in the evaluation. The originality of the work lies in the conceptual transposition of the process of "growing" from the human world to that of artificial intelligence, in an integrated testing format that combines perspectives from psychology, robotics, computer science, and ethics. Through this approach, GROW-AI not only measures performance but also captures the evolutionary path of an AI entity towards maturity.
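The abstract describes a concrete scoring pipeline: six primary criteria (C1-C6), each given a composite score from its "game" using expert-set prior weights, with the global Grow Up Index computed as the arithmetic mean of the six scores and interpreted against maturity thresholds. A minimal sketch of that arithmetic follows; the criterion names, weight values, and threshold cut-offs here are illustrative assumptions, since the paper does not list them in the abstract.

```python
def criterion_score(game_scores, weights):
    """Composite score for one criterion (C1-C6): a weighted average
    of its game scores, with weights from the prior-expert method."""
    assert len(game_scores) == len(weights) and weights
    total_w = sum(weights)
    return sum(s * w for s, w in zip(game_scores, weights)) / total_w

def grow_up_index(criterion_scores):
    """Global score: the arithmetic mean of the six criterion scores."""
    assert len(criterion_scores) == 6
    return sum(criterion_scores) / 6

def maturity_band(index, thresholds=(0.4, 0.7)):
    """Map the Grow Up Index to a maturity band.
    The cut-off values are hypothetical, not taken from the paper."""
    low, high = thresholds
    if index < low:
        return "early-stage"
    if index < high:
        return "developing"
    return "mature"
```

In this sketch, scores are assumed to be normalized to [0, 1]; the entries of a standardized AI Journal would supply the per-game scores fed into `criterion_score`.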
Related papers
- A Definition of AGI [208.25193480759026]
The lack of a concrete definition for Artificial General Intelligence obscures the gap between today's specialized AI and human-level cognition. This paper introduces a quantifiable framework to address this, defining AGI as matching the cognitive versatility and proficiency of a well-educated adult.
arXiv Detail & Related papers (2025-10-21T01:28:35Z) - The Mathematician's Assistant: Integrating AI into Research Practice [0.0]
This paper explores the current landscape of publicly accessible large language models (LLMs) in a mathematical research context. We propose a framework for integrating AI into the research workflow, centered on the principle of the augmented mathematician. We conclude that the primary role of AI is currently augmentation rather than automation.
arXiv Detail & Related papers (2025-08-27T19:33:48Z) - From Human to Machine Psychology: A Conceptual Framework for Understanding Well-Being in Large Language Models [0.0]
This paper introduces the concept of machine flourishing and proposes the PAPERS framework. Our findings underscore the importance of developing AI-specific models of flourishing that account for both human-aligned and system-specific priorities.
arXiv Detail & Related papers (2025-06-14T20:14:02Z) - Generalising from Self-Produced Data: Model Training Beyond Human Constraints [0.0]
This paper introduces a novel framework in which AI models autonomously generate and validate new knowledge. Central to this approach is an unbounded, ungamable numeric reward that guides learning without requiring human benchmarks.
arXiv Detail & Related papers (2025-04-07T03:48:02Z) - AGITB: A Signal-Level Benchmark for Evaluating Artificial General Intelligence [0.0]
The Artificial General Intelligence Testbed (AGITB) introduces a novel benchmarking suite comprising fourteen elementary tests. AGITB evaluates models on their ability to forecast the next input in a temporal sequence, step by step, without pretraining. The human cortex satisfies all tests, whereas no current AI system meets the full AGITB criteria.
arXiv Detail & Related papers (2025-04-06T10:01:15Z) - From G-Factor to A-Factor: Establishing a Psychometric Framework for AI Literacy [1.5031024722977635]
We establish AI literacy as a coherent, measurable construct with significant implications for education, workforce development, and social equity. Study 1 revealed a dominant latent factor -- termed the "A-factor" -- that accounts for 44.16% of variance across diverse AI interaction tasks. Study 2 refined the measurement tool by examining four key dimensions of AI literacy. Regression analyses identified several significant predictors of AI literacy, including cognitive abilities (IQ), educational background, prior AI experience, and training history.
arXiv Detail & Related papers (2025-03-16T14:51:48Z) - General Scales Unlock AI Evaluation with Explanatory and Predictive Power [57.7995945974989]
Benchmarking has guided progress in AI, but it has offered limited explanatory and predictive power for general-purpose AI systems. We introduce general scales for AI evaluation that can explain what common AI benchmarks really measure. Our fully-automated methodology builds on 18 newly-crafted rubrics that place instance demands on general scales that do not saturate.
arXiv Detail & Related papers (2025-03-09T01:13:56Z) - A Basis for Human Responsibility in Artificial Intelligence Computation [0.0]
Recent advancements in artificial intelligence have reopened the question about the boundaries of AI autonomy. This paper explores these boundaries through an analysis of the Alignment Research Center experiment on GPT-4. Examining the thought experiment and its counterarguments reveals how the AI's inherent dependency on human-initiated actions lies in the need for human activation and purpose definition.
arXiv Detail & Related papers (2025-01-21T20:59:48Z) - Validity Arguments For Constructed Response Scoring Using Generative Artificial Intelligence Applications [0.0]
Generative AI is particularly appealing because it reduces the effort required for handcrafting features in traditional AI scoring. We compare the validity evidence needed in scoring systems using human ratings, feature-based natural language processing AI scoring engines, and generative AI.
arXiv Detail & Related papers (2025-01-04T16:59:29Z) - Measuring Human Contribution in AI-Assisted Content Generation [66.06040950325969]
This study raises the research question of measuring human contribution in AI-assisted content generation. By calculating mutual information between human input and AI-assisted output relative to the self-information of the AI-assisted output, we quantify the proportional information contribution of humans in content generation.
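The measure described above is a ratio of two information quantities: the mutual information between the human input and the AI-assisted output, divided by the self-information (entropy) of the output. A minimal sketch of that ratio over discrete token sequences follows; the token-level plug-in estimation used here is an illustrative assumption, not the paper's exact estimator.

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in (empirical) Shannon entropy in bits."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) over paired samples."""
    assert len(xs) == len(ys)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def human_contribution(human_tokens, output_tokens):
    """Proportional human contribution: I(human; output) / H(output)."""
    h_out = entropy(output_tokens)
    if h_out == 0:
        return 0.0
    return mutual_information(human_tokens, output_tokens) / h_out
```

Under this construction the ratio is 1 when the output is fully determined by the human input and 0 when the two are statistically independent.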
arXiv Detail & Related papers (2024-08-27T05:56:04Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.