Intelligence at the Edge of Chaos
- URL: http://arxiv.org/abs/2410.02536v3
- Date: Sat, 01 Mar 2025 13:21:09 GMT
- Title: Intelligence at the Edge of Chaos
- Authors: Shiyang Zhang, Aakash Patel, Syed A Rizvi, Nianchen Liu, Sizhuang He, Amin Karbasi, Emanuele Zappala, David van Dijk
- Abstract summary: We investigate how the complexity of rule-based systems influences the capabilities of models trained to predict these rules. Our findings reveal that rules with higher complexity lead to models exhibiting greater intelligence. We conjecture that intelligence arises from the ability to predict complexity and that creating intelligence may require only exposure to complexity.
- Score: 24.864145150537855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore the emergence of intelligent behavior in artificial systems by investigating how the complexity of rule-based systems influences the capabilities of models trained to predict these rules. Our study focuses on elementary cellular automata (ECA), simple yet powerful one-dimensional systems that generate behaviors ranging from trivial to highly complex. By training distinct Large Language Models (LLMs) on different ECAs, we evaluated the relationship between the complexity of the rules' behavior and the intelligence exhibited by the LLMs, as reflected in their performance on downstream tasks. Our findings reveal that rules with higher complexity lead to models exhibiting greater intelligence, as demonstrated by their performance on reasoning and chess move prediction tasks. Both uniform and periodic systems, and often also highly chaotic systems, resulted in poorer downstream performance, highlighting a sweet spot of complexity conducive to intelligence. We conjecture that intelligence arises from the ability to predict complexity and that creating intelligence may require only exposure to complexity.
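The ECA setup the abstract describes can be sketched in a few lines. The following is an illustrative sketch only, not the paper's code: the function names (`eca_step`, `evolve`) are mine, and the choice of periodic boundaries is an assumption the abstract does not specify. It uses Wolfram's standard rule numbering, under which rule 110 is a canonical "complex" rule of the kind the paper finds most conducive to downstream intelligence.

```python
# Minimal sketch of an elementary cellular automaton (ECA), the kind of
# rule-based system the paper trains LLMs to predict.
# Wolfram's numbering: bit k of the rule number gives the next state of a
# cell whose 3-cell neighborhood (left, center, right) reads k in binary.
# NOTE: periodic boundary conditions are an assumption, not from the paper.

def eca_step(state, rule=110):
    """Apply one ECA update to a binary tuple `state` (periodic boundary)."""
    n = len(state)
    table = [(rule >> k) & 1 for k in range(8)]  # neighborhood -> next cell
    return tuple(
        table[(state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]]
        for i in range(n)
    )

def evolve(state, rule=110, steps=10):
    """Return the list of states produced by `steps` successive updates."""
    history = [state]
    for _ in range(steps):
        state = eca_step(state, rule)
        history.append(state)
    return history
```

A trajectory such as `evolve((0,)*7 + (1,) + (0,)*8, rule=110)` could then be serialized into token sequences for the kind of next-state prediction training the abstract describes.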
Related papers
- A Mathematical Theory of Agency and Intelligence [0.0]
We show how much of the total information a system deploys is actually shared between its observations, actions, and outcomes. We prove that this shared fraction, which we term bipredictability (P), is intrinsic to any interaction and derivable from first principles. We demonstrate a feedback architecture that monitors P in real time, establishing a prerequisite for adaptive, resilient AI.
arXiv Detail & Related papers (2026-02-26T01:26:21Z)
- Modularity is the Bedrock of Natural and Artificial Intelligence [51.60091394435895]
Modularity has been shown to be critical for supporting efficient learning and strong generalization. Despite its role in natural intelligence and its demonstrated benefits across a range of seemingly disparate AI subfields, modularity remains relatively underappreciated in mainstream AI research. In particular, we examine what computational advantages modularity provides, how it has emerged as a solution across several AI research areas, and how modularity can help bridge the gap between natural and artificial intelligence.
arXiv Detail & Related papers (2026-02-21T21:47:09Z)
- Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence [55.07411490538404]
We propose a new perspective for approaching artificial general intelligence (AGI) through an intelligence foundation model (IFM). IFM aims to acquire the underlying mechanisms of intelligence by learning directly from diverse intelligent behaviors.
arXiv Detail & Related papers (2025-11-13T09:28:41Z)
- A Comprehensive Review of AI Agents: Transforming Possibilities in Technology and Beyond [3.96715377510494]
This review aims to guide the next generation of AI agent systems toward more robust, adaptable, and trustworthy autonomous intelligence. We synthesize insights from cognitive science-inspired models, hierarchical reinforcement learning frameworks, and large language model-based reasoning. We discuss the pressing ethical, safety, and interpretability concerns associated with deploying these agents in real-world scenarios.
arXiv Detail & Related papers (2025-08-16T07:38:45Z)
- Learning and Reasoning with Model-Grounded Symbolic Artificial Intelligence Systems [7.000073566770884]
Neurosymbolic artificial intelligence (AI) systems combine neural network and classical symbolic AI mechanisms. We develop novel learning and reasoning approaches that preserve structural similarities to traditional learning and reasoning paradigms.
arXiv Detail & Related papers (2025-07-14T01:34:05Z)
- Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence [59.07578850674114]
Sound deductive reasoning is an indisputably desirable aspect of general intelligence. It is well documented that even the most advanced frontier systems regularly and consistently falter on easily solvable reasoning tasks. We argue that their unsound behavior is a consequence of the statistical learning approach powering their development.
arXiv Detail & Related papers (2025-06-30T14:37:50Z)
- Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems [133.45145180645537]
The advent of large language models (LLMs) has catalyzed a transformative shift in artificial intelligence.
As these agents increasingly drive AI research and practical applications, their design, evaluation, and continuous improvement present intricate, multifaceted challenges.
This survey provides a comprehensive overview, framing intelligent agents within a modular, brain-inspired architecture.
arXiv Detail & Related papers (2025-03-31T18:00:29Z)
- Lessons from complexity theory for AI governance [1.6122472145662998]
Complexity theory can help illuminate features of AI that pose central challenges for policymakers.
We examine how efforts to govern AI are marked by deep uncertainty.
We propose a set of complexity-compatible principles concerning the timing and structure of AI governance.
arXiv Detail & Related papers (2025-01-07T07:56:40Z)
- Over the Edge of Chaos? Excess Complexity as a Roadblock to Artificial General Intelligence [4.901955678857442]
We posited the existence of critical points, akin to phase transitions in complex systems, where AI performance might plateau or regress into instability upon exceeding a critical complexity threshold.
Our simulations demonstrated how increasing the complexity of the AI system could exceed an upper criticality threshold, leading to unpredictable performance behaviours.
arXiv Detail & Related papers (2024-07-04T05:46:39Z)
- Integration of cognitive tasks into artificial general intelligence test for large models [54.72053150920186]
We advocate for a comprehensive framework of cognitive science-inspired artificial general intelligence (AGI) tests.
The cognitive science-inspired AGI tests encompass the full spectrum of intelligence facets, including crystallized intelligence, fluid intelligence, social intelligence, and embodied intelligence.
arXiv Detail & Related papers (2024-02-04T15:50:42Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Non-equilibrium physics: from spin glasses to machine and neural learning [0.0]
Disordered many-body systems exhibit a wide range of emergent phenomena across different scales.
We aim to characterize such emergent intelligence in disordered systems through statistical physics.
We uncover relationships between learning mechanisms and physical dynamics that could serve as guiding principles for designing intelligent systems.
arXiv Detail & Related papers (2023-08-03T04:56:47Z)
- Balancing Explainability-Accuracy of Complex Models [8.402048778245165]
We introduce a new approach for complex models based on correlation impact.
We propose approaches for both scenarios of independent features and dependent features.
We provide an upper bound of the complexity of our proposed approach for the dependent features.
arXiv Detail & Related papers (2023-05-23T14:20:38Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Collective Intelligence for Deep Learning: A Survey of Recent Developments [11.247894240593691]
We will provide a historical context of neural network research's involvement with complex systems.
We will highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence.
arXiv Detail & Related papers (2021-11-29T08:39:32Z)
- From LSAT: The Progress and Challenges of Complex Reasoning [56.07448735248901]
We study the three challenging and domain-general tasks of the Law School Admission Test (LSAT), including analytical reasoning, logical reasoning and reading comprehension.
We propose a hybrid reasoning system to integrate these three tasks and achieve impressive overall performance on the LSAT tests.
arXiv Detail & Related papers (2021-08-02T05:43:03Z)
- Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems [71.14339738190202]
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms demonstrate better generalization performance of agents' learning models than conventional federated learning (FL) algorithms.
arXiv Detail & Related papers (2020-07-07T08:34:48Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.