The Less Intelligent the Elements, the More Intelligent the Whole. Or, Possibly Not?
- URL: http://arxiv.org/abs/2012.12689v5
- Date: Thu, 06 Nov 2025 18:28:25 GMT
- Title: The Less Intelligent the Elements, the More Intelligent the Whole. Or, Possibly Not?
- Authors: Guido Fioretti
- Abstract summary: I approach this debate by endowing the prey and predators of the Lotka-Volterra model with behavioral algorithms characterized by different levels of sophistication. The main finding is that by endowing both prey and predators with the capability of making predictions based on linear extrapolation, a novel sort of dynamic equilibrium appears.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The agent-based modelling community has a debate on how "intelligent" artificial agents should be, and in what ways their local intelligence relates to the emergence of a collective intelligence. I approach this debate by endowing the prey and predators of the Lotka-Volterra model with behavioral algorithms characterized by different levels of sophistication. The main finding is that by endowing both prey and predators with the capability of making predictions based on linear extrapolation, a novel sort of dynamic equilibrium appears, in which both species co-exist while both populations grow indefinitely. While this broadly confirms that, in general, relatively simple agents favor the emergence of complex collective behavior, it also suggests that one fundamental mechanism is that the capability of individuals to take first-order derivatives of one another's behavior can allow the collective computation of derivatives of any order.
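The mechanism in the abstract can be illustrated with a minimal sketch (my own illustration, not the paper's implementation): a discrete-time Lotka-Volterra system, dx/dt = ax - bxy and dy/dt = -cy + dxy, in which each species responds to a linear extrapolation of the other's population rather than its current value. All parameter values, the look-ahead horizon, and the Euler discretization are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): Euler-integrated Lotka-Volterra
# dynamics, dx/dt = a*x - b*x*y and dy/dt = -c*y + d*x*y, where each species
# reacts to a linear extrapolation of the other's population instead of its
# current value. All parameter values are illustrative assumptions.

def simulate(steps=500, dt=0.01, horizon=0.5,
             a=1.0, b=0.5, c=1.0, d=0.2, x0=2.0, y0=1.0):
    x, y = x0, y0                 # prey (x) and predator (y) populations
    x_prev, y_prev = x0, y0
    history = [(x, y)]
    for _ in range(steps):
        # First-order prediction: current value plus the estimated
        # derivative times a look-ahead horizon.
        x_hat = max(x + horizon * (x - x_prev) / dt, 0.0)
        y_hat = max(y + horizon * (y - y_prev) / dt, 0.0)
        # Each species responds to the *predicted* size of the other.
        dx = (a * x - b * x * y_hat) * dt
        dy = (-c * y + d * x_hat * y) * dt
        x_prev, y_prev = x, y
        x, y = max(x + dx, 0.0), max(y + dy, 0.0)
        history.append((x, y))
    return history

trajectory = simulate()
print(trajectory[-1])  # final (prey, predator) pair
```

Replacing `x_hat` and `y_hat` with the current populations recovers a plain discretization of the classical model, which makes the effect of first-order prediction easy to isolate.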
Related papers
- Embedded Universal Predictive Intelligence: a coherent framework for multi-agent learning [57.23345786304694]
We introduce a framework for prospective learning and embedded agency centered on self-prediction. We show that in multi-agent settings, self-prediction enables agents to reason about others running similar algorithms. We extend the theory of AIXI, and study universally intelligent embedded agents which start from a Solomonoff prior.
arXiv Detail & Related papers (2025-11-27T08:46:48Z)
- P: A Universal Measure of Predictive Intelligence [0.0]
There is no commonly agreed definition of the intelligence that AI systems are said to possess. No one has developed a practical measure that would enable us to compare the intelligence of humans, animals and AIs on a single ratio scale. This paper sets out a new universal measure of intelligence that is based on the hypothesis that prediction is the most important component of intelligence.
arXiv Detail & Related papers (2025-05-30T10:05:54Z)
- The Society of HiveMind: Multi-Agent Optimization of Foundation Model Swarms to Unlock the Potential of Collective Intelligence [6.322831694506287]
We develop a framework that orchestrates the interaction between multiple AI foundation models.
We find that the framework provides a negligible benefit on tasks that mainly require real-world knowledge.
On the other hand, we observe a significant improvement on tasks that require intensive logical reasoning.
arXiv Detail & Related papers (2025-03-07T14:45:03Z)
- EgoAgent: A Joint Predictive Agent Model in Egocentric Worlds [107.62381002403814]
This paper addresses the task of learning an agent model behaving like humans, which can jointly perceive, predict, and act in egocentric worlds.
We propose a joint predictive agent model, named EgoAgent, that simultaneously learns to represent the world, predict future states, and take reasonable actions within a single transformer.
arXiv Detail & Related papers (2025-02-09T11:28:57Z)
- Can Language Models Learn to Skip Steps? [59.84848399905409]
We study the ability to skip steps in reasoning.
Unlike humans, who may skip steps to enhance efficiency or to reduce cognitive load, models do not possess such motivations.
Our work presents the first exploration into human-like step-skipping ability.
arXiv Detail & Related papers (2024-11-04T07:10:24Z)
- The Hive Mind is a Single Reinforcement Learning Agent [13.347362865770279]
This paper draws from the well-established collective decision-making model of nest-site selection in swarms of honey bees. We show that the emergent distributed cognition is equivalent to a single online reinforcement learning (RL) agent interacting with many parallel environments. Our analysis implies that a group of cognition-limited organisms can be on par with a more complex, reinforcement-enabled entity.
arXiv Detail & Related papers (2024-10-23T02:49:37Z)
- On a Functional Definition of Intelligence [0.0]
Without an agreed-upon definition of intelligence, asking "is this system intelligent?" is an untestable question.
Most work on precisely capturing what we mean by "intelligence" has come from the fields of philosophy, psychology, and cognitive science.
We present an argument for a purely functional, black-box definition of intelligence, distinct from how that intelligence is actually achieved.
arXiv Detail & Related papers (2023-12-15T05:46:49Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Predator-prey survival pressure is sufficient to evolve swarming behaviors [22.69193229479221]
We propose a minimal predator-prey coevolution framework based on mixed cooperative-competitive multiagent reinforcement learning.
Surprisingly, our analysis of this approach reveals an unexpectedly rich diversity of emergent behaviors for both prey and predators.
arXiv Detail & Related papers (2023-08-24T08:03:11Z)
- The Nature of Intelligence [0.0]
The essence of the intelligence exhibited by both humans and AI is unknown.
We show that the nature of intelligence is a series of mathematically functional processes that minimize system entropy.
This essay should be a starting point for a deeper understanding of the universe and us as human beings.
arXiv Detail & Related papers (2023-07-20T23:11:59Z)
- Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement Learning [5.314466196448188]
We present a method of grounding semantically meaningful, human-interpretable beliefs within policies modeled by deep networks.
We propose that the ability of each agent to predict the beliefs of the other agents can be used as an intrinsic reward signal for multi-agent reinforcement learning.
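As a toy illustration of that idea (not the paper's implementation), an intrinsic reward can be defined as the negative error of one agent's prediction of another agent's belief vector; the belief representation and the squared-error form here are assumptions for illustration.

```python
# Hypothetical sketch (not the paper's method): an intrinsic reward equal to
# the negative mean squared error between agent i's prediction of agent j's
# belief vector and agent j's actual belief. Shapes and values are illustrative.

def intrinsic_reward(predicted_belief, actual_belief):
    """The better one agent models another's belief, the higher the reward."""
    err = [(p - a) ** 2 for p, a in zip(predicted_belief, actual_belief)]
    return -sum(err) / len(err)

# Agent i's prediction of agent j's belief vs. agent j's actual belief.
r_good = intrinsic_reward([0.9, 0.1], [1.0, 0.0])  # close prediction
r_bad = intrinsic_reward([0.1, 0.9], [1.0, 0.0])   # poor prediction
print(r_good, r_bad)
```

Adding such a term to the environment reward encourages each agent to maintain an accurate model of its peers, which is the mindreading pressure the summary describes.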
arXiv Detail & Related papers (2023-07-03T17:07:18Z)
- Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms [14.404339094377319]
We seek to leverage human input to automatically discover a taxonomy of collective behaviors that can emerge from a particular multi-agent system.
Our proposed approach adapts to user preferences by learning a similarity space over swarm collective behaviors.
We test our approach in simulation on two robot capability models and show that our methods consistently discover a richer set of emergent behaviors than prior work.
arXiv Detail & Related papers (2023-04-25T15:18:06Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from the past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias to model the character traits of agents and hence improves mindreading ability.
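The fast-weights modulation described above can be sketched as follows (a hypothetical toy, not the paper's architecture): a trait vector inferred from an agent's past positions multiplicatively gates the weights of a small linear predictor. The trait encoder, the sigmoid squashing, and all dimensions are illustrative assumptions.

```python
import math

# Hypothetical sketch (not the paper's architecture): a latent "trait" vector,
# inferred from an agent's past trajectory, multiplicatively rescales the
# weights of a small linear predictor -- a fast-weights style modulation.

def infer_trait(trajectory):
    """Toy trait encoder: summarize past 2-D positions by their mean step,
    squashed to (0, 2) so it can act as a multiplicative gate."""
    steps = [(b[0] - a[0], b[1] - a[1])
             for a, b in zip(trajectory, trajectory[1:])]
    mean = [sum(s[i] for s in steps) / len(steps) for i in range(2)]
    return [2.0 / (1.0 + math.exp(-m)) for m in mean]  # 2 * sigmoid(m)

def predict_next(state, slow_weights, trait):
    """Fast-weights modulation: effective weight = slow weight * trait gate."""
    return [sum(slow_weights[i][j] * trait[j] * state[j] for j in range(2))
            for i in range(2)]

past = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]  # agent moving up and to the right
trait = infer_trait(past)
W = [[1.0, 0.0], [0.0, 1.0]]                 # identity slow weights
print(predict_next(past[-1], W, trait))
```

The point of the multiplicative form is that the same slow weights yield different predictors for different agents, with the trait vector acting as a per-agent inductive bias.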
arXiv Detail & Related papers (2022-04-17T11:21:18Z)
- A World-Self Model Towards Understanding Intelligence [0.0]
We will compare human and artificial intelligence, and propose that a certain aspect of human intelligence is the key to connect perception and cognition.
We will present the broader idea of "concept", the principles and mathematical frameworks of the new World-Self Model (WSM) of intelligence, and finally a unified general framework of intelligence based on WSM.
arXiv Detail & Related papers (2022-03-25T16:42:23Z)
- Reward is not enough: can we liberate AI from the reinforcement learning paradigm? [0.0]
Reward is not enough to explain many activities associated with natural and artificial intelligence.
Complexities of intelligent behaviour are not simply second-order complications on top of reward maximisation.
arXiv Detail & Related papers (2022-02-03T18:31:48Z)
- Heterogeneous-Agent Trajectory Forecasting Incorporating Class Uncertainty [54.88405167739227]
We present HAICU, a method for heterogeneous-agent trajectory forecasting that explicitly incorporates agents' class probabilities.
We additionally present PUP, a new challenging real-world autonomous driving dataset.
We demonstrate that incorporating class probabilities in trajectory forecasting significantly improves performance in the face of uncertainty.
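The idea of incorporating class probabilities can be sketched as follows (an illustrative toy, not HAICU itself): per-class trajectory predictions are mixed according to the agent's class distribution. The function names and per-class forecasts are assumptions for illustration.

```python
# Illustrative sketch (not HAICU itself): folding an agent's class
# probabilities into a forecast by mixing per-class trajectory predictions.

def mixed_forecast(class_probs, class_trajectories):
    """Expected trajectory under class uncertainty: a probability-weighted
    sum of the trajectory each class-conditional model predicts."""
    horizon = len(next(iter(class_trajectories.values())))
    mixed = [0.0] * horizon
    for cls, prob in class_probs.items():
        for t, pos in enumerate(class_trajectories[cls]):
            mixed[t] += prob * pos
    return mixed

probs = {"pedestrian": 0.7, "cyclist": 0.3}
trajs = {"pedestrian": [1.0, 2.0, 3.0],  # slow agent, per-step positions
         "cyclist":    [2.0, 4.0, 6.0]}  # fast agent
print(mixed_forecast(probs, trajs))
```

Compared with committing to the most likely class, the mixture hedges the forecast when the classifier is uncertain, which is the failure mode the summary highlights.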
arXiv Detail & Related papers (2021-04-26T10:28:34Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may impact on a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- A Generic Model for Swarm Intelligence and Its Validations [0.456877715768796]
A contradiction-centric model for swarm intelligence is proposed. The model hypothesizes that the emergence of swarm intelligence is rooted in the development of individuals' internal contradictions. Five swarm intelligence systems are studied to illustrate its broad applicability.
arXiv Detail & Related papers (2017-12-12T09:25:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.