A Generic Model for Swarm Intelligence and Its Validations
- URL: http://arxiv.org/abs/1712.04182v3
- Date: Fri, 17 Jan 2025 06:56:43 GMT
- Title: A Generic Model for Swarm Intelligence and Its Validations
- Authors: Wenpin Jiao
- Abstract summary: A contradiction-centric model for swarm intelligence is proposed. The model hypothesizes that the emergence of swarm intelligence is rooted in the development of individuals' internal contradictions. Five swarm intelligence systems are studied to illustrate its broad applicability.
- Score: 0.456877715768796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The modeling of emergent swarm intelligence constitutes a major challenge and it has been tackled in a number of different ways. However, existing approaches fail to capture the nature of swarm intelligence and they are either too abstract for practical application or not generic enough to describe the various types of emergence phenomena. In this paper, a contradiction-centric model for swarm intelligence is proposed, in which individuals determine their behaviors based on their internal contradictions whilst they associate and interact to update their contradictions. The model hypothesizes that 1) the emergence of swarm intelligence is rooted in the development of individuals' internal contradictions and the interactions taking place between individuals and the environment, and 2) swarm intelligence is essentially a combinative reflection of the configurations of individuals' internal contradictions and the distributions of these contradictions across individuals. The model is formally described and five swarm intelligence systems are studied to illustrate its broad applicability. The studies confirm the generic character of the model and its effectiveness for describing the emergence of various kinds of swarm intelligence; and they also demonstrate that the model is straightforward to apply, without the need for complicated computations.
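The abstract's core mechanism (individuals acting on internal contradictions and updating them through interaction) can be sketched as a toy simulation. This is purely illustrative: the `Agent` class, the explore/exploit tension, and the update rule below are assumptions for the sketch, not the paper's actual formalism.

```python
import random

class Agent:
    """Toy agent driven by one internal contradiction: a tension
    between an 'explore' drive and an 'exploit' drive."""
    def __init__(self):
        # Contradiction state: intensities of the two opposing aspects.
        self.explore = random.random()
        self.exploit = random.random()

    def behave(self):
        # The dominant aspect of the contradiction determines behavior.
        return "explore" if self.explore > self.exploit else "exploit"

    def interact(self, other, rate=0.1):
        # Interaction develops the contradiction: intensities shift
        # toward the partner's, a stand-in for mutual influence.
        self.explore += rate * (other.explore - self.explore)
        self.exploit += rate * (other.exploit - self.exploit)

def simulate(n_agents=50, steps=200, seed=0):
    random.seed(seed)
    swarm = [Agent() for _ in range(n_agents)]
    for _ in range(steps):
        a, b = random.sample(swarm, 2)
        a.interact(b)
        b.interact(a)
    # "Swarm intelligence" is read off the distribution of
    # contradiction configurations across individuals.
    behaviors = [agent.behave() for agent in swarm]
    return behaviors.count("explore") / n_agents

if __name__ == "__main__":
    print(f"fraction exploring after interactions: {simulate():.2f}")
```

In this sketch the swarm-level outcome (the fraction of explorers) emerges only from pairwise contradiction updates, mirroring the model's claim that no complicated global computation is needed.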
Related papers
- Reasoning aligns language models to human cognition [12.07126784684808]
We introduce an active probabilistic reasoning task that cleanly separates sampling (actively acquiring evidence) from inference (integrating evidence toward a decision). Benchmarking humans and a broad set of contemporary large language models against near-optimal reference policies reveals a consistent pattern. This model places humans and models in a shared low-dimensional cognitive space, reproduces behavioral signatures across agents, and shows how chain-of-thought shifts language models toward human-like regimes of evidence accumulation and belief-to-choice mapping.
arXiv Detail & Related papers (2026-02-09T14:13:39Z) - Reasoning Models Generate Societies of Thought [9.112083442162671]
We show that enhanced reasoning emerges from simulating multi-agent-like interactions. We find that reasoning models like DeepSeek-R1 and QwQ-32B exhibit much greater perspective diversity than instruction-tuned models.
arXiv Detail & Related papers (2026-01-15T19:52:33Z) - Embedded Universal Predictive Intelligence: a coherent framework for multi-agent learning [57.23345786304694]
We introduce a framework for prospective learning and embedded agency centered on self-prediction. We show that in multi-agent settings, self-prediction enables agents to reason about others running similar algorithms. We extend the theory of AIXI, and study universally intelligent embedded agents which start from a Solomonoff prior.
arXiv Detail & Related papers (2025-11-27T08:46:48Z) - EgoAgent: A Joint Predictive Agent Model in Egocentric Worlds [107.62381002403814]
This paper addresses the task of learning an agent model behaving like humans, which can jointly perceive, predict, and act in egocentric worlds.
We propose a joint predictive agent model, named EgoAgent, that simultaneously learns to represent the world, predict future states, and take reasonable actions within a single transformer.
arXiv Detail & Related papers (2025-02-09T11:28:57Z) - The Hive Mind is a Single Reinforcement Learning Agent [13.347362865770279]
This paper draws from the well-established collective decision-making model of nest-site selection in swarms of honey bees. We show that the emergent distributed cognition is equivalent to a single online reinforcement learning (RL) agent interacting with many parallel environments. Our analysis implies that a group of cognition-limited organisms can be on par with a more complex, reinforcement-enabled entity.
arXiv Detail & Related papers (2024-10-23T02:49:37Z) - Visual-O1: Understanding Ambiguous Instructions via Multi-modal Multi-turn Chain-of-thoughts Reasoning [53.45295657891099]
This paper proposes Visual-O1, a multi-modal multi-turn chain-of-thought reasoning framework.
It simulates human multi-modal multi-turn reasoning, providing instantial experience for highly intelligent models.
Our work highlights the potential of artificial intelligence to work like humans in real-world scenarios with uncertainty and ambiguity.
arXiv Detail & Related papers (2024-10-04T11:18:41Z) - Position: Stop Making Unscientific AGI Performance Claims [6.343515088115924]
Developments in the field of Artificial Intelligence (AI) have created a 'perfect storm' for observing 'sparks' of Artificial General Intelligence (AGI).
We argue and empirically demonstrate that the finding of meaningful patterns in latent spaces of models cannot be seen as evidence in favor of AGI.
We conclude that both the methodological setup and common public image of AI are ideal for the misinterpretation that correlations between model representations and some variables of interest are 'caused' by the model's understanding of underlying 'ground truth' relationships.
arXiv Detail & Related papers (2024-02-06T12:42:21Z) - Designing Ecosystems of Intelligence from First Principles [34.429740648284685]
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond).
Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants.
This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence.
arXiv Detail & Related papers (2022-12-02T18:24:06Z) - Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
On two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z) - Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z) - The Less Intelligent the Elements, the More Intelligent the Whole. Or, Possibly Not? [0.0]
I approach this debate by endowing the prey and predators of the Lotka-Volterra model with behavioral algorithms characterized by different levels of sophistication. The main finding is that by endowing both prey and predators with the capability of making predictions based on linear extrapolation, a novel sort of dynamic equilibrium appears.
arXiv Detail & Related papers (2020-12-23T14:19:49Z) - A Bayesian Account of Measures of Interpretability in Human-AI Interaction [34.99424576619341]
Existing approaches for the design of interpretable agent behavior consider different measures of interpretability in isolation.
We propose a revised model where all these behaviors can be meaningfully modeled together.
We highlight interesting consequences of this unified model and motivate them through the results of a user study.
arXiv Detail & Related papers (2020-11-22T03:28:28Z) - Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
arXiv Detail & Related papers (2020-06-19T12:00:05Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations [65.05561023880351]
Adversarial examples are malicious inputs crafted to induce misclassification.
This paper studies a complementary failure mode, invariance-based adversarial examples.
We show that defenses against sensitivity-based attacks actively harm a model's accuracy on invariance-based attacks.
arXiv Detail & Related papers (2020-02-11T18:50:23Z) - Variational Autoencoders for Opponent Modeling in Multi-Agent Systems [9.405879323049659]
Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment.
In this work, we are interested in controlling one agent in a multi-agent system and successfully learn to interact with the other agents that have fixed policies.
Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system.
arXiv Detail & Related papers (2020-01-29T13:38:59Z)
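Among the entries above, the copula-based factorization from "Multi-Agent Imitation Learning with Copulas" can be illustrated with a minimal, hypothetical sketch: a Gaussian copula supplies the dependence structure between two agents, and each agent's uniform is then pushed through its own (marginal) action distribution. The sampling scheme and toy action sets are assumptions for illustration, not the paper's implementation.

```python
import random
from statistics import NormalDist

def gaussian_copula_sample(rho, n=1000, seed=0):
    """Draw pairs of correlated uniforms (U1, U2) via a Gaussian copula
    with correlation rho. Coordination lives entirely in the copula;
    each agent's behavior comes from its own marginal afterwards."""
    rng = random.Random(seed)
    nd = NormalDist()
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        # Standard construction of a correlated second Gaussian.
        z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
        # The Gaussian CDF maps each variate to a uniform in [0, 1].
        pairs.append((nd.cdf(z1), nd.cdf(z2)))
    return pairs

def to_action(u, n_actions):
    """Toy marginal: map a uniform to one of n_actions discrete actions."""
    return min(int(u * n_actions), n_actions - 1)

if __name__ == "__main__":
    joint = gaussian_copula_sample(rho=0.9)
    # Agents have different action spaces, yet their choices co-vary
    # because the uniforms share the copula's dependence structure.
    actions = [(to_action(u1, 3), to_action(u2, 5)) for u1, u2 in joint]
    print(actions[:5])
```

The point of the factorization is visible in the sketch: the marginals (`to_action` with different action counts) can be learned per agent, while a single copula fully captures how the agents coordinate.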
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.