Designing Ecosystems of Intelligence from First Principles
- URL: http://arxiv.org/abs/2212.01354v2
- Date: Thu, 11 Jan 2024 18:09:36 GMT
- Title: Designing Ecosystems of Intelligence from First Principles
- Authors: Karl J Friston, Maxwell J D Ramstead, Alex B Kiefer, Alexander
Tschantz, Christopher L Buckley, Mahault Albarracin, Riddhi J Pitliya, Conor
Heins, Brennan Klein, Beren Millidge, Dalton A R Sakthivadivel, Toby St Clere
Smithe, Magnus Koudahl, Safae Essafi Tremblay, Capm Petersen, Kaiser Fung,
Jason G Fox, Steven Swanson, Dan Mapes, Gabriel René
- Abstract summary: This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond).
Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants.
This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence.
- Score: 34.429740648284685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This white paper lays out a vision of research and development in the field
of artificial intelligence for the next decade (and beyond). Its denouement is
a cyber-physical ecosystem of natural and synthetic sense-making, in which
humans are integral participants -- what we call "shared intelligence". This
vision is premised on active inference, a formulation of adaptive behavior that
can be read as a physics of intelligence, and which inherits from the physics
of self-organization. In this context, we understand intelligence as the
capacity to accumulate evidence for a generative model of one's sensed world --
also known as self-evidencing. Formally, this corresponds to maximizing
(Bayesian) model evidence, via belief updating over several scales: i.e.,
inference, learning, and model selection. Operationally, this self-evidencing
can be realized via (variational) message passing or belief propagation on a
factor graph. Crucially, active inference foregrounds an existential imperative
of intelligent systems; namely, curiosity or the resolution of uncertainty.
This same imperative underwrites belief sharing in ensembles of agents, in
which certain aspects (i.e., factors) of each agent's generative world model
provide a common ground or frame of reference. Active inference plays a
foundational role in this ecology of belief sharing -- leading to a formal
account of collective intelligence that rests on shared narratives and goals.
We also consider the kinds of communication protocols that must be developed to
enable such an ecosystem of intelligences and motivate the development of a
shared hyper-spatial modeling language and transaction protocol, as a first --
and key -- step towards such an ecology.
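To make the abstract's formal claims concrete, the following is a minimal sketch of discrete active inference under stated assumptions: a hypothetical two-state, two-observation, two-action world with the conventional A (likelihood), B (transition), and C (preference) arrays. The names and numbers are illustrative, not the paper's implementation. State inference realizes self-evidencing as a softmax over log-likelihood plus log-prior, and action selection minimizes expected free energy, whose ambiguity term expresses the curiosity (resolution-of-uncertainty) imperative the abstract foregrounds.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Generative model (all values illustrative)
A = np.array([[0.9, 0.1],           # p(o | s): likelihood of each observation
              [0.1, 0.9]])          #   given each hidden state
B = [np.eye(2),                     # p(s' | s, a=0): "stay"
     np.array([[0.0, 1.0],
               [1.0, 0.0]])]        # p(s' | s, a=1): "switch"
C = softmax(np.array([2.0, 0.0]))   # prior preferences over observations

def infer_state(obs, prior):
    # Self-evidencing for a single discrete factor: the free-energy-minimizing
    # posterior is exact Bayes, softmax(log-likelihood + log-prior).
    return softmax(np.log(A[obs] + 1e-16) + np.log(prior + 1e-16))

def expected_free_energy(qs, a):
    # G(a) = risk + ambiguity. Risk is the divergence of predicted outcomes
    # from preferences; ambiguity (expected entropy of the likelihood) is
    # the uncertainty that curiosity acts to resolve.
    qs_next = B[a] @ qs                                  # predicted state
    qo = A @ qs_next                                     # predicted outcomes
    risk = qo @ (np.log(qo + 1e-16) - np.log(C + 1e-16))
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)         # entropy of p(o|s)
    return risk + qs_next @ H_A

# Perception-action loop over an arbitrary observation stream.
prior = np.array([0.5, 0.5])
for obs in [0, 0, 1]:
    qs = infer_state(obs, prior)
    G = [expected_free_energy(qs, a) for a in range(len(B))]
    act = int(np.argmin(G))
    prior = B[act] @ qs                                  # empirical prior
    print(f"obs={obs} qs={np.round(qs, 3)} G={np.round(G, 3)} act={act}")
```

On richer models, these local softmax updates generalize to the (variational) message passing on factor graphs mentioned in the abstract, with learning and model selection cast as slower belief updates over parameters and model structure; libraries such as pymdp implement this pattern at scale.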
Related papers
- The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence [0.0]
A traditional approach to assessing emerging intelligence in the theory of intelligent systems is based on similarity, i.e., the 'imitation' of human-like actions and behaviors.
We argue that, under some natural assumptions, developing intelligent systems will be able to form their own intents and objectives.
arXiv Detail & Related papers (2024-10-14T13:39:58Z) - SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning [0.0]
A key challenge in artificial intelligence is the creation of systems capable of autonomously advancing scientific understanding.
We present SciAgents, an approach that leverages three core concepts.
The framework autonomously generates and refines research hypotheses, elucidating underlying mechanisms, design principles, and unexpected material properties.
Our case studies demonstrate scalable capabilities to combine generative AI, ontological representations, and multi-agent modeling, harnessing a 'swarm of intelligence' similar to biological systems.
arXiv Detail & Related papers (2024-09-09T12:25:10Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - World Models and Predictive Coding for Cognitive and Developmental
Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts so as to model its own dynamics and to control behavior in its environment (a minimal sketch of this prediction-error loop appears after this list).
arXiv Detail & Related papers (2023-01-14T06:38:14Z) - Intrinsically Motivated Learning of Causal World Models [0.0]
A promising direction is to build world models capturing the true physical mechanisms hidden behind the sensorimotor interaction with the environment.
Inferring the causal structure of the environment could benefit from well-chosen actions as means to collect relevant interventional data.
arXiv Detail & Related papers (2022-08-09T16:48:28Z) - An Enactivist-Inspired Mathematical Model of Cognition [5.8010446129208155]
We formulate five basic tenets of enactivist cognitive science that we have carefully identified in the relevant literature.
We then develop a mathematical framework to talk about cognitive systems which complies with these enactivist tenets.
arXiv Detail & Related papers (2022-06-10T13:03:47Z) - Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.