Emergence in artificial life
- URL: http://arxiv.org/abs/2105.03216v1
- Date: Fri, 30 Apr 2021 16:40:52 GMT
- Title: Emergence in artificial life
- Authors: Carlos Gershenson
- Abstract summary: Emergence has been identified as one of the features of complex systems.
It can be said that life emerges from the interactions of complex molecules.
ALife systems are not so complex, whether soft (simulations), hard (robots), or wet (protocells).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Concepts similar to emergence have been used since antiquity, but we lack an
agreed definition of emergence. Still, emergence has been identified as one of
the features of complex systems. Most would agree on the statement "life is
complex". Thus, understanding emergence and complexity should benefit the study
of living systems. It can be said that life emerges from the interactions of
complex molecules. But how useful is this to understand living systems?
Artificial life (ALife) has been developed in recent decades to study life
using a synthetic approach: build it to understand it. ALife systems are not so
complex, whether soft (simulations), hard (robots), or wet (protocells). Thus,
we can aim first to understand emergence in ALife and then apply this knowledge
to biology. I argue that to understand emergence and life, it is
useful to use information as a framework. In a general sense, emergence can be
defined as information that is not present at one scale but is present at
another scale. This perspective avoids problems of studying emergence from a
materialistic framework, and can be useful to study self-organization and
complexity.
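The abstract's definition of emergence — information present at one scale but not at another — can be made concrete with a small sketch (an illustration under my own assumptions, not the paper's formal proposal): compare the Shannon entropy of a micro-scale bit sequence with that of its coarse-grained macro-scale description.

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Shannon entropy (bits per symbol) of a sequence's empirical distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def coarse_grain(bits, block=4):
    """Macro-scale description: majority vote within each block of micro-states."""
    return [int(sum(bits[i:i + block]) > block // 2)
            for i in range(0, len(bits), block)]

# Micro scale: each 4-bit block is mostly ones, but the noise varies in position.
micro = [1, 1, 1, 0,  1, 1, 0, 1,  1, 0, 1, 1,  0, 1, 1, 1] * 8
macro = coarse_grain(micro)  # majority vote collapses every block to 1

h_micro = shannon_entropy(micro)  # ≈ 0.811 bits: the noise is visible here
h_macro = shannon_entropy(macro)  # = 0.0 bits: the macro scale is uniform
print(f"micro: {h_micro:.3f} bits, macro: {h_macro:.3f} bits")
```

Here information about the noise exists only at the micro scale; conversely, a regularity such as "every block is a one" is the kind of description that belongs to the macro scale. The scale-dependent difference in information is what the definition points at.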
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment this "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- On the Liveliness of Artificial Life [0.0]
We define life as "any system with a boundary to confine the system within a definite volume".
We argue that digital organisms may not be the boundary case of life even though some digital organisms are not considered alive.
arXiv Detail & Related papers (2023-02-19T08:31:54Z)
- There's Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-scale Machines [0.0]
We argue that a useful path forward results from abandoning hard boundaries between categories and adopting an observer-dependent, pragmatic view.
Efforts to re-shape living systems for biomedical or bioengineering purposes require prediction and control of their function at multiple scales.
We argue that an observer-centered framework for the computations performed by evolved and designed systems will improve the understanding of meso-scale events.
arXiv Detail & Related papers (2022-12-20T22:26:40Z)
- Hybrid Life: Integrating Biological, Artificial, and Cognitive Systems [0.31498833540989407]
Hybrid Life is an attempt to bring attention to some of the most recent developments within the artificial life community.
It focuses on three complementary themes: 1) theories of systems and agents, 2) hybrid augmentation, with augmented architectures combining living and artificial systems, and 3) hybrid interactions among artificial and biological systems.
arXiv Detail & Related papers (2022-12-01T05:18:06Z)
- The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z)
- Artificial life: sustainable self-replicating systems [0.0]
The interdisciplinary field of Artificial Life aims to study life "as it could be".
The word "artificial" refers to the fact that humans are involved in the creation process.
The artificial life forms might be completely unlike natural forms of life, with different chemical compositions.
arXiv Detail & Related papers (2021-05-27T16:54:23Z)
- Perspective: Purposeful Failure in Artificial Life and Artificial Intelligence [0.0]
I argue that failures can be a blueprint characterizing living organisms and biological intelligence.
Imitating biological successes in Artificial Life and Artificial Intelligence can be misleading; imitating failures offers a path towards understanding and emulating life in artificial systems.
arXiv Detail & Related papers (2021-02-24T05:43:44Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.