The dynamical regime and its importance for evolvability, task
performance and generalization
- URL: http://arxiv.org/abs/2103.12184v1
- Date: Mon, 22 Mar 2021 21:22:52 GMT
- Title: The dynamical regime and its importance for evolvability, task
performance and generalization
- Authors: Jan Prosi, Sina Khajehabdollahi, Emmanouil Giannakakis, Georg Martius
and Anna Levina
- Abstract summary: We find that all populations, regardless of their initial regime, evolve to be subcritical in simple tasks.
We conclude that although the subcritical regime is preferable for a simple task, the optimal deviation from criticality depends on the task difficulty.
- Score: 14.059479351946386
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It has long been hypothesized that operating close to the critical state is
beneficial for natural and artificial systems. We test this hypothesis by
evolving foraging agents controlled by neural networks that can change the
system's dynamical regime throughout evolution. Surprisingly, we find that all
populations, regardless of their initial regime, evolve to be subcritical in
simple tasks and even strongly subcritical populations can reach comparable
performance. We hypothesize that the moderately subcritical regime combines the
benefits of generalizability and adaptability brought by closeness to
criticality with the stability of the dynamics characteristic for subcritical
systems. By a resilience analysis, we find that initially critical agents
maintain their fitness level even under environmental changes and degrade
slowly with increasing perturbation strength. On the other hand, subcritical
agents, although originally evolved to the same fitness, were often rendered utterly
inadequate and degraded faster. We conclude that although the subcritical
regime is preferable for a simple task, the optimal deviation from criticality
depends on the task difficulty: for harder tasks, agents evolve closer to
criticality. Furthermore, subcritical populations cannot find the path to
decrease their distance to criticality. In summary, our study suggests that
initializing models near criticality is important to find an optimal and
flexible solution.
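The notion of a dynamical regime used in the abstract can be made concrete with a minimal branching-process toy. This is a hedged sketch under simplifying assumptions, not the paper's evolving agent model: the branching parameter `sigma` plays the role of the distance to criticality, with `sigma < 1` subcritical (activity decays) and `sigma = 1` critical (activity is marginally sustained, with large fluctuations).

```python
import math
import random

def poisson(rng, lam):
    """Sample from Poisson(lam) via Knuth's algorithm (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

def simulate_branching(sigma, n_steps=60, n_init=100, seed=1):
    """Galton-Watson branching process: each active unit spawns
    Poisson(sigma) offspring per step. sigma is the branching parameter:
    sigma < 1 is subcritical (activity decays exponentially in
    expectation), sigma = 1 is critical (expected activity is constant,
    but fluctuations are large)."""
    rng = random.Random(seed)
    active = n_init
    history = [active]
    for _ in range(n_steps):
        active = sum(poisson(rng, sigma) for _ in range(active))
        history.append(active)
        if active == 0:
            break
    return history

# Subcritical activity typically dies out within a few dozen steps,
# while critical activity tends to linger far longer.
subcritical = simulate_branching(0.5)
critical = simulate_branching(1.0)
```

The trade-off discussed in the abstract shows up here in miniature: subcritical dynamics are stable and predictable, while critical dynamics keep activity (and thus responsiveness to input) alive much longer.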
Related papers
- Improving Domain Generalization in Self-supervised Monocular Depth Estimation via Stabilized Adversarial Training [61.35809887986553]
We propose a general adversarial training framework, named Stabilized Conflict-optimization Adversarial Training (SCAT)
SCAT integrates adversarial data augmentation into self-supervised MDE methods to achieve a balance between stability and generalization.
Experiments on five benchmarks demonstrate that SCAT can achieve state-of-the-art performance and significantly improve the generalization capability of existing self-supervised MDE methods.
arXiv Detail & Related papers (2024-11-04T15:06:57Z)
- Learning Deep Dissipative Dynamics [5.862431328401459]
Dissipativity is a crucial indicator for dynamical systems that generalizes stability and input-output stability.
We propose a differentiable projection that transforms any dynamics represented by neural networks into dissipative ones.
Our method strictly guarantees stability, input-output stability, and energy conservation of trained dynamical systems.
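The general idea behind such guarantees, constraining learned dynamics so that stability holds by construction, can be illustrated with a toy projection. This is an assumption-laden sketch, not the paper's differentiable projection for neural dynamics: it rescales a linear system's matrix so its Frobenius norm (an upper bound on the spectral norm) stays below 1, which makes the discrete-time map a strict contraction and hence stable.

```python
import math

def frobenius(A):
    """Frobenius norm of a matrix given as a list of row lists."""
    return math.sqrt(sum(x * x for row in A for x in row))

def project_stable(A, rho=0.99):
    """Rescale A so its Frobenius norm is at most rho < 1. Since the
    Frobenius norm upper-bounds the spectral norm, the linear map
    x -> A x is then a strict contraction, so the discrete-time system
    x[t+1] = A x[t] converges to the origin from any initial state."""
    s = frobenius(A)
    if s <= rho:
        return A  # already within the stable set: leave unchanged
    scale = rho / s
    return [[x * scale for x in row] for row in A]

# A generically unstable matrix gets projected onto the stable set.
A = [[1.0, 2.0], [3.0, 4.0]]
A_stable = project_stable(A)
```

Because the rescaling is a smooth function of the entries (away from the boundary), a projection of this flavor can sit inside a training loop, which is the spirit of making stability a structural property rather than a learned one.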
arXiv Detail & Related papers (2024-08-21T09:44:43Z)
- Neural Contractive Dynamical Systems [13.046426079291376]
Stability guarantees are crucial when ensuring a fully autonomous robot does not take undesirable or potentially harmful actions.
We propose a novel methodology to learn neural contractive dynamical systems, where our neural architecture ensures contraction.
We show that our approach encodes the desired dynamics more accurately than the current state-of-the-art, which provides less strong stability guarantees.
arXiv Detail & Related papers (2024-01-17T17:18:21Z)
- Agent Alignment in Evolving Social Norms [65.45423591744434]
We propose an evolutionary framework for agent evolution and alignment, named EvolutionaryAgent.
In an environment where social norms continuously evolve, agents better adapted to the current social norms will have a higher probability of survival and proliferation.
We show that EvolutionaryAgent can align progressively better with the evolving social norms while maintaining its proficiency in general tasks.
arXiv Detail & Related papers (2024-01-09T15:44:44Z)
- Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages [56.98243487769916]
Plasticity, the ability of a neural network to evolve with new data, is crucial for high-performance and sample-efficient visual reinforcement learning.
We propose Adaptive RR, which dynamically adjusts the replay ratio based on the critic's plasticity level.
arXiv Detail & Related papers (2023-10-11T12:05:34Z)
- Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL [86.0987896274354]
We first identify a fundamental pattern, self-excitation, as the primary cause of Q-value estimation divergence in offline RL.
We then propose a novel Self-Excite Eigenvalue Measure (SEEM) metric to measure the evolving property of Q-network at training.
For the first time, our theory can reliably decide whether the training will diverge at an early stage.
arXiv Detail & Related papers (2023-10-06T17:57:44Z)
- When to be critical? Performance and evolvability in different regimes of neural Ising agents [18.536813548129878]
It has long been hypothesized that operating close to the critical state is beneficial for natural and artificial systems and for their evolution.
We put this hypothesis to test in a system of evolving foraging agents controlled by neural networks.
Surprisingly, we find that all populations that discover solutions, evolve to be subcritical.
arXiv Detail & Related papers (2023-03-28T17:57:57Z)
- Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning [23.15206507040553]
We propose Auxiliary Network Continual Learning (ANCL) to equip the neural network with the ability to learn the current task.
ANCL applies an additional auxiliary network which promotes plasticity to the continually learned model which mainly focuses on stability.
More concretely, the proposed framework materializes in a regularizer that naturally interpolates between plasticity and stability.
arXiv Detail & Related papers (2023-03-16T17:00:42Z)
- Beyond Robustness: A Taxonomy of Approaches towards Resilient Multi-Robot Systems [41.71459547415086]
We analyze how resilience is achieved in networks of agents and multi-robot systems.
We argue that resilience must become a central engineering design consideration.
arXiv Detail & Related papers (2021-09-25T11:25:02Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Efficient Empowerment Estimation for Unsupervised Stabilization [75.32013242448151]
The empowerment principle enables unsupervised stabilization of dynamical systems at upright positions.
We propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel.
We show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images.
arXiv Detail & Related papers (2020-07-14T21:10:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.