Hypernetworks That Evolve Themselves
- URL: http://arxiv.org/abs/2512.16406v1
- Date: Thu, 18 Dec 2025 11:05:34 GMT
- Title: Hypernetworks That Evolve Themselves
- Authors: Joachim Winther Pedersen, Erwan Plantec, Eleni Nisioti, Marcello Barylli, Milton Montero, Kathrin Korte, Sebastian Risi
- Abstract summary: We propose Self-Referential Graph HyperNetworks, systems where variation and inheritance are embedded in the network. By uniting hypernetworks, parameter generation, and graph-based representations, Self-Referential GHNs mutate and evaluate themselves while adapting selectable traits. Our findings support the idea that evolvability itself can emerge from neural self-reference.
- Score: 3.4524024382493774
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How can neural networks evolve themselves without relying on external optimizers? We propose Self-Referential Graph HyperNetworks, systems where the very machinery of variation and inheritance is embedded within the network. By uniting hypernetworks, stochastic parameter generation, and graph-based representations, Self-Referential GHNs mutate and evaluate themselves while adapting mutation rates as selectable traits. Through new reinforcement learning benchmarks with environmental shifts (CartPoleSwitch, LunarLander-Switch), Self-Referential GHNs show swift, reliable adaptation and emergent population dynamics. In the locomotion benchmark Ant-v5, they evolve coherent gaits, showing promising fine-tuning capabilities by autonomously decreasing variation in the population to concentrate around promising solutions. Our findings support the idea that evolvability itself can emerge from neural self-reference. Self-Referential GHNs reflect a step toward synthetic systems that more closely mirror biological evolution, offering tools for autonomous, open-ended learning agents.
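The core idea of treating mutation rates as selectable traits can be illustrated with a toy evolution loop. This is a hypothetical sketch, not the paper's graph hypernetwork: it evolves a flat weight vector with a self-adaptive mutation rate carried inside each genome, so selection can tune variation itself, mirroring the paper's "mutation rates as selectable traits" in miniature. All names (`mutate`, `evolve`, `TARGET`) and the fitness function are illustrative assumptions.

```python
import random

def mutate(genome, rng):
    """Offspring copy the parent's weights plus noise; the mutation rate
    sigma is itself part of the genome, so selection can tune it."""
    weights, sigma = genome
    new_sigma = sigma * (2.0 ** rng.gauss(0.0, 0.5))  # log-normal self-adaptation
    new_weights = [w + rng.gauss(0.0, new_sigma) for w in weights]
    return (new_weights, new_sigma)

def evolve(fitness, dim=4, pop_size=16, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [([rng.gauss(0.0, 1.0) for _ in range(dim)], 0.5)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                     # truncation selection
        pop = parents + [mutate(p, rng) for p in parents]  # elitist replacement
    return max(pop, key=fitness)

# Toy fitness: negative squared distance of the weight vector to a target.
TARGET = [1.0, -2.0, 0.5, 3.0]
def fitness(genome):
    return -sum((w - t) ** 2 for w, t in zip(genome[0], TARGET))

best = evolve(fitness)
```

Because the elite survives each generation, fitness is monotone non-decreasing; as the population concentrates around a good solution, selection tends to favor genomes with smaller sigma, loosely echoing the autonomous decrease in variation the paper reports on Ant-v5.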
Related papers
- Yunjue Agent Tech Report: A Fully Reproducible, Zero-Start In-Situ Self-Evolving Agent System for Open-Ended Tasks [10.622439192272527]
Conventional agent systems struggle in open-ended environments where task distributions continuously drift and external supervision is scarce. We propose the In-Situ Self-Evolving paradigm, which treats sequential task interactions as a continuous stream of experience. Within this framework, we develop Yunjue Agent, a system that iteratively synthesizes, optimizes, and reuses tools to navigate emerging challenges.
arXiv Detail & Related papers (2026-01-26T07:27:47Z) - From Agentification to Self-Evolving Agentic AI for Wireless Networks: Concepts, Approaches, and Future Research Directions [70.72279728350763]
Self-evolving agentic artificial intelligence (AI) offers a new paradigm for future wireless systems. Unlike static AI models, self-evolving agents embed an autonomous evolution cycle that updates models and tools in response to environmental dynamics. This paper presents a comprehensive overview of self-evolving agentic AI, highlighting its layered architecture, life cycle, and key techniques.
arXiv Detail & Related papers (2025-10-07T05:45:25Z) - Generate, Discriminate, Evolve: Enhancing Context Faithfulness via Fine-Grained Sentence-Level Self-Evolution [61.80716438091887]
GenDiE (Generate, Discriminate, Evolve) is a novel self-evolving framework that enhances context faithfulness through fine-grained sentence-level optimization. By treating each sentence in a response as an independent optimization unit, GenDiE effectively addresses the limitations of previous approaches. Experiments on ASQA (in-domain LFQA) and ConFiQA datasets demonstrate that GenDiE surpasses various baselines in both faithfulness and correctness.
arXiv Detail & Related papers (2025-03-03T16:08:33Z) - Rhythmic sharing: A bio-inspired paradigm for zero-shot adaptive learning in neural networks [0.0]
The brain rapidly adapts to new contexts and learns from limited data, a coveted characteristic that artificial intelligence (AI) algorithms struggle to mimic. We developed a learning paradigm utilizing link strength oscillations, where learning is associated with the coordination of these oscillations. Link oscillations can rapidly change coordination, allowing the network to sense and adapt to subtle contextual changes without supervision.
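The notion of coordinated link oscillations can be made concrete with a small sketch. This is an illustrative assumption, not the paper's model: each link strength oscillates around a baseline, and coordination among links is quantified with the standard Kuramoto order parameter (1.0 when phases align, near 0.0 when scattered). The function names are hypothetical.

```python
import math

def oscillating_link(w0, amp, freq, phase, t):
    """A link strength that oscillates around its baseline value w0."""
    return w0 * (1.0 + amp * math.sin(freq * t + phase))

def phase_coherence(phases):
    """Kuramoto order parameter: magnitude of the mean phase vector.
    Returns 1.0 for fully aligned phases, ~0.0 for scattered phases."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)
```

In this reading, a shift in coordination (a change in `phase_coherence` across a group of links) is the signal the network could use to detect and respond to contextual change without supervision.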
arXiv Detail & Related papers (2025-02-12T18:58:34Z) - DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z) - SELF: Self-Evolution with Language Feedback [68.6673019284853]
'SELF' (Self-Evolution with Language Feedback) is a novel approach to advance large language models.
It enables LLMs to self-improve through self-reflection, akin to human learning processes.
Our experiments in mathematics and general tasks demonstrate that SELF can enhance the capabilities of LLMs without human intervention.
arXiv Detail & Related papers (2023-10-01T00:52:24Z) - eVAE: Evolutionary Variational Autoencoder [40.29009643819948]
We propose a novel evolutionary variational autoencoder (eVAE) building on the variational information bottleneck (VIB) theory.
eVAE integrates a variational genetic algorithm into VAE with variational evolutionary operators including variational mutation, crossover, and evolution.
eVAE achieves better reconstruction loss, disentanglement, and generation-inference balance than its competitors.
arXiv Detail & Related papers (2023-01-01T23:54:35Z) - Empowered Neural Cellular Automata [0.0]
Empowerment measures the amount of control an agent exerts on its environment via its sensorimotor system.
We show that the addition of empowerment as a secondary objective in the evolution of a neural cellular automaton results in higher fitness compared to evolving for morphogenesis alone.
arXiv Detail & Related papers (2022-04-27T19:37:26Z) - Visual Attention Emerges from Recurrent Sparse Reconstruction [82.78753751860603]
We present a new attention formulation built on two prominent features of the human visual attention mechanism: recurrency and sparsity.
We show that self-attention is a special case of VARS with a single-step optimization and no sparsity constraint.
VARS can be readily used as a replacement for self-attention in popular vision transformers, consistently improving their robustness across various benchmarks.
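The sparsity constraint that distinguishes VARS from plain self-attention typically enters through soft-thresholding, the proximal operator of the L1 norm used in iterative sparse-reconstruction schemes such as ISTA. The following minimal sketch is an assumption about that building block, not the VARS implementation itself:

```python
def soft_threshold(values, lam):
    """Proximal operator of the L1 norm: shrinks each coefficient toward
    zero by lam and zeroes out anything with magnitude below lam,
    producing a sparse output."""
    out = []
    for v in values:
        mag = abs(v) - lam
        out.append(0.0 if mag <= 0 else (mag if v > 0 else -mag))
    return out
```

Iterating a reconstruction step followed by this shrinkage drives most attention coefficients exactly to zero, which is how a sparsity constraint yields the selective, robust attention maps the abstract describes.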
arXiv Detail & Related papers (2022-04-23T00:35:02Z) - IE-GAN: An Improved Evolutionary Generative Adversarial Network Using a New Fitness Function and a Generic Crossover Operator [20.100388977505002]
We propose an improved E-GAN framework called IE-GAN, which introduces a new fitness function and a generic crossover operator.
In particular, the proposed fitness function can model the evolutionary process of individuals more accurately.
The crossover operator, which has been commonly adopted in evolutionary algorithms, can enable offspring to imitate the superior gene expression of their parents.
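A generic crossover operator of the kind described above can be sketched over flattened parameter vectors. This is a standard uniform crossover from the evolutionary-algorithms literature, not IE-GAN's specific operator; the function name and mixing probability are assumptions.

```python
import random

def uniform_crossover(parent_a, parent_b, rng, p=0.5):
    """Uniform crossover over flattened parameter vectors: each offspring
    parameter is inherited from parent_a with probability p, else from
    parent_b, letting offspring mix the 'gene expression' of both parents."""
    return [a if rng.random() < p else b for a, b in zip(parent_a, parent_b)]
```

Applied to generator weights, each child combines parameter values from two parents, which is the mechanism by which offspring can imitate superior traits of either parent.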
arXiv Detail & Related papers (2021-07-25T13:55:07Z) - Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator.
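A mutation operator over kernel shapes, in the spirit of the one described above, can be sketched as follows. This is an illustrative assumption rather than the paper's operator: it perturbs one spatial dimension of a kernel by 2 so that sizes stay odd, clamped to a fixed range.

```python
import random

def mutate_kernel_shape(shape, rng, min_size=1, max_size=7):
    """Grow or shrink one spatial dimension of a conv kernel by 2,
    keeping sizes odd and clamped to [min_size, max_size]."""
    h, w = shape
    delta = rng.choice([-2, 2])
    if rng.randrange(2) == 0:
        h = min(max(h + delta, min_size), max_size)
    else:
        w = min(max(w + delta, min_size), max_size)
    return (h, w)
```

Repeated application explores rectangular shapes such as (3, 5) or (7, 1), which is only possible if the layer layout, as the abstract notes, allows kernels of different shapes and sizes to coexist.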
arXiv Detail & Related papers (2021-04-12T12:45:16Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this recurrent generative feedback design on convolutional neural networks (CNNs), yielding CNN-F.
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.