GigaEvo: An Open Source Optimization Framework Powered By LLMs And Evolution Algorithms
- URL: http://arxiv.org/abs/2511.17592v1
- Date: Mon, 17 Nov 2025 14:44:47 GMT
- Title: GigaEvo: An Open Source Optimization Framework Powered By LLMs And Evolution Algorithms
- Authors: Valentin Khrulkov, Andrey Galichin, Denis Bashkirov, Dmitry Vinichenko, Oleg Travkin, Roman Alferov, Andrey Kuznetsov, Ivan Oseledets
- Abstract summary: GigaEvo is an open-source framework that enables researchers to study and experiment with hybrid LLM-evolution approaches. We provide detailed descriptions of system architecture, implementation decisions, and experimental methodology to support further research.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in LLM-guided evolutionary computation, particularly AlphaEvolve (Novikov et al., 2025; Georgiev et al., 2025), have demonstrated remarkable success in discovering novel mathematical constructions and solving challenging optimization problems. However, the high-level descriptions in published work leave many implementation details unspecified, hindering reproducibility and further research. In this report we present GigaEvo, an extensible open-source framework that enables researchers to study and experiment with hybrid LLM-evolution approaches inspired by AlphaEvolve. Our system provides modular implementations of key components: MAP-Elites quality-diversity algorithms, asynchronous DAG-based evaluation pipelines, LLM-driven mutation operators with insight generation and bidirectional lineage tracking, and flexible multi-island evolutionary strategies. In order to assess reproducibility and validate our implementation we evaluate GigaEvo on challenging problems from the AlphaEvolve paper: Heilbronn triangle placement, circle packing in squares, and high-dimensional kissing numbers. The framework emphasizes modularity, concurrency, and ease of experimentation, enabling rapid prototyping through declarative configuration. We provide detailed descriptions of system architecture, implementation decisions, and experimental methodology to support further research in LLM-driven evolutionary methods. The GigaEvo framework and all experimental code are available at https://github.com/AIRI-Institute/gigaevo-core.
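The MAP-Elites quality-diversity algorithm mentioned in the abstract maintains an archive of "elites" indexed by behavioral features rather than a single best solution. The following is a minimal illustrative sketch with a toy objective and behavior descriptor; it is not GigaEvo's actual API, and all names (`GRID`, `behavior`, `map_elites`) are hypothetical:

```python
import random

# Minimal MAP-Elites sketch (illustrative only; not GigaEvo's actual API).
# Candidates are real vectors in [-1, 1]^dim; "behavior" maps a candidate
# to a discrete grid cell, and each cell keeps only its best-scoring elite.

GRID = 10  # cells per behavior dimension (assumed for this toy example)

def fitness(x):
    # toy objective: negative squared distance from the origin (higher is better)
    return -sum(v * v for v in x)

def behavior(x):
    # discretize the first two coordinates into a GRID x GRID archive cell
    cell = lambda v: min(GRID - 1, int((v + 1.0) / 2.0 * GRID))
    return (cell(x[0]), cell(x[1]))

def mutate(x, sigma=0.1):
    # Gaussian perturbation, clamped back into the search domain
    return [min(1.0, max(-1.0, v + random.gauss(0, sigma))) for v in x]

def map_elites(dim=3, iters=2000, seed=0):
    random.seed(seed)
    archive = {}  # cell -> (fitness, candidate)
    for _ in range(iters):
        if archive and random.random() < 0.9:
            # exploit: mutate a randomly chosen elite
            parent = random.choice(list(archive.values()))[1]
            x = mutate(parent)
        else:
            # explore: sample a fresh random candidate
            x = [random.uniform(-1, 1) for _ in range(dim)]
        f, b = fitness(x), behavior(x)
        # replace the cell's elite only if the newcomer scores higher
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)
    return archive

if __name__ == "__main__":
    archive = map_elites()
    print(f"{len(archive)} cells filled; best fitness "
          f"{max(f for f, _ in archive.values()):.3f}")
```

In a hybrid LLM-evolution system, `mutate` would instead be an LLM-driven operator acting on code, and `behavior` would capture program-level features; the archive logic itself stays the same.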
Related papers
- EvoX: Meta-Evolution for Automated Discovery [115.89434419482797]
EvoX is an adaptive evolution method that optimizes its own evolution process. It continuously updates how prior solutions are selected and varied based on progress. It outperforms existing AI-driven evolutionary methods, including AlphaEvolve, OpenEvolve, GEPA, and ShinkaEvolve, on the majority of tasks.
arXiv Detail & Related papers (2026-02-26T18:54:41Z)
- DeltaEvolve: Accelerating Scientific Discovery through Momentum-Driven Evolution [28.737322041874293]
LLM-driven evolutionary systems have shown promise for automated scientific discovery. Existing approaches such as AlphaEvolve rely on full-code histories that are context-inefficient. We propose DeltaEvolve, a momentum-driven evolutionary framework that replaces full-code history with structured semantic deltas.
arXiv Detail & Related papers (2026-02-02T23:47:54Z)
- Beyond Algorithm Evolution: An LLM-Driven Framework for the Co-Evolution of Swarm Intelligence Optimization Algorithms and Prompts [2.7320188728052064]
This paper proposes a novel framework for the collaborative evolution of both swarm intelligence algorithms and guiding prompts. The framework was rigorously evaluated on a range of NP problems, where it demonstrated superior performance. Our work establishes a new paradigm for swarm intelligence optimization algorithms, underscoring the indispensable role of prompt evolution.
arXiv Detail & Related papers (2025-12-10T00:37:16Z)
- ThetaEvolve: Test-time Learning on Open Problems [110.5756538358217]
We introduce ThetaEvolve, an open-source framework that simplifies and extends AlphaEvolve to efficiently scale both in-context learning and Reinforcement Learning (RL) at test time. We find that ThetaEvolve with RL at test time consistently outperforms inference-only baselines.
arXiv Detail & Related papers (2025-11-28T18:58:14Z)
- CodeEvolve: An open source evolutionary coding agent for algorithm discovery and optimization [0.6198237241838559]
We introduce CodeEvolve, an open-source evolutionary coding agent that unites Large Language Models with genetic algorithms to solve complex computational problems. Our framework adapts powerful evolutionary concepts to the Large Language Models domain, building upon recent methods for generalized scientific discovery. We conduct a rigorous evaluation of CodeEvolve on a subset of the mathematical benchmarks used to evaluate Google DeepMind's closed-source AlphaEvolve.
arXiv Detail & Related papers (2025-10-15T22:58:06Z)
- AlphaEvolve: A coding agent for scientific and algorithmic discovery [63.13852052551106]
We present AlphaEvolve, an evolutionary coding agent that substantially enhances the capabilities of state-of-the-art LLMs. AlphaEvolve orchestrates an autonomous pipeline of LLMs, whose task is to improve an algorithm by making direct changes to the code. We demonstrate the broad applicability of this approach by applying it to a number of important computational problems.
arXiv Detail & Related papers (2025-06-16T06:37:18Z)
- Algorithm Discovery With LLMs: Evolutionary Search Meets Reinforcement Learning [12.037588566211348]
We propose to augment evolutionary search by continuously refining the search operator through reinforcement learning (RL) fine-tuning. Our experiments demonstrate that integrating RL with evolutionary search accelerates the discovery of superior algorithms.
arXiv Detail & Related papers (2025-04-07T14:14:15Z)
- A Survey on Self-Evolution of Large Language Models [116.54238664264928]
Large language models (LLMs) have advanced significantly across various fields and intelligent agent applications. To address their remaining limitations, self-evolution approaches that enable LLMs to autonomously acquire, refine, and learn from experiences generated by the model itself are rapidly growing.
arXiv Detail & Related papers (2024-04-22T17:43:23Z)
- When Large Language Models Meet Evolutionary Algorithms: Potential Enhancements and Challenges [50.280704114978384]
Pre-trained large language models (LLMs) exhibit powerful capabilities for generating natural text. Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems.
arXiv Detail & Related papers (2024-01-19T05:58:30Z)
- EvoPrompt: Connecting LLMs with Evolutionary Algorithms Yields Powerful Prompt Optimizers [67.64162164254809]
EvoPrompt is a framework for discrete prompt optimization. It borrows the idea of evolutionary algorithms (EAs), as they exhibit good performance and fast convergence. It significantly outperforms human-engineered prompts and existing methods for automatic prompt generation.
arXiv Detail & Related papers (2023-09-15T16:50:09Z)
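An EvoPrompt-style search can be sketched as a simple (mu + 1) evolutionary loop over prompt strings. In the actual method an LLM performs the mutation/crossover and fitness is downstream task accuracy; in this toy sketch both are stubbed with word-level stand-ins, and every name (`TARGET`, `score`, `evolve`) is hypothetical:

```python
import random

# Illustrative sketch of an EvoPrompt-style loop (not the authors' code).
# Real systems use an LLM as the mutation operator and task accuracy as
# fitness; here both are replaced by toy word-overlap logic.

TARGET = "answer the question step by step concisely"  # toy "ideal" prompt

def score(prompt):
    # stand-in for task accuracy: word overlap with the toy target prompt
    want = set(TARGET.split())
    have = set(prompt.split())
    return len(want & have) / len(want)

def mutate(prompt, vocab):
    # stand-in for an LLM mutation: swap one word for a random vocabulary word
    words = prompt.split()
    words[random.randrange(len(words))] = random.choice(vocab)
    return " ".join(words)

def evolve(seed_prompts, vocab, generations=200, seed=0):
    random.seed(seed)
    pop = list(seed_prompts)
    for _ in range(generations):
        child = mutate(random.choice(pop), vocab)
        # (mu + 1) selection: add the child, then drop the worst member
        pop.append(child)
        pop.sort(key=score, reverse=True)
        pop.pop()
    return pop[0]

if __name__ == "__main__":
    vocab = TARGET.split() + ["please", "briefly", "carefully"]
    best = evolve(["please answer briefly", "think carefully please"], vocab)
    print(best, score(best))
```

Because selection is elitist (the best member is never discarded), the best score is monotonically non-decreasing over generations, which is the property most LLM-evolution loops in the papers above rely on.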
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.