Controlled Self-Evolution for Algorithmic Code Optimization
- URL: http://arxiv.org/abs/2601.07348v4
- Date: Thu, 15 Jan 2026 04:14:52 GMT
- Title: Controlled Self-Evolution for Algorithmic Code Optimization
- Authors: Tu Hu, Ronghao Chen, Shuo Zhang, Jianghao Yin, Mou Xiao Feng, Jingping Liu, Shaolei Zhang, Wenqi Jiang, Yuqi Fang, Sen Hu, Huacan Wang, Yi Xu
- Abstract summary: Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles. Existing approaches fail to discover solutions with superior complexity within limited budgets. We propose Controlled Self-Evolution (CSE), which consists of three key components.
- Score: 33.82967000330864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles, yet existing approaches suffer from low exploration efficiency, failing to discover solutions with superior complexity within limited budgets. This inefficiency stems from initialization bias trapping evolution in poor solution regions, uncontrolled stochastic operations lacking feedback guidance, and insufficient experience utilization across tasks. To address these bottlenecks, we propose Controlled Self-Evolution (CSE), which consists of three key components. Diversified Planning Initialization generates structurally distinct algorithmic strategies for broad solution space coverage. Genetic Evolution replaces stochastic operations with feedback-guided mechanisms, enabling targeted mutation and compositional crossover. Hierarchical Evolution Memory captures both successful and failed experiences at inter-task and intra-task levels. Experiments on EffiBench-X demonstrate that CSE consistently outperforms all baselines across various LLM backbones. Furthermore, CSE achieves higher efficiency from early generations and maintains continuous improvement throughout evolution. Our code is publicly available at https://github.com/QuantaAlpha/EvoControl.
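The "generate-verify-refine" cycle the abstract describes can be sketched as a minimal loop. This is a generic illustration, not CSE's implementation: the function names (`self_evolve`, `verify`, `refine`) are hypothetical placeholders, and CSE's actual components (diversified planning initialization, feedback-guided genetic operators, hierarchical memory) are abstracted into the `init_candidates` and `refine` arguments.

```python
def self_evolve(init_candidates, verify, refine, budget):
    """Generic generate-verify-refine loop.

    verify(cand) -> (score, feedback): evaluates a candidate and returns
        a quality score plus feedback usable to guide the next edit.
    refine(cand, feedback) -> cand: produces a revised candidate; in CSE
        this role is played by feedback-guided mutation and crossover
        rather than uncontrolled random edits.
    """
    pool = list(init_candidates)          # diverse initialization matters here
    best, best_score = None, float("-inf")
    for _ in range(budget):               # limited evaluation budget
        scored = [(cand, *verify(cand)) for cand in pool]
        for cand, score, _ in scored:
            if score > best_score:
                best, best_score = cand, score
        # refinement is driven by verifier feedback, not blind chance
        pool = [refine(cand, fb) for cand, _, fb in scored]
    return best, best_score
```

On a toy problem (integer candidates scored by distance to a target, with the signed error as feedback), the loop converges to the optimum well within a modest budget.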
Related papers
- EvoX: Meta-Evolution for Automated Discovery [115.89434419482797]
EvoX is an adaptive evolution method that optimizes its own evolution process. It continuously updates how prior solutions are selected and varied based on progress. It outperforms existing AI-driven evolutionary methods, including AlphaEvolve, OpenEvolve, GEPA, and ShinkaEvolve, on the majority of tasks.
arXiv Detail & Related papers (2026-02-26T18:54:41Z) - AdaEvolve: Adaptive LLM Driven Zeroth-Order Optimization [61.535567824938205]
We introduce AdaEvolve, a framework that reformulates LLM-driven evolution as a hierarchical adaptive optimization problem. AdaEvolve consistently outperforms the open-ended baselines across 185 different open-ended optimization problems.
arXiv Detail & Related papers (2026-02-23T18:45:31Z) - Continuous Program Search [4.198653054660764]
We frame this as an operator-design problem: learn a continuous program space where latent distance has behavioral meaning, then design mutation operators that exploit this structure without changing the evolutionary algorithm. We make locality measurable by tracking action-level divergence under controlled latent perturbations, identifying an empirical trust region for behavior-local continuous variation. Although isotropic mutation occasionally attains higher peak performance, geometry-compiled mutation yields faster, more reliable progress, demonstrating that semantically aligned mutation can substantially improve search efficiency without modifying the underlying evolutionary algorithm.
arXiv Detail & Related papers (2026-02-07T18:41:14Z) - LoongFlow: Directed Evolutionary Search via a Cognitive Plan-Execute-Summarize Paradigm [8.050281821865978]
LoongFlow is a self-evolving agent framework that achieves state-of-the-art solution quality with significantly reduced computational costs. Unlike "blind" mutation operators, LoongFlow integrates Large Language Models into a cognitive "Plan-Execute-Summarize" (PES) paradigm. To sustain long-term architectural coherence, we incorporate a hybrid evolutionary memory system.
arXiv Detail & Related papers (2025-12-30T08:39:28Z) - EvoLattice: Persistent Internal-Population Evolution through Multi-Alternative Quality-Diversity Graph Representations for LLM-Guided Program Discovery [2.1756081703276]
EvoLattice is a framework that represents an entire population of candidate programs or agent behaviors within a single directed acyclic graph. Each node stores multiple persistent alternatives, and every valid path through the graph defines a distinct candidate. EvoLattice produces statistics that reveal how local design choices affect global performance.
arXiv Detail & Related papers (2025-12-15T19:43:06Z) - TOPSIS-like metaheuristic for LABS problem [70.49434432747293]
We introduce socio-cognitive mutation mechanisms that integrate strategies of following the best solutions and avoiding the worst. By guiding search agents to imitate high-performing solutions and avoid poor ones, these operators enhance both solution diversity and convergence efficiency.
arXiv Detail & Related papers (2025-11-08T00:47:37Z) - Synergizing Reinforcement Learning and Genetic Algorithms for Neural Combinatorial Optimization [25.633698252033756]
We propose the Evolutionary Augmentation Mechanism (EAM) to synergize the learning efficiency of DRL with the global search power of GAs. EAM operates by generating solutions from a learned policy and refining them through domain-specific genetic operations such as crossover and mutation. EAM can be seamlessly integrated with state-of-the-art DRL solvers such as the Attention Model, POMO, and SymNCO.
arXiv Detail & Related papers (2025-06-11T05:17:30Z) - Evolution-based Region Adversarial Prompt Learning for Robustness Enhancement in Vision-Language Models [52.8949080772873]
We propose an evolution-based region adversarial prompt tuning method called ER-APT. In each training iteration, we first generate AEs using traditional gradient-based methods. Subsequently, a genetic evolution mechanism incorporating selection, mutation, and crossover is applied to optimize the AEs. The final evolved AEs are used for prompt tuning, achieving region-based adversarial optimization instead of conventional single-point adversarial prompt tuning.
arXiv Detail & Related papers (2025-03-17T07:08:47Z) - Evolving Pareto-Optimal Actor-Critic Algorithms for Generalizability and Stability [67.8426046908398]
Generalizability and stability are two key objectives for operating reinforcement learning (RL) agents in the real world.
This paper presents MetaPG, an evolutionary method for automated design of actor-critic loss functions.
arXiv Detail & Related papers (2022-04-08T20:46:16Z) - AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-use, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state of the art approaches in a variety of biologically motivated sequence design challenges.
arXiv Detail & Related papers (2020-10-05T16:40:38Z) - Maximum Mutation Reinforcement Learning for Scalable Control [25.935468948833073]
Reinforcement Learning (RL) has demonstrated data efficiency and optimal control over large state spaces at the cost of scalable performance.
We present the Evolution-based Soft Actor-Critic (ESAC), a scalable RL algorithm.
arXiv Detail & Related papers (2020-07-24T16:29:19Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
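The Differential Evolution baseline that EOS builds on can be illustrated with the classic DE/rand/1/bin generation step. This is a minimal textbook sketch, not EOS's self-adaptive multi-population variant: the function name `de_step` and the fixed control parameters `f` (differential weight) and `cr` (crossover rate) are illustrative choices.

```python
import random

def de_step(pop, fitness, f=0.8, cr=0.9):
    """One generation of DE/rand/1/bin for minimization.

    For each target vector, a mutant is built from three distinct random
    individuals (a + f * (b - c)), binomially crossed with the target,
    and kept only if it does not worsen the fitness.
    """
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        # three distinct individuals, none of them the target itself
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(dim)  # guarantees at least one mutated gene
        trial = [
            a[k] + f * (b[k] - c[k])
            if (random.random() < cr or k == j_rand) else target[k]
            for k in range(dim)
        ]
        # greedy one-to-one selection: per-slot fitness never worsens
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

Because selection is greedy per slot, the best fitness in the population is non-increasing across generations, which makes the step easy to sanity-check on a convex function such as the sphere.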
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.