REMoH: A Reflective Evolution of Multi-objective Heuristics approach via Large Language Models
- URL: http://arxiv.org/abs/2506.07759v1
- Date: Mon, 09 Jun 2025 13:38:28 GMT
- Title: REMoH: A Reflective Evolution of Multi-objective Heuristics approach via Large Language Models
- Authors: Diego Forniés-Tabuenca, Alejandro Uribe, Urtzi Otamendi, Arkaitz Artetxe, Juan Carlos Rivera, Oier Lopez de Lacalle
- Abstract summary: Multi-objective optimization is fundamental in complex decision-making tasks. Recent advances in Large Language Models (LLMs) offer enhanced explainability, adaptability, and reasoning. This work proposes Reflective Evolution of Multi-objective Heuristics (REMoH), a novel framework integrating NSGA-II with LLM-based generation.
- Score: 39.85828629779943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-objective optimization is fundamental in complex decision-making tasks. Traditional algorithms, while effective, often demand extensive problem-specific modeling and struggle to adapt to nonlinear structures. Recent advances in Large Language Models (LLMs) offer enhanced explainability, adaptability, and reasoning. This work proposes Reflective Evolution of Multi-objective Heuristics (REMoH), a novel framework integrating NSGA-II with LLM-based heuristic generation. A key innovation is a reflection mechanism that uses clustering and search-space reflection to guide the creation of diverse, high-quality heuristics, improving convergence and maintaining solution diversity. The approach is evaluated on the Flexible Job Shop Scheduling Problem (FJSSP) through in-depth benchmarking against state-of-the-art methods on three instance datasets: Dauzere, Barnes, and Brandimarte. Results demonstrate that REMoH achieves competitive performance with reduced modeling effort and enhanced adaptability. These findings underscore the potential of LLMs to augment traditional optimization, offering greater flexibility, interpretability, and robustness in multi-objective scenarios.
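The loop the abstract describes — Pareto-based (NSGA-II-style) selection over a population of LLM-generated heuristics, with a reflection step steering the next generation — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the names `llm_generate_heuristic` and `remoh_step`, the reflection summary, and the dispatching-rule stub are all hypothetical placeholders.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Return the Pareto front of (candidate, objectives) pairs."""
    return [p for p in population
            if not any(dominates(q[1], p[1]) for q in population if q is not p)]

def llm_generate_heuristic(reflection):
    """Placeholder for the LLM call; returns a toy dispatching-rule stub.

    A real system would prompt an LLM with a reflection summary of the
    current front and parse generated heuristic code from its response.
    """
    return lambda job: job["dur"] * reflection.get("weight", 1.0)

def remoh_step(population, evaluate, rounds=3):
    """Hedged sketch of a reflect-then-generate evolution loop.

    population: list of (candidate, objective-vector) pairs.
    evaluate:   maps a heuristic to its objective vector (e.g. makespan,
                workload balance on FJSSP instances).
    """
    for _ in range(rounds):
        front = non_dominated(population)
        # "Reflection": summarize the current front to steer generation.
        # Here reduced to a single scalar purely for illustration.
        reflection = {"weight": 1.0 + len(front) * 0.1}
        candidate = llm_generate_heuristic(reflection)
        population.append((candidate, evaluate(candidate)))
    return non_dominated(population)
```

In the full method, NSGA-II would also apply crowding-distance-based survivor selection rather than keeping the whole population; the sketch keeps only the non-dominated-sorting core.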
Related papers
- Pareto-Grid-Guided Large Language Models for Fast and High-Quality Heuristics Design in Multi-Objective Combinatorial Optimization [3.952819864255911]
Multi-objective combinatorial optimization problems (MOCOP) frequently arise in practical applications that require the simultaneous optimization of conflicting objectives. We introduce Multi-heuristics for MOCOP via Pareto-Grid-guided Evolution of LLMs (MPaGE). MPaGE utilizes LLMs to prioritize heuristics with semantically distinct logical structures during variation, thus promoting diversity and mitigating redundancy within the population.
arXiv Detail & Related papers (2025-07-28T15:26:43Z) - Reinforcement Learning for Multi-Objective Multi-Echelon Supply Chain Optimisation [3.1194372040101928]
The model is evaluated using a multi-objective reinforcement learning (RL) method, benchmarked against an originally single-objective RL algorithm modified with a weighted sum. We conduct experiments on varying network complexities, mimicking typical real-world challenges using a customisable simulator. The model determines production and delivery quantities across supply chain routes to achieve near-optimal trade-offs between competing objectives.
arXiv Detail & Related papers (2025-07-26T04:30:11Z) - Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey [69.45421620616486]
This work presents the first structured taxonomy and analysis of discrete tokenization methods designed for large language models (LLMs). We categorize 8 representative VQ variants that span classical and modern paradigms and analyze their algorithmic principles, training dynamics, and integration challenges with LLM pipelines. We identify key challenges including codebook collapse, unstable gradient estimation, and modality-specific encoding constraints.
arXiv Detail & Related papers (2025-07-21T10:52:14Z) - Benchmarking MOEAs for solving continuous multi-objective RL problems [3.8936716676293917]
Multi-objective reinforcement learning (MORL) addresses the challenge of simultaneously optimizing multiple, often conflicting, rewards. This paper investigates the applicability and limitations of multi-objective evolutionary algorithms in solving complex MORL problems.
arXiv Detail & Related papers (2025-05-19T20:54:20Z) - Generative Reliability-Based Design Optimization Using In-Context Learning Capabilities of Large Language Models [0.8356765961526956]
Large Language Models (LLMs) have demonstrated remarkable in-context learning capabilities. This paper proposes a generative design method by leveraging the in-context learning capabilities of LLMs.
arXiv Detail & Related papers (2025-03-28T13:10:04Z) - Can Large Language Models Be Trusted as Evolutionary Optimizers for Network-Structured Combinatorial Problems? [8.082897040940447]
Large Language Models (LLMs) have impressive capabilities in language understanding and reasoning across diverse domains. In this work, we propose a systematic framework to evaluate the capacity of LLMs to engage with problem structures. We develop a cost-efficient population-level optimization strategy that significantly improves efficiency compared to traditional individual-level approaches.
arXiv Detail & Related papers (2025-01-25T05:19:19Z) - Progressive Multimodal Reasoning via Active Retrieval [64.74746997923967]
Multi-step multimodal reasoning tasks pose significant challenges for multimodal large language models (MLLMs). We propose AR-MCTS, a universal framework designed to progressively improve the reasoning capabilities of MLLMs. We show that AR-MCTS can optimize sampling diversity and accuracy, yielding reliable multimodal reasoning.
arXiv Detail & Related papers (2024-12-19T13:25:39Z) - Unified Generative and Discriminative Training for Multi-modal Large Language Models [88.84491005030316]
Generative training has enabled Vision-Language Models (VLMs) to tackle various complex tasks.
Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval.
This paper proposes a unified approach that integrates the strengths of both paradigms.
arXiv Detail & Related papers (2024-11-01T01:51:31Z) - Autonomous Multi-Objective Optimization Using Large Language Model [28.14607885386587]
Multi-objective optimization problems (MOPs) are ubiquitous in real-world applications.
We propose a new framework that autonomously designs EA operators for solving MOPs.
arXiv Detail & Related papers (2024-06-13T10:35:16Z) - Context-aware Diversity Enhancement for Neural Multi-Objective Combinatorial Optimization [19.631213689157995]
Multi-objective combinatorial optimization (MOCO) problems are prevalent in various real-world applications. We propose a Context-aware Diversity Enhancement algorithm named CDE. The proposed CDE can effectively and efficiently grasp the context information, resulting in diversity enhancement.
arXiv Detail & Related papers (2024-05-14T13:42:19Z) - Model Composition for Multimodal Large Language Models [71.5729418523411]
We propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
arXiv Detail & Related papers (2024-02-20T06:38:10Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.