PymooLab: An Open-Source Visual Analytics Framework for Multi-Objective Optimization using LLM-Based Code Generation and MCDM
- URL: http://arxiv.org/abs/2603.01345v1
- Date: Mon, 02 Mar 2026 00:56:32 GMT
- Title: PymooLab: An Open-Source Visual Analytics Framework for Multi-Objective Optimization using LLM-Based Code Generation and MCDM
- Authors: Thiago Santos, Sebastiao Xavier, Luiz Gustavo de Oliveira Carneiro, Gustavo de Souza,
- Abstract summary: PymooLab is an open-source visual analytics environment built on top of pymoo. It unifies configuration, execution monitoring, and formal decision support in a single reproducible workflow. For computationally intensive studies, PymooLab relies on the native pymoo acceleration pathway through JAX.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multi-objective optimization is now a core paradigm in engineering design and scientific discovery. Yet mainstream evolutionary frameworks, including \textit{pymoo}, still depend on imperative coding for problem definition, algorithm configuration, and post-hoc analysis. That requirement creates a non-trivial barrier for practitioners without strong software-engineering training and often complicates reproducible experimentation. We address this gap through PymooLab, an open-source visual analytics environment built on top of \textit{pymoo}. The platform unifies configuration, execution monitoring, and formal decision support in a single reproducible workflow that automatically records hyperparameters, evaluation budgets, and random seeds. Its decoupled object-oriented architecture preserves compatibility with the base ecosystem while enabling LLM-assisted code generation for rapid model formulation. The interface also embeds interactive Multi-Criteria Decision Making (MCDM) tools, which reduces the cognitive burden of Pareto-front inspection. For computationally intensive studies, PymooLab relies on the native \textit{pymoo} acceleration pathway through JAX, improving scalability in high-dimensional evaluations. Overall, the framework combines visual experimentation, LLM-based modeling, and deterministic orchestration to narrow the gap between rigorous operations research and practical accessibility for domain experts. Source code is publicly available at https://github.com/METISBR/pymoolab.
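To make the MCDM step the abstract describes concrete, here is a minimal pure-Python sketch of one common approach: picking a compromise solution from a Pareto front by a weighted sum of normalized objectives. This is an illustration of the general technique, not PymooLab's or pymoo's actual MCDM API.

```python
def compromise(front, weights):
    """Pick a compromise point from a Pareto front (minimization).

    Each objective is min-max normalized over the front, then the point
    with the smallest weighted sum of normalized objectives is returned.
    A sketch of the weighted-sum MCDM idea, not a specific library API.
    """
    n_obj = len(weights)
    lo = [min(p[i] for p in front) for i in range(n_obj)]
    hi = [max(p[i] for p in front) for i in range(n_obj)]

    def score(p):
        # (hi - lo) or 1.0 guards against a degenerate (constant) objective
        return sum(w * (p[i] - lo[i]) / ((hi[i] - lo[i]) or 1.0)
                   for i, w in enumerate(weights))

    return min(front, key=score)
```

With equal weights, the knee-like point of a front such as `[(0, 1), (1, 0), (0.4, 0.4)]` is preferred over the two extremes.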
Related papers
- An Agentic Framework for Autonomous Materials Computation [70.24472585135929]
Large Language Models (LLMs) have emerged as powerful tools for accelerating scientific discovery. Recent advances integrate LLMs into agentic frameworks, enabling retrieval, reasoning, and tool use for complex scientific experiments. Here, we present a domain-specialized agent designed for reliable automation of first-principles materials computations.
arXiv Detail & Related papers (2025-12-22T15:03:57Z) - Seismology modeling agent: A smart assistant for geophysical researchers [14.28965530601497]
This paper proposes an intelligent, interactive workflow powered by Large Language Models (LLMs). We introduce the first Model Context Protocol (MCP) server suite for SPECFEM. The framework supports both fully automated execution and human-in-the-loop collaboration.
arXiv Detail & Related papers (2025-12-16T14:18:26Z) - SelfAI: Building a Self-Training AI System with LLM Agents [79.10991818561907]
SelfAI is a general multi-agent platform that combines a User Agent, which translates high-level research objectives into standardized experimental configurations, with an Experiment Manager that orchestrates parallel, fault-tolerant training across heterogeneous hardware while maintaining a structured knowledge base for continuous feedback. Across regression, computer vision, scientific computing, medical imaging, and drug discovery benchmarks, SelfAI consistently achieves strong performance and reduces redundant trials.
arXiv Detail & Related papers (2025-11-29T09:18:39Z) - AI for Distributed Systems Design: Scalable Cloud Optimization Through Repeated LLMs Sampling And Simulators [3.1594665317979698]
We explore AI-driven distributed-systems policy design by combining code generation from large language models with deterministic verification in a domain-specific simulator. We report preliminary results on throughput improvements across multiple models. We conjecture that AI will be crucial for scaling this methodology by helping to bootstrap new simulators.
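The sample-and-verify loop this abstract describes can be sketched in a few lines: repeatedly draw candidate policies from a generator (standing in for LLM code generation), score each one in a deterministic simulator, and keep the best. The helper names (`generate_policy`, `simulate`) are hypothetical, not from the paper.

```python
def sample_and_verify(generate_policy, simulate, n_samples=8):
    """Repeated-sampling loop: draw candidates, verify each deterministically
    in a simulator, and return the best-scoring one. A generic sketch of the
    methodology, not the paper's actual implementation."""
    best_policy, best_score = None, float("-inf")
    for _ in range(n_samples):
        policy = generate_policy()   # e.g. an LLM-generated code candidate
        score = simulate(policy)     # deterministic verification / scoring
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score
```

Because the simulator is deterministic, every candidate's score is reproducible, which is what makes this selection step a verification rather than another stochastic evaluation.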
arXiv Detail & Related papers (2025-10-20T16:10:24Z) - Sample-Efficient Online Learning in LM Agents via Hindsight Trajectory Rewriting [92.57796055887995]
We introduce ECHO, a prompting framework that adapts hindsight experience replay from reinforcement learning for language model agents. ECHO generates optimized trajectories for alternative goals that could have been achieved during failed attempts. We evaluate ECHO on stateful versions of XMiniGrid, a text-based navigation and planning benchmark, and PeopleJoinQA, a collaborative information-gathering enterprise simulation.
arXiv Detail & Related papers (2025-10-11T18:11:09Z) - VerlTool: Towards Holistic Agentic Reinforcement Learning with Tool Use [78.29315418819074]
We introduce VerlTool, a unified and modular framework that addresses these limitations through systematic design principles. Our framework formalizes ARLT as multi-turn trajectories with multi-modal observation tokens (text/image/video), extending beyond single-turn RLVR paradigms. The modular plugin architecture enables rapid tool integration requiring only lightweight Python definitions.
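A "lightweight Python definition" for tool integration might look like the decorator-based registry sketched below. This is a hypothetical illustration of the general plugin pattern, not VerlTool's actual API; the names `TOOLS`, `tool`, and `calculator` are all invented for the example.

```python
TOOLS = {}

def tool(name):
    """Register a callable as a named tool (hypothetical plugin registry)."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("calculator")
def calculator(expression: str) -> str:
    # Evaluate a plain arithmetic expression with builtins disabled
    return str(eval(expression, {"__builtins__": {}}, {}))
```

Registering a new tool then amounts to writing one decorated function, which is the kind of low-friction integration the abstract claims.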
arXiv Detail & Related papers (2025-09-01T01:45:18Z) - Rethinking Testing for LLM Applications: Characteristics, Challenges, and a Lightweight Interaction Protocol [83.83217247686402]
Large Language Models (LLMs) have evolved from simple text generators into complex software systems that integrate retrieval augmentation, tool invocation, and multi-turn interactions. Their inherent non-determinism, dynamism, and context dependence pose fundamental challenges for quality assurance. This paper decomposes LLM applications into a three-layer architecture: System Shell Layer, Prompt Orchestration Layer, and LLM Inference Core.
arXiv Detail & Related papers (2025-08-28T13:00:28Z) - HEAS: Hierarchical Evolutionary Agent Simulation Framework for Cross-Scale Modeling and Multi-Objective Search [4.807104001943257]
Hierarchical Evolutionary Agent Simulation (HEAS) is a Python framework that unifies layered agent-based modeling with evolutionary optimization and tournament evaluation. HEAS represents models as hierarchies of lightweight processes ("streams") scheduled in deterministic layers that read and write a shared context. A compact API and CLI (simulate, optimize, evaluate) expose single- and multi-objective evolution.
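The layered-stream scheduling idea in this summary can be sketched minimally: streams are plain callables grouped into layers, executed in a fixed order, each reading and writing one shared context. This is an illustration of the concept only, not HEAS's actual API.

```python
def run_layers(layers, context):
    """Run streams layer by layer in deterministic order; every stream
    reads and writes the shared context dict. A conceptual sketch of
    layered-stream scheduling, not the HEAS implementation."""
    for layer in layers:
        for stream in layer:  # fixed iteration order within a layer
            stream(context)
    return context
```

Because execution order is fixed, a stream in a later layer can safely depend on values written by streams in earlier layers.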
arXiv Detail & Related papers (2025-08-21T13:35:46Z) - LLM4CMO: Large Language Model-aided Algorithm Design for Constrained Multiobjective Optimization [54.35609820607923]
Large language models (LLMs) offer new opportunities for assisting with algorithm design. We propose LLM4CMO, a novel CMOEA based on a dual-population, two-stage framework. LLMs can serve as efficient co-designers in the development of complex evolutionary optimization algorithms.
arXiv Detail & Related papers (2025-08-16T02:00:57Z) - Do LLMs Overthink Basic Math Reasoning? Benchmarking the Accuracy-Efficiency Tradeoff in Language Models [6.312798900093575]
Large language models (LLMs) achieve impressive performance on complex mathematical benchmarks yet sometimes fail on basic math reasoning. This paper focuses on the fundamental tradeoff between accuracy and overthinking. We introduce the Overthinking Score, a harmonic-mean metric combining accuracy and token-efficiency for holistic model evaluation.
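A harmonic mean of two rates in [0, 1] is a standard construction (the same shape as an F1 score), so the Overthinking Score plausibly looks like the sketch below. The exact definition in the paper may differ; this is an assumption for illustration.

```python
def overthinking_score(accuracy, token_efficiency):
    """Harmonic mean of accuracy and token-efficiency, both in [0, 1].
    A sketch of the metric's likely shape, not the paper's exact formula."""
    if accuracy + token_efficiency == 0:
        return 0.0
    return 2 * accuracy * token_efficiency / (accuracy + token_efficiency)
```

As with any harmonic mean, the score is dragged toward the weaker component: a model that is accurate but wildly verbose (or terse but wrong) scores poorly.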
arXiv Detail & Related papers (2025-07-05T12:31:17Z) - MLE-Dojo: Interactive Environments for Empowering LLM Agents in Machine Learning Engineering [57.156093929365255]
MLE-Dojo is a Gym-style framework for systematic reinforcement learning, evaluation, and improvement of autonomous large language model (LLM) agents. MLE-Dojo covers diverse, open-ended MLE tasks carefully curated to reflect realistic engineering scenarios. Its fully executable environment supports comprehensive agent training via both supervised fine-tuning and reinforcement learning.
arXiv Detail & Related papers (2025-05-12T17:35:43Z) - BLADE: Benchmark suite for LLM-driven Automated Design and Evolution of iterative optimisation heuristics [2.2485774453793037]
BLADE is a framework for benchmarking LLM-driven AAD methods in a continuous black-box optimisation context. It integrates benchmark problems with instance generators and textual descriptions aimed at capability-focused testing, such as specialisation and information exploitation. BLADE provides an out-of-the-box solution to systematically evaluate LLM-driven AAD approaches.
arXiv Detail & Related papers (2025-04-28T18:34:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.