SOLID: a Framework of Synergizing Optimization and LLMs for Intelligent Decision-Making
- URL: http://arxiv.org/abs/2511.15202v1
- Date: Wed, 19 Nov 2025 07:44:36 GMT
- Title: SOLID: a Framework of Synergizing Optimization and LLMs for Intelligent Decision-Making
- Authors: Yinsheng Wang, Tario G You, Léonard Boussioux, Shan Liu
- Abstract summary: SOLID is a framework that integrates mathematical optimization with the contextual capabilities of large language models (LLMs). The framework retains theoretical convergence guarantees under convexity assumptions, providing insight into the design of LLM prompts. Empirical results demonstrate convergence under various scenarios and indicate improved annualized returns compared to a baseline optimizer-only method.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces SOLID (Synergizing Optimization and Large Language Models for Intelligent Decision-Making), a novel framework that integrates mathematical optimization with the contextual capabilities of large language models (LLMs). SOLID facilitates iterative collaboration between optimization and LLM agents through dual prices and deviation penalties. This interaction improves decision quality while maintaining modularity and data privacy. The framework retains theoretical convergence guarantees under convexity assumptions, providing insight into the design of LLM prompts. To evaluate SOLID, we applied it to a stock portfolio investment case with historical prices and financial news as inputs. Empirical results demonstrate convergence under various scenarios and indicate improved annualized returns compared to a baseline optimizer-only method, validating the synergy of the two agents. SOLID offers a promising framework for advancing automated and intelligent decision-making across diverse domains.
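The abstract's coordination via dual prices and deviation penalties suggests an ADMM-style (augmented Lagrangian) loop between the two agents. The paper's exact updates are not given here, so the following Python sketch is a minimal, hypothetical illustration: the names `optimizer_agent` and `llm_agent`, the toy return/covariance data, and the penalty weight `rho` are all assumptions, and the LLM agent is stubbed with a fixed contextual proposal.

```python
import numpy as np

def optimizer_agent(mu, Sigma, z, y, rho, lam=1.0):
    """One optimizer step: minimize lam*x'Sigma*x - mu'x plus the
    dual term y'(x - z) and the penalty (rho/2)*||x - z||^2.
    The unconstrained quadratic subproblem has a closed form."""
    n = len(mu)
    H = 2.0 * lam * Sigma + rho * np.eye(n)
    return np.linalg.solve(H, mu - y + rho * z)

def llm_agent(context, x, y):
    """Stub for the LLM agent: returns a fixed context-driven proposal.
    A real agent would prompt an LLM with prices, news, the current
    solution x, and the dual prices y."""
    return context

rng = np.random.default_rng(0)
n = 5
mu = rng.normal(0.05, 0.02, size=n)        # toy expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 0.01 * np.eye(n)     # toy covariance matrix
context = np.full(n, 1.0 / n)              # LLM's contextual view

z, y, rho = np.full(n, 1.0 / n), np.zeros(n), 1.0
for _ in range(100):
    x = optimizer_agent(mu, Sigma, z, y, rho)  # optimizer's decision
    z = llm_agent(context, x, y)               # LLM's proposal
    y = y + rho * (x - z)                      # dual price update
    if np.linalg.norm(x - z) < 1e-8:           # consensus reached
        break
print("consensus weights:", np.round(x, 3))
```

With a fixed LLM proposal this reduces to plain dual ascent on a convex quadratic, the setting where convergence guarantees of the kind the abstract mentions are standard; a real LLM agent would re-propose `z` each round, which is where the paper's prompt-design insight comes in.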
Related papers
- ART: Adaptive Response Tuning Framework -- A Multi-Agent Tournament-Based Approach to LLM Response Optimization
Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation.
This paper presents ART (Adaptive Response Tuning), a novel framework that employs tournament-style ELO ranking and multi-agent reasoning.
arXiv Detail & Related papers (2025-11-29T20:16:11Z)
- DAOpt: Modeling and Evaluation of Data-Driven Optimization under Uncertainty with LLMs
Recent advances in large language models (LLMs) have accelerated research on automated optimization modeling.
We propose the DAOpt framework, including a new dataset OptU, a multi-agent decision-making module, and a simulation environment for evaluating LLMs.
arXiv Detail & Related papers (2025-09-24T08:19:28Z)
- LLM4CMO: Large Language Model-aided Algorithm Design for Constrained Multiobjective Optimization
Large language models (LLMs) offer new opportunities for assisting with algorithm design.
We propose LLM4CMO, a novel constrained multiobjective evolutionary algorithm (CMOEA) based on a dual-population, two-stage framework.
LLMs can serve as efficient co-designers in the development of complex evolutionary optimization algorithms.
arXiv Detail & Related papers (2025-08-16T02:00:57Z)
- From Natural Language to Solver-Ready Power System Optimization: An LLM-Assisted, Validation-in-the-Loop Framework
This paper introduces a novel Large Language Model (LLM)-assisted agent that automatically converts natural-language descriptions of power system optimization scenarios into compact, solver-ready formulations.
The proposed method focuses on discovering a mathematically compatible formulation that can be efficiently solved by off-the-shelf optimization solvers.
arXiv Detail & Related papers (2025-08-11T16:22:57Z)
- Large Language Model-Driven Surrogate-Assisted Evolutionary Algorithm for Expensive Optimization
Surrogate-assisted evolutionary algorithms (SAEAs) are a key tool for addressing costly optimization tasks.
This paper proposes LLM-SAEA, a novel approach that integrates large language models (LLMs) to configure both surrogate models and infill sampling criteria online.
arXiv Detail & Related papers (2025-06-20T13:44:21Z)
- Collab: Controlled Decoding using Mixture of Agents for LLM Alignment
Reinforcement learning from human feedback has emerged as an effective technique to align large language models.
Controlled decoding provides a mechanism for aligning a model at inference time without retraining.
We propose a mixture of agent-based decoding strategies leveraging existing off-the-shelf aligned LLM policies.
arXiv Detail & Related papers (2025-03-27T17:34:25Z)
- Dynamic Optimizations of LLM Ensembles with Two-Stage Reinforcement Learning Agents
This paper introduces RL-Focal, a two-stage RL agent framework that routes and ensembles LLMs.
Through focal diversity, it enhances performance across tasks by promoting reward-aware and policy-adaptive ensemble selection and inference fusion.
arXiv Detail & Related papers (2025-02-06T20:44:26Z)
- Federated Fine-Tuning of LLMs: Framework Comparison and Research Directions
Federated learning (FL) provides a privacy-preserving solution for fine-tuning pre-trained large language models (LLMs) using distributed private datasets.
This article conducts a comparative analysis of three advanced federated LLM (FedLLM) frameworks that integrate knowledge distillation (KD) and split learning (SL) to mitigate these issues.
arXiv Detail & Related papers (2025-01-08T11:37:06Z)
- Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System
Large Language Model (LLM)-based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving.
Yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating optimization methods.
We present Optima, a novel framework that addresses these issues by significantly enhancing both communication efficiency and task effectiveness.
arXiv Detail & Related papers (2024-10-10T17:00:06Z)
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z)
- Bridging Large Language Models and Optimization: A Unified Framework for Text-attributed Combinatorial Optimization
The Language-based Neural COP Solver (LNCS) is a unified framework for the end-to-end resolution of diverse text-attributed combinatorial optimization problems (COPs).
Extensive experiments validate the effectiveness and generalizability of LNCS, highlighting its potential as a unified and practical framework for real-world COP applications.
arXiv Detail & Related papers (2024-08-22T08:42:44Z)
- Large Language Model as a Catalyst: A Paradigm Shift in Base Station Siting Optimization
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering.
Our proposed framework incorporates retrieval-augmented generation (RAG) to enhance the system's ability to acquire domain-specific knowledge and generate solutions.
arXiv Detail & Related papers (2024-08-07T08:43:32Z)
- Enhancing Decision-Making in Optimization through LLM-Assisted Inference: A Neural Networks Perspective
This paper explores the seamless integration of Generative AI (GenAI) and Evolutionary Algorithms (EAs).
Focusing on the transformative role of Large Language Models (LLMs), our study investigates the potential of LLM-Assisted Inference to automate and enhance decision-making processes.
arXiv Detail & Related papers (2024-05-12T08:22:53Z)
- Benchmarking PtO and PnO Methods in the Predictive Combinatorial Optimization Regime
Predictive optimization precisely models many real-world applications, including energy cost-aware scheduling and budget allocation in advertising.
We develop a modular framework to benchmark 11 existing predict-then-optimize (PtO) and predict-and-optimize (PnO) methods on 8 problems, including a new industrial dataset for advertising.
Our study shows that PnO approaches are better than PtO on 7 out of 8 benchmarks, but no silver bullet is found for the specific design choices of PnO (the PtO/PnO distinction is sketched just after this list).
arXiv Detail & Related papers (2023-11-13T13:19:34Z)
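Since PtO and PnO are named but not unpacked above, a small, hypothetical Python sketch of the distinction may help (this is not code from the benchmark paper): predict-then-optimize (PtO) fits the predictor on prediction error alone, while predict-and-optimize (PnO) scores a predictor by the regret of the decisions it induces. The random search standing in for PnO training is purely illustrative; real PnO methods differentiate through, or smooth, the optimization step.

```python
import numpy as np

# Toy task: given item features, predict values, then pick the best
# item from a random k-item menu. Data and dimensions are illustrative.
rng = np.random.default_rng(0)
n, d, k = 1000, 4, 10
X = rng.normal(size=(n, d))                  # item features
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(0, 0.5, size=n)  # realized item values

def regret(w):
    """Average shortfall from choosing the argmax of predicted values
    instead of the true best item (noisy estimate over random menus)."""
    menus = rng.choice(n, size=(200, k))
    shortfall = 0.0
    for m in menus:
        pick = np.argmax(X[m] @ w)           # the optimization step
        shortfall += y[m].max() - y[m][pick]
    return shortfall / len(menus)

# PtO: least-squares fit; the decision step is ignored during training.
w_pto, *_ = np.linalg.lstsq(X, y, rcond=None)

# PnO stand-in: search directly for weights with low decision regret.
w_pno = min((rng.normal(size=d) for _ in range(300)), key=regret)

print(f"PtO regret: {regret(w_pto):.3f}, PnO regret: {regret(w_pno):.3f}")
```

On this well-specified toy the PtO fit is already near-optimal, so the sketch illustrates only the two training signals, not the benchmark's empirical finding that PnO wins on 7 of 8 problems.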
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.