Promptomatix: An Automatic Prompt Optimization Framework for Large Language Models
- URL: http://arxiv.org/abs/2507.14241v3
- Date: Thu, 24 Jul 2025 21:49:26 GMT
- Title: Promptomatix: An Automatic Prompt Optimization Framework for Large Language Models
- Authors: Rithesh Murthy, Ming Zhu, Liangwei Yang, Jielin Qiu, Juntao Tan, Shelby Heinecke, Caiming Xiong, Silvio Savarese, Huan Wang
- Abstract summary: Large Language Models (LLMs) perform best with well-crafted prompts, yet prompt engineering remains manual, inconsistent, and inaccessible to non-experts. Promptomatix transforms natural language task descriptions into high-quality prompts without requiring manual tuning or domain expertise. The system analyzes user intent, generates synthetic training data, selects prompting strategies, and refines prompts using cost-aware objectives.
- Score: 72.4723784999432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) perform best with well-crafted prompts, yet prompt engineering remains manual, inconsistent, and inaccessible to non-experts. We introduce Promptomatix, an automatic prompt optimization framework that transforms natural language task descriptions into high-quality prompts without requiring manual tuning or domain expertise. Promptomatix supports both a lightweight meta-prompt-based optimizer and a DSPy-powered compiler, with a modular design enabling future extension to more advanced frameworks. The system analyzes user intent, generates synthetic training data, selects prompting strategies, and refines prompts using cost-aware objectives. Evaluated across 5 task categories, Promptomatix achieves competitive or superior performance compared to existing libraries, while reducing prompt length and computational overhead, making prompt optimization scalable and efficient.
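The abstract outlines a four-stage pipeline: intent analysis, synthetic data generation, prompting-strategy selection, and cost-aware refinement. The following is a minimal sketch of how such a pipeline could be wired around a generic text-completion function; the helper names and prompt templates are illustrative assumptions, not the Promptomatix API or its DSPy-backed compiler.

```python
# Minimal sketch of the pipeline described in the abstract: analyze intent,
# synthesize training data, select a prompting strategy, and refine the prompt
# under a cost-aware objective. All helper names and prompts are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple

LLM = Callable[[str], str]  # any text-completion function

@dataclass
class Candidate:
    prompt: str
    score: float  # accuracy on synthetic examples
    cost: float   # proxy for prompt length / token overhead

def analyze_intent(task_description: str, llm: LLM) -> str:
    return llm(f"Summarize the task and the desired output format:\n{task_description}")

def generate_synthetic_data(intent: str, llm: LLM, n: int = 8) -> List[Tuple[str, str]]:
    pairs = []
    for i in range(n):
        raw = llm(f"Create input/output example #{i + 1} for this task:\n{intent}\n"
                  "Format: INPUT: ... OUTPUT: ...")
        inp, _, out = raw.partition("OUTPUT:")
        pairs.append((inp.replace("INPUT:", "").strip(), out.strip()))
    return pairs

def score(prompt: str, data: List[Tuple[str, str]], llm: LLM) -> float:
    hits = sum(llm(f"{prompt}\n\nInput: {x}").strip() == y for x, y in data)
    return hits / max(len(data), 1)

def objective(c: Candidate, cost_weight: float) -> float:
    # Cost-aware objective: accuracy minus a penalty on prompt length.
    return c.score - cost_weight * c.cost

def optimize(task_description: str, llm: LLM, rounds: int = 3,
             cost_weight: float = 1e-3) -> Candidate:
    intent = analyze_intent(task_description, llm)
    data = generate_synthetic_data(intent, llm)
    # Strategy selection: compare a zero-shot instruction with a few-shot variant.
    zero_shot = intent
    few_shot = intent + "\n\nExamples:\n" + "\n".join(
        f"Input: {x}\nOutput: {y}" for x, y in data[:2])
    candidates = [Candidate(p, score(p, data, llm), len(p)) for p in (zero_shot, few_shot)]
    best = max(candidates, key=lambda c: objective(c, cost_weight))
    # Iterative refinement against the cost-aware objective.
    for _ in range(rounds):
        revised = llm(f"Improve this prompt; keep it short and unambiguous:\n{best.prompt}")
        cand = Candidate(revised, score(revised, data, llm), len(revised))
        if objective(cand, cost_weight) > objective(best, cost_weight):
            best = cand
    return best
```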
Related papers
- Grammar-Guided Evolutionary Search for Discrete Prompt Optimisation [63.97051732013936]
We propose an evolutionary search approach to automated discrete prompt optimisation consisting of two phases. In the first phase, grammar-guided genetic programming is invoked to synthesise prompt-creating programmes. In the second phase, local search is applied to explore the neighbourhoods of the best-performing programmes (a toy sketch of this two-phase search appears after this list).
arXiv Detail & Related papers (2025-07-14T14:34:15Z) - ORPP: Self-Optimizing Role-playing Prompts to Enhance Language Model Capabilities [64.24517317344959]
High-quality prompts are crucial for eliciting outstanding performance from large language models on complex tasks. We propose ORPP, a framework that enhances model performance by optimizing and generating role-playing prompts. We show that ORPP not only matches but in most cases surpasses existing mainstream prompt optimization methods in terms of performance.
arXiv Detail & Related papers (2025-06-03T05:51:35Z) - SIPDO: Closed-Loop Prompt Optimization via Synthetic Data Feedback [17.851957960438483]
We introduce SIPDO (Self-Improving Prompts through Data-Augmented Optimization), a closed-loop framework for prompt learning. SIPDO couples a synthetic data generator with a prompt optimizer, where the generator produces new examples that reveal current prompt weaknesses and the optimizer refines the prompt in response. This feedback-driven loop enables systematic improvement of prompt performance without assuming access to external supervision or new tasks (a minimal sketch of such a feedback loop appears after this list).
arXiv Detail & Related papers (2025-05-26T04:56:48Z) - A Sequential Optimal Learning Approach to Automated Prompt Engineering in Large Language Models [14.483240353801074]
This paper proposes an optimal learning framework for automated prompt engineering. It is designed to sequentially identify effective prompt features while efficiently allocating a limited evaluation budget. Our framework provides a solution to deploying automated prompt engineering in a wider range of applications.
arXiv Detail & Related papers (2025-01-07T03:51:10Z) - iPrOp: Interactive Prompt Optimization for Large Language Models with a Human in the Loop [10.210078164737245]
iPrOp is a novel interactive prompt optimization approach. We aim to provide users with task-specific guidance to enhance human engagement in the optimization process.
arXiv Detail & Related papers (2024-12-17T08:09:15Z) - GReaTer: Gradients over Reasoning Makes Smaller Language Models Strong Prompt Optimizers [52.17222304851524]
We introduce GReaTer, a novel prompt optimization technique that directly incorporates gradient information over task-specific reasoning. By utilizing task loss gradients, GReaTer enables self-optimization of prompts for open-source, lightweight language models. GReaTer consistently outperforms previous state-of-the-art prompt optimization methods.
arXiv Detail & Related papers (2024-12-12T20:59:43Z) - PromptWizard: Task-Aware Prompt Optimization Framework [2.618253052454435]
Large language models (LLMs) have transformed AI across diverse domains.
Manual prompt engineering is both labor-intensive and domain-specific.
We introduce PromptWizard, a novel, fully automated framework for discrete prompt optimization.
arXiv Detail & Related papers (2024-05-28T17:08:31Z) - Symbolic Prompt Program Search: A Structure-Aware Approach to Efficient Compile-Time Prompt Optimization [14.012833238074332]
We introduce SAMMO, a framework to perform compile-time optimizations of prompt programs.
SAMMO represents prompt programs on a symbolic level which allows for a rich set of transformations.
We show that SAMMO generalizes previous methods and improves the performance of complex prompts on (1) instruction tuning, (2) RAG pipeline tuning, and (3) prompt compression.
arXiv Detail & Related papers (2024-04-02T21:35:54Z) - Efficient Prompting Methods for Large Language Models: A Survey [50.82812214830023]
Efficient Prompting Methods have attracted a wide range of attention. We discuss Automatic Prompt Engineering for different prompt components and Prompt Compression in continuous and discrete spaces.
arXiv Detail & Related papers (2024-04-01T12:19:08Z) - Intent-based Prompt Calibration: Enhancing prompt optimization with synthetic boundary cases [2.6159111710501506]
We introduce a new method for automatic prompt engineering, using a calibration process that iteratively refines the prompt to the user intent.
We demonstrate the effectiveness of our method with respect to strong proprietary models on real-world tasks such as moderation and generation.
arXiv Detail & Related papers (2024-02-05T15:28:43Z) - RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning [84.75064077323098]
This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL).
RLPrompt is flexibly applicable to different types of LMs, such as masked LMs (e.g., BERT) and left-to-right models (e.g., GPTs).
Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods (a bare-bones sketch of RL-style discrete prompt search appears after this list).
arXiv Detail & Related papers (2022-05-25T07:50:31Z)
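The grammar-guided evolutionary search entry above describes two phases: grammar-guided genetic programming, then local search around the best candidates. The toy sketch below illustrates that two-phase shape over a hand-written grammar of prompt fragments; the grammar, the `evaluate` fitness function, and all names are assumptions for illustration, not the authors' system.

```python
# Toy two-phase search over discrete prompts: (1) evolve prompts composed from
# a small grammar, (2) locally mutate the best survivor. Illustrative only.
import random
from typing import Callable, Dict, List

GRAMMAR: Dict[str, List[str]] = {       # one set of discrete options per slot
    "persona": ["You are an expert annotator.", "You are a careful assistant.", ""],
    "instruction": ["Classify the sentiment of the text.",
                    "Label the text as positive or negative."],
    "format": ["Answer with a single word.", 'Respond in JSON: {"label": ...}.'],
}
SLOTS = list(GRAMMAR)

def render(genome: List[str]) -> str:
    return " ".join(part for part in genome if part)

def random_genome() -> List[str]:
    return [random.choice(GRAMMAR[slot]) for slot in SLOTS]

def mutate(genome: List[str]) -> List[str]:
    child = genome.copy()
    i = random.randrange(len(SLOTS))
    child[i] = random.choice(GRAMMAR[SLOTS[i]])
    return child

def evolve(evaluate: Callable[[str], float], pop_size: int = 12,
           generations: int = 10, local_steps: int = 20) -> str:
    # Phase 1: evolutionary search over genomes drawn from the grammar.
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: evaluate(render(g)), reverse=True)
        elites = population[: pop_size // 2]
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - len(elites))]
    best = max(population, key=lambda g: evaluate(render(g)))
    # Phase 2: local search in the neighbourhood of the best genome.
    for _ in range(local_steps):
        neighbour = mutate(best)
        if evaluate(render(neighbour)) > evaluate(render(best)):
            best = neighbour
    return render(best)
```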
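The SIPDO entry above describes a closed loop in which a synthetic data generator surfaces examples the current prompt fails on, and those failures drive the next refinement. Below is a minimal sketch of such a feedback loop, assuming a generic `llm` completion function and naive example parsing; all names are illustrative, not the SIPDO implementation.

```python
from typing import Callable, List, Tuple

LLM = Callable[[str], str]  # any text-completion function

def find_failures(prompt: str, examples: List[Tuple[str, str]], llm: LLM) -> List[Tuple[str, str]]:
    # Keep only the examples the current prompt gets wrong.
    return [(x, y) for x, y in examples if llm(f"{prompt}\n\nInput: {x}").strip() != y]

def refine_loop(prompt: str, seed_examples: List[Tuple[str, str]], llm: LLM, rounds: int = 5) -> str:
    examples = list(seed_examples)
    for _ in range(rounds):
        failures = find_failures(prompt, examples, llm)
        if not failures:
            break
        # Generator step: ask for fresh examples similar to the observed failures,
        # so the evaluation set keeps probing the prompt's weaknesses.
        raw = llm("Write 3 new 'Input -> Output' pairs similar to these failures:\n"
                  + "\n".join(f"Input: {x} -> {y}" for x, y in failures))
        for line in raw.splitlines():
            if "->" in line:
                x, _, y = line.partition("->")
                examples.append((x.replace("Input:", "").strip(), y.strip()))
        # Refinement step: rewrite the prompt against the failing cases.
        prompt = llm("Rewrite this prompt so it handles the failing cases below.\n"
                     f"Prompt:\n{prompt}\nFailing cases:\n"
                     + "\n".join(f"Input: {x} -> expected {y}" for x, y in failures))
    return prompt
```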
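The RLPrompt entry above optimizes discrete prompts with reinforcement learning. The bare-bones REINFORCE-style loop below conveys the idea with a tiny per-slot vocabulary and a score-function gradient update; it is an illustrative toy, not the RLPrompt policy network, and `reward_fn` stands in for any downstream task score.

```python
# Bare-bones REINFORCE over a per-slot prompt vocabulary (illustrative only).
import numpy as np
from typing import Callable, List

VOCAB: List[List[str]] = [              # one candidate token set per prompt slot
    ["Classify", "Label", "Judge"],
    ["the sentiment", "the tone"],
    ["briefly.", "with one word.", "as positive/negative."],
]

def softmax(logits: np.ndarray) -> np.ndarray:
    z = np.exp(logits - logits.max())
    return z / z.sum()

def train(reward_fn: Callable[[str], float], steps: int = 200, lr: float = 0.5) -> str:
    logits = [np.zeros(len(slot)) for slot in VOCAB]   # per-slot policy parameters
    baseline = 0.0                                     # running reward baseline
    for _ in range(steps):
        probs = [softmax(l) for l in logits]
        idx = [np.random.choice(len(p), p=p) for p in probs]
        prompt = " ".join(VOCAB[i][j] for i, j in enumerate(idx))
        reward = reward_fn(prompt)                     # downstream task score
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        for i, j in enumerate(idx):
            grad = -probs[i]
            grad[j] += 1.0                             # d log p(j) / d logits = onehot - probs
            logits[i] += lr * advantage * grad         # REINFORCE update
    return " ".join(slot[int(np.argmax(l))] for slot, l in zip(VOCAB, logits))
```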