Assessing an evolutionary search engine for small language models, prompts, and evaluation metrics
- URL: http://arxiv.org/abs/2506.21512v2
- Date: Mon, 21 Jul 2025 20:48:26 GMT
- Title: Assessing an evolutionary search engine for small language models, prompts, and evaluation metrics
- Authors: Cláudio Lúcio do Val Lopes, Lucca Machado
- Abstract summary: The concurrent optimization of language models and instructional prompts presents a significant challenge for deploying efficient and effective AI systems. This paper introduces and assesses a bi-objective evolutionary search engine designed to navigate this complex space. We employ the NSGA-II algorithm and a prompt grammar to simultaneously optimize for task accuracy and token efficiency across several reasoning tasks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The concurrent optimization of language models and instructional prompts presents a significant challenge for deploying efficient and effective AI systems, particularly when balancing performance against computational costs like token usage. This paper introduces and assesses a bi-objective evolutionary search engine designed to navigate this complex space, focusing specifically on Small Language Models (SLMs). We employ the NSGA-II algorithm and a prompt grammar to simultaneously optimize for task accuracy and token efficiency across several reasoning tasks. Our results identify diverse, high-performing model-prompt combinations, quantitatively revealing the critical trade-off between the two objectives. This research highlights task-specific affinities between particular SLMs and prompt structures (e.g., instructions, context, chain of thought). The resulting practical Pareto fronts offer decision-makers a portfolio of optimized solutions adaptable to their specific constraints. This automated approach moves beyond traditional manual tuning, providing a foundational framework for discovering effective human-AI interaction patterns.
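The search loop the abstract describes can be summarized as: assemble candidate prompts from a grammar, pair them with candidate SLMs, score each pair on task accuracy and token cost, and keep the non-dominated set. The sketch below illustrates that loop in plain Python under stated assumptions; it is not the paper's implementation. The grammar slots, the model names, and the Candidate, mock_run, and pareto_front helpers are hypothetical placeholders, and exhaustive enumeration stands in for the NSGA-II evolutionary search used in the paper.

```python
# Minimal, self-contained sketch of the bi-objective model/prompt search.
# All names and values here are illustrative assumptions, not the paper's code.
from dataclasses import dataclass
from itertools import product

# Toy "prompt grammar": each slot contributes one (possibly empty) component.
PROMPT_GRAMMAR = {
    "instruction": ["Answer the question.", "Solve the problem step by step."],
    "context": ["", "You are a careful math tutor."],
    "cot": ["", "Let's think step by step."],
}
SLM_POOL = ["slm-a", "slm-b"]  # placeholder small language models

@dataclass(frozen=True)
class Candidate:
    model: str
    components: tuple  # one choice per grammar slot

    def prompt(self) -> str:
        return " ".join(part for part in self.components if part)

def mock_run(model: str, prompt: str) -> tuple:
    """Deterministic stand-in for running an SLM on a benchmark task.
    Returns (accuracy, tokens_used); a real evaluator would call the model."""
    n_words = len(prompt.split())
    accuracy = min(1.0, 0.4 + 0.05 * n_words + (0.05 if model == "slm-b" else 0.0))
    tokens_used = n_words + (40 if model == "slm-b" else 25)
    return accuracy, tokens_used

def dominates(a: tuple, b: tuple) -> bool:
    """a dominates b: no worse on both objectives and strictly better on one
    (maximize accuracy, index 0; minimize token cost, index 1)."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(scored: dict) -> list:
    """Keep the candidates whose scores no other candidate dominates."""
    return [c for c, s in scored.items()
            if not any(dominates(t, s) for d, t in scored.items() if d is not c)]

if __name__ == "__main__":
    # Exhaustive enumeration replaces NSGA-II's selection/crossover/mutation here.
    candidates = [Candidate(m, combo)
                  for m in SLM_POOL
                  for combo in product(*PROMPT_GRAMMAR.values())]
    scores = {c: mock_run(c.model, c.prompt()) for c in candidates}
    for c in sorted(pareto_front(scores), key=lambda c: scores[c][1]):
        print(f"{c.model:5s}  acc={scores[c][0]:.2f}  tokens={scores[c][1]:3d}  | {c.prompt()}")
```

In the actual engine, NSGA-II would evolve the candidate population over generations (selection, crossover, and mutation on grammar-derived genotypes) rather than enumerating it, and the two objectives would come from real benchmark evaluations.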
Related papers
- MOPrompt: Multi-objective Semantic Evolution for Prompt Optimization [0.0699049312989311]
MOPrompt is a novel framework designed to optimize prompts for both accuracy and context size (measured in tokens) simultaneously. We evaluate MOPrompt on a sentiment analysis task in Portuguese, using Gemma-2B and Sabiazinho-3 as evaluation models.
arXiv Detail & Related papers (2025-08-03T01:50:43Z)
- Systematic Evaluation of Optimization Techniques for Long-Context Language Models [15.377591633726396]
Large language models (LLMs) excel across diverse natural language processing tasks but face high resource demands and limited context windows. This paper systematically benchmarks these optimizations, characterizing memory usage, latency, and throughput, and studies how these methods impact the quality of text generation.
arXiv Detail & Related papers (2025-08-01T04:17:24Z)
- Grammar-Guided Evolutionary Search for Discrete Prompt Optimisation [63.97051732013936]
We propose an evolutionary search approach to automated discrete prompt optimisation consisting of two phases. In the first phase, grammar-guided genetic programming is invoked to synthesise prompt-creating programmes. In the second phase, local search is applied to explore the neighbourhoods of best-performing programmes.
arXiv Detail & Related papers (2025-07-14T14:34:15Z)
- Tournament of Prompts: Evolving LLM Instructions Through Structured Debates and Elo Ratings [0.9437165725355702]
We introduce DEEVO, a novel framework that guides prompt evolution through debate-driven evaluation with Elo-based selection. Using Elo ratings as a fitness proxy, DEEVO simultaneously drives improvement and preserves valuable diversity in the prompt population.
arXiv Detail & Related papers (2025-05-30T19:33:41Z)
- New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration [49.180693704510006]
Referring Expression Comprehension (REC) is a cross-modal task that evaluates the interplay of language understanding, image comprehension, and language-to-image grounding. It serves as an essential testing ground for Multimodal Large Language Models (MLLMs).
arXiv Detail & Related papers (2025-02-27T13:58:44Z)
- A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation. However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency. This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z)
- Goal-Conditioned Supervised Learning for Multi-Objective Recommendation [8.593384839118658]
Multi-objective learning endeavors to concurrently optimize multiple objectives using a single model. This paper introduces a Multi-Objective Goal-Conditioned Supervised Learning framework for automatically learning to achieve multiple objectives from offline sequential data.
arXiv Detail & Related papers (2024-12-12T03:47:40Z)
- Graph-Structured Speculative Decoding [52.94367724136063]
Speculative decoding has emerged as a promising technique to accelerate the inference of Large Language Models.
We introduce an innovative approach utilizing a directed acyclic graph (DAG) to manage the drafted hypotheses.
We observe a remarkable speedup of 1.73× to 1.96×, significantly surpassing standard speculative decoding.
arXiv Detail & Related papers (2024-07-23T06:21:24Z)
- Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning [79.38140606606126]
We propose an algorithmic framework that fine-tunes vision-language models (VLMs) with reinforcement learning (RL).
Our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning.
We demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks.
arXiv Detail & Related papers (2024-05-16T17:50:19Z)
- PhaseEvo: Towards Unified In-Context Prompt Optimization for Large Language Models [9.362082187605356]
We present PhaseEvo, an efficient automatic prompt optimization framework that combines the generative capability of LLMs with the global search proficiency of evolution algorithms.
PhaseEvo significantly outperforms the state-of-the-art baseline methods by a large margin whilst maintaining good efficiency.
arXiv Detail & Related papers (2024-02-17T17:47:10Z)
- Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
arXiv Detail & Related papers (2024-02-09T07:45:26Z)
- Evolutionary Multi-Objective Optimization of Large Language Model Prompts for Balancing Sentiments [0.0]
We propose an evolutionary multi-objective (EMO) approach specifically tailored for prompt optimization, called EMO-Prompts.
Our results demonstrate that EMO-Prompts effectively generates prompts capable of guiding the LLM to produce texts embodying two conflicting emotions simultaneously.
arXiv Detail & Related papers (2024-01-18T10:21:15Z)
- Empirical Study of Zero-Shot NER with ChatGPT [19.534329209433626]
Large language models (LLMs) have exhibited powerful capabilities across various natural language processing tasks.
This work focuses on exploring LLM performance on zero-shot information extraction.
Inspired by the remarkable capability of LLMs on symbolic and arithmetic reasoning, we adapt the prevalent reasoning methods to NER.
arXiv Detail & Related papers (2023-10-16T03:40:03Z)
- Visualizing the Relationship Between Encoded Linguistic Information and Task Performance [53.223789395577796]
We study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto Optimality.
We conduct experiments on two popular NLP tasks, i.e., machine translation and language modeling, and investigate the relationship between several kinds of linguistic information and task performances.
Our empirical findings suggest that some syntactic information is helpful for NLP tasks whereas encoding more syntactic information does not necessarily lead to better performance.
arXiv Detail & Related papers (2022-03-29T19:03:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.