Search-based Optimisation of LLM Learning Shots for Story Point
Estimation
- URL: http://arxiv.org/abs/2403.08430v1
- Date: Wed, 13 Mar 2024 11:29:37 GMT
- Title: Search-based Optimisation of LLM Learning Shots for Story Point
Estimation
- Authors: Vali Tawosi, Salwa Alamir, Xiaomo Liu
- Abstract summary: We use Search-Based methods to optimise the number and combination of examples that can improve an LLM's estimation performance.
Our preliminary results show that our SBSE technique improves the estimation performance of the LLM by 59.34% on average.
- Score: 3.5365325264937897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the ways Large Language Models (LLMs) are used to perform machine
learning tasks is to provide them with a few examples before asking them to
produce a prediction. This is a meta-learning process known as few-shot
learning. In this paper, we use available Search-Based methods to optimise the
number and combination of examples that can improve an LLM's estimation
performance when it is used to estimate story points for new agile tasks. Our
preliminary results show that our SBSE technique improves the estimation
performance of the LLM by 59.34% on average (in terms of mean absolute error of
the estimation) over three datasets against a zero-shot setting.
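To make the setup concrete, here is a minimal sketch in Python of how such a search could look: a simple genetic algorithm explores subsets of a labelled issue pool, scoring each candidate shot set by the mean absolute error of the LLM's estimates on a validation set. The `estimate_story_points` stub, the toy data, and the GA operators are illustrative assumptions, not the paper's actual SBSE configuration.

```python
import random

# Toy pool of (issue description, story points); the paper draws these from
# real agile project datasets. All values here are made up.
POOL = [(f"issue {i}", random.choice([1, 2, 3, 5, 8, 13])) for i in range(50)]
VALIDATION = [(f"validation issue {i}", random.choice([1, 2, 3, 5, 8, 13]))
              for i in range(20)]

def estimate_story_points(shots, task_description):
    """Hypothetical LLM call. A real implementation would build a few-shot
    prompt from `shots` and query the model; this toy stand-in returns the
    mean of the shot labels so the sketch runs without a model."""
    return sum(points for _, points in shots) / max(len(shots), 1)

def fitness(indices):
    """Mean absolute error over the validation set when the selected pool
    items are used as learning shots (lower is better)."""
    shots = [POOL[i] for i in indices]
    errors = [abs(estimate_story_points(shots, description) - points)
              for description, points in VALIDATION]
    return sum(errors) / len(errors)

def mutate(indices, pool_size, rate=0.2):
    # Randomly replace, add, or drop a shot.
    out = list(indices)
    if out and random.random() < rate:
        out[random.randrange(len(out))] = random.randrange(pool_size)
    if random.random() < rate:
        out.append(random.randrange(pool_size))
    if len(out) > 1 and random.random() < rate:
        out.pop(random.randrange(len(out)))
    return sorted(set(out))

def crossover(a, b):
    # Uniform crossover over the union of two shot sets.
    union = sorted(set(a) | set(b))
    child = [i for i in union if random.random() < 0.5]
    return child or list(a)

def search(pop_size=20, generations=30, max_shots=8):
    population = [sorted(random.sample(range(len(POOL)),
                                       random.randint(1, max_shots)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)), len(POOL))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    best = min(population, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best_shots, mae = search()
    print(f"best shot indices: {best_shots}  validation MAE: {mae:.2f}")
```

In a realistic setup the fitness evaluation dominates the cost, since every candidate shot set requires one LLM call per validation item, so caching estimates per shot set is the obvious first optimisation.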
Related papers
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
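A rough sketch of the self-synthetic step, assuming a generic prompt-to-completion function (`generate`, a hypothetical stand-in for the student LLM) and reducing the paper's multi-stage mechanism and quality filtering to simple deduplication:

```python
def synthesize_pairs(generate, task_instruction, seed_inputs, n_per_seed=2):
    """Ask the student LLM to invent new task inputs, then answer them itself.
    `generate` is any prompt -> completion function (a hypothetical stand-in
    for the real model call)."""
    pairs = []
    for seed in seed_inputs:
        for _ in range(n_per_seed):
            new_input = generate(
                f"{task_instruction}\nHere is an example input:\n{seed}\n"
                "Write one more input for this task:")
            output = generate(
                f"{task_instruction}\nInput:\n{new_input}\nOutput:")
            pairs.append((new_input, output))
    # Naive quality filter: drop empty or duplicated synthetic pairs.
    seen, kept = set(), []
    for inp, out in pairs:
        if inp and out and (inp, out) not in seen:
            seen.add((inp, out))
            kept.append((inp, out))
    return kept

if __name__ == "__main__":
    # Toy generate() so the sketch runs without a model.
    toy = lambda prompt: "stub completion for: " + prompt.splitlines()[-1]
    print(synthesize_pairs(toy, "Classify sentiment.", ["great movie!"]))
```

The kept pairs would then feed a finetuning job on the same student model.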
- LLM-Select: Feature Selection with Large Language Models [64.5099482021597]
Large language models (LLMs) are capable of selecting the most predictive features, with performance rivaling the standard tools of data science.
Our findings suggest that LLMs may be useful not only for selecting the best features for training but also for deciding which features to collect in the first place.
arXiv Detail & Related papers (2024-07-02T22:23:40Z)
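A minimal sketch of the idea, assuming a hypothetical `ask` prompt-to-text function and an illustrative prompt format (the paper's prompts and scoring scheme may differ):

```python
def llm_feature_scores(ask, task, feature_names):
    """Score each candidate feature by prompting an LLM. `ask` is any
    prompt -> text function (hypothetical stand-in for a model call)."""
    scores = {}
    for name in feature_names:
        reply = ask(
            f"Task: {task}\n"
            f"On a scale from 0 to 1, how predictive is the feature "
            f"'{name}' for this task? Answer with a single number.")
        try:
            scores[name] = float(reply.strip())
        except ValueError:
            scores[name] = 0.0  # unparsable answer -> treat as uninformative
    return scores

def select_top_k(scores, k):
    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    toy = lambda prompt: "0.8" if "age" in prompt else "0.2"  # toy model
    s = llm_feature_scores(toy, "predict hospital readmission",
                           ["age", "favourite colour", "blood pressure"])
    print(select_top_k(s, 2))
```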
- SLMRec: Empowering Small Language Models for Sequential Recommendation [25.920216777752]
The sequential recommendation task involves predicting the next item a user is likely to interact with.
Recent research demonstrates the great impact of LLMs on sequential recommendation systems.
Due to the huge size of LLMs, it is inefficient and impractical to deploy an LLM-based model on real-world platforms.
arXiv Detail & Related papers (2024-05-28T07:12:06Z)
- Large Language Model Enhanced Machine Learning Estimators for Classification [24.391150322835713]
Pre-trained large language models (LLMs) have emerged as a powerful tool for simulating various scenarios.
We propose a few approaches to integrate LLM into a classical machine learning estimator to further enhance the prediction performance.
arXiv Detail & Related papers (2024-05-08T22:28:57Z)
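One plausible integration, sketched under assumptions: the LLM's predicted class probability is appended as an extra feature for a classical estimator. `llm_probability` and the toy data are hypothetical; the paper proposes several approaches, of which this shows only the simplest flavour.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def llm_probability(text):
    """Hypothetical stand-in for an LLM returning P(positive class | text).
    A real version would prompt the model and parse its answer."""
    return 0.9 if "refund" in text else 0.1  # toy heuristic

# Toy labelled data: (text, tabular feature, label). Values are made up.
texts = ["please refund me", "great product", "refund now", "love it"]
tabular = np.array([[3.0], [1.0], [4.0], [0.5]])
labels = np.array([1, 0, 1, 0])

# Append the LLM's probability as an extra feature, then fit a classical
# estimator on the combined representation.
llm_feat = np.array([[llm_probability(t)] for t in texts])
X = np.hstack([tabular, llm_feat])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```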
- Evaluating Large Language Models for Health-Related Text Classification Tasks with Public Social Media Data [3.9459077974367833]
Large language models (LLMs) have demonstrated remarkable success in NLP tasks.
We benchmarked one supervised classic machine learning model based on Support Vector Machines (SVMs), three supervised pretrained language models (PLMs) based on RoBERTa, BERTweet, and SocBERT, and two LLM-based classifiers (GPT-3.5 and GPT-4) across six text classification tasks.
Our comprehensive experiments demonstrate that employing data augmentation with LLMs (GPT-4) and relatively small human-annotated data to train lightweight supervised classification models achieves superior results compared to training with human-annotated data alone.
arXiv Detail & Related papers (2024-03-27T22:05:10Z)
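A minimal sketch of that augmentation recipe, with a hypothetical `ask` function in place of GPT-4 and made-up seed data; a TF-IDF plus linear SVM pipeline stands in for the lightweight supervised models:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def llm_augment(ask, label, n):
    """Ask an LLM for n synthetic examples of a class. `ask` is a
    hypothetical prompt -> text function; real usage would call GPT-4."""
    return [ask(f"Write a short social media post that reports {label}.")
            for _ in range(n)]

# Small human-annotated seed set (toy examples, made up for the sketch).
human_texts = ["got my flu shot today", "watching the game tonight"]
human_labels = ["vaccine", "other"]

toy = lambda prompt: ("just got vaccinated, arm hurts"
                      if "vaccine" in prompt else "nice weather today")
texts = (human_texts
         + llm_augment(toy, "a vaccine experience", 2)
         + llm_augment(toy, "an unrelated topic", 2))
labels = human_labels + ["vaccine"] * 2 + ["other"] * 2

# Train a lightweight supervised classifier on the augmented set.
vec = TfidfVectorizer()
clf = LinearSVC().fit(vec.fit_transform(texts), labels)
print(clf.predict(vec.transform(["flu shot done"])))
```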
- Metric-aware LLM inference for regression and scoring [52.764328080398805]
Large language models (LLMs) have demonstrated strong results on a range of NLP tasks.
We show that the standard decoding strategy can be suboptimal for a range of regression and scoring tasks, and for their associated evaluation metrics.
We propose metric-aware LLM inference: a decision-theoretic approach optimizing for custom regression and scoring metrics at inference time.
arXiv Detail & Related papers (2024-03-07T03:24:34Z)
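The decision-theoretic idea admits a compact sketch: sample several predictions from the model, then read out the estimate that minimises expected loss under the target metric; for absolute error that is the sample median, for squared error the mean. The `toy` sampler below is an assumption standing in for temperature-sampled LLM calls:

```python
import statistics

def sample_predictions(ask, prompt, k=9):
    """Draw k numeric predictions from an LLM (temperature sampling).
    `ask` is a hypothetical prompt -> float function standing in for a
    real sampled model call."""
    return [ask(prompt) for _ in range(k)]

def metric_aware_decision(samples, metric="mae"):
    """Decision-theoretic readout: pick the estimate minimising expected
    loss over the sampled predictive distribution. For absolute error the
    optimum is the sample median; for squared error it is the mean."""
    if metric == "mae":
        return statistics.median(samples)
    if metric == "mse":
        return statistics.mean(samples)
    raise ValueError(f"unsupported metric: {metric}")

if __name__ == "__main__":
    import random
    toy = lambda prompt: random.choice([2.0, 3.0, 3.0, 5.0, 13.0])
    samples = sample_predictions(toy, "Estimate the score:")
    print(metric_aware_decision(samples, "mae"))
```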
- LLM-augmented Preference Learning from Natural Language [19.700169351688768]
Large Language Models (LLMs) are equipped to deal with larger context lengths.
LLMs can consistently outperform the SotA when the target text is large.
Few-shot learning yields better performance than zero-shot learning.
arXiv Detail & Related papers (2023-10-12T17:17:27Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
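The token-to-AST alignment admits a small illustration. The sketch below uses Python's standard `ast` module as a stand-in parser, with invented token offsets and log-probs; ASTxplainer's actual parsing and aggregation may differ:

```python
import ast

def node_spans(source):
    """Map each AST node in `source` to its character offsets, using the
    standard library parser as a convenient stand-in."""
    starts = [0]
    for line in source.splitlines(keepends=True):
        starts.append(starts[-1] + len(line))
    spans = []
    for node in ast.walk(ast.parse(source)):
        if hasattr(node, "lineno") and getattr(node, "end_lineno", None):
            begin = starts[node.lineno - 1] + node.col_offset
            end = starts[node.end_lineno - 1] + node.end_col_offset
            spans.append((type(node).__name__, begin, end))
    return spans

def align(token_offsets, token_logprobs, spans):
    """Aggregate per-token log-probs over the AST nodes containing them,
    giving a per-node-type view of model confidence."""
    agg = {}
    for (tok_start, tok_end), lp in zip(token_offsets, token_logprobs):
        for name, begin, end in spans:
            if begin <= tok_start and tok_end <= end:
                agg.setdefault(name, []).append(lp)
    return {name: sum(v) / len(v) for name, v in agg.items()}

if __name__ == "__main__":
    src = "x = 1 + 2\n"
    # Hypothetical token offsets and log-probs, as a code LLM might emit.
    offsets = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
    logprobs = [-0.1, -0.5, -0.2, -0.9, -0.3]
    print(align(offsets, logprobs, node_spans(src)))
```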
- Language models are weak learners [71.33837923104808]
We show that prompt-based large language models can operate effectively as weak learners.
We incorporate these models into a boosting approach, which can leverage the knowledge within the model to outperform traditional tree-based boosting.
Results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.
arXiv Detail & Related papers (2023-06-25T02:39:19Z)
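A toy sketch of the boosting idea: a standard AdaBoost loop whose weak learner is a prompted few-shot classifier. `llm_classify` is a hypothetical stand-in (a keyword-overlap vote) so the sketch runs without a model; the paper's actual prompting scheme is not reproduced here.

```python
import math
import random

def llm_classify(shots, text):
    """Hypothetical prompt-based weak learner. A real version would build a
    few-shot prompt from `shots` and ask an LLM for a +/-1 label; this toy
    stand-in votes with the labels of shots sharing a word with the input."""
    votes = [y for x, y in shots if set(x.split()) & set(text.split())]
    return 1 if sum(votes) >= 0 else -1

def boost(data, rounds=5, n_shots=4):
    """AdaBoost over prompt-based weak learners: each round resamples
    few-shot examples according to the current example weights."""
    weights = [1.0 / len(data)] * len(data)
    ensemble = []  # (alpha, shots) pairs
    for _ in range(rounds):
        shots = random.choices(data, weights=weights, k=n_shots)
        preds = [llm_classify(shots, x) for x, _ in data]
        err = sum(w for w, p, (_, y) in zip(weights, preds, data) if p != y)
        err = min(max(err, 1e-9), 1 - 1e-9)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, shots))
        weights = [w * math.exp(-alpha * y * p)
                   for w, p, (_, y) in zip(weights, preds, data)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, text):
    score = sum(a * llm_classify(shots, text) for a, shots in ensemble)
    return 1 if score >= 0 else -1

if __name__ == "__main__":
    data = [("good fast service", 1), ("slow and rude", -1),
            ("really good value", 1), ("rude staff", -1)]
    print(predict(boost(data), "good staff"))
```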
- Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference while retaining comparable performance.
arXiv Detail & Related papers (2021-09-09T12:32:28Z)
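The kNN-LM readout underlying this line of work is easy to sketch: interpolate the parametric model's next-token distribution with a distribution built from the nearest neighbours of the current context embedding in a datastore of (context embedding, next token) pairs. Everything below is toy data; the reported speed-ups come from making this lookup cheaper, not from changing the interpolation itself.

```python
import numpy as np

def knn_lm_next_token(p_lm, context_vec, keys, values, vocab_size,
                      k=4, lam=0.25, temperature=1.0):
    """Interpolate a parametric LM's next-token distribution with a
    k-nearest-neighbour distribution from a (key, value) datastore of
    (context embedding, next token) pairs. All inputs here are toy
    stand-ins for real model states."""
    dists = np.linalg.norm(keys - context_vec, axis=1)
    nn = np.argsort(dists)[:k]
    weights = np.exp(-dists[nn] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, values[nn]):
        p_knn[tok] += w
    return lam * p_knn + (1.0 - lam) * p_lm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = 10
    keys = rng.normal(size=(100, 8))           # toy datastore embeddings
    values = rng.integers(0, vocab, size=100)  # toy next-token ids
    p_lm = np.full(vocab, 1.0 / vocab)         # toy uniform LM distribution
    ctx = rng.normal(size=8)
    p = knn_lm_next_token(p_lm, ctx, keys, values, vocab)
    print(p.round(3), p.sum())
```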