Can Machine Learning Support the Selection of Studies for Systematic Literature Review Updates?
- URL: http://arxiv.org/abs/2502.08050v1
- Date: Wed, 12 Feb 2025 01:13:52 GMT
- Title: Can Machine Learning Support the Selection of Studies for Systematic Literature Review Updates?
- Authors: Marcelo Costalonga, Bianca Minetto Napoleão, Maria Teresa Baldassarre, Katia Romero Felizardo, Igor Steinmacher, Marcos Kalinowski
- Abstract summary: Systematic literature reviews (SLRs) are essential for synthesizing evidence in Software Engineering (SE).
This study aims to evaluate if Machine Learning (ML) text classification models can support reviewers in the study selection for SLR updates.
- Score: 12.89869400063012
- License:
- Abstract: [Background] Systematic literature reviews (SLRs) are essential for synthesizing evidence in Software Engineering (SE), but keeping them up-to-date requires substantial effort. Study selection, one of the most labor-intensive steps, involves reviewing numerous studies and requires multiple reviewers to minimize bias and avoid loss of evidence. [Objective] This study aims to evaluate whether Machine Learning (ML) text classification models can support reviewers in the study selection for SLR updates. [Method] We reproduced the study selection of an SLR update performed by three SE researchers. We trained two supervised ML models (Random Forest and Support Vector Machines) with different configurations using data from the original SLR. We calculated the study selection effectiveness of the ML models for the SLR update in terms of precision, recall, and F-measure. We also compared the performance of human-ML pairs with human-only pairs when selecting studies. [Results] The ML models achieved a modest F-score of 0.33, which is insufficient for reliable automation. However, we found that such models can reduce the study selection effort by 33.9% without loss of evidence (maintaining 100% recall). Our analysis also showed that the initial screening by pairs of human reviewers produces results that are much better aligned with the final SLR update outcome than those of human-ML pairs. [Conclusion] Based on our results, we conclude that although ML models can help reduce the effort involved in SLR updates, achieving rigorous and reliable outcomes still requires the expertise of experienced human reviewers for the initial screening phase.
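The selection task described in the abstract is a standard supervised text-classification setup, so a brief sketch may help make it concrete. The snippet below is a minimal reconstruction, not the authors' code: it trains Random Forest and SVM pipelines (scikit-learn) on the labeled studies of the original SLR, scores the candidate studies of the update, reports precision, recall, and F-measure, and estimates the effort reduction achievable at 100% recall by ranking candidates by predicted inclusion probability. The TF-IDF features, hyperparameters, and function names are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' implementation) of study
# selection for an SLR update with Random Forest and SVM text classifiers.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC


def build_models():
    """One pipeline per classifier; TF-IDF over title + abstract text is an assumption."""
    return {
        "random_forest": make_pipeline(
            TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
            RandomForestClassifier(n_estimators=300, random_state=0),
        ),
        "svm": make_pipeline(
            TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
            SVC(kernel="linear", probability=True, random_state=0),
        ),
    }


def evaluate(train_texts, train_labels, update_texts, update_labels):
    """Train on the original SLR data and evaluate on the SLR update candidates."""
    for name, model in build_models().items():
        model.fit(train_texts, train_labels)  # labels: 1 = included, 0 = excluded
        predicted = model.predict(update_texts)
        p, r, f1, _ = precision_recall_fscore_support(
            update_labels, predicted, average="binary", zero_division=0
        )
        print(f"{name}: precision={p:.2f} recall={r:.2f} F={f1:.2f}")

        # Effort reduction at 100% recall: rank candidates by predicted
        # inclusion probability and count how many low-ranked studies could be
        # skipped while still retaining every relevant one. Assumes at least
        # one relevant study exists in the update set. The 33.9% figure in the
        # abstract is an empirical result, not reproduced by this sketch.
        scores = model.predict_proba(update_texts)[:, 1]
        ranked = sorted(zip(scores, update_labels), reverse=True)
        last_relevant = max(i for i, (_, y) in enumerate(ranked) if y == 1)
        saved = 1 - (last_relevant + 1) / len(ranked)
        print(f"{name}: effort reduction at 100% recall = {saved:.1%}")
```

Ranking by predicted probability rather than applying a hard decision threshold is one plausible way to trade screening effort against recall: reviewers read from the top of the ranked list, so studies below the last relevant one can be skipped without losing evidence.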
Related papers
- S$^2$R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning [51.84977135926156]
We introduce S$^2$R, an efficient framework that enhances LLM reasoning by teaching models to self-verify and self-correct during inference.
Our results demonstrate that Qwen2.5-math-7B achieves an accuracy improvement from 51.0% to 81.6%, outperforming models trained on an equivalent amount of long-CoT distilled data.
arXiv Detail & Related papers (2025-02-18T13:40:22Z)
- What Makes In-context Learning Effective for Mathematical Reasoning: A Theoretical Analysis [81.15503859645149]
In this paper, we aim to theoretically analyze the impact of in-context demonstrations on large language models' reasoning performance.
We propose a straightforward, generalizable, and low-complexity demonstration selection method named LMS3.
arXiv Detail & Related papers (2024-12-11T11:38:11Z)
- Automated Feedback in Math Education: A Comparative Analysis of LLMs for Open-Ended Responses [0.0]
This study aims to explore the potential of Large Language Models (LLMs) in facilitating automated feedback in math education.
We employ Mistral, a version of Llama catered to math, and fine-tune this model for evaluating student responses by leveraging a dataset of student responses and teacher-written feedback for middle-school math problems.
We evaluate the model's performance in scoring accuracy and the quality of feedback by utilizing judgments from 2 teachers.
arXiv Detail & Related papers (2024-10-29T16:57:45Z)
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- Self-Exploring Language Models: Active Preference Elicitation for Online Alignment [88.56809269990625]
We propose a bilevel objective optimistically biased towards potentially high-reward responses to actively explore out-of-distribution regions.
Our experimental results demonstrate that when fine-tuned on Zephyr-7B-SFT and Llama-3-8B-Instruct models, Self-Exploring Language Models (SELM) significantly boosts the performance on instruction-following benchmarks.
arXiv Detail & Related papers (2024-05-29T17:59:07Z)
- Investigating Automatic Scoring and Feedback using Large Language Models [46.1232919707345]
This paper explores the efficacy of PEFT-based quantized models, employing a classification or regression head, to fine-tune language models for automatic grading and feedback generation.
The results show that grade scores predicted by fine-tuned LLMs are highly accurate, achieving less than 3% error in grade percentage on average.
arXiv Detail & Related papers (2024-05-01T16:13:54Z)
- System for systematic literature review using multiple AI agents: Concept and an empirical evaluation [5.194208843843004]
We introduce a novel multi-AI agent model designed to fully automate the process of conducting Systematic Literature Reviews.
The model operates through a user-friendly interface where researchers input their topic.
It generates a search string used to retrieve relevant academic papers.
The model then autonomously summarizes the abstracts of these papers.
arXiv Detail & Related papers (2024-03-13T10:27:52Z)
- Emerging Results on Automated Support for Searching and Selecting Evidence for Systematic Literature Review Updates [1.1153433121962064]
We present emerging results on an automated approach to support searching and selecting studies for SLR updates in Software Engineering.
We developed an automated tool prototype to perform the snowballing search technique and support selecting relevant studies for SLR updates using Machine Learning (ML) algorithms.
arXiv Detail & Related papers (2024-02-07T23:39:20Z)
- BLESS: Benchmarking Large Language Models on Sentence Simplification [55.461555829492866]
We present BLESS, a performance benchmark of the most recent state-of-the-art large language models (LLMs) on the task of text simplification (TS).
We assess a total of 44 models, differing in size, architecture, pre-training methods, and accessibility, on three test sets from different domains (Wikipedia, news, and medical) under a few-shot setting.
Our evaluation indicates that the best LLMs, despite not being trained on TS, perform comparably with state-of-the-art TS baselines.
arXiv Detail & Related papers (2023-10-24T12:18:17Z)
- Machine Learning Data Suitability and Performance Testing Using Fault Injection Testing Framework [0.0]
This paper presents the Fault Injection for Undesirable Learning in input Data (FIUL-Data) testing framework.
Data mutators explore vulnerabilities of ML systems against the effects of different fault injections.
This paper evaluates the framework using data from analytical chemistry, comprising retention time measurements of anti-sense oligonucleotides.
arXiv Detail & Related papers (2023-09-20T12:58:35Z)
- On Learning to Summarize with Large Language Models as References [101.79795027550959]
Summaries generated by large language models (LLMs) are favored by human annotators over the original reference summaries in commonly used summarization datasets.
We study an LLM-as-reference learning setting for smaller text summarization models to investigate whether their performance can be substantially improved.
arXiv Detail & Related papers (2023-05-23T16:56:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.