How Many Papers Should You Review? A Research Synthesis of Systematic
Literature Reviews in Software Engineering
- URL: http://arxiv.org/abs/2307.06056v1
- Date: Wed, 12 Jul 2023 10:18:58 GMT
- Title: How Many Papers Should You Review? A Research Synthesis of Systematic
Literature Reviews in Software Engineering
- Authors: Xiaofeng Wang, Henry Edison, Dron Khanna and Usman Rafiq
- Abstract summary: We aim to provide more understanding of when an SLR in Software Engineering should be conducted.
A research synthesis was conducted on a sample of 170 SLRs published in top-tier SE journals.
The results of our study can be used by SE researchers as an indicator or benchmark to understand whether an SLR is conducted at a good time.
- Score: 5.6292136785289175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: [Context] Systematic Literature Review (SLR) has been a major type of study
published in Software Engineering (SE) venues for about two decades. However,
there is a lack of understanding of whether an SLR is really needed in
comparison to a more conventional literature review. Very often, SE researchers
embark on an SLR with such doubts. We aspire to provide more understanding of
when an SLR in SE should be conducted. [Objective] The first step of our
investigation was focused on the dataset, i.e., the reviewed papers, in an SLR,
which indicates the development of a research topic or area. The objective of
this step is to provide a better understanding of the characteristics of the
datasets of SLRs in SE. [Method] A research synthesis was conducted on a sample
of 170 SLRs published in top-tier SE journals. We extracted and analysed the
quantitative attributes of the datasets of these SLRs. [Results] The findings
show that the median size of the datasets in our sample is 57 reviewed papers,
and the median review period covered is 14 years. The number of reviewed papers
and review period have a very weak and non-significant positive correlation.
[Conclusions] The results of our study can be used by SE researchers as an
indicator or benchmark to understand whether an SLR is conducted at a good
time.
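The reported figures are simple descriptive statistics over the extracted dataset attributes. Below is a minimal sketch of that kind of analysis, assuming a hypothetical file `slr_attributes.csv` with one row per sampled SLR and illustrative columns `num_papers` and `review_period_years` (these names are not from the paper, and the abstract does not state which correlation test was used; Spearman's rank correlation is shown here as one plausible choice).

```python
# Minimal sketch of the descriptive analysis reported in the abstract.
# Assumes a hypothetical "slr_attributes.csv" with one row per sampled SLR and
# illustrative columns "num_papers" and "review_period_years".
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("slr_attributes.csv")

# Median dataset size and median review period across the sampled SLRs.
median_size = df["num_papers"].median()
median_period = df["review_period_years"].median()

# Rank correlation between dataset size and review period
# (the abstract reports a very weak, non-significant positive correlation).
rho, p_value = spearmanr(df["num_papers"], df["review_period_years"])

print(f"Median number of reviewed papers: {median_size}")
print(f"Median review period (years):     {median_period}")
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

A non-significant p-value here would correspond to the paper's conclusion that a larger dataset does not reliably accompany a longer review period.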
Related papers
- LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing [106.45895712717612]
Large language models (LLMs) have shown remarkable versatility in various generative tasks.
This study focuses on how LLMs can assist NLP researchers.
To our knowledge, this is the first work to provide such a comprehensive analysis.
arXiv Detail & Related papers (2024-06-24T01:30:22Z)
- Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions [62.0123588983514]
Large Language Models (LLMs) have demonstrated wide-ranging applications across various fields.
We reformulate the peer-review process as a multi-turn, long-context dialogue, incorporating distinct roles for authors, reviewers, and decision makers.
We construct a comprehensive dataset containing over 26,841 papers with 92,017 reviews collected from multiple sources.
arXiv Detail & Related papers (2024-06-09T08:24:17Z)
- System for systematic literature review using multiple AI agents: Concept and an empirical evaluation [5.194208843843004]
We introduce a novel multi-AI agent model designed to fully automate the process of conducting Systematic Literature Reviews.
The model operates through a user-friendly interface where researchers input their topic.
It generates a search string used to retrieve relevant academic papers.
The model then autonomously summarizes the abstracts of these papers.
arXiv Detail & Related papers (2024-03-13T10:27:52Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Artificial Intelligence for Literature Reviews: Opportunities and Challenges [0.0]
This manuscript presents a comprehensive review of the use of Artificial Intelligence in Systematic Literature Reviews.
An SLR is a rigorous and organised methodology that assesses and integrates previous research on a given topic.
We examine 21 leading SLR tools using a framework that combines 23 traditional features with 11 AI features.
arXiv Detail & Related papers (2024-02-13T16:05:51Z)
- Expanding Horizons in HCI Research Through LLM-Driven Qualitative Analysis [3.5253513747455303]
We introduce a new approach to qualitative analysis in HCI using Large Language Models (LLMs).
Our findings indicate that LLMs not only match the efficacy of traditional analysis methods but also offer unique insights.
arXiv Detail & Related papers (2024-01-07T12:39:31Z)
- CSMeD: Bridging the Dataset Gap in Automated Citation Screening for Systematic Literature Reviews [10.207938863784829]
We introduce CSMeD, a meta-dataset consolidating nine publicly released collections.
CSMeD serves as a comprehensive resource for training and evaluating the performance of automated citation screening models.
We introduce CSMeD-FT, a new dataset designed explicitly for evaluating the full text publication screening task.
arXiv Detail & Related papers (2023-11-21T09:36:11Z)
- L-Eval: Instituting Standardized Evaluation for Long Context Language Models [91.05820785008527]
We propose L-Eval to institute a more standardized evaluation for long context language models (LCLMs).
We build a new evaluation suite containing 20 sub-tasks, 508 long documents, and over 2,000 human-labeled query-response pairs.
Results show that popular n-gram matching metrics generally cannot correlate well with human judgment.
arXiv Detail & Related papers (2023-07-20T17:59:41Z)
- Sentiment Analysis in the Era of Large Language Models: A Reality Check [69.97942065617664]
This paper investigates the capabilities of large language models (LLMs) in performing various sentiment analysis tasks.
We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets.
arXiv Detail & Related papers (2023-05-24T10:45:25Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protected attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- A Comprehensive Review of Sign Language Recognition: Different Types, Modalities, and Datasets [0.0]
Sign language recognition (SLR) usage has increased in many applications, but the environment, background image resolution, modalities, and datasets strongly affect its performance.
This review paper facilitates a comprehensive overview of SLR and discusses the needs, challenges, and problems associated with SLR.
Research progress and existing state-of-the-art SLR models over the past decade have been reviewed.
arXiv Detail & Related papers (2022-04-07T09:49:12Z)