Open Source Software for Efficient and Transparent Reviews
- URL: http://arxiv.org/abs/2006.12166v3
- Date: Fri, 4 Dec 2020 08:25:18 GMT
- Title: Open Source Software for Efficient and Transparent Reviews
- Authors: Rens van de Schoot, Jonathan de Bruin, Raoul Schram, Parisa Zahedi,
Jan de Boer, Felix Weijdema, Bianca Kramer, Martijn Huijts, Maarten
Hoogerwerf, Gerbrich Ferdinands, Albert Harkema, Joukje Willemsen, Yongchao
Ma, Qixiang Fang, Sybren Hindriks, Lars Tummers, Daniel Oberski
- Abstract summary: ASReview is an open source machine learning-aided pipeline applying active learning.
We demonstrate by means of simulation studies that ASReview can yield far more efficient reviewing than manual reviewing.
- Score: 0.11179881480027788
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To help researchers conduct a systematic review or meta-analysis as
efficiently and transparently as possible, we designed a tool (ASReview) to
accelerate the step of screening titles and abstracts. For many tasks -
including but not limited to systematic reviews and meta-analyses - the
scientific literature needs to be checked systematically. Currently, scholars
and practitioners screen thousands of studies by hand to determine which
studies to include in their review or meta-analysis. This is error-prone and
inefficient because of extremely imbalanced data: only a fraction of the
screened studies is relevant. The future of systematic reviewing will be an
interaction with machine learning algorithms to deal with the enormous increase
of available text. We therefore developed an open source machine learning-aided
pipeline applying active learning: ASReview. We demonstrate by means of
simulation studies that ASReview can yield far more efficient reviewing than
manual reviewing, while providing high quality. Furthermore, we describe the
options of the free and open source research software and present the results
from user experience tests. We invite the community to contribute to open
source projects such as our own that provide measurable and reproducible
improvements over current practice.
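To make the active learning idea behind the pipeline concrete, below is a minimal, hypothetical sketch of machine learning-aided screening with certainty-based querying, built on scikit-learn. It illustrates the general query-label-retrain loop rather than ASReview's actual API; the toy records, seed labels, and the `ask_reviewer` stub are illustrative stand-ins.
```python
# Minimal sketch of active-learning-aided screening of titles/abstracts.
# Illustrative only: this is not ASReview's API; the toy records, seed
# labels, and ask_reviewer stub are hypothetical stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

records = [
    "active learning for abstract screening",
    "survey of deep learning hardware",
    "meta-analysis of randomized trials",
    "irrelevant note on compiler design",
]
labels = np.full(len(records), -1)   # -1 = not yet screened by the reviewer
labels[0], labels[1] = 1, 0          # prior knowledge: one relevant, one irrelevant record

X = TfidfVectorizer().fit_transform(records)
model = MultinomialNB()

def ask_reviewer(text):
    """Stand-in for the human-in-the-loop decision (hypothetical stub)."""
    return 0

while (labels == -1).any():
    known = labels != -1
    model.fit(X[known], labels[known])
    # Query the unscreened record the model considers most likely relevant,
    # so relevant studies surface early despite the extreme class imbalance.
    pool = np.where(~known)[0]
    p_relevant = model.predict_proba(X[pool])[:, list(model.classes_).index(1)]
    pick = pool[int(np.argmax(p_relevant))]
    labels[pick] = ask_reviewer(records[pick])
```
In practice the feature extraction, classifier, and stopping rule are configurable; the point of the sketch is only the loop that lets a reviewer see likely-relevant records first.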
Related papers
- LLAssist: Simple Tools for Automating Literature Review Using Large Language Models [0.0]
LLAssist is an open-source tool designed to streamline literature reviews in academic research.
It uses Large Language Models (LLMs) and Natural Language Processing (NLP) techniques to automate key aspects of the review process.
arXiv Detail & Related papers (2024-07-19T02:48:54Z) - SyROCCo: Enhancing Systematic Reviews using Machine Learning [6.805429133535976]
This paper explores the use of machine learning techniques to help navigate the systematic review process.
The application of ML techniques to subsequent stages of a review, such as data extraction and evidence mapping, is in its infancy.
arXiv Detail & Related papers (2024-06-24T11:04:43Z) - Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models [95.96734086126469]
Large language models (LLMs) can serve as the assistant to help users accomplish their jobs, and also support the development of advanced applications.
For the wide application of LLMs, the inference efficiency is an essential concern, which has been widely studied in existing work.
We perform a detailed coarse-to-fine analysis of the inference performance of various code libraries.
arXiv Detail & Related papers (2024-04-17T15:57:50Z) - Automated Extraction and Maturity Analysis of Open Source Clinical Informatics Repositories from Scientific Literature [0.0]
This study introduces an automated methodology to bridge the gap by systematically extracting GitHub repository URLs from academic papers indexed in arXiv.
Our approach encompasses querying the arXiv API for relevant papers, cleaning extracted GitHub URLs, fetching comprehensive repository information via the GitHub API, and analyzing repository maturity based on defined metrics such as stars, forks, open issues, and contributors.
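A condensed, hypothetical sketch of the pipeline described above (querying the public arXiv API, extracting GitHub URLs from abstracts, and pulling basic maturity metrics from the GitHub API) might look as follows; the query string, helper names, and metric selection are illustrative and do not reproduce the paper's implementation.
```python
# Condensed sketch of the described pipeline: query arXiv, pull GitHub URLs
# from abstracts, then fetch basic maturity metrics. Illustrative only.
import re
import requests
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
GITHUB_API = "https://api.github.com/repos"

def arxiv_abstracts(query, max_results=20):
    """Return abstracts for papers matching an arXiv search query."""
    params = {"search_query": query, "max_results": max_results}
    feed = ET.fromstring(requests.get(ARXIV_API, params=params).text)
    ns = {"a": "http://www.w3.org/2005/Atom"}
    return [e.findtext("a:summary", default="", namespaces=ns)
            for e in feed.findall("a:entry", ns)]

def github_urls(text):
    """Extract owner/repo slugs from GitHub URLs found in free text."""
    return re.findall(r"github\.com/([\w.-]+/[\w.-]+)", text)

def repo_maturity(slug):
    """Fetch simple maturity metrics for a repository.
    Note: unauthenticated GitHub API calls are rate-limited, and contributor
    counts would need the separate /contributors endpoint (omitted here)."""
    data = requests.get(f"{GITHUB_API}/{slug}").json()
    return {"stars": data.get("stargazers_count"),
            "forks": data.get("forks_count"),
            "open_issues": data.get("open_issues_count")}

for abstract in arxiv_abstracts("clinical informatics open source"):
    for slug in github_urls(abstract):
        print(slug, repo_maturity(slug))
```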
arXiv Detail & Related papers (2024-03-20T17:06:51Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - A Reliable Knowledge Processing Framework for Combustion Science using
Foundation Models [0.0]
The study introduces an approach to process diverse combustion research data, spanning experimental studies, simulations, and literature.
The developed approach minimizes computational and economic expenses while optimizing data privacy and accuracy.
The framework consistently delivers accurate domain-specific responses with minimal human oversight.
arXiv Detail & Related papers (2023-12-31T17:15:25Z) - Automated Grading and Feedback Tools for Programming Education: A
Systematic Review [7.776434991976473]
Most papers assess the correctness of assignments in object-oriented languages.
Few tools assess the maintainability, readability or documentation of the source code.
Most tools offered fully automated assessment to allow for near-instantaneous feedback.
arXiv Detail & Related papers (2023-06-20T17:54:50Z) - Investigating Fairness Disparities in Peer Review: A Language Model
Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LLMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Towards Informed Design and Validation Assistance in Computer Games
Using Imitation Learning [65.12226891589592]
This paper proposes a new approach to automated game validation and testing.
Our method leverages a data-driven imitation learning technique, which requires little effort and time and no knowledge of machine learning or programming.
arXiv Detail & Related papers (2022-08-15T11:08:44Z) - What Makes Good Contrastive Learning on Small-Scale Wearable-based
Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z) - Bayesian active learning for production, a systematic study and a
reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and a larger query size.
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.