SWARM-SLR -- Streamlined Workflow Automation for Machine-actionable Systematic Literature Reviews
- URL: http://arxiv.org/abs/2407.18657v1
- Date: Fri, 26 Jul 2024 10:46:14 GMT
- Title: SWARM-SLR -- Streamlined Workflow Automation for Machine-actionable Systematic Literature Reviews
- Authors: Tim Wittenborg, Oliver Karras, Sören Auer
- Abstract summary: We propose the Streamlined Workflow Automation for Machine-actionable Systematic Literature Reviews (SWARM-SLR) to crowdsource the improvement of SLR efficiency.
By synthesizing guidelines from the literature, we have composed a set of 65 requirements, spanning from planning to reporting a review.
Existing tools were assessed against these requirements and synthesized into the SWARM-SLR workflow prototype, a ready-for-operation software support tool.
- Score: 0.4915744683251149
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Authoring survey or review articles still requires significant tedious manual effort, despite many advancements in research knowledge management having the potential to improve efficiency, reproducibility, and reuse. However, these advancements bring forth an increasing number of approaches, tools, and systems, which often cover only specific stages and lack a comprehensive workflow utilizing their task-specific strengths. We propose the Streamlined Workflow Automation for Machine-actionable Systematic Literature Reviews (SWARM-SLR) to crowdsource the improvement of SLR efficiency while maintaining scientific integrity in a state-of-the-art knowledge discovery and distribution process. The workflow aims to domain-independently support researchers in collaboratively and sustainably managing the rising scholarly knowledge corpus. By synthesizing guidelines from the literature, we have composed a set of 65 requirements, spanning from planning to reporting a review. Existing tools were assessed against these requirements and synthesized into the SWARM-SLR workflow prototype, a ready-for-operation software support tool. The SWARM-SLR was evaluated via two online surveys, which largely confirmed the validity of the 65 requirements and situated 11 tools to the different life-cycle stages. The SWARM-SLR workflow was similarly evaluated and found to be supporting almost the entire span of an SLR, excelling specifically in search and retrieval, information extraction, knowledge synthesis, and distribution. Our SWARM-SLR requirements and workflow support tool streamlines the SLR support for researchers, allowing sustainable collaboration by linking individual efficiency improvements to crowdsourced knowledge management. If these efforts are continued, we expect the increasing number of tools to be manageable and usable inside fully structured, (semi-)automated literature review workflows.
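The abstract describes 65 requirements spanning the SLR life-cycle and tools situated within its stages. Below is a minimal sketch of how such a requirement-and-tool mapping could be represented as machine-actionable data; the stage names, example requirements, and tool entries are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of organizing SLR-workflow
# requirements and tools as machine-actionable data. All concrete stage names,
# requirements, and tools below are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str          # e.g. "R01" -- one of the 65 requirements
    stage: str        # SLR life-cycle stage the requirement belongs to
    description: str

@dataclass
class Tool:
    name: str
    stages: list[str] = field(default_factory=list)  # stages the tool supports

STAGES = ["planning", "search and retrieval", "information extraction",
          "knowledge synthesis", "reporting and distribution"]

requirements = [
    Requirement("R01", "planning", "Define research questions and scope"),
    Requirement("R02", "search and retrieval", "Document search strings and sources"),
]

tools = [
    Tool("reference manager (hypothetical)", ["search and retrieval"]),
    Tool("knowledge graph service (hypothetical)",
         ["knowledge synthesis", "reporting and distribution"]),
]

def coverage(tools: list[Tool], stages: list[str]) -> dict[str, list[str]]:
    """Map each life-cycle stage to the tools that claim to support it."""
    return {s: [t.name for t in tools if s in t.stages] for s in stages}

if __name__ == "__main__":
    for stage, supported in coverage(tools, STAGES).items():
        print(f"{stage}: {supported or 'no tool assigned'}")
```

Representing the mapping this way makes per-stage gaps in tool support directly queryable, which is the kind of machine-actionability the workflow aims at.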
Related papers
- LLM Agents Making Agent Tools [2.5529148902034637]
Tool use has turned large language models (LLMs) into powerful agents that can perform complex multi-step tasks.
We propose ToolMaker, a novel agentic framework that autonomously transforms papers with code into LLM-compatible tools.
Given a short task description and a repository URL, ToolMaker autonomously installs required dependencies and generates code to perform the task.
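A hedged sketch of the kind of wrapper this description implies (clone a repository, install its declared dependencies, have an LLM produce a task script) is shown below; the llm_generate_code helper and the overall flow are assumptions for illustration, not the paper's ToolMaker implementation.

```python
# Hedged sketch of a ToolMaker-style loop, NOT the paper's code.
# llm_generate_code() is a hypothetical stand-in for any LLM call.
import pathlib
import subprocess
import tempfile

def llm_generate_code(task: str, repo_readme: str) -> str:
    """Hypothetical placeholder: an LLM would turn the task + repo docs into a script."""
    return f"print('TODO: perform task: {task}')"

def make_tool(task_description: str, repo_url: str) -> pathlib.Path:
    workdir = pathlib.Path(tempfile.mkdtemp())
    # Fetch the repository referenced by the URL.
    subprocess.run(["git", "clone", "--depth", "1", repo_url, str(workdir / "repo")],
                   check=True)
    # Install declared dependencies, if the repository lists any.
    req = workdir / "repo" / "requirements.txt"
    if req.exists():
        subprocess.run(["pip", "install", "-r", str(req)], check=True)
    readme = workdir / "repo" / "README.md"
    readme_text = readme.read_text() if readme.exists() else ""
    # Generate an entry-point script for the requested task.
    script = workdir / "tool.py"
    script.write_text(llm_generate_code(task_description, readme_text))
    return script
```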
arXiv Detail & Related papers (2025-02-17T11:44:11Z)
- From Human Annotation to LLMs: SILICON Annotation Workflow for Management Research [13.818244562506138]
This paper introduces the "SILICON" (Systematic Inference with LLMs for Information Classification and Notation) workflow.
The workflow integrates established principles of human annotation with systematic prompt optimization and model selection.
We validate the SILICON workflow through seven case studies covering common management research tasks.
arXiv Detail & Related papers (2024-12-19T02:21:41Z)
- PROMPTHEUS: A Human-Centered Pipeline to Streamline SLRs with LLMs [0.0]
PROMPTHEUS is an AI-driven pipeline solution for Systematic Literature Reviews.
It automates key stages of the SLR process, including systematic search, data extraction, topic modeling, and summarization.
It achieves high precision, provides coherent topic organization, and reduces review time.
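A minimal sketch of a pipeline with the stages named above (systematic search, data extraction, topic modeling, summarization) follows; every function is an illustrative placeholder rather than PROMPTHEUS code.

```python
# Illustrative pipeline skeleton for the stages listed above; all functions are
# placeholders, not the PROMPTHEUS implementation.
import re
from collections import Counter

def systematic_search(query: str) -> list[dict]:
    # Placeholder: a real pipeline would query scholarly APIs with the search string.
    return [{"title": "Example paper", "abstract": "LLMs support literature reviews."}]

def extract_data(papers: list[dict]) -> list[str]:
    return [p["abstract"] for p in papers]

def topic_model(abstracts: list[str], k: int = 3) -> list[str]:
    # Crude stand-in for topic modeling: most frequent content words.
    words = re.findall(r"[a-z]{4,}", " ".join(abstracts).lower())
    return [w for w, _ in Counter(words).most_common(k)]

def summarize(abstracts: list[str]) -> str:
    # Placeholder: an LLM summarizer would go here.
    return f"{len(abstracts)} abstracts reviewed."

if __name__ == "__main__":
    papers = systematic_search("systematic literature review automation")
    abstracts = extract_data(papers)
    print("topics:", topic_model(abstracts))
    print("summary:", summarize(abstracts))
```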
arXiv Detail & Related papers (2024-10-21T13:05:33Z)
- From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions [60.733557487886635]
This paper focuses on bridging the comprehension gap between Large Language Models and external tools.
We propose a novel framework, DRAFT, aimed at Dynamically refining tool documentation.
Extensive experiments on multiple datasets demonstrate that DRAFT's iterative, feedback-based refinement significantly ameliorates documentation quality.
arXiv Detail & Related papers (2024-10-10T17:58:44Z)
- WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks [85.95607119635102]
Large language models (LLMs) can mimic human-like intelligence.
WorkArena++ is designed to evaluate the planning, problem-solving, logical/arithmetic reasoning, retrieval, and contextual understanding abilities of web agents.
arXiv Detail & Related papers (2024-07-07T07:15:49Z)
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
- MarkLLM: An Open-Source Toolkit for LLM Watermarking [80.00466284110269]
MarkLLM is an open-source toolkit for implementing LLM watermarking algorithms.
For evaluation, MarkLLM offers a comprehensive suite of 12 tools spanning three perspectives, along with two types of automated evaluation pipelines.
arXiv Detail & Related papers (2024-05-16T12:40:01Z)
- Automating Research Synthesis with Domain-Specific Large Language Model Fine-Tuning [0.9110413356918055]
This research pioneers the use of fine-tuned Large Language Models (LLMs) to automate Systematic Literature Reviews (SLRs).
Our study employed the latest fine-tuning methodologies together with open-sourced LLMs, and demonstrated a practical and efficient approach to automating the final execution stages of an SLR process.
The results showed high factual accuracy in the LLM responses and were validated by replicating an existing PRISMA-conforming SLR.
arXiv Detail & Related papers (2024-04-08T00:08:29Z)
- System for systematic literature review using multiple AI agents: Concept and an empirical evaluation [5.194208843843004]
We introduce a novel multi-AI agent model designed to fully automate the process of conducting Systematic Literature Reviews.
The model operates through a user-friendly interface where researchers input their topic.
It generates a search string used to retrieve relevant academic papers.
The model then autonomously summarizes the abstracts of these papers.
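A hedged sketch of the agent flow summarized above (topic, search string, retrieval, abstract summaries) follows; the agent names and their placeholder logic are assumptions for illustration, not the authors' model.

```python
# Illustrative multi-agent flow: topic -> search string -> retrieval -> summaries.
# Agent names and bodies are placeholders, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str

def search_string_agent(topic: str) -> str:
    # Placeholder for an LLM agent that builds a Boolean search string.
    return " AND ".join(f'"{term}"' for term in topic.split())

def retrieval_agent(search_string: str) -> list[Paper]:
    # Placeholder: a real agent would query digital libraries with the string.
    return [Paper("Example study", "We evaluate automation of literature reviews.")]

def summarizer_agent(paper: Paper) -> str:
    # Placeholder for an LLM agent summarizing each abstract.
    return paper.abstract.split(".")[0] + "."

if __name__ == "__main__":
    query = search_string_agent("systematic literature review automation")
    for paper in retrieval_agent(query):
        print(paper.title, "->", summarizer_agent(paper))
```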
arXiv Detail & Related papers (2024-03-13T10:27:52Z)
- WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks? [83.19032025950986]
We study the use of large language model-based agents for interacting with software via web browsers.
WorkArena is a benchmark of 33 tasks based on the widely-used ServiceNow platform.
BrowserGym is an environment for the design and evaluation of such agents.
arXiv Detail & Related papers (2024-03-12T14:58:45Z)
- TaskBench: Benchmarking Large Language Models for Task Automation [82.2932794189585]
We introduce TaskBench, a framework to evaluate the capability of large language models (LLMs) in task automation.
Specifically, task decomposition, tool selection, and parameter prediction are assessed.
Our approach combines automated construction with rigorous human verification, ensuring high consistency with human evaluation.
arXiv Detail & Related papers (2023-11-30T18:02:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.