CRUISE-Screening: Living Literature Reviews Toolbox
- URL: http://arxiv.org/abs/2309.01684v1
- Date: Mon, 4 Sep 2023 15:58:43 GMT
- Title: CRUISE-Screening: Living Literature Reviews Toolbox
- Authors: Wojciech Kusa, Petr Knoth, Allan Hanbury
- Abstract summary: CRUISE-Screening is a web-based application for conducting living literature reviews.
It is connected to several search engines via an API, which allows for updating the search results periodically.
- Score: 8.292338880619061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Keeping up with research and finding related work is still a time-consuming
task for academics. Researchers sift through thousands of studies to identify a
few relevant ones. Automation techniques can help by increasing the efficiency
and effectiveness of this task. To this end, we developed CRUISE-Screening, a
web-based application for conducting living literature reviews - a type of
literature review that is continuously updated to reflect the latest research
in a particular field. CRUISE-Screening is connected to several search engines
via an API, which allows for updating the search results periodically.
Moreover, it can facilitate the process of screening for relevant publications
by using text classification and question answering models. CRUISE-Screening
can be used both by researchers conducting literature reviews and by those
working on automating the citation screening process to validate their
algorithms. The application is open-source:
https://github.com/ProjectDoSSIER/cruise-screening, and a demo is available
under this URL: https://citation-screening.ec.tuwien.ac.at. We discuss the
limitations of our tool in Appendix A.
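As a minimal illustration of the workflow sketched in the abstract (periodic retrieval of new results through search-engine APIs, followed by model-assisted screening), the snippet below polls one search engine and scores candidate abstracts with an off-the-shelf classifier. The arXiv Atom API and the zero-shot model are assumptions made for this sketch, not a description of the actual CRUISE-Screening implementation.

```python
# A minimal sketch of a living-review update step: fetch new candidates from a
# search API, then screen them with a text classifier. The arXiv endpoint and
# the zero-shot model are illustrative choices, not the CRUISE-Screening ones.
from urllib.parse import quote

import feedparser                   # pip install feedparser
from transformers import pipeline   # pip install transformers

ARXIV_API = "http://export.arxiv.org/api/query"

def fetch_candidates(query: str, max_results: int = 25) -> list[dict]:
    """Retrieve recent papers matching `query` from the arXiv Atom API."""
    url = (f"{ARXIV_API}?search_query=all:{quote(query)}"
           f"&sortBy=submittedDate&sortOrder=descending&max_results={max_results}")
    feed = feedparser.parse(url)
    return [{"title": e.title, "abstract": e.summary} for e in feed.entries]

def screen(candidates: list[dict], review_topic: str) -> list[dict]:
    """Attach a relevance score to each candidate via zero-shot classification."""
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")
    labels = [f"relevant to {review_topic}", "not relevant"]
    for paper in candidates:
        result = classifier(paper["abstract"], candidate_labels=labels)
        scores = dict(zip(result["labels"], result["scores"]))
        paper["relevance"] = scores[labels[0]]
    return sorted(candidates, key=lambda p: p["relevance"], reverse=True)

if __name__ == "__main__":
    papers = fetch_candidates("automated citation screening")
    for paper in screen(papers, "automated citation screening")[:5]:
        print(f'{paper["relevance"]:.2f}  {paper["title"]}')
```

In a living review, a scheduler would re-run fetch_candidates periodically and push only previously unseen records into the screening queue; question-answering models, which the paper also mentions, could replace or complement the classifier.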
Related papers
- LLAssist: Simple Tools for Automating Literature Review Using Large Language Models [0.0]
LLAssist is an open-source tool designed to streamline literature reviews in academic research.
It uses Large Language Models (LLMs) and Natural Language Processing (NLP) techniques to automate key aspects of the review process.
arXiv Detail & Related papers (2024-07-19T02:48:54Z)
- WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks? [83.19032025950986]
We study the use of large language model-based agents for interacting with software via web browsers.
WorkArena is a benchmark of 33 tasks based on the widely-used ServiceNow platform.
BrowserGym is an environment for the design and evaluation of such agents.
arXiv Detail & Related papers (2024-03-12T14:58:45Z)
- LitLLM: A Toolkit for Scientific Literature Review [15.080020634480272]
The toolkit operates on Retrieval Augmented Generation (RAG) principles.
The system first initiates a web search to retrieve relevant papers.
Second, the system re-ranks the retrieved papers based on the user-provided abstract.
Third, the related work section is generated based on the re-ranked results and the abstract (a minimal re-ranking sketch appears after this list).
arXiv Detail & Related papers (2024-02-02T02:41:28Z)
- Cache & Distil: Optimising API Calls to Large Language Models [82.32065572907125]
Large-scale deployment of generative AI tools often depends on costly API calls to a Large Language Model (LLM) to fulfil user queries.
To curtail the frequency of these calls, one can employ a smaller language model -- a student.
This student gradually gains proficiency in independently handling an increasing number of user requests (a routing sketch appears after this list).
arXiv Detail & Related papers (2023-10-20T15:01:55Z)
- AI Literature Review Suite [0.0]
I present an AI Literature Review Suite that integrates several functionalities to provide a comprehensive literature review.
This tool leverages the power of open access science, large language models (LLMs) and natural language processing to enable the searching, downloading, and organizing of PDF files.
The suite also features integrated programs for organization, interaction and query, and literature review summaries.
arXiv Detail & Related papers (2023-07-27T17:30:31Z)
- A Semi-Automated Solution Approach Recommender for a Given Use Case: a Case Study for AI/ML in Oncology via Scopus and OpenAI [0.6749750044497732]
Our proposed tool, SARBOLD-LLM, allows discovering and choosing among methods related to a given problem.
It provides additional information about their uses in the literature to derive decision-making insights.
It is a useful tool to select which methods to investigate first and comes as a complement to surveys.
arXiv Detail & Related papers (2023-07-10T14:07:28Z)
- CiteBench: A benchmark for Scientific Citation Text Generation [69.37571393032026]
CiteBench is a benchmark for citation text generation.
We make the code for CiteBench publicly available at https://github.com/UKPLab/citebench.
arXiv Detail & Related papers (2022-12-19T16:10:56Z)
- ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z)
- A Systematic Literature Review on the Use of Deep Learning in Software Engineering Research [22.21817722054742]
An increasingly popular set of techniques adopted by software engineering (SE) researchers to automate development tasks is rooted in the concept of Deep Learning (DL).
This paper presents a systematic literature review of research at the intersection of SE & DL.
We center our analysis around the components of learning, a set of principles that govern the application of machine learning techniques to a given problem domain.
arXiv Detail & Related papers (2020-09-14T15:28:28Z)
- CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search [102.67142711824748]
CATCH is a novel Context-bAsed meTa reinforcement learning algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces.
It is also capable of handling cross-domain architecture search as competitive networks on ImageNet, COCO, and Cityscapes are identified.
arXiv Detail & Related papers (2020-07-18T09:35:53Z)
- Mining Implicit Relevance Feedback from User Behavior for Web Question Answering [92.45607094299181]
We conduct the first study to explore the correlation between user behavior and passage relevance.
Our approach significantly improves the accuracy of passage ranking without extra human labeled data.
In practice, this work has proved effective in substantially reducing the human labeling cost for the QA service in a global commercial search engine.
arXiv Detail & Related papers (2020-06-13T07:02:08Z)
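The LitLLM summary above lists a three-step pipeline: web search, re-ranking against the user-provided abstract, and generation of the related-work section. The sketch below illustrates only the re-ranking step under an assumed choice of signal (TF-IDF cosine similarity); the toolkit's actual ranking method is not specified in the summary.

```python
# Illustrative re-ranking of retrieved papers against a user-provided abstract,
# in the spirit of the LitLLM pipeline summary. TF-IDF cosine similarity is an
# assumed ranking signal, not necessarily the one the toolkit uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank(user_abstract: str, retrieved_abstracts: list[str]) -> list[int]:
    """Return indices of retrieved papers, most similar to the user's abstract first."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([user_abstract] + retrieved_abstracts)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(range(len(retrieved_abstracts)),
                  key=lambda i: scores[i], reverse=True)
```

The top-ranked abstracts would then be passed, together with the user's abstract, to an LLM prompt that drafts the related-work section (step three of the summary).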
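The Cache & Distil summary describes a student model that gradually takes over requests from a costly LLM API. Below is a minimal routing sketch under stated assumptions: the confidence threshold, the particular student model, and the call_llm_api placeholder are all hypothetical and not taken from that paper.

```python
# Illustrative student/teacher routing in the spirit of the Cache & Distil
# summary: the student answers when confident; otherwise the costly LLM API is
# called and its answer is buffered as training data for the student.
from transformers import pipeline

CONFIDENCE_THRESHOLD = 0.9   # assumed cut-off for trusting the student
student = pipeline("text-classification",
                   model="distilbert-base-uncased-finetuned-sst-2-english")
distillation_buffer: list[tuple[str, str]] = []   # (query, teacher answer) pairs

def call_llm_api(query: str) -> str:
    """Hypothetical placeholder for the expensive teacher call."""
    raise NotImplementedError("wire this to an LLM provider")

def answer(query: str) -> str:
    prediction = student(query)[0]                 # {"label": ..., "score": ...}
    if prediction["score"] >= CONFIDENCE_THRESHOLD:
        return prediction["label"]                 # handled locally, no API cost
    teacher_answer = call_llm_api(query)           # fall back to the teacher
    distillation_buffer.append((query, teacher_answer))
    return teacher_answer
```

Periodically fine-tuning the student on the buffered (query, answer) pairs lets it handle an increasing share of requests on its own.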
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.