DenseReviewer: A Screening Prioritisation Tool for Systematic Review based on Dense Retrieval
- URL: http://arxiv.org/abs/2502.03400v1
- Date: Wed, 05 Feb 2025 17:42:59 GMT
- Title: DenseReviewer: A Screening Prioritisation Tool for Systematic Review based on Dense Retrieval
- Authors: Xinyu Mao, Teerapong Leelanupab, Harrisen Scells, Guido Zuccon
- Abstract summary: Prioritising relevant studies to be screened allows downstream systematic review creation tasks to start earlier and save time.
Our method outperforms previous active learning methods in both effectiveness and efficiency.
We describe the tool's design and showcase how it can aid screening.
- Score: 26.324382303173135
- License:
- Abstract: Screening is a time-consuming and labour-intensive yet required task for medical systematic reviews, as tens of thousands of studies often need to be screened. Prioritising relevant studies to be screened allows downstream systematic review creation tasks to start earlier and save time. In previous work, we developed a dense retrieval method that prioritises relevant studies using reviewer feedback during the title and abstract screening stage. Our method outperforms previous active learning methods in both effectiveness and efficiency. In this demo, we extend this prior work by creating (1) a web-based screening tool that enables end-users to screen studies using state-of-the-art methods and (2) a Python library that integrates models and feedback mechanisms, allowing researchers to develop and demonstrate new active learning methods. We describe the tool's design and showcase how it can aid screening. The tool is available at https://densereviewer.ielab.io. The source code is open-sourced at https://github.com/ielab/densereviewer.
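The abstract describes the core idea: the review topic and candidate studies are embedded with a dense encoder, and the ranking of still-unscreened studies is refreshed as the reviewer provides include/exclude judgements. The sketch below is an illustration of that feedback-driven re-ranking step only; the function name, the Rocchio-style query update, and the weights are assumptions made for this sketch, not the densereviewer library's actual API.

```python
# Illustrative sketch only -- not the densereviewer API. Assumes the review topic
# and candidate studies have already been encoded into dense vectors elsewhere.
import numpy as np

def rerank_unjudged(query_emb, study_embs, feedback, alpha=1.0, beta=0.75, gamma=0.15):
    """Return unjudged study indices, most relevant first, after updating the
    query embedding with the reviewer's include/exclude judgements.

    query_emb : (d,) vector for the review topic.
    study_embs: (n, d) matrix of candidate-study vectors.
    feedback  : dict mapping study index -> True (include) / False (exclude).
    """
    included = [i for i, keep in feedback.items() if keep]
    excluded = [i for i, keep in feedback.items() if not keep]
    q = alpha * query_emb
    if included:
        q = q + beta * study_embs[included].mean(axis=0)   # pull towards includes
    if excluded:
        q = q - gamma * study_embs[excluded].mean(axis=0)  # push away from excludes
    # Cosine similarity between the updated query and every candidate study.
    sims = study_embs @ q / (np.linalg.norm(study_embs, axis=1) * np.linalg.norm(q) + 1e-9)
    ranked = np.argsort(-sims)
    return [int(i) for i in ranked if int(i) not in feedback]
```

In a screening session this step would sit inside a loop: the reviewer judges the next few top-ranked studies, the feedback grows, and the ranking is recomputed, so relevant studies tend to surface earlier as screening progresses.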
Related papers
- Dense Retrieval with Continuous Explicit Feedback for Systematic Review Screening Prioritisation [28.80089773616623]
The goal of screening prioritisation in systematic reviews is to identify relevant documents with high recall and rank them in early positions for review.
Recent studies have shown that neural models have good potential on this task, but their time-consuming fine-tuning and inference discourage their widespread use for screening prioritisation.
We propose an alternative approach that still relies on neural models, but leverages dense representations and relevance feedback to enhance screening prioritisation.
arXiv Detail & Related papers (2024-06-30T09:25:42Z)
- Adaptive Retention & Correction: Test-Time Training for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- CRUISE-Screening: Living Literature Reviews Toolbox [8.292338880619061]
CRUISE-Screening is a web-based application for conducting living literature reviews.
It is connected to several search engines via an API, which allows for updating the search results periodically.
arXiv Detail & Related papers (2023-09-04T15:58:43Z)
- An Empirical Study of End-to-End Temporal Action Detection [82.64373812690127]
Temporal action detection (TAD) is an important yet challenging task in video understanding.
Rather than end-to-end learning, most existing methods adopt a head-only learning paradigm.
We validate the advantage of end-to-end learning over head-only learning and observe up to 11% performance improvement.
arXiv Detail & Related papers (2022-04-06T16:46:30Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- Autonomous Learning of Features for Control: Experiments with Embodied and Situated Agents [0.0]
We introduce a method that allows the training of the feature-extraction module to continue during the training of the policy network.
We show that sequence-to-sequence learning yields better results than the methods considered in previous studies.
arXiv Detail & Related papers (2020-09-15T14:34:42Z)
- Open Source Software for Efficient and Transparent Reviews [0.11179881480027788]
ASReview is an open source machine learning-aided pipeline applying active learning.
We demonstrate by means of simulation studies that ASReview can yield far more efficient reviewing than manual reviewing.
arXiv Detail & Related papers (2020-06-22T11:57:10Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and a larger query size.
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.