Patient Cohort Retrieval using Transformer Language Models
- URL: http://arxiv.org/abs/2009.05121v1
- Date: Thu, 10 Sep 2020 19:40:41 GMT
- Title: Patient Cohort Retrieval using Transformer Language Models
- Authors: Sarvesh Soni and Kirk Roberts
- Abstract summary: We propose a framework for retrieving patient cohorts using neural language models without the need for explicit feature engineering or domain expertise.
We find that a majority of our models outperform the BM25 baseline method on various evaluation metrics.
- Score: 7.784753717089568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We apply deep learning-based language models to the task of patient cohort
retrieval (CR) with the aim of assessing their efficacy. The task of CR requires
the extraction of relevant documents from electronic health records (EHRs)
on the basis of a given query. Given the recent advancements in the field of
document retrieval, we map the task of CR to a document retrieval task and
apply various deep neural models developed for general-domain tasks. In
this paper, we propose a framework for retrieving patient cohorts using neural
language models without the need for explicit feature engineering or domain
expertise. We find that a majority of our models outperform the BM25 baseline
method on various evaluation metrics.
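The paper's core move, mapping cohort retrieval to document retrieval and comparing neural rankers against a BM25 baseline, can be illustrated with a minimal sketch. The libraries (rank_bm25, sentence-transformers), the model name, and the toy EHR snippets below are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch: cohort retrieval cast as document ranking, with a
# BM25 baseline scored alongside a neural bi-encoder. Library and model
# choices here are illustrative assumptions, not the paper's setup.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

# Toy EHR "documents"; in the task, these are clinical notes per patient.
docs = [
    "patient with type 2 diabetes on metformin, HbA1c 8.1",
    "chest pain ruled out, normal troponin, discharged home",
    "history of heart failure, started on lisinopril",
]
query = "patients with diabetes treated with metformin"

# BM25 baseline: rank documents by term-overlap statistics.
bm25 = BM25Okapi([d.lower().split() for d in docs])
bm25_scores = bm25.get_scores(query.lower().split())

# Neural ranking: embed query and documents, rank by cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in model
doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
neural_scores = util.cos_sim(query_emb, doc_emb)[0]

for d, b, n in zip(docs, bm25_scores, neural_scores):
    print(f"bm25={b:.2f}  neural={float(n):.2f}  {d}")
```

In the paper's setup, such scores would be computed over the EHR documents for each query and compared across models using standard retrieval evaluation metrics.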
Related papers
- Lessons Learned on Information Retrieval in Electronic Health Records: A Comparison of Embedding Models and Pooling Strategies [8.822087602255504]
Applying large language models to the clinical domain is challenging due to the context-heavy nature of processing medical records.
This paper explores how different embedding models and pooling methods affect information retrieval in the clinical domain (a minimal pooling sketch appears after this list).
arXiv Detail & Related papers (2024-09-23T16:16:08Z)
- CRAFT Your Dataset: Task-Specific Synthetic Dataset Generation Through Corpus Retrieval and Augmentation [51.2289822267563]
We propose Corpus Retrieval and Augmentation for Fine-Tuning (CRAFT), a method for generating synthetic datasets.
We use large-scale public web-crawled corpora and similarity-based document retrieval to find other relevant human-written documents.
We demonstrate that CRAFT can efficiently generate large-scale task-specific training datasets for four diverse tasks.
arXiv Detail & Related papers (2024-09-03T17:54:40Z)
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z)
- Building blocks for complex tasks: Robust generative event extraction for radiology reports under domain shifts [11.845850292404768]
We show that multi-pass T5-based text-to-text generative models exhibit better generalization across exam modalities compared to approaches that employ BERT-based task-specific classification layers.
We then develop methods that reduce the inference cost of the model, making large-scale corpus processing more feasible for clinical applications.
arXiv Detail & Related papers (2023-06-15T23:16:58Z)
- An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT [80.33783969507458]
The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians.
Recent studies have achieved promising results in automatic impression generation using large-scale medical text data.
However, these models often require substantial amounts of medical text data and generalize poorly.
arXiv Detail & Related papers (2023-04-17T17:13:42Z)
- CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks [62.22920673080208]
A single-step generative model can dramatically simplify the search process and be optimized in an end-to-end manner.
We name the pre-trained generative retrieval model CorpusBrain, as all information about the corpus is encoded in its parameters without the need to construct an additional index.
arXiv Detail & Related papers (2022-08-16T10:22:49Z)
- Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402]
We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching.
We apply our objective to four Transformer-based architectures: Contextual Document Vectors and Bi-, Poly-, and Cross-encoders.
We report that CAPR outperforms strong baselines in the retrieval of domain-specific passages and effectively generalizes across rule-based and human-labeled passages.
arXiv Detail & Related papers (2021-08-02T10:42:52Z)
- Learning Contextualized Document Representations for Healthcare Answer Retrieval [68.02029435111193]
Contextual Discourse Vectors (CDV) is a distributed document representation for efficient answer retrieval from long documents.
Our model leverages a dual encoder architecture with hierarchical LSTM layers and multi-task training to encode the position of clinical entities and aspects alongside the document discourse.
We show that our generalized model significantly outperforms several state-of-the-art baselines for healthcare passage ranking.
arXiv Detail & Related papers (2020-02-03T15:47:19Z)
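As referenced in the embedding-and-pooling entry above, here is a minimal sketch of the two pooling strategies most commonly compared in such studies, CLS-token pooling and masked mean pooling, over a generic Transformer encoder. The model name and helper function are illustrative assumptions, not that paper's exact setup:

```python
# Minimal sketch of CLS vs. masked mean pooling over a Transformer encoder.
# "bert-base-uncased" is an illustrative stand-in, not the paper's model.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts, pooling="mean"):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state      # (batch, tokens, dim)
    if pooling == "cls":
        return hidden[:, 0]                          # [CLS] token vector
    mask = batch["attention_mask"].unsqueeze(-1)     # zero out padding
    return (hidden * mask).sum(1) / mask.sum(1)      # masked mean pooling

vecs = embed(["patient admitted with sepsis", "routine follow-up visit"])
print(vecs.shape)  # torch.Size([2, 768])
```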
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.