Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes
- URL: http://arxiv.org/abs/2309.00237v4
- Date: Mon, 29 Jul 2024 15:52:22 GMT
- Title: Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes
- Authors: Sunjun Kweon, Junu Kim, Jiyoun Kim, Sujeong Im, Eunbyeol Cho, Seongsu Bae, Jungwoo Oh, Gyubok Lee, Jong Hak Moon, Seng Chan You, Seungjin Baek, Chang Hoon Han, Yoon Bin Jung, Yohan Jo, Edward Choi
- Abstract summary: We create synthetic large-scale clinical notes using publicly available case reports extracted from biomedical literature.
We then use these synthetic notes to train our specialized clinical large language model, Asclepius.
We benchmark Asclepius against several other large language models, including GPT-3.5-turbo and other open-source alternatives.
- Score: 11.106831545858656
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The development of large language models tailored for handling patients' clinical notes is often hindered by the limited accessibility and usability of these notes due to strict privacy regulations. To address these challenges, we first create synthetic large-scale clinical notes using publicly available case reports extracted from biomedical literature. We then use these synthetic notes to train our specialized clinical large language model, Asclepius. While Asclepius is trained on synthetic data, we assess its potential performance in real-world applications by evaluating it using real clinical notes. We benchmark Asclepius against several other large language models, including GPT-3.5-turbo and other open-source alternatives. To further validate our approach using synthetic notes, we also compare Asclepius with its variants trained on real clinical notes. Our findings convincingly demonstrate that synthetic clinical notes can serve as viable substitutes for real ones when constructing high-performing clinical language models. This conclusion is supported by detailed evaluations conducted by both GPT-4 and medical professionals. All resources including weights, codes, and data used in the development of Asclepius are made publicly accessible for future research. (https://github.com/starmpcc/Asclepius)
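Since the weights are public, the trained model can be queried directly. A minimal sketch, assuming the released checkpoint is hosted on the Hugging Face Hub under an ID like "starmpcc/Asclepius-7B" and that a simple note-plus-question prompt format suffices; the linked repository's README documents the exact ID and prompt template.

```python
# Querying a released Asclepius checkpoint with a clinical note.
# The model ID and prompt layout below are assumptions; check the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "starmpcc/Asclepius-7B"  # assumed ID; see the GitHub README
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

note = "Patient admitted with chest pain radiating to the left arm ..."
question = "What is the most likely diagnosis suggested by this note?"
prompt = f"[Clinical note]\n{note}\n\n[Question]\n{question}\n\n[Answer]\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```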
Related papers
- Synthetic4Health: Generating Annotated Synthetic Clinical Letters [6.822926897514792]
Because clinical letters contain sensitive information, the associated datasets cannot be widely used for model training, medical research, or teaching.
This work aims to generate reliable, diverse, and de-identified synthetic clinical letters.
arXiv Detail & Related papers (2024-09-14T18:15:07Z)
- De-identification is not always enough [9.292345527034348]
We show that de-identification of real clinical notes does not protect records against a membership inference attack.
When synthetically generated notes closely match the performance of real data, they also exhibit privacy concerns similar to those of the real data.
arXiv Detail & Related papers (2024-01-31T21:14:01Z)
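The standard membership inference test behind such findings is easy to sketch: notes a model was trained on tend to receive lower per-token loss. A minimal loss-threshold attack, with GPT-2 as a stand-in target model and an illustrative, uncalibrated threshold:

```python
# Loss-threshold membership inference: low per-token NLL suggests the
# note was in the training set. Model and threshold are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in target model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def note_nll(note: str) -> float:
    """Mean per-token negative log-likelihood of a note under the model."""
    ids = tokenizer(note, return_tensors="pt", truncation=True).input_ids
    loss = model(ids, labels=ids).loss  # cross-entropy averaged over tokens
    return loss.item()

def looks_like_member(note: str, threshold: float = 2.5) -> bool:
    # In practice the threshold is calibrated on known member/non-member notes.
    return note_nll(note) < threshold

print(looks_like_member("Patient presents with acute dyspnea and fever."))
```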
- Dynamic Q&A of Clinical Documents with Large Language Models [3.021316686584699]
This work introduces a natural language interface using large language models (LLMs) for dynamic question-answering on clinical notes.
Experiments with various embedding models and advanced LLMs show Wizard Vicuna achieving the highest accuracy, albeit with heavy compute demands.
arXiv Detail & Related papers (2024-01-19T14:50:22Z)
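A minimal sketch of the retrieve-then-answer loop such an interface implies, assuming an off-the-shelf sentence embedder and a hypothetical input file; the paper's embedding models and LLMs may differ.

```python
# Embedding-based Q&A over a clinical note: chunk, retrieve by cosine
# similarity, then hand the retrieved context to an LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice

def chunk(note: str, size: int = 60) -> list[str]:
    words = note.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)
    c = embedder.encode(chunks, normalize_embeddings=True)
    scores = (c @ q.T).ravel()  # cosine similarity on unit vectors
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

note = open("discharge_summary.txt").read()  # hypothetical input file
question = "Which medications was the patient discharged on?"
context = "\n".join(retrieve(question, chunk(note)))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
# `prompt` would then be sent to the LLM of choice (e.g., Wizard Vicuna).
```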
- Investigating Alternative Feature Extraction Pipelines For Clinical Note Phenotyping [0.0]
Automated extraction of medical attributes from clinical notes enables many downstream applications.
BERT-based models can be used to transform clinical notes into a series of representations.
We propose an alternative pipeline that uses ScispaCy (Neumann et al.) to extract common diseases.
arXiv Detail & Related papers (2023-10-05T02:51:51Z)
- Making the Most Out of the Limited Context Length: Predictive Power Varies with Clinical Note Type and Note Section [70.37720062263176]
We propose a framework to analyze the sections with high predictive power.
Using MIMIC-III, we show that 1) the distribution of predictive power differs between nursing notes and discharge notes, and 2) combining different note types can improve performance when the context length is large.
arXiv Detail & Related papers (2023-07-13T20:04:05Z)
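Section-level predictive power can be approximated with a small experiment: fit one simple classifier per section and compare validation AUROC. A minimal sketch, assuming notes are already split into named sections; the paper's framework and MIMIC-III preprocessing are more involved.

```python
# Per-section predictive power via one TF-IDF + logistic regression
# classifier per section, compared by held-out AUROC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def section_auroc(texts: list[str], labels: list[int]) -> float:
    X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, random_state=0)
    vec = TfidfVectorizer(max_features=5000)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_tr), y_tr)
    return roc_auc_score(y_te, clf.predict_proba(vec.transform(X_te))[:, 1])

# sections = {"history of present illness": [...], "medications": [...]}
# for name, texts in sections.items():
#     print(name, section_auroc(texts, outcomes))
```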
- Cross-Lingual Knowledge Transfer for Clinical Phenotyping [55.92262310716537]
We investigate cross-lingual knowledge transfer strategies to execute this task for clinics that do not use the English language.
We evaluate these strategies for a Greek and a Spanish clinic leveraging clinical notes from different clinical domains.
Our results show that using multilingual data overall improves clinical phenotyping models and can compensate for data sparseness.
arXiv Detail & Related papers (2022-08-03T08:33:21Z)
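The zero-shot flavor of such transfer is straightforward to sketch: fine-tune a multilingual encoder on English phenotype labels, then apply it unchanged to Greek or Spanish notes. The model choice and the elided training loop are illustrative assumptions, not the paper's exact setup.

```python
# Zero-shot cross-lingual phenotyping with a multilingual encoder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)

# ... fine-tune `model` on English clinical notes (standard training loop) ...

@torch.no_grad()
def predict(note: str) -> int:
    ids = tok(note, return_tensors="pt", truncation=True)
    return model(**ids).logits.argmax(-1).item()

# Applied directly to a Spanish note after English-only fine-tuning:
print(predict("El paciente presenta disnea y fiebre persistente."))
```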
- Assessing mortality prediction through different representation models based on concepts extracted from clinical notes [2.707154152696381]
Embedding learning converts notes into representations that can be compared directly.
Transformer-based representation models have recently made a great leap forward.
We performed experiments to measure the usefulness of the learned embedding vectors in the task of hospital mortality prediction.
arXiv Detail & Related papers (2022-07-22T04:34:33Z)
- Human Evaluation and Correlation with Automatic Metrics in Consultation Note Generation [56.25869366777579]
In recent years, machine learning models have rapidly become better at generating clinical consultation notes.
We present an extensive human evaluation study where 5 clinicians listen to 57 mock consultations, write their own notes, post-edit a number of automatically generated notes, and extract all the errors.
We find that a simple character-based Levenshtein distance metric performs on par with, if not better than, common model-based metrics such as BERTScore.
arXiv Detail & Related papers (2022-04-01T14:04:16Z)
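The character-based metric is simple enough to write inline; a length-normalized variant gives a [0, 1] similarity between a generated note and a human reference.

```python
# Character-level Levenshtein distance and a normalized similarity score.
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(generated: str, reference: str) -> float:
    d = levenshtein(generated, reference)
    return 1 - d / max(len(generated), len(reference), 1)

print(similarity("pt c/o chest pain", "patient complains of chest pain"))
```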
- Towards more patient friendly clinical notes through language models and ontologies [57.51898902864543]
We present a novel approach to automated medical text simplification based on word-level simplification and language modelling.
We use a new dataset of paired sentences: publicly available medical sentences and versions of them simplified by clinicians.
Our method, based on a language model trained on medical forum data, generates simpler sentences while preserving both grammar and the original meaning.
arXiv Detail & Related papers (2021-12-23T16:11:19Z)
- Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402]
We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching.
We apply our objective in four Transformer-based architectures: Contextual Document Vectors, Bi-, Poly- and Cross-encoders.
We report that CAPR outperforms strong baselines in the retrieval of domain-specific passages and effectively generalizes across rule-based and human-labeled passages.
arXiv Detail & Related papers (2021-08-02T10:42:52Z)
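The rule-based self-supervision idea can be sketched by deriving pseudo (query, passage) pairs from note structure, here by pairing each section header with its body, and scoring candidates with a bi-encoder. The pairing rule and encoder are illustrative stand-ins for CAPR's actual rules and architectures.

```python
# Rule-derived pseudo query/passage pairs scored with a bi-encoder.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative bi-encoder

sections = {
    "discharge medications": "aspirin 81 mg daily; metoprolol 25 mg bid",
    "history of present illness": "3 days of productive cough and fever",
}
queries = list(sections.keys())     # headers act as pseudo queries
passages = list(sections.values())  # bodies act as positive passages

scores = util.cos_sim(encoder.encode(queries), encoder.encode(passages))
for i, q in enumerate(queries):
    best = int(scores[i].argmax())
    print(q, "->", passages[best])
```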
- An Interpretable End-to-end Fine-tuning Approach for Long Clinical Text [72.62848911347466]
Unstructured clinical text in EHRs contains crucial information for applications including decision support, trial matching, and retrospective research.
Recent work has applied BERT-based models to clinical information extraction and text classification, given these models' state-of-the-art performance in other NLP domains.
In this work, we propose a novel fine-tuning approach called SnipBERT. Instead of using entire notes, SnipBERT identifies crucial snippets and feeds them into a truncated BERT-based model in a hierarchical manner.
arXiv Detail & Related papers (2020-11-12T17:14:32Z)
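The hierarchical idea behind SnipBERT can be sketched as: score snippets of a long note for task relevance, keep the top few, encode each with a (possibly truncated) BERT, and pool. The keyword scorer below is a simple stand-in for the paper's snippet-identification step.

```python
# Snippet selection plus hierarchical BERT encoding (SnipBERT-style sketch).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def top_snippets(note: str, keywords: set[str], k: int = 3) -> list[str]:
    snippets = [s.strip() for s in note.split(".") if s.strip()]
    scored = sorted(snippets,
                    key=lambda s: sum(w in s.lower() for w in keywords),
                    reverse=True)
    return scored[:k]

@torch.no_grad()
def encode(note: str, keywords: set[str]) -> torch.Tensor:
    vecs = []
    for snip in top_snippets(note, keywords):
        ids = tok(snip, return_tensors="pt", truncation=True, max_length=128)
        vecs.append(bert(**ids).last_hidden_state[:, 0])  # [CLS] vector
    return torch.cat(vecs).mean(dim=0)  # pooled note representation
```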