Conceptualizing Machine Learning for Dynamic Information Retrieval of
Electronic Health Record Notes
- URL: http://arxiv.org/abs/2308.08494v1
- Date: Wed, 9 Aug 2023 21:04:19 GMT
- Title: Conceptualizing Machine Learning for Dynamic Information Retrieval of
Electronic Health Record Notes
- Authors: Sharon Jiang, Shannon Shen, Monica Agrawal, Barbara Lam, Nicholas
Kurtzman, Steven Horng, David Karger, David Sontag
- Abstract summary: This work conceptualizes the use of EHR audit logs for machine learning as a source of supervision of note relevance in a specific clinical context.
We show that our methods can achieve an AUC of 0.963 for predicting which notes will be read in an individual note writing session.
- Score: 6.1656026560972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The large amount of time clinicians spend sifting through patient notes and
documenting in electronic health records (EHRs) is a leading cause of clinician
burnout. By proactively and dynamically retrieving relevant notes during the
documentation process, we can reduce the effort required to find relevant
patient history. In this work, we conceptualize the use of EHR audit logs for
machine learning as a source of supervision of note relevance in a specific
clinical context, at a particular point in time. Our evaluation focuses on the
dynamic retrieval in the emergency department, a high acuity setting with
unique patterns of information retrieval and note writing. We show that our
methods can achieve an AUC of 0.963 for predicting which notes will be read in
an individual note writing session. We additionally conduct a user study with
several clinicians and find that our framework can help clinicians retrieve
relevant information more efficiently. Demonstrating that our framework and
methods can perform well in this demanding setting is a promising proof of
concept that they will translate to other clinical settings and data modalities
(e.g., labs, medications, imaging).
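To make the supervision idea concrete, here is a minimal, hypothetical sketch of how audit-log events could be turned into labeled (writing session, candidate note) pairs and scored with ROC AUC. The feature names, synthetic data, and logistic-regression ranker below are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch only: mining audit-log events into supervision for a
# note-relevance ranker and scoring it with ROC AUC. Feature names, the
# synthetic data, and the model are illustrative assumptions.
from dataclasses import dataclass
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

@dataclass
class CandidateNote:
    days_since_written: float   # recency of the candidate note
    same_department: int        # 1 if written in the same department (e.g., ED)
    same_author: int            # 1 if written by the clinician now documenting
    was_read: int               # label from audit logs: opened during this session?

rng = np.random.default_rng(0)

def synthetic_candidate() -> CandidateNote:
    d = float(rng.integers(0, 365))
    s, a = int(rng.integers(0, 2)), int(rng.integers(0, 2))
    # Synthetic label: recent, same-department notes are read more often.
    p_read = 0.8 / (1.0 + d / 30.0) + 0.15 * s
    return CandidateNote(d, s, a, int(rng.random() < p_read))

candidates = [synthetic_candidate() for _ in range(2000)]
X = np.array([[c.days_since_written, c.same_department, c.same_author]
              for c in candidates])
y = np.array([c.was_read for c in candidates])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

The reported AUC of 0.963 presumably comes from far richer context and note features; the sketch only shows the shape of the supervision signal and the evaluation.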
Related papers
- Improving Clinical Note Generation from Complex Doctor-Patient Conversation [20.2157016701399]
We present three key contributions to the field of clinical note generation using large language models (LLMs).
First, we introduce CliniKnote, a dataset consisting of 1,200 complex doctor-patient conversations paired with their full clinical notes.
Second, we propose K-SOAP, which enhances traditional SOAP (Subjective, Objective, Assessment, and Plan) notes by adding a keyword section at the top, allowing for quick identification of essential information.
Third, we develop an automatic pipeline to generate K-SOAP notes from doctor-patient conversations and benchmark various modern LLMs.
arXiv Detail & Related papers (2024-08-26T18:39:31Z)
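A minimal sketch of the K-SOAP layout described in the entry above, assuming a simple dictionary representation; the section contents are invented and the actual CliniKnote schema may differ.

```python
# Hypothetical K-SOAP note: a standard SOAP note with a keyword section on top.
# Section names follow the summary above; the example contents are invented.
from typing import Dict, List

def make_ksoap(keywords: List[str], subjective: str, objective: str,
               assessment: str, plan: str) -> Dict[str, object]:
    return {
        "Keywords": keywords,       # quick-scan section added by K-SOAP
        "Subjective": subjective,   # patient-reported history and symptoms
        "Objective": objective,     # exam findings, vitals, labs
        "Assessment": assessment,   # clinician's diagnostic impression
        "Plan": plan,               # next steps: tests, treatment, follow-up
    }

note = make_ksoap(
    keywords=["chest pain", "hypertension", "troponin negative"],
    subjective="52-year-old with 2 hours of substernal chest pain.",
    objective="BP 150/90, HR 88; ECG without ischemic changes.",
    assessment="Low-risk chest pain, likely non-cardiac.",
    plan="Serial troponins, outpatient stress test, PCP follow-up.",
)
print(note["Keywords"])
```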
- Retrieval-Augmented and Knowledge-Grounded Language Models for Faithful Clinical Medicine [68.7814360102644]
We propose the Re³Writer method with retrieval-augmented generation and knowledge-grounded reasoning.
We demonstrate the effectiveness of our method in generating patient discharge instructions.
arXiv Detail & Related papers (2022-10-23T16:34:39Z)
- User-Driven Research of Medical Note Generation Software [49.85146209418244]
We present three rounds of user studies carried out in the context of developing a medical note generation system.
We discuss the participating clinicians' impressions and views of how the system ought to be adapted to be of value to them.
We describe a three-week test run of the system in a live telehealth clinical practice.
arXiv Detail & Related papers (2022-05-05T10:18:06Z)
- Human Evaluation and Correlation with Automatic Metrics in Consultation Note Generation [56.25869366777579]
In recent years, machine learning models have rapidly become better at generating clinical consultation notes.
We present an extensive human evaluation study where 5 clinicians listen to 57 mock consultations, write their own notes, post-edit a number of automatically generated notes, and extract all the errors.
We find that a simple, character-based Levenshtein distance metric performs on par with, if not better than, common model-based metrics like BERTScore.
arXiv Detail & Related papers (2022-04-01T14:04:16Z)
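As a concrete reference point for the character-based metric mentioned in the entry above, here is a minimal Levenshtein sketch; normalizing the edit distance into a 0-1 similarity is an illustrative choice, not necessarily the exact formulation used in the paper.

```python
# Minimal character-level Levenshtein sketch: edit distance between a
# reference note and a generated note, normalized to a 0-1 similarity.
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance over characters.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion from a
                            curr[j - 1] + 1,             # insertion into a
                            prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = curr
    return prev[-1]

def char_similarity(reference: str, generated: str) -> float:
    # Normalize by the longer string so the score lands in [0, 1].
    denom = max(len(reference), len(generated)) or 1
    return 1.0 - levenshtein(reference, generated) / denom

print(char_similarity("Patient denies chest pain.",
                      "Patient denies any chest pain."))
```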
- Towards more patient friendly clinical notes through language models and ontologies [57.51898902864543]
We present a novel approach to automated medical text simplification based on word simplification and language modelling.
We use a new dataset of pairs of publicly available medical sentences and versions of them simplified by clinicians.
Our method based on a language model trained on medical forum data generates simpler sentences while preserving both grammar and the original meaning.
arXiv Detail & Related papers (2021-12-23T16:11:19Z)
- Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402]
We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching.
We apply our objective in four Transformer-based architectures: Contextual Document Vectors, Bi-, Poly- and Cross-encoders.
We report that CAPR outperforms strong baselines in the retrieval of domain-specific passages and effectively generalizes across rule-based and human-labeled passages.
arXiv Detail & Related papers (2021-08-02T10:42:52Z)
- Attention-based Clinical Note Summarization [1.52292571922932]
We propose a multi-head attention-based mechanism to perform extractive summarization of meaningful phrases in clinical notes.
This method finds the major sentences for a summary by correlating token, segment, and positional embeddings.
arXiv Detail & Related papers (2021-04-18T19:40:26Z)
- Performance of Automatic De-identification Across Different Note Types [0.8399688944263842]
Concerns about patient privacy and confidentiality limit the use of clinical notes for research.
We present the performance of a state-of-the-art de-identification system called NeuroNER on a diverse set of notes from the University of Washington.
arXiv Detail & Related papers (2021-02-17T00:55:40Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
- Ontology-driven weak supervision for clinical entity classification in electronic health records [6.815543071244677]
We present Trove, a framework for weakly supervised entity classification using medical ontologies and expert-generated rules.
Unlike hand-labeled notes, our rules are easy to share and modify, while offering performance comparable to manually labeled training data.
arXiv Detail & Related papers (2020-08-05T07:42:09Z)
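A toy sketch of the ontology-driven weak supervision idea in the entry above: a rule labels any span that matches a term from a small lexicon, yielding noisy training labels without hand annotation. The mini "ontology" and the DRUG label are invented for illustration and are not Trove's actual rules.

```python
# Toy ontology-driven labeling rule in the spirit of weak supervision:
# mark any span whose token matches a lexicon term as a candidate entity.
# The mini lexicon below is invented for illustration.
import re
from typing import List, Tuple

DRUG_LEXICON = {"aspirin", "metoprolol", "lisinopril", "heparin"}

def label_drugs(text: str) -> List[Tuple[int, int, str]]:
    """Return (start, end, label) spans whose lowercased token is in the lexicon."""
    spans = []
    for m in re.finditer(r"[A-Za-z]+", text):
        if m.group(0).lower() in DRUG_LEXICON:
            spans.append((m.start(), m.end(), "DRUG"))
    return spans

print(label_drugs("Started Aspirin 81 mg daily; continue lisinopril."))
# -> [(8, 15, 'DRUG'), (38, 48, 'DRUG')]
```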
- Fast, Structured Clinical Documentation via Contextual Autocomplete [6.919190099968202]
We present a system that uses a learned autocompletion mechanism to facilitate rapid creation of semi-structured clinical documentation.
We dynamically suggest relevant clinical concepts as a doctor drafts a note by leveraging features from both unstructured and structured medical data.
As our algorithm is used to write a note, we can automatically annotate the documentation with clean labels of clinical concepts drawn from medical vocabularies.
arXiv Detail & Related papers (2020-07-29T23:43:15Z)
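A loose illustration of the contextual-autocomplete idea in the entry above: filter a concept vocabulary by the clinician's typed prefix and rank by a crude context score. The vocabulary, prior weights, and boost values are invented; the real system learns such signals from structured and unstructured EHR data.

```python
# Toy contextual autocomplete: filter a concept vocabulary by the typed prefix
# and rank by prior frequency plus a context boost. All values are invented.
from typing import Dict, List

CONCEPTS: Dict[str, float] = {
    # concept -> prior frequency in this (hypothetical) department
    "chest pain": 0.9,
    "chest tightness": 0.4,
    "cholecystitis": 0.2,
    "chronic kidney disease": 0.3,
}

def suggest(prefix: str, context_boost: Dict[str, float], k: int = 3) -> List[str]:
    """Return up to k concepts starting with `prefix`, ranked by prior + context."""
    prefix = prefix.lower()
    scored = [
        (CONCEPTS[c] + context_boost.get(c, 0.0), c)
        for c in CONCEPTS
        if c.startswith(prefix)
    ]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

# Structured data (e.g., an abnormal creatinine) can boost related concepts.
print(suggest("ch", context_boost={"chronic kidney disease": 0.7}))
```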
This list is automatically generated from the titles and abstracts of the papers on this site.