Hierarchical Annotation for Building A Suite of Clinical Natural
Language Processing Tasks: Progress Note Understanding
- URL: http://arxiv.org/abs/2204.03035v1
- Date: Wed, 6 Apr 2022 18:38:08 GMT
- Title: Hierarchical Annotation for Building A Suite of Clinical Natural
Language Processing Tasks: Progress Note Understanding
- Authors: Yanjun Gao, Dmitriy Dligach, Timothy Miller, Samuel Tesch, Ryan
Laffin, Matthew M. Churpek, Majid Afshar
- Abstract summary: This work introduces a hierarchical annotation schema with three stages to address clinical text understanding, clinical reasoning, and summarization.
We created an annotated corpus based on an extensive collection of publicly available daily progress notes.
We also define a new suite of tasks, Progress Note Understanding, with three tasks utilizing the three annotation stages.
- Score: 4.5939673461957335
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Applying methods in natural language processing on electronic health records
(EHR) data is a growing field. Existing corpora and annotations focus on modeling
textual features and relation prediction. However, there is a paucity of
annotated corpora built to model clinical diagnostic thinking, a process
involving text understanding, domain knowledge abstraction and reasoning. This
work introduces a hierarchical annotation schema with three stages to address
clinical text understanding, clinical reasoning, and summarization. We created
an annotated corpus based on an extensive collection of publicly available
daily progress notes, a type of EHR documentation that is collected in time
series in a problem-oriented format. The conventional format for a progress
note follows a Subjective, Objective, Assessment and Plan heading (SOAP). We
also define a new suite of tasks, Progress Note Understanding, with three tasks
utilizing the three annotation stages. The novel suite of tasks was designed to
train and evaluate future NLP models for clinical text understanding, clinical
knowledge representation, inference, and summarization.
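To make the SOAP structure concrete, the following is a minimal sketch of how a single daily progress note might be represented; the class name, field names, and example strings are illustrative assumptions, not the annotation schema used in the corpus.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProgressNote:
    """One daily progress note organized under SOAP headings (illustrative only)."""
    subjective: str = ""                           # patient-reported symptoms and history
    objective: str = ""                            # exam findings, vitals, labs
    assessment: str = ""                           # clinician's diagnostic reasoning
    plan: List[str] = field(default_factory=list)  # problem-oriented care plan items

note = ProgressNote(
    subjective="Patient reports mild shortness of breath overnight.",
    objective="Temp 37.2 C, SpO2 94% on room air, bibasilar crackles.",
    assessment="Likely community-acquired pneumonia, clinically improving.",
    plan=["Continue ceftriaxone and azithromycin", "Wean supplemental oxygen"],
)
print(note.assessment)
```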
Related papers
- Improving Clinical Note Generation from Complex Doctor-Patient Conversation [20.2157016701399]
We present three key contributions to the field of clinical note generation using large language models (LLMs)
First, we introduce CliniKnote, a dataset consisting of 1,200 complex doctor-patient conversations paired with their full clinical notes.
Second, we propose K-SOAP, which enhances traditional SOAP (Subjective, Objective, Assessment, and Plan) notes by adding a keyword section at the top, allowing for quick identification of essential information.
Third, we develop an automatic pipeline to generate K-SOAP notes from doctor-patient conversations and benchmark various modern LLMs.
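As a rough illustration of such a pipeline (not the paper's implementation), the sketch below formats a doctor-patient conversation into a prompt requesting a K-SOAP note, i.e. a keyword section followed by the SOAP sections; `generate_text` is a hypothetical callable standing in for whatever LLM client is used.

```python
# Hypothetical prompt and LLM wrapper; not the CliniKnote pipeline itself.
K_SOAP_PROMPT = (
    "You are a clinical scribe. From the conversation below, write a K-SOAP note "
    "with these sections in order: Keywords, Subjective, Objective, Assessment, Plan.\n\n"
    "Conversation:\n{conversation}"
)

def generate_k_soap(conversation: str, generate_text) -> str:
    """Produce a K-SOAP note from a doctor-patient conversation.

    `generate_text` is a placeholder callable wrapping whatever LLM API is
    available, e.g. lambda prompt: client.complete(prompt).
    """
    return generate_text(K_SOAP_PROMPT.format(conversation=conversation))
```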
arXiv Detail & Related papers (2024-08-26T18:39:31Z)
- Development and validation of a natural language processing algorithm to pseudonymize documents in the context of a clinical data warehouse [53.797797404164946]
The study highlights the difficulties faced in sharing tools and resources in this domain.
We annotated a corpus of clinical documents according to 12 types of identifying entities.
We build a hybrid system, merging the results of a deep learning model with manual rules.
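A hybrid setup of this kind can be pictured as follows: spans flagged by hand-written regular expressions are merged with spans predicted by a learned NER model, and all identified spans are replaced with placeholders. The regexes, entity types, and merging rule below are illustrative assumptions, not the study's actual system.

```python
import re
from typing import List, Tuple

Span = Tuple[int, int, str]  # (start, end, entity_type)

# Toy manual rules; the study's actual rule set is not reproduced here.
DATE_RE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")
PHONE_RE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def rule_spans(text: str) -> List[Span]:
    """Flag dates and phone numbers with hand-written rules."""
    spans = [(m.start(), m.end(), "DATE") for m in DATE_RE.finditer(text)]
    spans += [(m.start(), m.end(), "PHONE") for m in PHONE_RE.finditer(text)]
    return spans

def merge_spans(model_spans: List[Span], ruled: List[Span]) -> List[Span]:
    """Union of model and rule outputs; rule spans win on overlap."""
    merged = list(ruled)
    for s in model_spans:
        if not any(s[0] < r[1] and r[0] < s[1] for r in ruled):
            merged.append(s)
    return sorted(merged)

def pseudonymize(text: str, spans: List[Span]) -> str:
    """Replace each identified span with an entity-type placeholder."""
    for start, end, label in sorted(spans, reverse=True):
        text = text[:start] + f"<{label}>" + text[end:]
    return text

note = "Seen on 03/12/2022, call 608-555-0142 with questions."
# An empty list stands in for the deep learning model's predicted spans.
print(pseudonymize(note, merge_spans([], rule_spans(note))))
```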
arXiv Detail & Related papers (2023-03-23T17:17:46Z)
- Summarizing Patients Problems from Hospital Progress Notes Using Pre-trained Sequence-to-Sequence Models [9.879960506853145]
Problem list summarization requires a model to understand, abstract, and generate clinical documentation.
We propose a new NLP task that aims to generate a list of problems in a patient's daily care plan using input from the provider's progress notes during hospitalization.
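A minimal sketch of that setup, assuming an off-the-shelf Hugging Face checkpoint rather than the authors' fine-tuned models: feed the Assessment and Plan text of a progress note to a pre-trained sequence-to-sequence model and decode a short problem summary.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint; a model fine-tuned on annotated progress notes would replace it.
checkpoint = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

note = ("summarize: Assessment and Plan: 68M admitted with sepsis secondary to "
        "pneumonia, now afebrile on antibiotics; acute kidney injury improving with fluids.")
inputs = tokenizer(note, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_new_tokens=48, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```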
arXiv Detail & Related papers (2022-08-17T17:07:35Z)
- A Unified Framework of Medical Information Annotation and Extraction for Chinese Clinical Text [1.4841452489515765]
Current state-of-the-art (SOTA) NLP models are highly integrated with deep learning techniques.
This study presents an engineering framework of medical entity recognition, relation extraction and attribute extraction.
arXiv Detail & Related papers (2022-03-08T03:19:16Z)
- Towards more patient friendly clinical notes through language models and ontologies [57.51898902864543]
We present a novel approach to automated simplification of medical text based on word simplification and language modelling.
We use a new dataset of pairs of publicly available medical sentences and a version of them simplified by clinicians.
Our method based on a language model trained on medical forum data generates simpler sentences while preserving both grammar and the original meaning.
arXiv Detail & Related papers (2021-12-23T16:11:19Z)
- Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402]
We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching.
We apply our objective in four Transformer-based architectures: Contextual Document Vectors, Bi-, Poly- and Cross-encoders.
We report that CAPR outperforms strong baselines in the retrieval of domain-specific passages and effectively generalizes across rule-based and human-labeled passages.
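For intuition, a generic bi-encoder retrieval step (plain cosine similarity over off-the-shelf sentence embeddings, not CAPR's rule-based self-supervised objective) might look like the following; the checkpoint name is an assumed example.

```python
from sentence_transformers import SentenceTransformer, util

# Off-the-shelf bi-encoder; CAPR's rule-based self-supervised training is not shown.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "Assessment: septic shock, on norepinephrine, blood cultures pending.",
    "Plan: continue lisinopril for hypertension, recheck potassium tomorrow.",
]
query = "Which passage discusses blood pressure medication?"

# Rank passages by cosine similarity between query and passage embeddings.
scores = util.cos_sim(encoder.encode(query), encoder.encode(passages))
print(passages[int(scores.argmax())])
```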
arXiv Detail & Related papers (2021-08-02T10:42:52Z)
- Benchmarking Automated Clinical Language Simplification: Dataset, Algorithm, and Evaluation [48.87254340298189]
We construct a new dataset named MedLane to support the development and evaluation of automated clinical language simplification approaches.
We propose a new model called DECLARE that follows the human annotation procedure and achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-12-04T06:09:02Z)
- An Interpretable End-to-end Fine-tuning Approach for Long Clinical Text [72.62848911347466]
Unstructured clinical text in EHRs contains crucial information for applications including decision support, trial matching, and retrospective research.
Recent work has applied BERT-based models to clinical information extraction and text classification, given these models' state-of-the-art performance in other NLP domains.
In this work, we propose a novel fine-tuning approach called SnipBERT. Instead of using entire notes, SnipBERT identifies crucial snippets and feeds them into a truncated BERT-based model in a hierarchical manner.
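The snippet-then-classify idea can be sketched roughly as below; the fixed-size chunking and keyword-overlap scoring are crude stand-ins for SnipBERT's learned snippet identification, used here only to illustrate the hierarchical pattern.

```python
from typing import List

def split_into_snippets(note: str, max_words: int = 64) -> List[str]:
    """Break a long note into fixed-size word windows."""
    words = note.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def select_snippets(snippets: List[str], keywords: List[str], k: int = 3) -> List[str]:
    """Keep the k snippets containing the most task keywords
    (a stand-in for a learned snippet selector)."""
    return sorted(snippets,
                  key=lambda s: sum(w in s.lower() for w in keywords),
                  reverse=True)[:k]

# The selected snippets, rather than the entire note, would then be fed to a
# (possibly truncated) BERT-based classifier.
long_note = "Hospital day 4. Tolerating tube feeds. " * 5 + "Remains intubated for ARDS; sepsis resolving."
print(select_snippets(split_into_snippets(long_note), ["sepsis", "intubated", "dialysis"])[0])
```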
arXiv Detail & Related papers (2020-11-12T17:14:32Z)
- Learning Contextualized Document Representations for Healthcare Answer Retrieval [68.02029435111193]
Contextual Discourse Vectors (CDV) is a distributed document representation for efficient answer retrieval from long documents.
Our model leverages a dual encoder architecture with hierarchical LSTM layers and multi-task training to encode the position of clinical entities and aspects alongside the document discourse.
We show that our generalized model significantly outperforms several state-of-the-art baselines for healthcare passage ranking.
arXiv Detail & Related papers (2020-02-03T15:47:19Z)