Towards Clinical Encounter Summarization: Learning to Compose Discharge
Summaries from Prior Notes
- URL: http://arxiv.org/abs/2104.13498v1
- Date: Tue, 27 Apr 2021 22:45:54 GMT
- Title: Towards Clinical Encounter Summarization: Learning to Compose Discharge
Summaries from Prior Notes
- Authors: Han-Chin Shing, Chaitanya Shivade, Nima Pourdamghani, Feng Nan, Philip
Resnik, Douglas Oard and Parminder Bhatia
- Abstract summary: This paper introduces the task of generating discharge summaries for a clinical encounter.
We introduce two new measures, faithfulness and hallucination rate, for evaluation.
Results across seven medical sections and five models show that a summarization architecture that supports traceability yields promising results.
- Score: 15.689048077818324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The records of a clinical encounter can be extensive and complex, thus
placing a premium on tools that can extract and summarize relevant information.
This paper introduces the task of generating discharge summaries for a clinical
encounter. Summaries in this setting need to be faithful, traceable, and scale
to multiple long documents, motivating the use of extract-then-abstract
summarization cascades. We introduce two new measures, faithfulness and
hallucination rate, for evaluation in this task, which complement existing
measures for fluency and informativeness. Results across seven medical sections
and five models show that a summarization architecture that supports
traceability yields promising results, and that a sentence-rewriting approach
performs consistently on the measure used for faithfulness
(faithfulness-adjusted $F_3$) over a diverse range of generated sections.
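The abstract does not spell out how the faithfulness-adjusted $F_3$ measure is computed, but the $F_3$ name indicates an $F_\beta$ score with $\beta = 3$, which weights recall nine times as heavily as precision. A minimal sketch of the generic $F_\beta$ computation (the function name and inputs are illustrative background, not the paper's implementation):

```python
def f_beta(precision: float, recall: float, beta: float = 3.0) -> float:
    """Generic F-beta score. With beta=3, recall is weighted 9x (beta^2)
    more heavily than precision, penalizing omissions over extra content."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: a summary with precision 0.8 and recall 0.6
score = f_beta(0.8, 0.6)  # (10 * 0.48) / (9 * 0.8 + 0.6) = 4.8 / 7.8
```

Because $\beta > 1$ emphasizes recall, a system that drops faithful content is penalized more than one that includes some redundant but accurate material; the paper's "faithfulness-adjusted" variant presumably restricts precision and recall to content supported by the source notes.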
Related papers
- Towards Enhancing Coherence in Extractive Summarization: Dataset and Experiments with LLMs [70.15262704746378] (2024-07-05): We propose a systematically created human-annotated dataset consisting of coherent summaries for five publicly available datasets and natural language user feedback. Preliminary experiments with Falcon-40B and Llama-2-13B show significant performance improvements (10% Rouge-L) in producing coherent summaries.
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859] (2023-05-30): Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning, but they still struggle with accuracy and interpretability, especially in mission-critical domains such as health. Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and to check its own outputs.
- Development and validation of a natural language processing algorithm to pseudonymize documents in the context of a clinical data warehouse [53.797797404164946] (2023-03-23): The study highlights the difficulties faced in sharing tools and resources in this domain. We annotated a corpus of clinical documents according to 12 types of identifying entities and built a hybrid system that merges the results of a deep learning model with manual rules.
- A Meta-Evaluation of Faithfulness Metrics for Long-Form Hospital-Course Summarization [2.8575516056239576] (2023-03-07): Long-form clinical summarization of hospital admissions has real-world significance because of its potential to help both clinicians and patients. We benchmark faithfulness metrics against fine-grained human annotations for model-generated summaries of a patient's Brief Hospital Course.
- NapSS: Paragraph-level Medical Text Simplification via Narrative Prompting and Sentence-matching Summarization [46.772517928718216] (2023-02-11): We propose a summarize-then-simplify two-stage strategy, which we call NapSS. NapSS identifies the relevant content to simplify while ensuring that the original narrative flow is preserved. Our model performs significantly better than the seq2seq baseline on an English medical corpus.
- Discharge Summary Hospital Course Summarisation of In Patient Electronic Health Record Text with Clinical Concept Guided Deep Pre-Trained Transformer Models [1.1393603788068778] (2022-11-14): Brief Hospital Course (BHC) summaries are succinct summaries of an entire hospital encounter, embedded within discharge summaries. We demonstrate a range of methods for BHC summarisation and evaluate the performance of deep learning summarisation models.
- Salience Allocation as Guidance for Abstractive Summarization [61.31826412150143] (2022-10-22): We propose a novel summarization approach with flexible and reliable salience guidance, namely SEASON (SaliencE Allocation as Guidance for Abstractive SummarizatiON). SEASON uses the allocation of salience expectation to guide abstractive summarization and adapts well to articles with different levels of abstractiveness.
- Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402] (2021-08-02): We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching. We apply our objective in four Transformer-based architectures: Contextual Document Vectors, Bi-, Poly- and Cross-encoders. CAPR outperforms strong baselines in the retrieval of domain-specific passages and generalizes effectively across rule-based and human-labeled passages.
- CLIP: A Dataset for Extracting Action Items for Physicians from Hospital Discharge Notes [17.107315598110183] (2021-06-04): We create a dataset of clinical action items annotated over MIMIC-III, the largest publicly available dataset of real clinical notes. This dataset, which we call CLIP, is annotated by physicians and covers documents representing 100K sentences. We frame the task of extracting action items from these documents as multi-aspect extractive summarization, with each aspect representing a type of action to be taken.
- Generating SOAP Notes from Doctor-Patient Conversations Using Modular Summarization Techniques [43.13248746968624] (2020-05-04): We introduce the first complete pipelines to leverage deep summarization models to generate SOAP notes. We propose Cluster2Sent, an algorithm that extracts important utterances relevant to each summary section. Our results speak to the benefits of structuring summaries into sections and annotating supporting evidence when constructing summarization corpora.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.