SPeC: A Soft Prompt-Based Calibration on Performance Variability of
Large Language Model in Clinical Notes Summarization
- URL: http://arxiv.org/abs/2303.13035v3
- Date: Fri, 4 Aug 2023 07:49:26 GMT
- Title: SPeC: A Soft Prompt-Based Calibration on Performance Variability of
Large Language Model in Clinical Notes Summarization
- Authors: Yu-Neng Chuang, Ruixiang Tang, Xiaoqian Jiang, Xia Hu
- Abstract summary: We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
- Score: 50.01382938451978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electronic health records (EHRs) store an extensive array of patient
information, encompassing medical histories, diagnoses, treatments, and test
outcomes. These records are crucial for enabling healthcare providers to make
well-informed decisions regarding patient care. Summarizing clinical notes
further assists healthcare professionals in pinpointing potential health risks
and making better-informed decisions. This process contributes to reducing
errors and enhancing patient outcomes by ensuring providers have access to the
most pertinent and current patient data. Recent research has shown that
incorporating prompts with large language models (LLMs) substantially boosts
the efficacy of summarization tasks. However, we show that this approach also
leads to increased output variance, resulting in notably divergent outputs even
when prompts share similar meanings. To tackle this challenge, we introduce a
model-agnostic Soft Prompt-Based Calibration (SPeC) pipeline that employs soft
prompts to diminish variance while preserving the advantages of prompt-based
summarization. Experimental findings on multiple clinical note tasks and LLMs
indicate that our method not only bolsters performance but also effectively
curbs output variance, providing a more uniform and dependable solution for
summarizing vital medical information.
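
The abstract does not include implementation details, but the core mechanism it describes, soft prompting a frozen LLM, follows a well-known pattern: a small set of trainable continuous vectors is prepended to the token embeddings of the input, and only those vectors are updated during calibration. Below is a minimal sketch of that mechanism, not the authors' SPeC code; the backbone model name, soft-prompt length, and initialization are illustrative assumptions.

```python
# Minimal sketch of soft prompting with a frozen language model.
# Illustrative only: "gpt2", N_SOFT_TOKENS, and the init scheme are
# assumptions, not details taken from the SPeC paper.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "gpt2"    # placeholder backbone; SPeC is model-agnostic
N_SOFT_TOKENS = 20     # assumed soft-prompt length

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
for p in model.parameters():          # freeze the backbone LM
    p.requires_grad = False

embed = model.get_input_embeddings()  # token-embedding layer
# Trainable soft prompt: N_SOFT_TOKENS continuous vectors in the
# embedding space, initialized from existing token embeddings.
soft_prompt = nn.Parameter(embed.weight[:N_SOFT_TOKENS].detach().clone())

def forward_with_soft_prompt(text: str) -> torch.Tensor:
    """Prepend the soft prompt to the embedded input, run the frozen LM."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    tok_embeds = embed(ids)                          # (1, T, d)
    prompt = soft_prompt.unsqueeze(0)                # (1, n, d)
    inputs = torch.cat([prompt, tok_embeds], dim=1)  # (1, n+T, d)
    return model(inputs_embeds=inputs).logits

logits = forward_with_soft_prompt("Patient presents with chest pain.")
print(logits.shape)  # (1, N_SOFT_TOKENS + seq_len, vocab_size)
```

Because only `soft_prompt` carries gradients, calibrating it on clinical-note summaries leaves the underlying LLM untouched, which is what makes a pipeline of this kind model-agnostic: the same learned prompt mechanism can sit in front of any backbone that accepts input embeddings.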