Synthetic Clinical Notes for Rare ICD Codes: A Data-Centric Framework for Long-Tail Medical Coding
- URL: http://arxiv.org/abs/2511.14112v1
- Date: Tue, 18 Nov 2025 03:52:12 GMT
- Title: Synthetic Clinical Notes for Rare ICD Codes: A Data-Centric Framework for Long-Tail Medical Coding
- Authors: Truong Vo, Weiyi Wu, Kaize Ding
- Abstract summary: Thousands of rare and zero-shot ICD codes are severely underrepresented in datasets like MIMIC-III. We generate 90,000 synthetic notes covering 7,902 ICD codes, significantly expanding the training distribution. Experiments show that our approach modestly improves macro-F1 while maintaining strong micro-F1.
- Score: 26.840057002860235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic ICD coding from clinical text is a critical task in medical NLP but remains hindered by the extreme long-tail distribution of diagnostic codes. Thousands of rare and zero-shot ICD codes are severely underrepresented in datasets like MIMIC-III, leading to low macro-F1 scores. In this work, we propose a data-centric framework that generates high-quality synthetic discharge summaries to mitigate this imbalance. Our method constructs realistic multi-label code sets anchored on rare codes by leveraging real-world co-occurrence patterns, ICD descriptions, synonyms, taxonomy, and similar clinical notes. Using these structured prompts, we generate 90,000 synthetic notes covering 7,902 ICD codes, significantly expanding the training distribution. We fine-tune two state-of-the-art transformer-based models, PLM-ICD and GKI-ICD, on both the original and extended datasets. Experiments show that our approach modestly improves macro-F1 while maintaining strong micro-F1, outperforming prior SOTA. While the gain may seem marginal relative to the computational cost, our results demonstrate that carefully crafted synthetic data can enhance equity in long-tail ICD code prediction.
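The core data-generation step described in the abstract, sampling a realistic multi-label code set anchored on a rare code using real-world co-occurrence patterns, can be sketched as follows. The codes, counts, and function names here are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import Counter

def build_anchored_code_set(rare_code, cooccurrence, max_codes=5, seed=0):
    """Sample a plausible multi-label ICD code set anchored on a rare code.

    `cooccurrence` maps a code to a Counter of codes it co-occurred with
    in real discharge summaries; partners are sampled with probability
    proportional to their co-occurrence counts.
    """
    rng = random.Random(seed)
    code_set = {rare_code}
    partners = cooccurrence.get(rare_code, Counter())
    if partners:
        codes, weights = zip(*partners.items())
        # Stop once the set is full or every partner has been used.
        while len(code_set) < min(max_codes, len(partners) + 1):
            code_set.add(rng.choices(codes, weights=weights, k=1)[0])
    return sorted(code_set)

# Toy co-occurrence statistics (hypothetical codes and counts).
cooc = {"E75.21": Counter({"G40.909": 4, "F79": 3, "R62.50": 1})}
anchored = build_anchored_code_set("E75.21", cooc, max_codes=3)
```

Each sampled code set would then be combined with ICD descriptions, synonyms, and similar real notes into a structured prompt for the note generator.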
Related papers
- Probability-Biased Attention over Directed Bipartite Graphs for Long-Tail ICD Coding [12.66839524860715]
We propose a learning method that models fine-grained co-occurrence relationships among codes. Our method achieves state-of-the-art performance with particularly notable improvements in Macro-F1.
arXiv Detail & Related papers (2025-10-31T04:47:09Z) - Evaluating Hierarchical Clinical Document Classification Using Reasoning-Based LLMs [7.026393789313748]
This study evaluates how well large language models (LLMs) can classify ICD-10 codes from hospital discharge summaries. Reasoning-based models generally outperformed non-reasoning ones, with Gemini 2.5 Pro performing best overall.
arXiv Detail & Related papers (2025-07-02T00:53:54Z) - CoRelation: Boosting Automatic ICD Coding Through Contextualized Code Relation Learning [56.782963838838036]
We propose a novel approach, a contextualized and flexible framework, to enhance the learning of ICD code representations.
Our approach employs a dependent learning paradigm that considers the context of clinical notes in modeling all possible code relations.
arXiv Detail & Related papers (2024-02-24T03:25:28Z) - Can GPT-3.5 Generate and Code Discharge Summaries? [45.633849969788315]
We generated and coded 9,606 discharge summaries based on lists of ICD-10 code descriptions.
Neural coding models were trained on baseline and augmented data.
We report micro- and macro-F1 scores on the full codeset, generation codes, and their families.
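Micro- and macro-F1 recur throughout this listing as the key long-tail metrics: micro-F1 pools all code-level decisions and is dominated by frequent codes, while macro-F1 averages per-code F1 and weights rare codes equally. A minimal sketch with toy labels (hypothetical codes, not from any dataset) illustrates the difference.

```python
def f1(tp, fp, fn):
    """Standard F1 from true-positive, false-positive, false-negative counts."""
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

def micro_macro_f1(y_true, y_pred, codes):
    """y_true / y_pred: one set of assigned codes per clinical note."""
    stats = {c: [0, 0, 0] for c in codes}  # per-code [tp, fp, fn]
    for t, p in zip(y_true, y_pred):
        for c in codes:
            if c in p and c in t:
                stats[c][0] += 1
            elif c in p:
                stats[c][1] += 1
            elif c in t:
                stats[c][2] += 1
    # Micro: pool counts over all codes; macro: average per-code F1.
    micro = f1(*(sum(s[i] for s in stats.values()) for i in range(3)))
    macro = sum(f1(*s) for s in stats.values()) / len(codes)
    return micro, macro

y_true = [{"A", "B"}, {"A"}, {"A", "C"}]
y_pred = [{"A", "B"}, {"A"}, {"A"}]  # rare code "C" is missed entirely
micro, macro = micro_macro_f1(y_true, y_pred, ["A", "B", "C"])
```

Here missing the single rare code "C" barely moves micro-F1 (8/9) but drops macro-F1 to 2/3, which is why rare-code augmentation targets macro-F1.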
arXiv Detail & Related papers (2024-01-24T15:10:13Z) - Automated clinical coding using off-the-shelf large language models [10.365958121087305]
The task of assigning diagnostic ICD codes to patient hospital admissions is typically performed by expert human coders.
Efforts towards automated ICD coding are dominated by supervised deep learning models.
In this work, we leverage off-the-shelf pre-trained generative large language models to develop a practical solution.
arXiv Detail & Related papers (2023-10-10T11:56:48Z) - Multi-label Few-shot ICD Coding as Autoregressive Generation with Prompt [7.554528566861559]
This study transforms this multi-label classification task into an autoregressive generation task.
Instead of directly predicting the high dimensional space of ICD codes, our model generates the lower dimension of text descriptions.
Experiments on MIMIC-III-few show that our model achieves a macro-F1 of 30.2, which substantially outperforms the previous MIMIC-III-full SOTA model.
arXiv Detail & Related papers (2022-11-24T22:10:50Z) - ICDBigBird: A Contextual Embedding Model for ICD Code Classification [71.58299917476195]
Contextual word embedding models have achieved state-of-the-art results in multiple NLP tasks.
ICDBigBird is a BigBird-based model which can integrate a Graph Convolutional Network (GCN)
Our experiments on a real-world clinical dataset demonstrate the effectiveness of our BigBird-based model on the ICD classification task.
arXiv Detail & Related papers (2022-04-21T20:59:56Z) - Generalizing electrocardiogram delineation: training convolutional neural networks with synthetic data augmentation [63.51064808536065]
Existing databases for ECG delineation are small, being insufficient in size and in the array of pathological conditions they represent.
This article has two main contributions. First, a pseudo-synthetic data generation algorithm was developed, based on probabilistically composing ECG traces from "pools" of fundamental segments cropped from the original databases, together with a set of rules for arranging them into coherent synthetic traces.
Second, two novel segmentation-based loss functions have been developed, which attempt to enforce the prediction of an exact number of independent structures and to produce tighter segmentation boundaries by focusing on a reduced number of samples.
arXiv Detail & Related papers (2021-11-25T10:11:41Z) - Few-Shot Electronic Health Record Coding through Graph Contrastive Learning [64.8138823920883]
We seek to improve the performance for both frequent and rare ICD codes by using a contrastive graph-based EHR coding framework, CoGraph.
CoGraph learns similarities and dissimilarities between HEWE graphs from different ICD codes so that information can be transferred among them.
Two graph contrastive learning schemes, GSCL and GECL, exploit the HEWE graph structures so as to encode transferable features.
arXiv Detail & Related papers (2021-06-29T14:53:17Z) - Medical Code Prediction from Discharge Summary: Document to Sequence BERT using Sequence Attention [0.0]
We propose a model based on bidirectional encoder representations from transformer (BERT) using the sequence attention method for automatic ICD code assignment.
We evaluate our approach on the MIMIC-III benchmark dataset.
arXiv Detail & Related papers (2021-06-15T07:35:50Z) - TransICD: Transformer Based Code-wise Attention Model for Explainable ICD Coding [5.273190477622007]
The International Classification of Diseases (ICD) coding procedure has been shown to be effective and crucial to the billing system in the medical sector.
Currently, ICD codes are assigned to clinical notes manually, which is error-prone.
In this project, we apply a transformer-based architecture to capture the interdependence among the tokens of a document and then use a code-wise attention mechanism to learn code-specific representations of the entire document.
arXiv Detail & Related papers (2021-03-28T05:34:32Z) - Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity [77.78624443410216]
We propose improved federated deep AUC maximization (FDAM) algorithms for heterogeneous data.
A key result is that the communication complexity of the proposed algorithm is independent of the number of machines and of the target accuracy level.
Experiments demonstrate the effectiveness of our FDAM algorithm on benchmark datasets and on medical chest X-ray images from different organizations.
arXiv Detail & Related papers (2021-02-09T04:05:19Z) - Collaborative residual learners for automatic icd10 prediction using prescribed medications [45.82374977939355]
We propose a novel collaborative residual learning based model to automatically predict ICD10 codes employing only prescriptions data.
In multi-label classification, we obtain average precision of 0.71 and 0.57, F1-scores of 0.57 and 0.38, and accuracies of 0.73 and 0.44 in predicting the principal diagnosis for the inpatient and outpatient datasets, respectively.
arXiv Detail & Related papers (2020-12-16T07:07:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.