Does the Magic of BERT Apply to Medical Code Assignment? A Quantitative Study
- URL: http://arxiv.org/abs/2103.06511v1
- Date: Thu, 11 Mar 2021 07:23:45 GMT
- Title: Does the Magic of BERT Apply to Medical Code Assignment? A Quantitative Study
- Authors: Shaoxiong Ji, Matti Hölttä, Pekka Marttinen
- Abstract summary: It is not clear if pretrained models are useful for medical code prediction without further architecture engineering.
We propose a hierarchical fine-tuning architecture to capture interactions between distant words and adopt label-wise attention to exploit label information.
Contrary to current trends, we demonstrate that a carefully trained classical CNN outperforms attention-based models on a MIMIC-III subset with frequent codes.
- Score: 2.871614744079523
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised pretraining is an integral part of many natural language
processing systems, and transfer learning with language models has achieved
remarkable results in many downstream tasks. In the clinical application of
medical code assignment, diagnosis and procedure codes are inferred from
lengthy clinical notes such as hospital discharge summaries. However, it is not
clear if pretrained models are useful for medical code prediction without
further architecture engineering. This paper conducts a comprehensive
quantitative analysis of the performance of contextualized language models
pretrained in different domains on medical code assignment from clinical
notes. We propose a hierarchical fine-tuning architecture to capture
interactions between distant words and adopt label-wise attention to exploit
label information. Contrary to current trends, we demonstrate that a carefully
trained classical CNN outperforms attention-based models on a MIMIC-III subset
with frequent codes. Our empirical findings suggest directions for improving
the medical code assignment application.
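The hierarchical fine-tuning and label-wise attention are only described at a high level in the abstract; the snippet below is a minimal, illustrative PyTorch sketch of label-wise attention in the CAML style commonly used for medical coding (per-label attention over token states), not the paper's exact architecture. All names and dimensions are chosen for illustration.

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    """Per-label attention over token states (CAML-style sketch).

    Each of the L labels attends over the sequence with its own query,
    yielding one label-specific document vector and one logit per label.
    """
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.U = nn.Parameter(torch.empty(num_labels, hidden_dim))  # label queries
        self.W = nn.Parameter(torch.empty(num_labels, hidden_dim))  # per-label scorers
        self.b = nn.Parameter(torch.zeros(num_labels))
        nn.init.xavier_uniform_(self.U)
        nn.init.xavier_uniform_(self.W)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden), e.g. concatenated outputs
        # of a hierarchically fine-tuned encoder over document chunks
        scores = token_states @ self.U.T                  # (B, T, L)
        attn = torch.softmax(scores, dim=1)               # over tokens, per label
        label_docs = attn.transpose(1, 2) @ token_states  # (B, L, H)
        return (label_docs * self.W).sum(-1) + self.b     # logits: (B, L)

# Toy usage: 2 documents, 128 tokens, 768-dim states, 50 ICD codes
logits = LabelWiseAttention(768, 50)(torch.randn(2, 128, 768))
print(logits.shape)  # torch.Size([2, 50])
```

The per-label queries are what let the model pick out different evidence spans for different codes in the same long note, which a single pooled document vector cannot do.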
Related papers
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect aligned medical image-text data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z)
- Automated Medical Coding on MIMIC-III and MIMIC-IV: A Critical Review and Replicability Study [60.56194508762205]
We reproduce, compare, and analyze state-of-the-art automated medical coding machine learning models.
We show that several models underperform due to weak configurations, poorly sampled train-test splits, and insufficient evaluation.
We present the first comprehensive results on the newly released MIMIC-IV dataset using the reproduced models.
arXiv Detail & Related papers (2023-04-21T11:54:44Z)
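The replicability study above attributes part of the spread in reported results to insufficient evaluation. As a hedged illustration of why metric choice matters in multi-label coding, the toy example below (hypothetical labels, scikit-learn) shows how micro-F1 can look respectable while macro-F1 exposes failure on rare codes.

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical predictions for 4 notes over 5 ICD codes (multi-label):
# the model only ever predicts the most frequent code (column 0).
y_true = np.array([[1, 0, 0, 0, 1],
                   [1, 1, 0, 0, 0],
                   [1, 0, 0, 0, 0],
                   [1, 0, 1, 0, 0]])
y_pred = np.array([[1, 0, 0, 0, 0]] * 4)

# Micro-F1 pools all decisions, so frequent codes dominate;
# macro-F1 averages per-code F1 and exposes failure on rare codes.
print("micro:", f1_score(y_true, y_pred, average="micro", zero_division=0))  # ~0.73
print("macro:", f1_score(y_true, y_pred, average="macro", zero_division=0))  # 0.20
```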
- Assessing mortality prediction through different representation models based on concepts extracted from clinical notes [2.707154152696381]
Embedding learning converts notes into a representation that makes them comparable.
Transformer-based representation models have recently made a great leap forward.
We performed experiments to measure the usefulness of the learned embedding vectors in the task of hospital mortality prediction.
arXiv Detail & Related papers (2022-07-22T04:34:33Z)
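A common way to measure the usefulness of learned embedding vectors, as the entry above does, is a simple probe: freeze the embeddings and train a linear classifier on them. The sketch below uses random vectors as stand-ins for note embeddings; with real embeddings from a representation model, the probe's AUROC is the quantity being compared.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Random stand-ins for 768-d note embeddings (e.g. mean-pooled transformer
# states) and binary in-hospital mortality labels; real vectors would come
# from the representation model under evaluation.
X = rng.normal(size=(1000, 768))
y = rng.integers(0, 2, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# With random features AUROC hovers near 0.5; informative embeddings score higher.
print("AUROC:", roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1]))
```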
- Self-supervised Answer Retrieval on Clinical Notes [68.87777592015402]
We introduce CAPR, a rule-based self-supervision objective for training Transformer language models for domain-specific passage matching.
We apply our objective in four Transformer-based architectures: Contextual Document Vectors, Bi-, Poly- and Cross-encoders.
We report that CAPR outperforms strong baselines in the retrieval of domain-specific passages and effectively generalizes across rule-based and human-labeled passages.
arXiv Detail & Related papers (2021-08-02T10:42:52Z)
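CAPR itself is a training objective; the retrieval setup it plugs into is typically a bi-encoder, where queries and passages are encoded independently and matched by similarity. Below is a minimal toy bi-encoder, with averaged word embeddings standing in for a Transformer encoder; with random weights the ranking is arbitrary, so this only illustrates the scoring mechanics.

```python
import torch
import torch.nn.functional as F

def encode(texts, vocab, emb):
    """Toy encoder: average word embeddings per text (a stand-in for a
    Transformer bi-encoder trained with an objective such as CAPR)."""
    vecs = []
    for text in texts:
        ids = torch.tensor([vocab.setdefault(w, len(vocab)) % emb.num_embeddings
                            for w in text.lower().split()])
        vecs.append(emb(ids).mean(dim=0))
    return F.normalize(torch.stack(vecs), dim=-1)

vocab, emb = {}, torch.nn.Embedding(1000, 64)
q = encode(["signs of pneumonia on chest x-ray"], vocab, emb)
p = encode(["chest radiograph shows pneumonia",
            "patient denies chest pain"], vocab, emb)
scores = q @ p.T                     # cosine similarity, queries x passages
print(scores, scores.argmax(dim=1))  # index of the best-matching passage
```

Because passages are encoded independently of the query, their vectors can be precomputed and indexed, which is what makes bi-encoders practical for retrieval at scale.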
- CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark [51.38557174322772]
We present the first Chinese Biomedical Language Understanding Evaluation benchmark.
It is a collection of natural language understanding tasks, including named entity recognition, information extraction, clinical diagnosis normalization, and single-sentence/sentence-pair classification.
We report empirical results for 11 current pre-trained Chinese models; the experiments show that state-of-the-art neural models still perform far worse than the human ceiling.
arXiv Detail & Related papers (2021-06-15T12:25:30Z)
- A Meta-embedding-based Ensemble Approach for ICD Coding Prediction [64.42386426730695]
International Classification of Diseases (ICD) codes are the de facto standard used globally for clinical coding.
These codes enable healthcare providers to claim reimbursement and facilitate efficient storage and retrieval of diagnostic information.
Our proposed approach enhances the performance of neural models by effectively training word vectors using routine medical data as well as external knowledge from scientific articles.
arXiv Detail & Related papers (2021-02-26T17:49:58Z)
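The meta-embedding idea above combines word vectors trained on different sources. The snippet below shows two of the simplest combination strategies, concatenation and averaging, with random vectors standing in for embeddings trained on routine clinical data and on scientific articles; the paper's actual ensemble is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for one word's vectors from two embedding sources:
# one trained on routine clinical notes, one on scientific articles.
clinical_vec = rng.normal(size=100)
scientific_vec = rng.normal(size=100)

# Two simple meta-embedding strategies: concatenation keeps both views;
# averaging keeps the original dimensionality.
concat_meta = np.concatenate([clinical_vec, scientific_vec])  # 200-d
avg_meta = (clinical_vec + scientific_vec) / 2.0              # 100-d
print(concat_meta.shape, avg_meta.shape)
```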
- An Explainable CNN Approach for Medical Codes Prediction from Clinical Text [1.7746314978241657]
We develop CNN-based methods for automatic ICD coding based on clinical text from intensive care unit (ICU) stays.
We propose the Shallow and Wide Attention convolutional Mechanism (SWAM), which allows our model to learn local and low-level features for each label.
arXiv Detail & Related papers (2021-01-14T02:05:34Z)
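A "shallow and wide" CNN in the sense of SWAM is a single convolutional layer with a large number of filters. The sketch below uses max-pooling for brevity where SWAM uses per-label attention (compare the label-wise attention sketch under the main abstract); all sizes are illustrative.

```python
import torch
import torch.nn as nn

class ShallowWideCNN(nn.Module):
    """One wide convolutional layer over embedded text (illustrative sizes).
    Max-pooling here stands in for SWAM's per-label attention."""
    def __init__(self, vocab=5000, emb=100, filters=512, kernel=5, labels=50):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, kernel, padding=kernel // 2)
        self.out = nn.Linear(filters, labels)

    def forward(self, ids):                   # ids: (batch, seq_len)
        x = self.embed(ids).transpose(1, 2)   # (B, emb, T)
        h = torch.relu(self.conv(x))          # (B, filters, T)
        return self.out(h.max(dim=2).values)  # pool over time -> (B, labels)

logits = ShallowWideCNN()(torch.randint(0, 5000, (2, 200)))
print(logits.shape)  # torch.Size([2, 50])
```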
- Explainable Automated Coding of Clinical Notes using Hierarchical Label-wise Attention Networks and Label Embedding Initialisation [4.4036730220831535]
Recent studies on deep learning for automated medical coding have achieved promising performance.
We propose a Hierarchical Label-wise Attention Network (HLAN), which aims to interpret the model by quantifying the importance (as attention weights) of words and sentences for each label.
We also propose to enhance major deep learning models with a label embedding (LE) initialisation approach, which learns a dense, continuous vector representation of the labels and injects it into the final layers and the label-wise attention layers of the models (a minimal sketch of LE initialisation follows this entry).
arXiv Detail & Related papers (2020-10-29T16:21:26Z)
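The label embedding (LE) initialisation above can be illustrated in a few lines: instead of a random initialisation, the final classification layer starts from vectors derived from the label descriptions. The sketch below uses random stand-ins for those description vectors.

```python
import torch
import torch.nn as nn

num_labels, hidden = 50, 256

# Hypothetical label-description vectors, e.g. averaged word embeddings of
# each code's textual description (random values here for illustration).
label_desc_vecs = torch.randn(num_labels, hidden)

# LE initialisation: the final classification layer starts from the label
# vectors rather than a random init, so semantically close codes start
# with close weight rows.
final_layer = nn.Linear(hidden, num_labels)
with torch.no_grad():
    final_layer.weight.copy_(label_desc_vecs)  # weight shape: (labels, hidden)

doc_vecs = torch.randn(4, hidden)   # stand-in document representations
print(final_layer(doc_vecs).shape)  # torch.Size([4, 50])
```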
- Medical Code Assignment with Gated Convolution and Note-Code Interaction [39.079615516043674]
We propose a novel method combining gated convolutional neural networks with a note-code interaction mechanism (GatedCNN-NCI) for automatic medical code assignment (see the gated-convolution sketch after this entry).
With a novel note-code interaction design and a graph message passing mechanism, we explicitly capture the underlying dependency between notes and codes.
Our proposed model outperforms state-of-the-art models in most cases, and its size is on par with lightweight baselines.
arXiv Detail & Related papers (2020-10-14T11:37:24Z)
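The gated-convolution ingredient of GatedCNN-NCI follows the standard GLU pattern: one convolution produces features and a second produces sigmoid gates that modulate them. Below is a minimal sketch of that pattern only; the note-code interaction and graph message passing are not reproduced.

```python
import torch
import torch.nn as nn

class GatedConv1d(nn.Module):
    """GLU-style gated convolution: one conv produces features, a second
    produces sigmoid gates that modulate them element-wise."""
    def __init__(self, channels: int, kernel: int = 3):
        super().__init__()
        self.feat = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)
        self.gate = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)

    def forward(self, x):  # x: (batch, channels, seq_len)
        return self.feat(x) * torch.sigmoid(self.gate(x))

x = torch.randn(2, 100, 200)      # a toy embedded clinical note
print(GatedConv1d(100)(x).shape)  # torch.Size([2, 100, 200])
```

The gates act as a learned, position-wise filter over which convolutional features pass through, which helps suppress the boilerplate that dominates long clinical notes.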
- Dilated Convolutional Attention Network for Medical Code Assignment from Clinical Text [19.701824507057623]
This paper proposes a Dilated Convolutional Attention Network (DCAN), integrating dilated convolutions, residual connections, and label attention, for medical code assignment.
It adopts dilated convolutions to capture complex medical patterns with a receptive field that grows exponentially as the dilation size increases across layers.
arXiv Detail & Related papers (2020-09-30T11:55:58Z)
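The receptive-field claim in the DCAN entry is easy to verify by hand: stacking kernel-size-k dilated convolutions with dilations 1, 2, 4, ... adds d*(k-1) positions per layer, so doubling the dilation at each layer buys exponential coverage with depth. A small sketch with illustrative channel sizes:

```python
import torch
import torch.nn as nn

# Stacked dilated 1-D convolutions with dilations 1, 2, 4, 8: each layer
# adds d*(k-1) positions to the receptive field, while matching padding
# keeps the sequence length unchanged.
k, dilations = 3, [1, 2, 4, 8]
layers = [nn.Conv1d(64, 64, k, dilation=d, padding=d * (k - 1) // 2)
          for d in dilations]

rf = 1
for d in dilations:
    rf += d * (k - 1)
print("receptive field:", rf)  # 1 + 2 + 4 + 8 + 16 = 31 tokens

x = torch.randn(1, 64, 128)    # (batch, channels, seq_len)
for layer in layers:
    x = torch.relu(layer(x))
print(x.shape)                 # torch.Size([1, 64, 128])
```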
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
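BiteNet's core mechanism is self-attention over the ordered visits of a patient's healthcare journey. The sketch below encodes a toy sequence of visit embeddings with a stock Transformer encoder; BiteNet's specific bidirectional temporal masking is not reproduced, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

d_model, n_visits = 64, 10

# A stock Transformer encoder over a sequence of visit embeddings
# (illustrative sizes; BiteNet's bidirectional temporal masking is omitted).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)

visits = torch.randn(2, n_visits, d_model)  # (batch, visits, features)
journey = encoder(visits)                   # contextualized visit states
patient_vec = journey.mean(dim=1)           # one vector per patient journey
print(patient_vec.shape)                    # torch.Size([2, 64])
```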