Application of Deep Learning in Generating Structured Radiology Reports:
A Transformer-Based Technique
- URL: http://arxiv.org/abs/2209.12177v1
- Date: Sun, 25 Sep 2022 08:03:15 GMT
- Title: Application of Deep Learning in Generating Structured Radiology Reports:
A Transformer-Based Technique
- Authors: Seyed Ali Reza Moezzi, Abdolrahman Ghaedi, Mojdeh Rahmanian, Seyedeh
Zahra Mousavi, Ashkan Sami
- Abstract summary: Natural language processing techniques can facilitate automatic information extraction and transformation of free-text formats to structured data.
Deep learning (DL)-based models have been adapted for NLP experiments with promising results.
In this study, we propose a transformer-based fine-grained named entity recognition architecture for clinical information extraction.
- Score: 0.4549831511476247
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Since the radiology reports needed for clinical practice and
research are written and stored as free-text narratives, extracting relevant
information for further analysis is difficult. In these circumstances, natural
language processing (NLP) techniques can facilitate automatic information
extraction and the transformation of free-text formats into structured data.
In recent years, deep learning (DL)-based models have been adapted for NLP
experiments with promising results. Despite the significant potential of DL
models based on artificial neural networks (ANN) and convolutional neural
networks (CNN), these models face limitations that hinder their implementation
in clinical practice. Transformers, a newer DL architecture, have been
increasingly applied to improve the process. Therefore, in this study, we
propose a transformer-based fine-grained named entity recognition (NER)
architecture for clinical information extraction. We collected 88
abdominopelvic sonography reports in free-text format and annotated them based
on our developed information schema. The text-to-text transfer transformer
(T5) model and SciFive, a pre-trained domain-specific adaptation of T5, were
fine-tuned to extract entities and relations and transform the input into a
structured format. Our transformer-based model outperformed previously applied
approaches such as ANN and CNN models, achieving ROUGE-1, ROUGE-2, ROUGE-L,
and BLEU scores of 0.816, 0.668, 0.528, and 0.743, respectively, while
providing an interpretable structured report.
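A minimal sketch of the text-to-text setup the abstract describes, assuming the HuggingFace transformers and datasets libraries. The base checkpoint, toy report, flat key-value target schema, and hyperparameters are illustrative assumptions, not the authors' actual configuration (the paper fine-tunes T5 and SciFive on 88 annotated sonography reports):

```python
# Sketch: fine-tune a T5-style model to map a free-text sonography finding
# to a flat structured report. Checkpoint, example pair, target schema, and
# hyperparameters are illustrative, not the paper's exact setup; the authors
# fine-tune T5 and SciFive (a biomedical T5 adaptation) on annotated reports.
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)
from datasets import Dataset

checkpoint = "t5-base"  # swap in a SciFive checkpoint for the domain variant
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Toy pair: free-text finding -> structured key-value target (assumed schema).
data = Dataset.from_dict({
    "text": ["Liver is normal in size and echotexture with no focal lesion."],
    "target": ["organ: liver | size: normal | echotexture: normal | lesion: none"],
})

def preprocess(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(text_target=batch["target"],
                              truncation=True, max_length=128)["input_ids"]
    return enc

data = data.map(preprocess, batched=True, remove_columns=["text", "target"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments("t5-structured-report",
                                  per_device_train_batch_size=2,
                                  num_train_epochs=3),
    train_dataset=data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# After training, generate a structured report for a new finding.
inputs = tokenizer("Gallbladder contains a 6 mm stone.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0],
                       skip_special_tokens=True))
```

ROUGE and BLEU scores like those reported above can then be computed on the decoded predictions, for example with the evaluate library's "rouge" and "bleu" metrics.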
Related papers
- A Sentiment Analysis of Medical Text Based on Deep Learning [1.8130068086063336]
This paper focuses on the medical domain, using bidirectional encoder representations from transformers (BERT) as the basic pre-trained model.
Experiments and analyses were conducted on the METS-CoV dataset to explore the training performance after integrating different deep learning networks.
CNN models outperform other networks when trained on smaller medical text datasets in combination with pre-trained models like BERT.
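A minimal sketch of the BERT-plus-CNN pairing this summary describes, assuming PyTorch and HuggingFace transformers; the kernel sizes, filter count, label set, and example sentence are illustrative assumptions:

```python
# Sketch: BERT token embeddings feeding a 1-D CNN classification head, the
# pairing reported above to work well on small medical text datasets.
# Kernel sizes, dimensions, and label count are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCnnClassifier(nn.Module):
    def __init__(self, num_labels=3, n_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes)
        self.classifier = nn.Linear(n_filters * len(kernel_sizes), num_labels)

    def forward(self, input_ids, attention_mask):
        # (batch, seq, hidden) -> (batch, hidden, seq) for Conv1d
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h = h.transpose(1, 2)
        # Max-pool each conv's feature map over the sequence dimension.
        pooled = [conv(h).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertCnnClassifier()
batch = tokenizer(["The patient reports mild nausea after the new dose."],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```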
arXiv Detail & Related papers (2024-04-16T12:20:49Z)
- In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
We study ICL through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
arXiv Detail & Related papers (2024-01-23T18:59:21Z)
- PathLDM: Text conditioned Latent Diffusion Model for Histopathology [62.970593674481414]
We introduce PathLDM, the first text-conditioned Latent Diffusion Model tailored for generating high-quality histopathology images.
Our approach fuses image and textual data to enhance the generation process.
We achieved a SoTA FID score of 7.64 for text-to-image generation on the TCGA-BRCA dataset, significantly outperforming the closest text-conditioned competitor with FID 30.1.
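PathLDM itself is not assumed to be available as a packaged pipeline here; the sketch below only shows the generic text-conditioned latent diffusion pattern the summary describes, using the diffusers library and a general-purpose checkpoint (both the checkpoint id and the prompt are illustrative assumptions):

```python
# Generic text-conditioned latent diffusion via the diffusers library.
# This is NOT PathLDM itself; the checkpoint id and the prompt (a condensed
# pathology-report caption) are assumptions made for illustration only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")

prompt = "H&E-stained breast tissue section, invasive ductal carcinoma"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated_patch.png")
```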
arXiv Detail & Related papers (2023-09-01T22:08:32Z)
- Application of Transformers based methods in Electronic Medical Records: A Systematic Literature Review [77.34726150561087]
This work presents a systematic literature review of state-of-the-art advances using transformer-based methods on electronic medical records (EMRs) in different NLP tasks.
arXiv Detail & Related papers (2023-04-05T22:19:42Z)
- Transformer-based approaches to Sentiment Detection [55.41644538483948]
We examined the performance of four different types of state-of-the-art transformer models for text classification.
The RoBERTa transformer model performs best on the test dataset with a score of 82.6% and is highly recommended for quality predictions.
arXiv Detail & Related papers (2023-03-13T17:12:03Z)
- Time to Embrace Natural Language Processing (NLP)-based Digital Pathology: Benchmarking NLP- and Convolutional Neural Network-based Deep Learning Pipelines [4.876281217951695]
NLP-based computer vision models, particularly vision transformers, have been shown to outperform CNN models in many imaging tasks.
We developed digital pathology pipelines to benchmark the five most recently proposed NLP models and four popular CNN models.
Our NLP models achieved state-of-the-art predictions for all three biomarkers using a relatively small training dataset.
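A minimal sketch of the vision-transformer side of such a pipeline, assuming HuggingFace transformers; the checkpoint, two-class biomarker label set, and random stand-in tile are illustrative assumptions:

```python
# Sketch: a vision transformer fine-tuned as a tile-level biomarker
# classifier, the kind of NLP-derived vision pipeline benchmarked above.
# Checkpoint, labels, and the random stand-in image are assumptions.
import numpy as np
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=2)  # e.g. biomarker +/-
processor = ViTImageProcessor.from_pretrained(
    "google/vit-base-patch16-224-in21k")

# Stand-in for a 224x224 RGB pathology tile.
tile = Image.fromarray(np.uint8(np.random.rand(224, 224, 3) * 255))

inputs = processor(images=tile, return_tensors="pt")
labels = torch.tensor([1])
out = model(**inputs, labels=labels)
out.loss.backward()  # one fine-tuning step's gradient
```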
arXiv Detail & Related papers (2023-02-21T02:42:03Z)
- Clinical Relation Extraction Using Transformer-based Models [28.237302721228435]
We developed a series of clinical RE models based on three transformer architectures, namely BERT, RoBERTa, and XLNet.
We demonstrated that the RoBERTa-clinical RE model achieved the best performance on the 2018 MADE1.0 dataset with an F1-score of 0.8958.
Our results indicated that the binary classification strategy consistently outperformed the multi-class classification strategy for clinical relation extraction.
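A minimal sketch of the binary strategy this summary reports as stronger: score each (entity pair, relation type) candidate as yes/no rather than with one softmax over all relation types. The model choice, entity-marker scheme, and relation inventory are illustrative assumptions:

```python
# Sketch of the binary relation-extraction strategy: one yes/no decision
# per candidate relation type instead of a single multi-class head.
# Model, marker scheme, and relation inventory are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)  # yes/no head per candidate

RELATIONS = ["adverse_event_of", "dosage_of", "reason_for"]  # assumed set

sentence = "Patient developed <e1> rash </e1> after starting <e2> lamotrigine </e2>."

def score_candidates(sentence):
    # One binary pass per relation type; in practice one fine-tuned model
    # (or head) per relation, reused here only for brevity.
    scores = {}
    for rel in RELATIONS:
        enc = tokenizer(f"{rel}: {sentence}", return_tensors="pt",
                        truncation=True)
        with torch.no_grad():
            logits = model(**enc).logits
        scores[rel] = logits.softmax(-1)[0, 1].item()  # P(relation holds)
    return scores

print(score_candidates(sentence))
```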
arXiv Detail & Related papers (2021-07-19T15:15:51Z)
- Reprogramming Language Models for Molecular Representation Learning [65.00999660425731]
We propose Representation Reprogramming via Dictionary Learning (R2DL) for adversarially reprogramming pretrained language models for molecular learning tasks.
The adversarial program learns a linear transformation between a dense source model input space (language data) and a sparse target model input space (e.g., chemical and biological molecule data) using a k-SVD solver.
R2DL achieves the baseline established by state-of-the-art toxicity prediction models trained on domain-specific data and outperforms the baseline in a limited training-data setting.
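A toy version of the linear-map idea: represent each target-domain token as a sparse combination of the frozen source (language) embeddings. Here scikit-learn's sparse_encode with OMP stands in for the paper's k-SVD solver, and all sizes and data are random illustrative assumptions:

```python
# Toy version of the R2DL idea: learn a sparse linear map from a target
# vocabulary (e.g., amino-acid tokens) into a frozen language model's
# input embedding space. sparse_encode (OMP) stands in for the paper's
# k-SVD solver; dimensions and random data are illustrative.
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
d = 768                              # language-model embedding width
V_src = rng.normal(size=(1000, d))   # frozen source embeddings: the dictionary
V_tgt = rng.normal(size=(25, d))     # desired embeddings for 25 target tokens

# Sparse codes theta: each target token ~ sparse combo of source embeddings.
theta = sparse_encode(V_tgt, V_src, algorithm="omp",
                      n_nonzero_coefs=8)  # shape (25, 1000), mostly zeros

# Reprogrammed embeddings fed to the frozen model in place of word vectors.
V_reprog = theta @ V_src
print(np.linalg.norm(V_tgt - V_reprog) / np.linalg.norm(V_tgt))
```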
arXiv Detail & Related papers (2020-12-07T05:50:27Z)
- Democratizing Artificial Intelligence in Healthcare: A Study of Model Development Across Two Institutions Incorporating Transfer Learning [8.043077408518826]
Transfer learning (TL) allows a fully trained model from one institution to be fine-tuned by another institution using a much smaller local dataset.
This report describes the challenges, methodology, and benefits of TL within the context of developing an AI model for a basic use-case.
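A minimal sketch of that recipe: take the sending institution's trained network, freeze the backbone, and retrain only the final layer on the small local dataset. The ResNet backbone, label count, and stand-in tensors are illustrative assumptions, not the report's actual use-case:

```python
# Sketch of the cross-institution transfer-learning recipe: start from a
# fully trained model, freeze its backbone, and fine-tune only the head
# on the receiving institution's small local dataset. The ResNet-18
# backbone and tensor shapes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for p in model.parameters():          # freeze the donated backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable local head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for the small local dataset.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
for _ in range(3):                    # a few local fine-tuning steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```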
arXiv Detail & Related papers (2020-09-25T21:12:50Z)
- Improved Code Summarization via a Graph Neural Network [96.03715569092523]
In general, source code summarization techniques take source code as input and output a natural language description.
We present an approach that uses a graph-based neural architecture that better matches the default structure of the AST to generate these summaries.
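A minimal sketch of the input side of such an approach: parsing a function into its AST and emitting the node labels and parent-child edges a graph neural network would consume, using Python's ast module. The featurization and the GNN encoder/decoder themselves are omitted:

```python
# Sketch of the graph-building step for AST-based code summarization:
# parse a function with Python's ast module and emit node labels plus
# parent-child edges, the structure a GNN encoder would consume.
import ast

source = """
def area(r):
    return 3.14159 * r * r
"""

tree = ast.parse(source)
nodes, edges = [], []

def build(node, parent=None):
    idx = len(nodes)
    nodes.append(type(node).__name__)    # node label, e.g. 'FunctionDef'
    if parent is not None:
        edges.append((parent, idx))      # parent -> child edge
    for child in ast.iter_child_nodes(node):
        build(child, idx)

build(tree)
print(nodes[:5])  # ['Module', 'FunctionDef', 'arguments', 'arg', 'Return']
print(edges[:4])  # [(0, 1), (1, 2), (2, 3), (1, 4)]
```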
arXiv Detail & Related papers (2020-04-06T17:36:42Z)
- The Utility of General Domain Transfer Learning for Medical Language Tasks [1.5459429010135775]
The purpose of this study is to analyze the efficacy of transfer learning techniques and transformer-based models as applied to medical natural language processing (NLP) tasks.
General text transfer learning may be a viable technique to generate state-of-the-art results within medical NLP tasks on radiological corpora.
arXiv Detail & Related papers (2020-02-16T20:20:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.