Knowledge Graph Construction and Its Application in Automatic Radiology
Report Generation from Radiologist's Dictation
- URL: http://arxiv.org/abs/2206.06308v2
- Date: Tue, 14 Jun 2022 03:14:46 GMT
- Title: Knowledge Graph Construction and Its Application in Automatic Radiology
Report Generation from Radiologist's Dictation
- Authors: Kaveri Kale, Pushpak Bhattacharyya, Aditya Shetty, Milind Gune, Kush
Shrivastava, Rustom Lawyer and Spriha Biswas
- Abstract summary: This paper focuses on applications of NLP techniques such as Information Extraction (IE) and domain-specific Knowledge Graphs (KGs) to automatically generate radiology reports from the radiologist's dictation.
We develop an information extraction pipeline that combines rule-based, pattern-based, and dictionary-based techniques with lexical-semantic features to extract entities and relations.
The generated pathological descriptions are evaluated using semantic similarity metrics and show 97% similarity with gold-standard pathological descriptions.
- Score: 22.894248859405767
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventionally, the radiologist prepares the diagnosis notes and shares them
with the transcriptionist. Then the transcriptionist prepares a preliminary
formatted report referring to the notes, and finally, the radiologist reviews
the report, corrects the errors, and signs off. This workflow causes
significant delays and errors in the report. In this work, we focus on
applying NLP techniques such as Information Extraction (IE) and domain-specific
Knowledge Graphs (KGs) to automatically generate radiology reports from the
radiologist's dictation. This paper focuses on KG construction for
each organ by extracting information from an existing large corpus of free-text
radiology reports. We develop an information extraction pipeline that combines
rule-based, pattern-based, and dictionary-based techniques with
lexical-semantic features to extract entities and relations. Missing
information in a short dictation can be retrieved from the KGs to generate
pathological descriptions and hence the radiology report. The generated
pathological descriptions, evaluated using semantic similarity metrics, show
97% similarity with the gold-standard pathological descriptions. Our analysis
also shows that our IE module performs better than the OpenIE tool in the
radiology domain. Furthermore, a manual qualitative analysis by radiologists
shows that 80-85% of the generated reports are correctly written, and the
rest are partially correct.
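To make the pipeline concrete, here is a minimal Python sketch of the two core steps the abstract describes, assuming a toy per-organ KG and a purely dictionary-based matcher; the paper's actual pipeline also uses rules, patterns, and lexical-semantic features, and every entry below is hypothetical.

```python
# Minimal sketch, not the authors' code: dictionary-based entity matching
# plus a per-organ KG lookup that fills in details missing from a short
# dictation. The toy liver KG below is invented for illustration.
import re

LIVER_KG = {
    "hepatomegaly": "The liver is enlarged in size.",
    "fatty infiltration": ("The liver shows a diffuse increase in "
                           "echogenicity, suggestive of fatty infiltration."),
}

PATTERNS = {term: re.compile(rf"\b{re.escape(term)}\b") for term in LIVER_KG}

def generate_description(dictation: str) -> str:
    """Expand a short dictation into a fuller pathological description."""
    dictation = dictation.lower()
    sentences = [LIVER_KG[term] for term, pattern in PATTERNS.items()
                 if pattern.search(dictation)]
    return " ".join(sentences) or "No significant abnormality is seen."

print(generate_description("mild hepatomegaly with fatty infiltration"))
```

For the reported 97% figure, any embedding-based sentence similarity (e.g., cosine similarity between sentence embeddings of the generated and gold-standard descriptions) could play the role of the semantic similarity metric; the abstract does not name a specific implementation, so treat that as an assumption.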
Related papers
- FG-CXR: A Radiologist-Aligned Gaze Dataset for Enhancing Interpretability in Chest X-Ray Report Generation [9.374812942790953]
We introduce the Fine-Grained CXR (FG-CXR) dataset, which provides fine-grained paired information between the captions generated by radiologists and the corresponding gaze attention heatmaps for each anatomy.
Our analysis reveals that simply applying black-box image captioning methods to generate reports cannot adequately explain which information in CXR is utilized.
We propose a novel explainable radiologist's attention generator network (Gen-XAI) that mimics the diagnosis process of radiologists, explicitly constraining its output to closely align with both radiologist's gaze attention and transcript.
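One way to picture the "constrain output to align with gaze" idea is an auxiliary alignment term added to the captioning loss; the KL form and the weight lam in this sketch are assumptions, not the published Gen-XAI objective.

```python
# Hedged sketch: caption cross-entropy plus a gaze-alignment penalty.
# The KL term and its weighting are assumptions, not Gen-XAI's exact loss.
import torch
import torch.nn.functional as F

def caption_with_gaze_alignment_loss(token_logits, target_tokens,
                                     attn_map, gaze_map, lam=0.5):
    # Standard next-token cross-entropy over the generated report.
    caption_loss = F.cross_entropy(token_logits.flatten(0, 1),
                                   target_tokens.flatten())
    # Treat both maps as spatial distributions and penalize divergence
    # between model attention and the radiologist's gaze heatmap.
    attn = attn_map.flatten(1).log_softmax(-1)
    gaze = gaze_map.flatten(1).softmax(-1)
    gaze_loss = F.kl_div(attn, gaze, reduction="batchmean")
    return caption_loss + lam * gaze_loss

loss = caption_with_gaze_alignment_loss(
    torch.randn(2, 12, 1000), torch.randint(0, 1000, (2, 12)),
    torch.rand(2, 14, 14), torch.rand(2, 14, 14))
print(loss.item())
```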
arXiv Detail & Related papers (2024-11-23T02:22:40Z)
- RaTEScore: A Metric for Radiology Report Generation [59.37561810438641]
This paper introduces a novel entity-aware metric, Radiological Report (Text) Evaluation (RaTEScore).
RaTEScore emphasizes crucial medical entities such as diagnostic outcomes and anatomical details, and is robust against complex medical synonyms and sensitive to negation expressions.
Our evaluations demonstrate that RaTEScore aligns more closely with human preference than existing metrics, validated both on established public benchmarks and our newly proposed RaTE-Eval benchmark.
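As a toy illustration of the entity-aware idea (not the published RaTEScore, which handles synonyms and entity types far more carefully), one can match entities between candidate and reference reports and require their negation status to agree; the vocabulary and negation cues below are invented.

```python
# Toy illustration only, not the published RaTEScore: match medical
# entities between candidate and reference, requiring agreeing negation.
NEGATION_CUES = ("no ", "without ", "absence of ")

def extract_entities(report, vocab):
    """Map each vocabulary entity found in the report to a negation flag."""
    report = report.lower()
    found = {}
    for ent in vocab:
        idx = report.find(ent)
        if idx >= 0:
            window = report[max(0, idx - 20):idx]  # look back for a cue
            found[ent] = any(cue in window for cue in NEGATION_CUES)
    return found

def entity_aware_score(candidate, reference, vocab):
    cand = extract_entities(candidate, vocab)
    ref = extract_entities(reference, vocab)
    if not ref:
        return 1.0 if not cand else 0.0
    hits = sum(1 for ent, negated in ref.items()
               if ent in cand and cand[ent] == negated)
    return hits / len(ref)

VOCAB = {"pleural effusion", "cardiomegaly"}
print(entity_aware_score("No pleural effusion. Cardiomegaly is present.",
                         "Cardiomegaly without pleural effusion.", VOCAB))
# -> 1.0: both entities match, including the negated "pleural effusion"
```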
arXiv Detail & Related papers (2024-06-24T17:49:28Z)
- Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation [10.46031380503486]
We introduce a novel method, Structural Entities extraction and patient indications Incorporation (SEI), for chest X-ray report generation.
We employ a structural entities extraction (SEE) approach to eliminate presentation-style vocabulary in reports.
We propose a cross-modal fusion network to integrate information from X-ray images, similar historical cases, and patient-specific indications.
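The SEE step can be pictured as a filter over presentation-style words; here is a minimal sketch with an invented stop list (the actual SEI method is learned and considerably richer):

```python
# Toy sketch of the SEE idea; the stop list below is an invented example.
PRESENTATION_STYLE = {"there", "is", "are", "seen", "noted", "evidence", "of"}

def strip_presentation_style(report: str) -> list:
    """Keep only content words, dropping presentation-style vocabulary."""
    return [word for word in report.lower().replace(".", "").split()
            if word not in PRESENTATION_STYLE]

print(strip_presentation_style("There is evidence of cardiomegaly."))
# -> ['cardiomegaly']
```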
arXiv Detail & Related papers (2024-05-23T01:29:47Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
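A minimal sketch of that conditioning pattern, with hypothetical shapes and module names rather than the paper's architecture, could look like this:

```python
# Hedged sketch: fuse CNN features from a CXR with a projected embedding of
# demographic text, then decode report tokens. Sizes/names are assumptions.
import torch
import torch.nn as nn

class DemographicConditionedReportGenerator(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256):
        super().__init__()
        self.cnn = nn.Sequential(  # stand-in visual encoder
            nn.Conv2d(1, d_model, kernel_size=16, stride=16), nn.Flatten(2))
        self.demo_proj = nn.Linear(384, d_model)  # demographic text embedding
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), 2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, cxr, demo_emb, report_tokens):
        visual = self.cnn(cxr).transpose(1, 2)        # (B, N_patches, d)
        demo = self.demo_proj(demo_emb).unsqueeze(1)  # (B, 1, d)
        memory = torch.cat([visual, demo], dim=1)     # condition on both
        return self.lm_head(self.decoder(self.tok_embed(report_tokens), memory))

model = DemographicConditionedReportGenerator()
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 384),
               torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```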
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, the Show-Attend-Tell and the GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, as well as the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays [65.88435151891369]
The Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
- Graph Enhanced Contrastive Learning for Radiology Findings Summarization [25.377658879658306]
A section of a radiology report summarizes the most prominent observation from the findings.
We propose a unified framework for exploiting both extra knowledge and the original findings.
Key words and their relations can be extracted in an appropriate way to facilitate impression generation.
arXiv Detail & Related papers (2022-04-01T04:39:44Z)
- Learning Semi-Structured Representations of Radiology Reports [10.134080761449093]
Given a corpus of radiology reports, researchers are often interested in identifying a subset of reports describing a particular medical finding.
Recent studies proposed mapping free-text statements in radiology reports to semi-structured strings of terms taken from a limited vocabulary.
This paper aims to present an approach for the automatic generation of semi-structured representations of radiology reports.
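The mapping idea can be sketched with a toy controlled vocabulary; the phrase-to-term table below is invented, whereas published approaches learn this mapping from annotated reports:

```python
# Toy sketch: map free-text findings to a semi-structured string of terms
# from a limited vocabulary. The table is an invented example.
TERM_MAP = {
    "enlarged heart": "cardiomegaly",
    "fluid in the pleural space": "pleural-effusion",
    "left": "laterality:left",
    "right": "laterality:right",
}

def to_semi_structured(sentence: str) -> str:
    sentence = sentence.lower()
    return ";".join(term for phrase, term in TERM_MAP.items()
                    if phrase in sentence)

print(to_semi_structured("There is fluid in the pleural space on the left."))
# -> "pleural-effusion;laterality:left"
```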
arXiv Detail & Related papers (2021-12-20T18:53:41Z)
- Extracting Radiological Findings With Normalized Anatomical Information Using a Span-Based BERT Relation Extraction Model [0.20999222360659603]
Medical imaging reports distill the findings and observations of radiologists.
Large-scale use of this text-encoded information requires converting the unstructured text to a structured, semantic representation.
We explore the extraction and normalization of anatomical information in radiology reports that is associated with radiological findings.
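In a span-based setup, relation extraction reduces to classifying pairs of entity spans over encoder token states; the sketch below assumes mean-pooled span representations and invented sizes, not the paper's exact model:

```python
# Hedged sketch of span-pair relation classification over BERT-style token
# states. Pooling, sizes, and the relation set are assumptions.
import torch
import torch.nn as nn

class SpanPairRelationClassifier(nn.Module):
    def __init__(self, hidden=768, n_relations=5):
        super().__init__()
        self.classifier = nn.Linear(2 * hidden, n_relations)

    def forward(self, token_states, span_a, span_b):
        # token_states: (B, T, hidden) from a pretrained encoder;
        # span_a / span_b: (start, end) token indices of two entity mentions.
        a = token_states[:, span_a[0]:span_a[1]].mean(dim=1)
        b = token_states[:, span_b[0]:span_b[1]].mean(dim=1)
        return self.classifier(torch.cat([a, b], dim=-1))

clf = SpanPairRelationClassifier()
logits = clf(torch.randn(1, 32, 768), (3, 6), (10, 12))
print(logits.shape)  # torch.Size([1, 5]): one score per relation type
```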
arXiv Detail & Related papers (2021-08-20T15:02:59Z)
- Exploring and Distilling Posterior and Prior Knowledge for Radiology Report Generation [55.00308939833555]
The PPKED includes three modules: Posterior Knowledge Explorer (PoKE), Prior Knowledge Explorer (PrKE) and Multi-domain Knowledge Distiller (MKD)
PoKE explores the posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias.
PrKE explores the prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias.
arXiv Detail & Related papers (2021-06-13T11:10:02Z)
- Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.