Parse and Recall: Towards Accurate Lung Nodule Malignancy Prediction
like Radiologists
- URL: http://arxiv.org/abs/2307.10824v1
- Date: Thu, 20 Jul 2023 12:38:17 GMT
- Title: Parse and Recall: Towards Accurate Lung Nodule Malignancy Prediction
like Radiologists
- Authors: Jianpeng Zhang, Xianghua Ye, Jianfeng Zhang, Yuxing Tang, Minfeng Xu,
Jianfei Guo, Xin Chen, Zaiyi Liu, Jingren Zhou, Le Lu, Ling Zhang
- Abstract summary: Lung cancer is a leading cause of death worldwide and early screening is critical for improving survival outcomes.
In clinical practice, the contextual structure of nodules and the accumulated experience of radiologists are the two core elements behind the accurate identification of benign and malignant nodules.
We propose a radiologist-inspired method to simulate the diagnostic process of radiologists, which is composed of context parsing and prototype recalling modules.
- Score: 39.907916342786564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lung cancer is a leading cause of death worldwide and early screening is
critical for improving survival outcomes. In clinical practice, the contextual
structure of nodules and the accumulated experience of radiologists are the two
core elements behind the accurate identification of benign and malignant
nodules. Contextual information offers a comprehensive view of a nodule, such as
its location, shape, and peripheral vessels, and experienced
radiologists can search for clues from previous cases as a reference to enrich
the basis of decision-making. In this paper, we propose a radiologist-inspired
method to simulate the diagnostic process of radiologists, which is composed of
context parsing and prototype recalling modules. The context parsing module
first segments the context structure of nodules and then aggregates contextual
information for a more comprehensive understanding of the nodule. The prototype
recalling module utilizes prototype-based learning to condense previously
learned cases into prototypes for comparative analysis; these prototypes are
updated online in a momentum-based manner during training. Building on the two
modules, our method
leverages both the intrinsic characteristics of the nodules and the external
knowledge accumulated from other nodules to achieve a sound diagnosis. To meet
the needs of both low-dose and noncontrast screening, we collect a large-scale
dataset of 12,852 and 4,029 nodules from low-dose and noncontrast CTs
respectively, each with pathology- or follow-up-confirmed labels. Experiments
on several datasets demonstrate that our method achieves advanced screening
performance in both low-dose and noncontrast scenarios.
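
The abstract describes the prototype recalling module only at a high level. As a rough illustration, the following PyTorch-style sketch shows one plausible way a class-wise prototype bank with momentum updates could work; the class and function names, the update rule, and the temperature are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

class PrototypeBank:
    """Hypothetical momentum-updated prototype bank (one prototype per class).

    Sketch only: the paper's actual prototype recalling module may differ in
    how prototypes are initialised, compared, and updated.
    """

    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.99):
        self.momentum = momentum
        # One L2-normalised prototype vector per class (e.g. benign / malignant).
        self.prototypes = F.normalize(torch.randn(num_classes, feat_dim), dim=1)

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        """Momentum update: p_c <- m * p_c + (1 - m) * mean(feats of class c)."""
        feats = F.normalize(feats, dim=1)
        for c in labels.unique():
            batch_mean = feats[labels == c].mean(dim=0)
            self.prototypes[c] = F.normalize(
                self.momentum * self.prototypes[c] + (1 - self.momentum) * batch_mean,
                dim=0,
            )

    def recall_logits(self, feats: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
        """Cosine similarity of each nodule feature to every class prototype."""
        feats = F.normalize(feats, dim=1)
        return feats @ self.prototypes.t() / temperature


# Usage inside a training step (a context-aggregating encoder is assumed):
# feats = encoder(nodule_volumes)        # (B, feat_dim) nodule features
# logits = bank.recall_logits(feats)     # prototype-based evidence
# loss = F.cross_entropy(logits, labels)
# bank.update(feats.detach(), labels)    # online momentum update
```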
Related papers
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report
Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
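
The summary does not specify how the modalities are combined. The sketch below is a hypothetical concatenate-and-project fusion of image features, structured vitals, and clinical-note embeddings; all dimensions and module names are assumptions.

```python
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Illustrative fusion of image features, structured vitals, and note embeddings.

    The paper's actual architecture is not reproduced here; the dimensions and the
    simple concatenate-and-project design are assumptions.
    """

    def __init__(self, img_dim=1024, vitals_dim=16, note_dim=768, joint_dim=512):
        super().__init__()
        self.vitals_proj = nn.Sequential(nn.Linear(vitals_dim, 64), nn.ReLU())
        self.fuse = nn.Linear(img_dim + 64 + note_dim, joint_dim)

    def forward(self, img_feat, vitals, note_emb):
        # img_feat: (B, img_dim) from a CXR encoder; vitals: (B, vitals_dim);
        # note_emb: (B, note_dim) from a clinical-note encoder.
        fused = torch.cat([img_feat, self.vitals_proj(vitals), note_emb], dim=-1)
        return self.fuse(fused)  # conditioning vector for a report decoder
```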
- Controllable Chest X-Ray Report Generation from Longitudinal
Representations [13.151444796296868]
One strategy to speed up reporting is to integrate automated reporting systems.
Previous approaches to automated radiology reporting generally do not provide the prior study as input.
We introduce two novel aspects: (1) longitudinal learning -- a method to align and leverage the current and prior scan information into a joint longitudinal representation, which can be provided to the multimodal report generation model; (2) sentence-anatomy dropout -- a training strategy for controllability in which the report generator is trained to predict only those sentences from the original report that correspond to the subset of anatomical regions given as input.
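
Sentence-anatomy dropout is described only in outline above. The following minimal Python sketch shows one way such a training-time filter could behave: only report sentences whose anatomical region falls in the sampled subset are kept as the target. The function name, sampling scheme, and keep probability are assumptions.

```python
import random

def sentence_anatomy_dropout(sentences, sentence_regions, keep_prob=0.7, seed=None):
    """Illustrative sentence-anatomy dropout (assumed behaviour, not the authors' code).

    sentences:        list of report sentences
    sentence_regions: anatomical region label for each sentence (same length)
    Returns the sampled region subset (given to the model as input) and the
    target report restricted to sentences about those regions.
    """
    rng = random.Random(seed)
    regions = sorted(set(sentence_regions))
    kept = [r for r in regions if rng.random() < keep_prob] or regions  # keep at least one region
    target = [s for s, r in zip(sentences, sentence_regions) if r in kept]
    return kept, target


# Example: only sentences about the sampled regions remain as the training target.
sents = ["The heart size is normal.", "No pleural effusion.", "Lungs are clear."]
regs = ["heart", "pleura", "lungs"]
print(sentence_anatomy_dropout(sents, regs, keep_prob=0.5, seed=0))
```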
- Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework across six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking it down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
- Improving Radiology Summarization with Radiograph and Anatomy Prompts [60.30659124918211]
We propose a novel anatomy-enhanced multimodal model to promote impression generation.
In detail, we first construct a set of rules to extract anatomies and put these prompts into each sentence to highlight anatomy characteristics.
We utilize a contrastive learning module to align these two representations at the overall level and a co-attention mechanism to fuse them at the sentence level.
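
The overall-level alignment step could, for example, be an InfoNCE-style contrastive loss between paired radiograph and text representations; the sketch below illustrates that generic idea and does not reproduce the paper's exact loss or its co-attention fusion.

```python
import torch
import torch.nn.functional as F

def overall_contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning paired image and text representations.

    A generic sketch of "align at the overall level"; the temperature and the
    exact formulation used by the paper are assumptions.
    """
    img = F.normalize(img_feats, dim=1)   # (B, D)
    txt = F.normalize(txt_feats, dim=1)   # (B, D)
    logits = img @ txt.t() / temperature  # (B, B) pairwise similarities
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```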
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, as well as on the general-purpose MS-COCO.
- Differentiable Multi-Agent Actor-Critic for Multi-Step Radiology Report
Summarization [5.234281904315526]
The IMPRESSIONS section of a radiology report is a summary of the radiologist's reasoning and conclusions.
Prior research on radiology report summarization has focused on single-step end-to-end models.
We propose a two-step approach: extractive summarization followed by abstractive summarization.
- Faithful learning with sure data for lung nodule diagnosis [34.55176532924471]
We propose a collaborative learning framework to facilitate sure nodule classification.
A loss function is designed to learn reliable features by introducing interpretability constraints regulated with nodule segmentation maps.
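
One plausible reading of the interpretability constraint is a penalty that keeps the model's attention inside the nodule segmentation map; the sketch below illustrates that idea only and is not the authors' actual loss.

```python
import torch
import torch.nn.functional as F

def faithful_loss(logits, labels, attention_map, nodule_mask, alpha=0.5):
    """Classification loss plus a segmentation-guided interpretability term.

    A hypothetical instantiation: attention mass falling outside the nodule
    segmentation mask is penalised, so predictions rely on the nodule region.
    The paper's exact constraint may differ.
    """
    cls_loss = F.cross_entropy(logits, labels)
    # Normalise the (B, H, W) attention map to a spatial distribution per case.
    attn = attention_map / (attention_map.sum(dim=(1, 2), keepdim=True) + 1e-6)
    # Penalise attention that falls outside the binary nodule mask.
    outside = (attn * (1.0 - nodule_mask)).sum(dim=(1, 2)).mean()
    return cls_loss + alpha * outside
```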
- Learning Semi-Structured Representations of Radiology Reports [10.134080761449093]
Given a corpus of radiology reports, researchers are often interested in identifying a subset of reports describing a particular medical finding.
Recent studies proposed mapping free-text statements in radiology reports to semi-structured strings of terms taken from a limited vocabulary.
This paper aims to present an approach for the automatic generation of semi-structured representations of radiology reports.
- Exploring and Distilling Posterior and Prior Knowledge for Radiology
Report Generation [55.00308939833555]
The PPKED includes three modules: Posterior Knowledge Explorer (PoKE), Prior Knowledge Explorer (PrKE), and Multi-domain Knowledge Distiller (MKD).
PoKE explores the posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias.
PrKE explores the prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias.
- Nodule2vec: a 3D Deep Learning System for Pulmonary Nodule Retrieval
Using Semantic Representation [1.7403133838762446]
We present a deep learning system that transforms a 3D image of a pulmonary nodule from a CT scan into a low-dimensional embedding vector.
We demonstrate that such a vector representation preserves semantic information about the nodule and offers a viable approach for content-based image retrieval (CBIR).
A comparison between doctors and algorithm scores suggests that the benefit provided by the system to the radiologist end-user is comparable to obtaining a second radiologist's opinion.
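
The retrieval step of such a system reduces to nearest-neighbour search over embedding vectors. The sketch below shows a generic cosine-similarity CBIR query; Nodule2vec's actual embedding dimensionality, distance metric, and index structure are not specified here.

```python
import torch
import torch.nn.functional as F

def retrieve_similar_nodules(query_emb, gallery_embs, top_k=5):
    """Content-based retrieval over nodule embeddings by cosine similarity.

    A generic CBIR sketch under assumed shapes: query_emb is (D,), gallery_embs
    is (N, D) with one embedding per previously seen nodule.
    """
    q = F.normalize(query_emb.unsqueeze(0), dim=1)   # (1, D)
    g = F.normalize(gallery_embs, dim=1)             # (N, D)
    scores = (q @ g.t()).squeeze(0)                  # cosine similarity to each case
    top = torch.topk(scores, k=min(top_k, g.size(0)))
    return top.indices, top.values                   # indices of the most similar nodules
```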
This list is automatically generated from the titles and abstracts of the papers on this site.