Computer-Aided Assessment of Catheters and Tubes on Radiographs: How
Good is Artificial Intelligence for Assessment?
- URL: http://arxiv.org/abs/2002.03413v1
- Date: Sun, 9 Feb 2020 18:12:40 GMT
- Authors: Xin Yi, Scott J. Adams, Robert D. E. Henderson, Paul Babyn
- Abstract summary: Catheters are the second most common abnormal finding on radiographs.
The position of catheters must be assessed on all radiographs, as serious complications can arise if catheters are malpositioned.
Computer-aided approaches hold the potential to assist in prioritizing radiographs with potentially malpositioned catheters for interpretation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Catheters are the second most common abnormal finding on radiographs. The
position of catheters must be assessed on all radiographs, as serious
complications can arise if catheters are malpositioned. However, due to the
large number of radiographs performed each day, there can be substantial delays
between the time a radiograph is performed and when it is interpreted by a
radiologist. Computer-aided approaches hold the potential to assist in
prioritizing radiographs with potentially malpositioned catheters for
interpretation and automatically insert text indicating the placement of
catheters in radiology reports, thereby improving radiologists' efficiency.
After 50 years of research in computer-aided diagnosis, there is still a
paucity of studies in this area. With the development of deep learning
approaches, the problem of catheter assessment has become far more tractable. Therefore,
we have performed a review of current algorithms and identified key challenges
in building a reliable computer-aided diagnosis system for assessment of
catheters on radiographs. This review may serve to further the development of
machine learning approaches for this important use case.
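The abstract describes using model output to prioritize radiographs with potentially malpositioned catheters for interpretation. A minimal sketch of that triage step is below; the `prioritize_worklist` function, the `toy_scores` stand-in for a trained classifier's probabilities, and the study names are all hypothetical illustrations, not from the paper.

```python
# Hypothetical sketch: reorder a radiograph worklist by a model's predicted
# probability of catheter malposition, so the highest-risk studies are read first.

def prioritize_worklist(studies, score_fn):
    """Sort studies in descending order of predicted malposition probability."""
    return sorted(studies, key=score_fn, reverse=True)

# Toy stand-in for a trained classifier's output probabilities (assumed values).
toy_scores = {"study_A": 0.12, "study_B": 0.91, "study_C": 0.47}

ordered = prioritize_worklist(list(toy_scores), toy_scores.get)
print(ordered)  # study_B surfaces first: highest predicted malposition probability
```

In a deployed system, the score function would be a trained model's output and the sort would feed the radiologist's reading queue; the sketch only shows the ordering step itself.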
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis [61.089776864520594]
We propose eye-tracking as an alternative to text reports for medical images.
By tracking the gaze of radiologists as they read and diagnose medical images, we can understand their visual attention and clinical reasoning.
We introduce the Medical contrastive Gaze Image Pre-training (McGIP) as a plug-and-play module for contrastive learning frameworks.
arXiv Detail & Related papers (2023-12-11T02:27:45Z)
- Generation of Radiology Findings in Chest X-Ray by Leveraging Collaborative Knowledge [6.792487817626456]
The cognitive task of interpreting medical images remains the most critical and often time-consuming step in the radiology workflow.
This work focuses on reducing the workload of radiologists who spend most of their time either writing or narrating the Findings.
Unlike past research, which addresses radiology report generation as a single-step image captioning task, we have further taken into consideration the complexity of interpreting CXR images.
arXiv Detail & Related papers (2023-06-18T00:51:28Z)
- Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework across six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking it down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z)
- Using Multi-modal Data for Improving Generalizability and Explainability of Disease Classification in Radiology [0.0]
Traditional datasets for radiological diagnosis typically provide only the radiology image alongside the radiology report.
This paper utilizes the recently published Eye-Gaze dataset to perform an exhaustive study of the impact of multi-modal inputs on the performance and explainability of deep learning (DL) classification.
We find that the best classification performance of X-ray images is achieved with a combination of radiology report free-text and radiology image, with the eye-gaze data providing no performance boost.
arXiv Detail & Related papers (2022-07-29T16:49:05Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
- A Standardized Radiograph-Agnostic Framework and Platform For Evaluating AI Radiological Systems [0.0]
We propose a radiograph-agnostic platform and framework that would allow any Artificial Intelligence radiological solution to be assessed on its ability to generalise across diverse geographical locations, genders, and age groups.
arXiv Detail & Related papers (2020-08-03T02:09:09Z)
- Evaluation of Contemporary Convolutional Neural Network Architectures for Detecting COVID-19 from Chest Radiographs [0.0]
We train and evaluate three model architectures, proposed for chest radiograph analysis, under varying conditions.
We find issues that discount the impressive model performances proposed by contemporary studies on this subject.
arXiv Detail & Related papers (2020-06-30T15:22:39Z)
- Automated Radiological Report Generation For Chest X-Rays With Weakly-Supervised End-to-End Deep Learning [17.315387269810426]
We built a database containing more than 12,000 CXR scans and radiological reports.
We developed a model based on deep convolutional neural network and recurrent network with attention mechanism.
The model provides automated recognition of given scans and generation of reports.
arXiv Detail & Related papers (2020-06-18T08:12:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.