Tubular Shape Aware Data Generation for Semantic Segmentation in Medical
Imaging
- URL: http://arxiv.org/abs/2010.00907v2
- Date: Mon, 7 Dec 2020 15:11:33 GMT
- Title: Tubular Shape Aware Data Generation for Semantic Segmentation in Medical
Imaging
- Authors: Ilyas Sirazitdinov, Heinrich Schulz, Axel Saalbach, Steffen Renisch
and Dmitry V. Dylov
- Abstract summary: We present an approach for synthetic data generation of tube-shaped objects, with a generative adversarial network regularized with a prior-shape constraint.
Our method eliminates the need for paired image-mask data and requires only a weakly-labeled dataset.
We report the applicability of the approach for the task of segmenting tubes and catheters in X-ray images.
- Score: 2.6673784948574215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chest X-ray is one of the most widespread examinations of the human body. In
interventional radiology, its use is frequently associated with the need to
visualize various tube-like objects, such as puncture needles, guiding sheaths,
wires, and catheters. Detection and precise localization of these tube-like
objects in X-ray images is, therefore, of utmost value, catalyzing the
development of accurate target-specific segmentation algorithms. Similar to the
other medical imaging tasks, the manual pixel-wise annotation of the tubes is a
resource-consuming process. In this work, we aim to alleviate the lack of
annotated images by using artificial data. Specifically, we present an approach
for synthetic data generation of tube-shaped objects, with a generative
adversarial network regularized with a prior-shape constraint. Our method
eliminates the need for paired image-mask data and requires only a
weakly-labeled dataset (10-20 images) to reach the accuracy of fully-supervised
models. We report the applicability of the approach for the task of segmenting
tubes and catheters in X-ray images, although the results should also hold for
other imaging modalities.
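A minimal sketch of this kind of shape-prior-regularized GAN setup is given below, assuming a PyTorch-style implementation; the differentiable tube rasterizer, the TubeBlender and PatchDiscriminator modules, and the 0.1 weight on the curvature prior are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def rasterize_tube(points, width, size=256):
    """Rasterize control points (N, 2) in [0, 1] into a soft tube mask via
    the distance of each pixel to a densely sampled polyline."""
    t = torch.linspace(0, 1, 32, device=points.device).view(-1, 1, 1)
    segments = (1 - t) * points[:-1] + t * points[1:]       # (32, N-1, 2)
    curve = segments.reshape(-1, 2) * size                   # pixel coordinates
    ax = torch.arange(size, device=points.device, dtype=torch.float32)
    ys, xs = torch.meshgrid(ax, ax, indexing="ij")
    pixels = torch.stack([xs, ys], dim=-1).reshape(-1, 2)    # (H*W, 2)
    dist = torch.cdist(pixels, curve).min(dim=1).values      # distance to curve
    return torch.sigmoid(width - dist).view(1, 1, size, size)


def curvature_prior(points):
    """Shape prior: penalize sharp bends so generated tubes stay smooth."""
    first_diff = points[1:] - points[:-1]
    second_diff = first_diff[1:] - first_diff[:-1]
    return (second_diff ** 2).sum()


class TubeBlender(nn.Module):
    """Toy generator blending a synthetic tube mask into a real radiograph."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, xray, mask):
        # Modify the image only inside the tube region.
        return xray + self.net(torch.cat([xray, mask], dim=1)) * mask


class PatchDiscriminator(nn.Module):
    """Tiny patch discriminator judging whether a radiograph looks real."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# One generator step: an adversarial realism term plus the tube-shape prior.
# No paired image-mask annotation is needed because the mask is synthesized.
generator, discriminator = TubeBlender(), PatchDiscriminator()
xray = torch.rand(1, 1, 256, 256)                 # unlabeled real radiograph
control_points = torch.rand(8, 2, requires_grad=True)
mask = rasterize_tube(control_points, width=3.0)
fake = generator(xray, mask)
logits = discriminator(fake)
adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
loss = adv_loss + 0.1 * curvature_prior(control_points)
loss.backward()
```

In this sketch the curvature penalty plays the role of the prior-shape constraint: it discourages implausibly kinked tubes sampled from the control points, while the adversarial term pushes the blended tube toward realistic local X-ray appearance, so no paired image-mask annotations are ever consumed.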
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- X-Ray to CT Rigid Registration Using Scene Coordinate Regression [1.1687067206676627]
This paper proposes a fully automatic registration method that is robust to extreme viewpoints.
It is based on a fully convolutional neural network (CNN) that regresses the overlapping coordinates for a given X-ray image.
The proposed method achieved an average mean target registration error (mTRE) of 3.79 mm in the 50th percentile of the simulated test dataset and projected mTRE of 9.65 mm in the 50th percentile of real fluoroscopic images for pelvis registration.
arXiv Detail & Related papers (2023-11-25T17:48:46Z)
- Generation of Anonymous Chest Radiographs Using Latent Diffusion Models for Training Thoracic Abnormality Classification Systems [7.909848251752742]
Biometric identifiers in chest radiographs hinder the public sharing of such data for research purposes.
This work employs a latent diffusion model to synthesize an anonymous chest X-ray dataset of high-quality class-conditional images.
arXiv Detail & Related papers (2022-11-02T17:43:02Z)
- SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns and, at inference, identify anomalies (unseen/modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on the Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Separation of target anatomical structure and occlusions in chest radiographs [2.0478628221188497]
We propose a Fully Convolutional Network to suppress, for a specific task, undesired visual structure from radiographs.
The proposed algorithm creates reconstructed radiographs and ground-truth data from high-resolution CT scans.
arXiv Detail & Related papers (2020-02-03T14:01:06Z)
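For the last entry above, a rough illustration of supervising a small fully convolutional network with CT-derived reconstructed radiograph (DRR) pairs is sketched below; the SuppressionFCN module, the L1 objective, and the assumption of precomputed DRR pairs are illustrative, not the paper's architecture or pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SuppressionFCN(nn.Module):
    """Maps a radiograph to a version with the undesired structure suppressed."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


model = SuppressionFCN()
drr_full = torch.rand(4, 1, 128, 128)    # DRR with all structures projected
drr_clean = torch.rand(4, 1, 128, 128)   # DRR with the unwanted structure removed
loss = F.l1_loss(model(drr_full), drr_clean)  # pixel-wise supervision from CT
loss.backward()
```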