A Vessel Bifurcation Landmark Pair Dataset for Abdominal CT Deformable Image Registration (DIR) Validation
- URL: http://arxiv.org/abs/2501.09162v1
- Date: Wed, 15 Jan 2025 21:28:47 GMT
- Title: A Vessel Bifurcation Landmark Pair Dataset for Abdominal CT Deformable Image Registration (DIR) Validation
- Authors: Edward R Criscuolo, Yao Hao, Zhendong Zhang, Trevor McKeown, Deshan Yang
- Abstract summary: Deformable image registration (DIR) is an enabling technology in many diagnostic and therapeutic tasks.
This dataset is a first-of-its-kind for abdominal DIR validation.
The number, accuracy, and distribution of landmark pairs will allow robust validation of DIR algorithms with precision beyond what is currently available.
- Score: 1.906989881803579
- Abstract: Deformable image registration (DIR) is an enabling technology in many diagnostic and therapeutic tasks. Despite this, DIR algorithms have limited clinical use, largely due to a lack of benchmark datasets for quality assurance during development. To support future algorithm development, here we introduce our first-of-its-kind abdominal CT DIR benchmark dataset, comprising large numbers of highly accurate landmark pairs on matching blood vessel bifurcations. Abdominal CT image pairs of 30 patients were acquired from several public repositories as well as the authors' institution with IRB approval. The two CTs of each pair were originally acquired for the same patient on different days. An image processing workflow was developed and applied to each image pair: 1) Abdominal organs were segmented with a deep learning model, and image intensity within organ masks was overwritten. 2) Matching image patches were manually identified between the two CTs of each image pair. 3) Vessel bifurcation landmarks were labeled on one image of each image patch pair. 4) Image patches were deformably registered, and landmarks were projected onto the second image. 5) Landmark pair locations were refined manually or with an automated process. This workflow resulted in 1895 total landmark pairs, or 63 per case on average. Estimates of the landmark pair accuracy using digital phantoms were 0.7+/-1.2 mm. The data is published on Zenodo at https://doi.org/10.5281/zenodo.14362785. Instructions for use can be found at https://github.com/deshanyang/Abdominal-DIR-QA. This dataset is a first-of-its-kind for abdominal DIR validation. The number, accuracy, and distribution of landmark pairs will allow for robust validation of DIR algorithms with precision beyond what is currently available.
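A minimal sketch of how such a landmark-pair dataset could drive DIR validation is shown below: warp the fixed-image landmarks with the deformation field produced by the algorithm under test and report the target registration error (TRE) against the matched moving-image landmarks. The CSV file name, column layout, and the `warp_points` callable are illustrative assumptions, not the published dataset's actual format or API (see the GitHub repository above for the real instructions).

```python
# Minimal TRE sketch for landmark-pair DIR validation.
# Assumptions (not the published dataset's actual layout): a CSV with one
# header row and columns x1,y1,z1,x2,y2,z2 in mm, where the first triplet is a
# landmark on the fixed image and the second its match on the moving image.
import numpy as np

def load_landmark_pairs(csv_path):
    """Return (fixed_pts, moving_pts) as two (N, 3) arrays in mm."""
    data = np.loadtxt(csv_path, delimiter=",", skiprows=1, ndmin=2)
    return data[:, :3], data[:, 3:6]

def target_registration_error(fixed_pts, moving_pts, warp_points):
    """Per-landmark TRE in mm.

    `warp_points` is any callable that maps (N, 3) fixed-image coordinates
    into the moving-image frame using the DIR algorithm's deformation field.
    """
    mapped = warp_points(fixed_pts)
    return np.linalg.norm(mapped - moving_pts, axis=1)

if __name__ == "__main__":
    # "case01_landmarks.csv" is a hypothetical file name used for illustration.
    fixed_pts, moving_pts = load_landmark_pairs("case01_landmarks.csv")
    # Identity warp stands in for a real DIR result; it reports the
    # pre-registration landmark error as a baseline.
    tre = target_registration_error(fixed_pts, moving_pts, lambda pts: pts)
    print(f"TRE: {tre.mean():.2f} +/- {tre.std():.2f} mm over {len(tre)} landmark pairs")
```

Replacing the identity warp with the tested algorithm's displacement-field sampler yields the per-case mean +/- SD accuracy summary described in the abstract.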
Related papers
- VerteNet -- A Multi-Context Hybrid CNN Transformer for Accurate Vertebral Landmark Localization in Lateral Spine DXA Images [12.240318467857906]
VerteNet is a hybrid CNN-Transformer model featuring a novel dual-resolution attention mechanism in self and cross-attention domains.
We train VerteNet on 620 DXA LSIs from various machines and achieve superior results compared to existing methods.
arXiv Detail & Related papers (2025-02-04T08:27:51Z)
- Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation [8.094450260464354]
This retrospective study involved translating T1w and T2w MR image series into CT images in a total of n=263 pairs of CT/MR series.
Registration with two landmarks per vertebra enabled paired image-to-image translation from MR to CT and outperformed all unpaired approaches.
arXiv Detail & Related papers (2023-08-18T07:07:15Z)
- Recurrence With Correlation Network for Medical Image Registration [66.63200823918429]
We present Recurrence with Correlation Network (RWCNet), a medical image registration network with multi-scale features and a cost volume layer.
We demonstrate that these architectural features improve medical image registration accuracy in two image registration datasets.
arXiv Detail & Related papers (2023-02-05T02:41:46Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images [60.89777029184023]
We propose a semi-supervised CD model in which we formulate an unsupervised CD loss in addition to the supervised Cross-Entropy (CE) loss.
Experiments conducted on two publicly available CD datasets show that the proposed semi-supervised CD method approaches the performance of supervised CD.
arXiv Detail & Related papers (2022-04-18T17:59:01Z)
- Adaptation to CT Reconstruction Kernels by Enforcing Cross-domain Feature Maps Consistency [0.06117371161379209]
We show a decrease in COVID-19 segmentation quality when the model is trained on smooth reconstruction kernels and tested on sharp ones.
We propose an unsupervised adaptation method, F-Consistency, that outperforms previous approaches.
arXiv Detail & Related papers (2022-03-28T10:00:03Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We propose a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher-resolution images and by improving the quality of the reconstructions, we obtain results comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- A Weakly Supervised Consistency-based Learning Method for COVID-19 Segmentation in CT Images [11.778195406694206]
Coronavirus Disease 2019 (COVID-19) has spread aggressively across the world, causing an existential health crisis.
A system that automatically detects COVID-19 in computed tomography (CT) images can assist in quantifying the severity of the illness.
We address these labelling challenges by only requiring point annotations, a single pixel for each infected region on a CT image.
arXiv Detail & Related papers (2020-07-04T20:41:17Z)
- Groupwise Multimodal Image Registration using Joint Total Variation [0.0]
We introduce a cost function based on joint total variation for such multimodal image registration.
We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans.
arXiv Detail & Related papers (2020-05-06T16:11:32Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)