2DeteCT -- A large 2D expandable, trainable, experimental Computed
Tomography dataset for machine learning
- URL: http://arxiv.org/abs/2306.05907v1
- Date: Fri, 9 Jun 2023 14:02:53 GMT
- Title: 2DeteCT -- A large 2D expandable, trainable, experimental Computed
Tomography dataset for machine learning
- Authors: Maximilian B. Kiss, Sophia B. Coban, K. Joost Batenburg, Tristan van
Leeuwen, Felix Lucka
- Abstract summary: We provide a versatile, open 2D fan-beam CT dataset suitable for developing machine learning techniques.
A diverse mix of samples with high natural variability in shape and density was scanned slice-by-slice.
We provide raw projection data, reference reconstructions and segmentations based on an open-source data processing pipeline.
- Score: 1.0266286487433585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research in computational imaging largely focuses on developing
machine learning (ML) techniques for image reconstruction, which requires
large-scale training datasets consisting of measurement data and ground-truth
images. However, suitable experimental datasets for X-ray Computed Tomography
(CT) are scarce, and methods are often developed and evaluated only on
simulated data. We fill this gap by providing the community with a versatile,
open 2D fan-beam CT dataset suitable for developing ML techniques for a range
of image reconstruction tasks. To acquire it, we designed a sophisticated,
semi-automatic scan procedure that utilizes a highly-flexible laboratory X-ray
CT setup. A diverse mix of samples with high natural variability in shape and
density was scanned slice-by-slice (5000 slices in total) with high angular and
spatial resolution and three different beam characteristics: A high-fidelity, a
low-dose and a beam-hardening-inflicted mode. In addition, 750
out-of-distribution slices were scanned with sample and beam variations to
accommodate robustness and segmentation tasks. We provide raw projection data,
reference reconstructions and segmentations based on an open-source data
processing pipeline.
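As a rough illustration of how raw 2D fan-beam projection data of this kind can be turned into an image, the sketch below sets up a fan-beam geometry and runs filtered back-projection with the ASTRA toolbox. It is a minimal example under assumed values: the file name, detector count, pixel size, and source/detector distances are placeholders rather than the dataset's calibrated geometry, which is documented in the dataset's own open-source processing pipeline.

```python
# Minimal sketch: FBP reconstruction of one 2D fan-beam sinogram with the
# ASTRA toolbox. All geometry numbers are placeholders, NOT the calibrated
# 2DeteCT scan geometry.
import numpy as np
import astra

sinogram = np.load("slice_sinogram.npy")   # hypothetical file, shape (n_angles, n_detectors)
n_angles, n_det = sinogram.shape

angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
det_width = 0.15      # detector pixel width in mm (placeholder)
src_origin = 430.0    # source-to-rotation-centre distance in mm (placeholder)
origin_det = 530.0    # rotation-centre-to-detector distance in mm (placeholder)

proj_geom = astra.create_proj_geom("fanflat", det_width, n_det, angles,
                                   src_origin, origin_det)
vol_geom = astra.create_vol_geom(1024, 1024)

sino_id = astra.data2d.create("-sino", proj_geom, sinogram)
rec_id = astra.data2d.create("-vol", vol_geom)

# ASTRA's CPU "FBP" algorithm supports parallel beam only, so fan-beam FBP
# here assumes a CUDA-capable GPU.
cfg = astra.astra_dict("FBP_CUDA")
cfg["ProjectionDataId"] = sino_id
cfg["ReconstructionDataId"] = rec_id

alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id)
reconstruction = astra.data2d.get(rec_id)   # 1024 x 1024 image

astra.algorithm.delete(alg_id)
astra.data2d.delete(sino_id)
astra.data2d.delete(rec_id)
```

An iterative solver (e.g. "SIRT_CUDA") can be swapped in by changing the algorithm string; the low-dose and beam-hardening acquisition modes would be reconstructed with the same geometry.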
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
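As background on what a coordinate-based representation field is, the sketch below shows the generic idea (not CoCPF's specific construction): a small MLP maps a 2D coordinate to an intensity value and is fitted to whatever sparse measurements are available, after which it can be queried at arbitrary dense coordinates. Shapes and training settings are illustrative assumptions.

```python
# Minimal sketch of a generic coordinate-based implicit neural representation
# (illustration only, not the CoCPF method): an MLP maps (x, y) in [-1, 1]^2
# to an intensity and is fitted to sparse samples by plain regression.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        layers, in_dim = [], 2
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, xy):            # xy: (N, 2)
        return self.net(xy)

# Placeholder sparse measurements: sample coordinates and observed values.
coords = torch.rand(4096, 2) * 2 - 1
values = torch.rand(4096, 1)

model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    loss = torch.mean((model(coords) - values) ** 2)
    loss.backward()
    opt.step()

# Querying the fitted field on a dense grid exposes the "holes" problem:
# regions far from any training sample are essentially unconstrained.
```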
arXiv Detail & Related papers (2024-06-21T08:38:30Z)
- End-to-End Model-based Deep Learning for Dual-Energy Computed Tomography Material Decomposition [53.14236375171593]
We propose a deep learning procedure called End-to-End Material Decomposition (E2E-DEcomp) for quantitative material decomposition.
We show the effectiveness of the proposed direct E2E-DEcomp method on the AAPM spectral CT dataset.
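For context, "material decomposition" classically amounts to solving a small linear system: under a monochromatic approximation, the log-attenuation measured at two effective energies relates to the path lengths of two basis materials through a 2x2 matrix of attenuation coefficients. The snippet below illustrates that baseline with placeholder coefficients; E2E-DEcomp replaces this hand-crafted step with an end-to-end trained, model-based network.

```python
# Minimal sketch of classical two-material dual-energy decomposition under a
# monochromatic approximation (a baseline for context, not the E2E-DEcomp
# method). The attenuation coefficients are illustrative placeholders.
import numpy as np

# Rows: low-kV and high-kV channels; columns: basis materials (e.g. tissue, bone).
M = np.array([[0.25, 0.45],
              [0.20, 0.30]])

# Measured log-attenuation line integrals for one ray at the two energies.
p = np.array([1.10, 0.80])

# Solve p = M @ a for the equivalent material path lengths a.
a = np.linalg.solve(M, p)
print("tissue-equivalent path length:", a[0])
print("bone-equivalent path length:  ", a[1])
```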
arXiv Detail & Related papers (2024-06-01T16:20:59Z)
- XProspeCT: CT Volume Generation from Paired X-Rays [0.0]
We build on previous research to convert X-ray images into simulated CT volumes.
Model variations include UNet architectures, custom connections, activation functions, loss functions, and a novel back projection approach.
arXiv Detail & Related papers (2024-02-11T21:57:49Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- Geometric Constraints Enable Self-Supervised Sinogram Inpainting in Sparse-View Tomography [7.416898042520079]
Sparse-angle tomographic scans reduce radiation and accelerate data acquisition, but suffer from image artifacts and noise.
Existing image processing algorithms can restore CT reconstruction quality but often require large training datasets or cannot be used for truncated objects.
This work presents a self-supervised projection inpainting method that allows optimizing missing projective views via gradient-based optimization.
arXiv Detail & Related papers (2023-02-13T15:15:18Z)
- Simulation-Driven Training of Vision Transformers Enabling Metal Segmentation in X-Ray Images [6.416928579907334]
This study proposes to generate simulated X-ray images based on CT data sets combined with computer aided design (CAD) implants.
The metal segmentation in CBCT projections serves as a prerequisite for metal artifact avoidance and reduction algorithms.
Our study indicates that the CAD model-based data generation has high flexibility and could be a way to overcome the problem of shortage in clinical data sampling and labelling.
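The core operation behind such CT-based X-ray simulation is a digitally reconstructed radiograph (DRR): attenuation values are integrated along rays through the CT volume (with any CAD implant voxelised into it) and converted to intensities via the Beer-Lambert law. The sketch below shows the simplest parallel-ray version under assumed units; the study's actual CBCT-geometry simulation is more involved.

```python
# Minimal sketch of a digitally reconstructed radiograph (DRR): integrate a CT
# attenuation volume along parallel rays and apply the Beer-Lambert law.
# Values and units are assumptions for illustration, not the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)
mu = rng.random((128, 128, 128)) * 0.02     # placeholder attenuation volume (1/mm)
voxel_size_mm = 0.5

# A CAD implant could be voxelised and pasted in here, e.g.
# mu[implant_mask] = 1.0  (hypothetical high-attenuation metal mask).

line_integrals = mu.sum(axis=0) * voxel_size_mm   # parallel rays along one axis
intensity = np.exp(-line_integrals)               # transmitted intensity (Beer-Lambert)
simulated_xray = -np.log(intensity)               # log-attenuation image, if preferred
```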
arXiv Detail & Related papers (2022-03-17T09:58:58Z)
- MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray [14.10611608681131]
Excessive ionising radiation can lead to deterministic and harmful effects on the body.
This paper proposes a Deep Learning model that learns to reconstruct CT projections from a few or even a single-view X-ray.
arXiv Detail & Related papers (2022-02-02T13:25:23Z)
- Image Synthesis for Data Augmentation in Medical CT using Deep Reinforcement Learning [31.677682150726383]
We show that our method holds high promise for generating novel, anatomically accurate, high-resolution CT images in large and diverse quantities.
Our approach is specifically designed to work even with small image datasets, which is desirable given the limited amount of image data available to many researchers.
arXiv Detail & Related papers (2021-03-18T19:47:11Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model the distribution of 3D MR brain volumes by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
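To make the combination of a slice VAE and a Gaussian model concrete, the sketch below assumes a trained 2D slice VAE has already produced one latent code per slice: the codes of each training volume are concatenated, a single multivariate Gaussian is fitted over them, and a new volume is generated by sampling one latent stack and decoding it slice by slice (decoder not shown). Shapes and sizes are assumptions, not the paper's settings.

```python
# Minimal sketch: capture inter-slice structure with one Gaussian over the
# concatenated per-slice VAE latents of each volume. The latent codes below are
# placeholder data standing in for the output of a trained 2D slice VAE encoder.
import numpy as np

rng = np.random.default_rng(0)
n_volumes, n_slices, d = 200, 16, 4
latents = rng.normal(size=(n_volumes, n_slices, d))   # placeholder slice codes

flat = latents.reshape(n_volumes, n_slices * d)
mean = flat.mean(axis=0)
cov = np.cov(flat, rowvar=False)          # covariance couples neighbouring slices

# Sample a coherent stack of slice codes for one synthetic volume; decoding each
# row with the slice VAE's decoder (not shown) would yield the 3D image.
new_codes = rng.multivariate_normal(mean, cov).reshape(n_slices, d)
```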
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.