AutoCT: Automated CT registration, segmentation, and quantification
- URL: http://arxiv.org/abs/2310.17780v1
- Date: Thu, 26 Oct 2023 21:09:47 GMT
- Title: AutoCT: Automated CT registration, segmentation, and quantification
- Authors: Zhe Bai, Abdelilah Essiari, Talita Perciano, Kristofer E. Bouchard
- Abstract summary: We provide a comprehensive pipeline that integrates an end-to-end automatic preprocessing, registration, segmentation, and quantitative analysis of 3D CT scans.
The engineered pipeline enables atlas-based CT segmentation and quantification.
On a lightweight and portable software platform, AutoCT provides a new toolkit for the CT imaging community to underpin the deployment of artificial intelligence-driven applications.
- Score: 0.5461938536945721
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The processing and analysis of computed tomography (CT) imaging is important
for both basic scientific development and clinical applications. In AutoCT, we
provide a comprehensive pipeline that integrates an end-to-end automatic
preprocessing, registration, segmentation, and quantitative analysis of 3D CT
scans. The engineered pipeline enables atlas-based CT segmentation and
quantification leveraging diffeomorphic transformations through efficient
forward and inverse mappings. The extracted localized features from the
deformation field allow for downstream statistical learning that may facilitate
medical diagnostics. On a lightweight and portable software platform, AutoCT
provides a new toolkit for the CT imaging community to underpin the deployment
of artificial intelligence-driven applications.
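To make the atlas-based step concrete, here is a minimal sketch of diffeomorphic registration followed by label warping and deformation-based feature extraction. This is not AutoCT's implementation: ANTsPy is used as a stand-in library, and the file names are hypothetical.

```python
# Hedged sketch of atlas-based CT segmentation via diffeomorphic registration.
# Not AutoCT's code: ANTsPy (pip install antspyx) is used as a stand-in, and
# 'subject_ct.nii.gz', 'atlas_ct.nii.gz', 'atlas_labels.nii.gz' are hypothetical files.
import ants

subject = ants.image_read("subject_ct.nii.gz")          # 3D CT scan to segment
atlas = ants.image_read("atlas_ct.nii.gz")              # atlas intensity image
atlas_labels = ants.image_read("atlas_labels.nii.gz")   # atlas segmentation labels

# Diffeomorphic (SyN) registration of the atlas onto the subject; the result
# contains both forward and inverse mappings.
reg = ants.registration(fixed=subject, moving=atlas, type_of_transform="SyN")

# The forward mapping warps atlas labels into subject space
# (nearest-neighbour interpolation keeps the labels discrete).
subject_seg = ants.apply_transforms(
    fixed=subject, moving=atlas_labels,
    transformlist=reg["fwdtransforms"], interpolator="nearestNeighbor",
)

# Localized features from the deformation field: the Jacobian determinant of the
# forward warp summarizes local expansion/contraction and can feed downstream
# statistical learning.
jac = ants.create_jacobian_determinant_image(subject, reg["fwdtransforms"][0])
print(subject_seg.shape, float(jac.mean()))
```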
Related papers
- DCT-HistoTransformer: Efficient Lightweight Vision Transformer with DCT Integration for histopathological image analysis [0.0]
We introduce a novel lightweight breast cancer classification approach using Vision Transformers (ViTs).
By incorporating parallel processing pathways for Discrete Cosine Transform (DCT) attention and MobileConv, we convert image data from the spatial domain to the frequency domain, which makes it straightforward to filter out high-frequency components of the image.
Our proposed model achieves an accuracy of 96.00% $\pm$ 0.48% for binary classification and 87.85% $\pm$ 0.93% for multiclass classification, which is comparable to state-of-the-art models.
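As a rough illustration of the frequency-domain idea (not the paper's DCT-attention pathway), the sketch below applies a 2D DCT to an image patch, zeroes the highest-frequency coefficients, and inverts the transform.

```python
# Minimal sketch of DCT-based low-pass filtering on an image patch (illustrative only;
# the paper's DCT attention pathway is more involved).
import numpy as np
from scipy.fft import dctn, idctn

patch = np.random.rand(32, 32).astype(np.float32)  # stand-in for an image patch

coeffs = dctn(patch, norm="ortho")       # spatial -> frequency domain
keep = 16                                # keep only the lowest 16x16 frequencies
mask = np.zeros_like(coeffs)
mask[:keep, :keep] = 1.0
filtered = idctn(coeffs * mask, norm="ortho")  # back to the spatial domain

print(filtered.shape)  # (32, 32), with high frequencies suppressed
```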
arXiv Detail & Related papers (2024-10-24T21:16:56Z)
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- Exploiting Liver CT scans in Colorectal Carcinoma genomics mutation classification [0.0]
We propose, to our knowledge, the first deep-learning-based exploration of this classification approach from patient medical imaging.
Our method is able to identify the CRC RAS mutation family from CT images with an F1 score of 0.73.
arXiv Detail & Related papers (2024-01-25T14:40:58Z)
- Invariant Scattering Transform for Medical Imaging [0.0]
The Invariant Scattering Transform (IST) has become a popular technique for medical image analysis.
IST aims to be invariant to transformations that are common in medical images.
IST can be integrated into machine learning algorithms for disease detection, diagnosis, and treatment planning.
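For a concrete feel for scattering coefficients, the sketch below computes a 2D scattering transform on a toy image using the Kymatio library; this is a generic IST example, not code from the paper.

```python
# Generic invariant scattering transform example using Kymatio (pip install kymatio);
# not code from the paper above.
import numpy as np
from kymatio.numpy import Scattering2D

image = np.random.rand(64, 64).astype(np.float32)  # stand-in for a medical image slice

# J controls the scale of invariance (translations up to roughly 2**J pixels).
scattering = Scattering2D(J=3, shape=(64, 64))
coeffs = scattering(image)

# The coefficients are stable to small deformations and can feed a downstream classifier.
print(coeffs.shape)  # (n_coefficients, 64 / 2**J, 64 / 2**J)
```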
arXiv Detail & Related papers (2023-04-20T18:12:50Z)
- Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts in the reconstructed images.
Existing deep-learning-based methods have achieved promising reconstruction performance.
We propose an orientation-shared convolution representation strategy that adapts to the physical prior structure of the artifacts.
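The summary names the key idea but not the operator. As a loose sketch (assumptions: PyTorch, 90-degree rotations only, not the paper's actual parameterization), one filter bank can be shared across orientations by rotating the kernels before each convolution.

```python
# Loose sketch of sharing one filter bank across orientations (illustrative only;
# the paper's orientation-shared operator is defined differently).
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 64, 64)          # toy single-channel CT slice
kernel = torch.randn(8, 1, 3, 3)       # one shared bank of 8 filters

responses = []
for k in range(4):  # 0, 90, 180, 270 degrees
    rotated = torch.rot90(kernel, k, dims=(2, 3))    # rotate the shared weights
    responses.append(F.conv2d(x, rotated, padding=1))

# Stack per-orientation responses; downstream layers can pool or concatenate them.
out = torch.stack(responses, dim=1)    # (1, 4, 8, 64, 64)
print(out.shape)
```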
arXiv Detail & Related papers (2022-12-26T13:56:12Z)
- Hyper-Connected Transformer Network for Multi-Modality PET-CT Segmentation [16.999643199612244]
Co-learning complementary PET-CT imaging features is a fundamental requirement for automatic tumor segmentation.
We propose a hyper-connected transformer (HCT) network that integrates a transformer network (TN) with hyper-connected fusion for multi-modality PET-CT images.
Our results on two clinical datasets show that HCT achieves better segmentation accuracy than existing methods.
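As a generic illustration of transformer-based multi-modality fusion (not the paper's hyper-connected design), the sketch below lets PET and CT token streams attend to each other before a shared prediction head; all shapes and names are hypothetical.

```python
# Generic illustration of cross-modal fusion for PET-CT tokens.
# Not the paper's hyper-connected transformer; shapes and names are hypothetical.
import torch
import torch.nn as nn

class SimpleCrossModalFusion(nn.Module):
    """Each modality's tokens query the other modality, then a shared head predicts
    a per-token foreground logit (a stand-in for a segmentation decoder)."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.pet_queries_ct = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ct_queries_pet = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 1)

    def forward(self, pet_tokens, ct_tokens):
        pet_fused, _ = self.pet_queries_ct(pet_tokens, ct_tokens, ct_tokens)
        ct_fused, _ = self.ct_queries_pet(ct_tokens, pet_tokens, pet_tokens)
        return self.head(torch.cat([pet_fused, ct_fused], dim=-1))

pet = torch.randn(2, 128, 64)   # (batch, tokens, dim) from a PET encoder
ct = torch.randn(2, 128, 64)    # matching tokens from a CT encoder
print(SimpleCrossModalFusion()(pet, ct).shape)  # torch.Size([2, 128, 1])
```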
arXiv Detail & Related papers (2022-10-28T00:03:43Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
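To make the "gradient metrics" idea concrete, here is a small, hedged sketch (not the paper's code) that scores image patches by their mean gradient magnitude so that a masked-autoencoder-style pre-training step can prioritize informative patches.

```python
# Hedged sketch: rank image patches by gradient magnitude so pre-training can focus
# on informative (high-gradient) patches. Not the paper's implementation.
import numpy as np

def patch_gradient_scores(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Return a (rows, cols) grid of mean gradient magnitudes, one value per patch."""
    gy, gx = np.gradient(image.astype(np.float32))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    h, w = image.shape
    rows, cols = h // patch, w // patch
    grid = mag[: rows * patch, : cols * patch].reshape(rows, patch, cols, patch)
    return grid.mean(axis=(1, 3))

slice_2d = np.random.rand(64, 64)             # stand-in for a brain MRI slice
scores = patch_gradient_scores(slice_2d)      # (4, 4) grid of patch scores
informative = np.argsort(scores.ravel())[::-1][:8]  # indices of the 8 most informative patches
print(scores.shape, informative)
```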
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- Fluid registration between lung CT and stationary chest tomosynthesis images [23.239722016943794]
We formulate a 3D/2D registration approach which infers a 3D deformation based on measured projections and digitally reconstructed radiographs.
We demonstrate our approach for the registration between CT and stationary chest tomosynthesis (sDCT) images and show how it naturally leads to an iterative image reconstruction approach.
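A digitally reconstructed radiograph (DRR) can be illustrated very simply: under a parallel-beam assumption it is just a line integral through the CT volume. The sketch below (not the paper's projection model, which handles the real acquisition geometry) compares such a toy DRR with a measured projection.

```python
# Toy digitally reconstructed radiograph (DRR) under a parallel-beam assumption,
# compared against a measured projection. The paper's model uses the true
# acquisition geometry; this is only an illustration.
import numpy as np

ct_volume = np.random.rand(64, 128, 128)   # stand-in CT volume (z, y, x)
measured = np.random.rand(128, 128)        # stand-in measured projection

drr = ct_volume.sum(axis=0)                # integrate attenuation along the beam (z) axis

# A similarity measure like this would drive the 3D/2D registration update.
mse = float(np.mean((drr / drr.max() - measured) ** 2))
print(drr.shape, mse)
```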
arXiv Detail & Related papers (2022-03-06T21:51:49Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis approach to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages that train accurate diagnosis models by learning from multiple source tasks and data from different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Deep Reinforcement Learning for Organ Localization in CT [59.23083161858951]
We propose a deep reinforcement learning approach for organ localization in CT.
In this work, an artificial agent is actively self-taught to localize organs in CT by learning from its own successes and mistakes.
Our method can be used as a plug-and-play module for localizing any organ of interest.
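As a generic sketch of the setup (hedged: the actions, state, and reward here are illustrative, not the paper's exact formulation), an agent repeatedly moves a 3D bounding box and is rewarded when its overlap with the target organ improves.

```python
# Generic sketch of RL-style organ localization: an agent nudges a 3D bounding box
# and is rewarded when the intersection-over-union (IoU) with the target improves.
# The actions, state, and reward here are illustrative, not the paper's formulation.
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned 3D boxes given as (z0, y0, x0, z1, y1, x1)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol = lambda c: np.prod(c[3:] - c[:3])
    return inter / (vol(a) + vol(b) - inter)

target = np.array([20, 30, 30, 40, 60, 60], dtype=float)  # ground-truth organ box
box = np.array([10, 10, 10, 30, 40, 40], dtype=float)     # agent's current estimate

# Six translation actions: +/- 2 voxels along each axis, applied to the whole box.
steps = [np.array([d * (axis == i) for i in range(3)] * 2, dtype=float)
         for axis in range(3) for d in (+2.0, -2.0)]

for _ in range(50):
    prev = iou(box, target)
    # Greedy stand-in for a learned policy: take the move whose reward (IoU gain) is largest.
    gains = [iou(box + s, target) - prev for s in steps]
    if max(gains) <= 0:
        break
    box = box + steps[int(np.argmax(gains))]

print(iou(box, target))  # close to 1.0 once the box has converged onto the target
```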
arXiv Detail & Related papers (2020-05-11T10:06:13Z)