Body Part Regression for CT Images
- URL: http://arxiv.org/abs/2110.09148v1
- Date: Mon, 18 Oct 2021 10:03:42 GMT
- Title: Body Part Regression for CT Images
- Authors: Sarah Schuhegger
- Abstract summary: A self-supervised body part regression model for CT volumes is developed and trained on a heterogeneous collection of CT studies.
It is demonstrated how the algorithm can contribute to the robust and reliable transfer of medical models into the clinic.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: One of the greatest challenges in the medical imaging domain is to
successfully transfer deep learning models into clinical practice. Since models
are often trained on a specific body region, a robust transfer into the clinic
necessitates the selection of images with body regions that fit the algorithm
to avoid false-positive predictions in unknown regions. Due to the insufficient
and inaccurate nature of manually-defined imaging meta-data, automated body
part recognition is a key ingredient towards the broad and reliable adoption of
medical deep learning models. While some approaches to this task have been
presented in the past, building and evaluating robust algorithms for
fine-grained body part recognition remains challenging. So far, no easy-to-use
method exists to determine the scanned body range of medical Computed
Tomography (CT) volumes. In this thesis, a self-supervised body part regression
model for CT volumes is developed and trained on a heterogeneous collection of
CT studies. Furthermore, it is demonstrated how the algorithm can contribute to
the robust and reliable transfer of medical models into the clinic. Finally,
easy application of the developed method is ensured by integrating it into the
medical platform toolkit Kaapana and providing it as a Python package at
https://github.com/MIC-DKFZ/BodyPartRegression .
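The self-supervised training signal behind body part regression can be illustrated with a pared-down sketch: axial slices within a volume have a known relative order along the cranio-caudal axis, so a model can be trained to assign each slice a scalar score that increases with slice position, without any manual labels. The `order_loss` helper below is a hypothetical, simplified pairwise ranking objective written for illustration; it is not the actual loss used in the thesis or in the BodyPartRegression package.

```python
def order_loss(scores, z_positions):
    """Pairwise hinge-style penalty encouraging predicted slice scores
    to increase monotonically with physical z-position.

    scores:      predicted scalar score per axial slice
    z_positions: physical slice positions along the cranio-caudal axis
    """
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if z_positions[i] < z_positions[j]:
                # the lower slice should score lower; penalize violations only
                loss += max(0.0, scores[i] - scores[j])
                pairs += 1
    return loss / max(pairs, 1)

# scores already ordered consistently with z -> no penalty
print(order_loss([0.1, 0.5, 0.9], [10.0, 20.0, 30.0]))  # → 0.0
```

Because the loss depends only on relative slice order, any unannotated CT volume provides training signal, which is what makes training on a heterogeneous collection of studies feasible.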
Related papers
- CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images [13.655407979403945]
We propose a novel deep learning network designed to accurately classify lung cancer subtype with multi-dimensional and multi-modality images.
The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets.
We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training.
arXiv Detail & Related papers (2024-07-18T01:42:00Z)
- MASSM: An End-to-End Deep Learning Framework for Multi-Anatomy Statistical Shape Modeling Directly From Images [1.9029890402585894]
We introduce MASSM, a novel end-to-end deep learning framework that simultaneously localizes multiple anatomies, estimates population-level statistical representations, and delineates shape representations directly in image space.
Our results show that MASSM, which delineates anatomy in image space and handles multiple anatomies through a multitask network, provides superior shape information compared to segmentation networks for medical imaging tasks.
arXiv Detail & Related papers (2024-03-16T20:16:37Z)
- Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation [20.03177073703528]
Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task.
We propose a new Transferability Estimation (TE) method for medical image segmentation.
Our method surpasses all current algorithms for transferability estimation in medical image segmentation.
arXiv Detail & Related papers (2023-07-22T01:58:18Z)
- Abdominal organ segmentation via deep diffeomorphic mesh deformations [5.4173776411667935]
Abdominal organ segmentation from CT and MRI is an essential prerequisite for surgical planning and computer-aided navigation systems.
We employ template-based mesh reconstruction methods for joint liver, kidney, pancreas, and spleen segmentation.
The resulting method, UNetFlow, generalizes well to all four organs and can be easily fine-tuned on new data.
arXiv Detail & Related papers (2023-06-27T14:41:18Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- AutoPaint: A Self-Inpainting Method for Unsupervised Anomaly Detection [34.007468043336274]
We propose a robust inpainting model to learn the details of healthy anatomies and reconstruct high-resolution images.
We also propose an autoinpainting pipeline to automatically detect tumors, replace their appearance with the learned healthy anatomies, and based on that segment the tumoral volumes.
arXiv Detail & Related papers (2023-05-21T05:45:38Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Deep Reinforcement Learning for Organ Localization in CT [59.23083161858951]
We propose a deep reinforcement learning approach for organ localization in CT.
In this work, an artificial agent is actively self-taught to localize organs in CT by learning from its successes and mistakes.
Our method can be used as a plug-and-play module for localizing any organ of interest.
arXiv Detail & Related papers (2020-05-11T10:06:13Z)
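The trial-and-error signal in such a localization agent can be sketched with a sign-of-improvement reward, a common choice in deep reinforcement learning for localization: the agent receives +1 when an action increases the overlap between its current box and the target organ, and -1 otherwise. The sketch below uses a simplified one-dimensional interval for the box; the paper's exact reward formulation may differ.

```python
def localization_reward(prev_box, new_box, target_box):
    """Sign-of-improvement reward: +1 if the action increased the
    intersection-over-union (IoU) with the target, -1 otherwise.
    Boxes are (lo, hi) intervals along one axis for simplicity."""
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0
    return 1 if iou(new_box, target_box) > iou(prev_box, target_box) else -1

# moving the box from (0, 10) to (5, 15) brings it closer to (8, 18)
print(localization_reward((0, 10), (5, 15), (8, 18)))  # → 1
```

Since the reward is derived entirely from the agent's own actions and the resulting overlap, no per-step supervision is needed beyond the target organ's bounding box.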
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.