Using the Order of Tomographic Slices as a Prior for Neural Networks Pre-Training
- URL: http://arxiv.org/abs/2203.09372v1
- Date: Thu, 17 Mar 2022 14:58:15 GMT
- Title: Using the Order of Tomographic Slices as a Prior for Neural Networks Pre-Training
- Authors: Yaroslav Zharov, Alexey Ershov, Tilo Baumbach and Vincent Heuveline
- Abstract summary: We propose SortingLoss, a method that performs pre-training on slices instead of volumes, so that a model can be fine-tuned on a sparse set of slices.
We show that the proposed method performs on par with SimCLR, while working 2x faster and requiring 1.5x less memory.
- Score: 1.1470070927586016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract:   Technical advances in Computed Tomography (CT) make it possible to
obtain immense amounts of 3D data. For such datasets it is very costly and
time-consuming to obtain the accurate 3D segmentation markup needed to train
neural networks. Annotation is therefore typically done for a limited number of
2D slices, followed by interpolation. In this work, we propose a pre-training
method, SortingLoss. It performs pre-training on slices instead of volumes, so
that a model can be fine-tuned on a sparse set of slices, without the
interpolation step. Unlike general methods (e.g. SimCLR or Barlow Twins),
task-specific methods (e.g. Transferable Visual Words) trade broad
applicability for quality benefits by imposing stronger assumptions on the
input data. We propose a relatively mild assumption: if we take several slices
along some axis of a volume, the structure of the sample visible on those
slices should give a strong clue for reconstructing the correct order of those
slices along the axis. Many biomedical datasets fulfill this requirement due to
the specific anatomy of the sample and the pre-defined alignment of the imaging
setup. We examine the proposed method on two datasets: medical CT of lungs
affected by COVID-19, and high-resolution synchrotron-based full-body CT of
model organisms (Medaka fish). We show that the proposed method performs on par
with SimCLR, while working 2x faster and requiring 1.5x less memory. In
addition, we present benefits in practical scenarios, in particular the
applicability to the pre-training of large models and the ability to localize
samples within volumes in an unsupervised setup.
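The abstract does not specify the exact form of the loss, but the slice-ordering prior it describes can be sketched as a pairwise ranking objective. The following is a hypothetical illustration, not the authors' implementation: sample a sparse set of slices along an axis, score each slice with an encoder head, and penalize any pair of scores that contradicts the true order along the axis. The toy `score_slice` encoder and the synthetic volume are assumptions made purely for the demo.

```python
import random

def sorting_loss(scores, positions, margin=1.0):
    """Pairwise hinge loss: a slice that lies earlier along the axis
    should receive a strictly lower score than every later slice."""
    pairs = [(i, j)
             for i in range(len(scores))
             for j in range(len(scores))
             if positions[i] < positions[j]]
    total = sum(max(0.0, margin - (scores[j] - scores[i])) for i, j in pairs)
    return total / len(pairs)

def score_slice(slice_2d):
    """Toy stand-in for a CNN head mapping a 2D slice to a scalar score:
    here, simply the mean intensity of the slice."""
    return sum(map(sum, slice_2d)) / (len(slice_2d) * len(slice_2d[0]))

# Synthetic volume whose intensity grows along the z axis, so mean
# intensity is a perfect ordering cue (8 slices of 4x4 pixels).
volume = [[[z * 2.0] * 4 for _ in range(4)] for z in range(8)]

# Pre-train on a sparse, shuffled set of slices rather than the volume.
idx = random.sample(range(8), 4)
scores = [score_slice(volume[z]) for z in idx]

# Scores here follow the true order with a gap above the margin,
# so the loss is zero; a disordered encoder would be penalized.
print(sorting_loss(scores, idx))  # → 0.0
```

In a real setup the scalar score would come from a trainable encoder, and minimizing this loss forces the encoder to extract anatomy that reveals position along the axis, which is the self-supervisory signal the paper relies on.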
 
      
Related papers
- Cascaded Diffusion Models for 2D and 3D Microscopy Image Synthesis to Enhance Cell Segmentation [1.1454121287632515]
 We propose a novel framework for synthesizing densely annotated 2D and 3D cell microscopy images.
Our method synthesizes 2D and 3D cell masks from sparse 2D annotations using multi-level diffusion models and NeuS, a 3D surface reconstruction approach.
We show that training a segmentation model with a combination of our synthetic data and real data improves cell segmentation performance by up to 9% across multiple datasets.
 arXiv  Detail & Related papers  (2024-11-18T12:22:37Z)
- Towards a Comprehensive, Efficient and Promptable Anatomic Structure Segmentation Model using 3D Whole-body CT Scans [23.573958232965104]
 The Segment Anything Model (SAM) demonstrates strong generalization ability on natural image segmentation.
For segmenting 3D radiological CT or MRI scans, a 2D SAM model has to separately handle hundreds of 2D slices.
We propose a comprehensive and scalable 3D SAM model for whole-body CT segmentation, named CT-SAM3D.
 arXiv  Detail & Related papers  (2024-03-22T09:40:52Z)
- ProMISe: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models [13.08275555017179]
 We propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt.
We evaluate our model on two public datasets for colon and pancreas tumor segmentations.
 arXiv  Detail & Related papers  (2023-10-30T16:49:03Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
 Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack a 3D sequential constraint.
Existing 3D CT segmentation methods focus on single-scale representations, which fail to capture multiple receptive-field sizes over the 3D volume.
 arXiv  Detail & Related papers  (2022-11-10T13:11:21Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
 We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
 arXiv  Detail & Related papers  (2022-04-19T10:41:09Z)
- SimCVD: Simple Contrastive Voxel-Wise Representation Distillation for Semi-Supervised Medical Image Segmentation [7.779842667527933]
 We present SimCVD, a simple contrastive distillation framework that significantly advances state-of-the-art voxel-wise representation learning.
SimCVD achieves average Dice scores of 90.85% and 89.03% respectively, improvements of 0.91% and 2.22% over the previous best results.
 arXiv  Detail & Related papers  (2021-08-13T13:17:58Z)
- Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation [53.95297550117153]
 We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
 arXiv  Detail & Related papers  (2021-05-29T08:39:57Z)
- Bidirectional RNN-based Few Shot Learning for 3D Medical Image Segmentation [11.873435088539459]
 We propose a 3D few shot segmentation framework for accurate organ segmentation using limited training samples of the target organ annotation.
A U-Net like network is designed to predict segmentation by learning the relationship between 2D slices of support data and a query image.
We evaluate our proposed model using three 3D CT datasets with annotations of different organs.
 arXiv  Detail & Related papers  (2020-11-19T01:44:55Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
 We propose a novel frustum-ultrasound-based catheter segmentation method.
The proposed method achieved state-of-the-art performance at 0.25 seconds per volume.
 arXiv  Detail & Related papers  (2020-10-19T13:56:22Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
 We propose a method to model 3D MR brain volumes distribution by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
 arXiv  Detail & Related papers  (2020-07-09T13:23:15Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
 We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme uses deep Q-learning as a pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
 arXiv  Detail & Related papers  (2020-06-25T21:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.