A Knowledge Distillation framework for Multi-Organ Segmentation of
Medaka Fish in Tomographic Image
- URL: http://arxiv.org/abs/2302.12562v1
- Date: Fri, 24 Feb 2023 10:31:29 GMT
- Title: A Knowledge Distillation framework for Multi-Organ Segmentation of
Medaka Fish in Tomographic Image
- Authors: Jwalin Bhatt, Yaroslav Zharov, Sungho Suh, Tilo Baumbach, Vincent
Heuveline, Paul Lukowicz
- Abstract summary: We propose a self-training framework for multi-organ segmentation in tomographic images of Medaka fish.
We utilize the pseudo-labeled data from a pretrained Teacher model and adopt a Quality Classifier to refine the pseudo-labeled data.
The experimental results demonstrate that our method improves mean Intersection over Union (IoU) by 5.9% on the full dataset.
- Score: 5.881800919492064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Morphological atlases are an important tool in organismal studies, and modern
high-throughput Computed Tomography (CT) facilities can produce hundreds of
full-body high-resolution volumetric images of organisms. However, creating an
atlas from these volumes requires accurate organ segmentation. In the last
decade, machine learning approaches have achieved incredible results in image
segmentation tasks, but they require large amounts of annotated data for
training. In this paper, we propose a self-training framework for multi-organ
segmentation in tomographic images of Medaka fish. We utilize the
pseudo-labeled data from a pretrained Teacher model and adopt a Quality
Classifier to refine the pseudo-labeled data. Then, we introduce a pixel-wise
knowledge distillation method to prevent overfitting to the pseudo-labeled data
and improve the segmentation performance. The experimental results demonstrate
that our method improves mean Intersection over Union (IoU) by 5.9% on the full
dataset and maintains segmentation quality while using three times less annotation.
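As a concrete illustration of the framework described in the abstract, below is a minimal PyTorch sketch of one training step: a frozen Teacher produces pseudo-labels, a Quality Classifier filters out low-quality slices, and the Student is trained with a supervised loss on the accepted pseudo-labels plus a pixel-wise distillation term. The input given to the Quality Classifier, the acceptance threshold, the temperature, and the loss weight are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (PyTorch) of the self-training step described above. The
# networks, the 0.5 acceptance threshold, and the loss weights are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn.functional as F

def train_step(student, teacher, quality_clf, optimizer,
               volume_slices, temperature=2.0, kd_weight=0.5, accept_thr=0.5):
    """One self-training step on a batch of unlabeled slices (B, C, H, W)."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(volume_slices)                # (B, K, H, W)
        pseudo_labels = t_logits.argmax(dim=1)           # hard pseudo-labels
        # Assumption: the Quality Classifier scores each slice with one logit.
        quality = torch.sigmoid(quality_clf(volume_slices)).squeeze(1)  # (B,)
    keep = quality > accept_thr                          # drop low-quality pseudo-labels
    if not keep.any():
        return None

    s_logits = student(volume_slices[keep])              # (B', K, H, W)
    # Supervised loss on the accepted pseudo-labels.
    ce_loss = F.cross_entropy(s_logits, pseudo_labels[keep])
    # Pixel-wise knowledge distillation: per-pixel KL divergence between the
    # Student's and the Teacher's softened class distributions.
    T = temperature
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits[keep] / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    loss = ce_loss + kd_weight * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```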
Related papers
- Pseudo Label-Guided Data Fusion and Output Consistency for
Semi-Supervised Medical Image Segmentation [9.93871075239635]
We propose the PLGDF framework, which builds upon the mean teacher network for segmenting medical images with less annotation.
We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively.
Our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2023-11-17T06:36:43Z)
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that by augmenting a 10% labeled real dataset with synthetic samples, one can achieve comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- Information Gain Sampling for Active Learning in Medical Image
Classification [3.1619162190378787]
This work presents an information-theoretic active learning framework that guides the optimal selection of images from the unlabelled pool to be labeled.
Experiments are performed on two different medical image classification datasets.
arXiv Detail & Related papers (2022-08-01T16:25:53Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain enough gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Self-Paced Contrastive Learning for Semi-supervised Medical Image
Segmentation with Meta-labels [6.349708371894538]
We propose to adapt contrastive learning to work with meta-label annotations.
We use the meta-labels for pre-training the image encoder as well as to regularize a semi-supervised training.
Results on three different medical image segmentation datasets show that our approach substantially boosts the performance of a model trained on only a few scans.
arXiv Detail & Related papers (2021-07-29T04:30:46Z)
- Positional Contrastive Learning for Volumetric Medical Image
Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image
Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Suggestive Annotation of Brain Tumour Images with Gradient-guided
Sampling [14.092503407739422]
We propose an efficient annotation framework for brain tumour images that is able to suggest informative sample images for human experts to annotate.
Experiments show that training a segmentation model with only 19% suggestively annotated patient scans from the BraTS 2019 dataset achieves performance comparable to training on the full dataset for the whole tumour segmentation task.
arXiv Detail & Related papers (2020-06-26T13:39:49Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model while updating the labels on the other subset (see the sketch after this list).
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
- 3D medical image segmentation with labeled and unlabeled data using
autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
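For the ATSO entry above, the following is a minimal Python sketch of the asynchronous teacher-student loop as its summary describes it. The round count, the 50/50 split, and the fine_tune / predict_labels helpers are illustrative assumptions, not details from that paper.

```python
# Minimal sketch of the alternating scheme described in the ATSO entry above:
# the unlabeled pool is split into two halves; in each round the model is
# fine-tuned on the labeled set plus the current pseudo-labels of one half,
# while the pseudo-labels of the other half are refreshed; the roles swap in
# the next round. `fine_tune` and `predict_labels` are caller-supplied,
# hypothetical helpers (a supervised training loop and batched inference);
# they are not the authors' code.
import random
from typing import Callable, List, Sequence, Tuple

def atso_like_training(
    model,
    labeled_pairs: List[Tuple[object, object]],   # (image, ground-truth mask)
    unlabeled_images: Sequence[object],
    fine_tune: Callable,                          # fine_tune(model, pairs) -> None
    predict_labels: Callable,                     # predict_labels(model, images) -> masks
    rounds: int = 6,
):
    images = list(unlabeled_images)
    random.shuffle(images)
    half = len(images) // 2
    subsets = [images[:half], images[half:]]
    # Initial pseudo-labels for both subsets from the current model.
    pseudo = [list(predict_labels(model, s)) for s in subsets]

    for r in range(rounds):
        train_idx, relabel_idx = r % 2, (r + 1) % 2
        # Fine-tune on the labeled set plus one subset's (stale) pseudo-labels...
        fine_tune(model, labeled_pairs + list(zip(subsets[train_idx], pseudo[train_idx])))
        # ...then refresh the other subset's labels, so a subset is never trained
        # on labels it just produced for itself (the "asynchronous" part).
        pseudo[relabel_idx] = list(predict_labels(model, subsets[relabel_idx]))
    return model
```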
This list is automatically generated from the titles and abstracts of the papers on this site.