MIPR: Automatic Annotation of Medical Images with Pixel Rearrangement
- URL: http://arxiv.org/abs/2204.10513v1
- Date: Fri, 22 Apr 2022 05:54:14 GMT
- Title: MIPR: Automatic Annotation of Medical Images with Pixel Rearrangement
- Authors: Pingping Dai, Haiming Zhu, Shuang Ge, Ruihan Zhang, Xiang Qian, Xi Li,
Kehong Yuan
- Abstract summary: We propose a novel approach that addresses the lack of annotated data from another angle, called medical image pixel rearrangement (MIPR for short).
MIPR combines image editing and pseudo-label technology to obtain labeled data.
Experiments on ISIC18 show that, for the segmentation task, data annotated by our method is equal to or even better than doctors' annotations.
- Score: 7.39560318487728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most of the state-of-the-art semantic segmentation reported in recent years
in the medical domain is based on fully supervised deep learning. However,
high-quality annotated datasets require intense labor and domain knowledge,
consuming enormous time and cost. Previous works that adopt semi-supervised and
unsupervised learning address the lack of annotated data through assisted
training with unlabeled data and achieve good performance. Still, these methods
cannot directly produce image annotations the way doctors do. In this paper,
inspired by the self-training of semi-supervised learning, we propose a novel
approach that tackles the lack of annotated data from another angle, called
medical image pixel rearrangement (MIPR for short). MIPR combines image editing
and pseudo-label technology to obtain labeled data. As the number of iterations
increases, the edited image becomes increasingly similar to the original image,
and the label approaches the doctor's annotation. MIPR therefore obtains labeled
data pairs directly from large amounts of unlabeled data via pixel rearrangement,
implemented with a designed conditional Generative Adversarial Network and a
segmentation network. Experiments on ISIC18 show that, for the segmentation
task, data annotated by our method is equal to or even better than doctors'
annotations.
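The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the iterative scheme as described there: a segmentation network proposes a pseudo-label, a conditional generator re-synthesizes ("rearranges") the image from that label, and a pair is kept once the edited image is close to the original. All class and function names (SegNet, CondGenerator, mipr_iteration) are hypothetical stand-ins, not the authors' code.

```python
# Hypothetical sketch of the MIPR loop: segment -> pseudo-label -> conditional
# generation -> repeat, keeping pairs whose edited image converges to the original.
import torch
import torch.nn as nn

class SegNet(nn.Module):                      # stand-in segmentation network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))     # foreground probability map

class CondGenerator(nn.Module):               # stand-in conditional GAN generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, image, label):
        return self.net(torch.cat([image, label], dim=1))  # edited image

def mipr_iteration(seg, gen, unlabeled, n_iters=5, tol=1e-2):
    """Return (edited_image, pseudo_label) pairs for images that converged."""
    pairs = []
    for image in unlabeled:                   # image: (1, 3, H, W) tensor
        edited = image
        for _ in range(n_iters):
            pseudo = (seg(edited) > 0.5).float()   # pseudo-label of current edit
            edited = gen(image, pseudo)            # pixel-rearranged image
        final_label = (seg(edited) > 0.5).float()
        if torch.mean(torch.abs(edited - image)) < tol:
            pairs.append((edited.detach(), final_label.detach()))
        # otherwise the pair is discarded for this round
    return pairs

pairs = mipr_iteration(SegNet(), CondGenerator(), [torch.rand(1, 3, 64, 64)])
```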
Related papers
- Pseudo Label-Guided Data Fusion and Output Consistency for
Semi-Supervised Medical Image Segmentation [9.93871075239635]
We propose the PLGDF framework, which builds upon the mean teacher network for segmenting medical images with less annotation.
We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively.
Our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods.
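As a rough illustration of the mean-teacher plus pseudo-label mechanism the summary above refers to (the fusion scheme itself is in the paper), a generic sketch with placeholder models:

```python
# Generic mean-teacher mechanism: EMA weight update plus confidence-filtered
# pseudo-labels from the teacher. Not the PLGDF fusion scheme itself.
import copy
import torch

def ema_update(teacher, student, alpha=0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def pseudo_labels(teacher, unlabeled_batch, threshold=0.8):
    """Teacher predictions above a confidence threshold become pseudo-labels."""
    with torch.no_grad():
        probs = torch.softmax(teacher(unlabeled_batch), dim=1)
        conf, labels = probs.max(dim=1)
    return labels, conf > threshold           # labels + per-pixel confidence mask

student = torch.nn.Conv2d(1, 2, 3, padding=1)     # toy 2-class segmentation head
teacher = copy.deepcopy(student)
labels, mask = pseudo_labels(teacher, torch.rand(2, 1, 32, 32))
ema_update(teacher, student)
```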
arXiv Detail & Related papers (2023-11-17T06:36:43Z)
- UniMOS: A Universal Framework For Multi-Organ Segmentation Over
Label-Constrained Datasets [6.428456997507811]
We present UniMOS, the first universal framework for utilizing fully labeled, partially labeled, and unlabeled images.
We incorporate a semi-supervised training module that combines consistent regularization and pseudolabeling techniques on unlabeled data.
Experiments show that the framework exhibits excellent performance in several medical image segmentation tasks compared to other advanced methods.
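The combination of consistency regularization and pseudo-labeling on unlabeled data mentioned above can be sketched generically as follows; the augmentations and toy model are placeholders, not the UniMOS pipeline:

```python
# Generic semi-supervised loss: consistency between two augmented views plus
# cross-entropy on confident pseudo-labels. Illustrative only.
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, unlabeled, threshold=0.9):
    weak = unlabeled + 0.01 * torch.randn_like(unlabeled)      # weak augmentation
    strong = unlabeled + 0.10 * torch.randn_like(unlabeled)    # strong augmentation
    p_weak = torch.softmax(model(weak), dim=1)
    p_strong = torch.softmax(model(strong), dim=1)

    consistency = F.mse_loss(p_strong, p_weak.detach())        # consistency term
    conf, pseudo = p_weak.max(dim=1)                           # pseudo-labels
    mask = (conf > threshold).float()
    ce = F.cross_entropy(model(strong), pseudo, reduction="none")
    pseudo_term = (ce * mask).sum() / mask.sum().clamp(min=1.0)
    return consistency + pseudo_term

model = torch.nn.Conv2d(1, 4, 3, padding=1)        # toy 4-class segmentation head
loss = semi_supervised_loss(model, torch.rand(2, 1, 32, 32))
loss.backward()
```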
arXiv Detail & Related papers (2023-11-17T00:44:56Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
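A minimal architectural sketch of the dual-task layout described above (shared encoder, one decoder for segmentation, one for inpainting); layer sizes are placeholders, not the paper's architecture:

```python
# Shared encoder with two task heads: segmentation logits and lesion-region inpainting.
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, in_ch=3, classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.seg_decoder = nn.Conv2d(64, classes, 1)      # decoder 1: segmentation
        self.inpaint_decoder = nn.Conv2d(64, in_ch, 1)    # decoder 2: inpainting

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_decoder(feats), self.inpaint_decoder(feats)

net = DualTaskNet()
masked = torch.rand(1, 3, 64, 64)          # image with the lesion region masked out
seg_logits, reconstruction = net(masked)   # (1, 2, 64, 64), (1, 3, 64, 64)
```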
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to provide sufficient gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
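The patch-level confidence idea can be illustrated with a fully convolutional discriminator that scores each patch of an image/mask pair instead of emitting a single scalar; the layer configuration below is a generic placeholder, not the paper's network:

```python
# Patch-level discriminator: one confidence logit per spatial patch of the input pair,
# giving the segmenter dense gradient feedback.
import torch
import torch.nn as nn

class PatchConfidenceDiscriminator(nn.Module):
    def __init__(self, in_ch=4):                         # image (3) + predicted mask (1)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),              # one confidence logit per patch
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

disc = PatchConfidenceDiscriminator()
conf_map = disc(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(conf_map.shape)    # (1, 1, 16, 16): a confidence map over downsampled patches
```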
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data
in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- FedMed-ATL: Misaligned Unpaired Brain Image Synthesis via Affine
Transform Loss [58.58979566599889]
We propose a novel self-supervised learning framework (FedMed) for brain image synthesis.
An affine transform loss (ATL) was formulated to make use of severely distorted images without violating privacy legislation.
The proposed method demonstrates strong performance in the quality of synthesized results under a severely misaligned and unpaired data setting.
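As a rough illustration of an affine-tolerant loss (not the paper's exact ATL formulation), the synthesized image can be warped by a learnable 2x3 affine before comparison so that misalignment does not dominate the reconstruction error:

```python
# Generic affine-tolerant reconstruction loss: warp the synthesized image with
# learnable affine parameters before the L1 comparison.
import torch
import torch.nn.functional as F

def affine_l1(synth, target, theta):
    """L1 loss after warping `synth` with affine parameters `theta` of shape (N, 2, 3)."""
    grid = F.affine_grid(theta, synth.shape, align_corners=False)
    warped = F.grid_sample(synth, grid, align_corners=False)
    return F.l1_loss(warped, target)

synth = torch.rand(1, 1, 64, 64, requires_grad=True)
target = torch.rand(1, 1, 64, 64)
# Start from the identity transform; the affine parameters absorb the misalignment.
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]], requires_grad=True)
loss = affine_l1(synth, target, theta)
loss.backward()          # gradients flow to both the image and the affine parameters
```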
arXiv Detail & Related papers (2022-01-29T13:45:39Z)
- Self-Paced Contrastive Learning for Semi-supervised Medical Image
Segmentation with Meta-labels [6.349708371894538]
We propose to adapt contrastive learning to work with meta-label annotations.
We use the meta-labels for pre-training the image encoder as well as to regularize a semi-supervised training.
Results on three different medical image segmentation datasets show that our approach highly boosts the performance of a model trained on a few scans.
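A simplified supervised-contrastive loss where positives are pairs that share a meta-label gives the flavor of the pre-training described above; this is an illustration, not the paper's exact objective, and the meta-labels are hypothetical:

```python
# Contrastive pre-training driven by meta-labels: embeddings with the same meta-label
# are pulled together, all others pushed apart.
import torch
import torch.nn.functional as F

def meta_label_contrastive(embeddings, meta_labels, temperature=0.1):
    z = F.normalize(embeddings, dim=1)                    # (N, D) unit vectors
    sim = z @ z.t() / temperature                         # pairwise similarities
    same = meta_labels.unsqueeze(0) == meta_labels.unsqueeze(1)
    eye = torch.eye(len(z), dtype=torch.bool)
    pos = same & ~eye                                     # positives: same meta-label
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")),
                                     dim=1, keepdim=True)
    return -(log_prob[pos]).mean()                        # average over positive pairs

emb = torch.randn(8, 16, requires_grad=True)              # 8 slices, 16-dim features
meta = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])             # hypothetical meta-labels
loss = meta_label_contrastive(emb, meta)
loss.backward()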
arXiv Detail & Related papers (2021-07-29T04:30:46Z)
- Positional Contrastive Learning for Volumetric Medical Image
Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
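One simple way to read "positional" pair generation is that slices whose normalized positions in their volumes are close count as positives; the threshold and pairing rule below are illustrative only:

```python
# Build a positive-pair mask from normalized slice positions; the mask can then be fed
# into a contrastive loss such as the meta-label sketch in the previous entry.
import torch

def positional_pairs(positions, threshold=0.1):
    """positions: (N,) normalized slice positions in [0, 1]. Returns a bool positive mask."""
    dist = (positions.unsqueeze(0) - positions.unsqueeze(1)).abs()
    return (dist < threshold) & ~torch.eye(len(positions), dtype=torch.bool)

positions = torch.tensor([0.05, 0.07, 0.50, 0.52, 0.95])
print(positional_pairs(positions))
```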
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
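A toy sketch of a generator that models the joint image-label distribution, as the summary describes: one latent code yields both an image and a mask, and a discriminator judges the pair jointly. The architectures are placeholders, not the paper's model:

```python
# Joint image-label GAN: generator emits (image, mask) from a latent code;
# discriminator scores the pair.
import torch
import torch.nn as nn

class JointGenerator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.fc = nn.Linear(z_dim, 32 * 8 * 8)
        self.image_head = nn.ConvTranspose2d(32, 3, 4, stride=4)   # -> (3, 32, 32)
        self.label_head = nn.ConvTranspose2d(32, 1, 4, stride=4)   # -> (1, 32, 32)

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 8, 8)
        return torch.tanh(self.image_head(h)), torch.sigmoid(self.label_head(h))

class JointDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, image, label):
        return self.net(torch.cat([image, label], dim=1))   # real/fake score for the pair

gen, disc = JointGenerator(), JointDiscriminator()
image, label = gen(torch.randn(2, 64))
score = disc(image, label)                                   # (2, 1)
```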
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
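The asynchronous alternation described above can be sketched as a simple loop: the unlabeled pool is split in two, and each round fine-tunes on one half's pseudo-labels while refreshing the other half's labels. The helper functions and toy model are stand-ins, not the authors' code:

```python
# Alternating fine-tune / relabel loop over two unlabeled subsets.
import torch

def relabel(model, images):
    with torch.no_grad():
        return [model(x).argmax(dim=1) for x in images]    # fresh pseudo-labels

def fine_tune(model, images, labels):
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    for x, y in zip(images, labels):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

model = torch.nn.Conv2d(1, 2, 3, padding=1)                # toy 2-class segmenter
subset_a = [torch.rand(1, 1, 32, 32) for _ in range(4)]
subset_b = [torch.rand(1, 1, 32, 32) for _ in range(4)]
labels_a, labels_b = relabel(model, subset_a), relabel(model, subset_b)

for round_idx in range(4):
    if round_idx % 2 == 0:
        fine_tune(model, subset_a, labels_a)               # train on A ...
        labels_b = relabel(model, subset_b)                # ... then refresh B's labels
    else:
        fine_tune(model, subset_b, labels_b)
        labels_a = relabel(model, subset_a)
```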
arXiv Detail & Related papers (2020-06-24T04:05:12Z)