An Auto-Encoder Strategy for Adaptive Image Segmentation
- URL: http://arxiv.org/abs/2004.13903v1
- Date: Wed, 29 Apr 2020 00:53:24 GMT
- Title: An Auto-Encoder Strategy for Adaptive Image Segmentation
- Authors: Evan M. Yu, Juan Eugenio Iglesias, Adrian V. Dalca, Mert R. Sabuncu
- Abstract summary: We propose a novel perspective of segmentation as a discrete representation learning problem.
We present a variational autoencoder segmentation strategy that is flexible and adaptive.
We demonstrate that a Markov Random Field prior can yield significantly better results than a spatially independent prior.
- Score: 18.333542893112007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are powerful tools for biomedical image segmentation.
These models are often trained with heavy supervision, relying on pairs of
images and corresponding voxel-level labels. However, obtaining segmentations
of anatomical regions on a large number of cases can be prohibitively
expensive. Thus there is a strong need for deep learning-based segmentation
tools that do not require heavy supervision and can continuously adapt. In this
paper, we propose a novel perspective of segmentation as a discrete
representation learning problem, and present a variational autoencoder
segmentation strategy that is flexible and adaptive. Our method, called
Segmentation Auto-Encoder (SAE), leverages all available unlabeled scans and
merely requires a segmentation prior, which can be a single unpaired
segmentation image. In experiments, we apply SAE to brain MRI scans. Our
results show that SAE can produce good quality segmentations, particularly when
the prior is good. We demonstrate that a Markov Random Field prior can yield
significantly better results than a spatially independent prior. Our code is
freely available at https://github.com/evanmy/sae.
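To make the abstract's description concrete, below is a minimal, hypothetical 2D sketch of an SAE-style model with a spatially independent prior: an encoder maps the image to per-voxel class probabilities (the discrete representation), a decoder reconstructs the image from that soft segmentation, and the prior enters as a KL penalty. Layer sizes, the KL weight, and the dummy data are illustrative assumptions; this is not the released 3D implementation, and the MRF prior mentioned above would replace the independent prior term.
```python
# Minimal 2D sketch of a segmentation auto-encoder, assuming a spatially
# independent prior. Layer sizes, the KL weight, and the dummy data are
# illustrative choices, not taken from the paper or its repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 4  # hypothetical number of tissue classes

class Encoder(nn.Module):
    """Image -> per-pixel class probabilities (the discrete representation)."""
    def __init__(self, k=K):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, k, 1),
        )

    def forward(self, x):
        return F.softmax(self.net(x), dim=1)  # soft segmentation

class Decoder(nn.Module):
    """Soft segmentation -> reconstructed image intensities."""
    def __init__(self, k=K):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, s):
        return self.net(s)

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(8, 1, 64, 64)                 # unlabeled scans (dummy data)
prior = torch.full((1, K, 64, 64), 1.0 / K)  # unpaired prior (here simply uniform)

opt.zero_grad()
s = enc(x)                    # q(segmentation | image)
x_hat = dec(s)                # p(image | segmentation)
recon = F.mse_loss(x_hat, x)  # Gaussian image-likelihood term
kl = (s * (s.clamp_min(1e-8).log() - prior.log())).sum(dim=1).mean()
loss = recon + 0.1 * kl       # 0.1 is an arbitrary weight
loss.backward()
opt.step()
print(f"loss: {loss.item():.4f}")
```
In this setup a better prior directly tightens the KL term, which is consistent with the abstract's observation that segmentation quality tracks the quality of the prior.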
Related papers
- Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation [49.5901368256326]
We propose a novel Domain-Adaptive Prompt framework for fine-tuning the Segment Anything Model (termed DAPSAM) to segment medical images.
Our DAPSAM achieves state-of-the-art performance on two medical image segmentation tasks with different modalities.
arXiv Detail & Related papers (2024-09-19T07:28:33Z)
- Unsupervised Universal Image Segmentation [59.0383635597103]
We propose an Unsupervised Universal model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks by leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
arXiv Detail & Related papers (2023-12-28T18:59:04Z)
- SAMBA: A Trainable Segmentation Web-App with Smart Labelling [0.0]
SAMBA is a trainable segmentation tool that uses Meta's Segment Anything Model (SAM) for fast, high-quality label suggestions.
The segmentation backend runs in the cloud, so the user does not need powerful hardware.
arXiv Detail & Related papers (2023-12-07T10:31:05Z)
- Unsupervised Segmentation of Fetal Brain MRI using Deep Learning Cascaded Registration [2.494736313545503]
Traditional deep learning-based automatic segmentation requires extensive training data with ground-truth labels.
We propose a novel method based on multi-atlas segmentation that accurately segments multiple tissues without relying on labeled data for training.
Our method employs a cascaded deep learning network for 3D image registration that computes small, incremental deformations to the moving image to align it precisely with the fixed image (a generic sketch of such incremental warping follows this entry).
arXiv Detail & Related papers (2023-07-07T13:17:12Z)
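As a hedged illustration of the incremental-deformation idea in the entry above (not the authors' cascaded registration network), the sketch below composes several small displacement fields by repeatedly warping the moving image; the per-stage predictors are random stand-ins for learned networks.
```python
# Hypothetical sketch of cascaded, incremental warping; the per-stage displacement
# fields are random stand-ins for learned registration networks, NOT the paper's model.
import torch
import torch.nn.functional as F

def warp(img, disp):
    """Warp img (N,1,H,W) by a displacement field disp (N,2,H,W), given in pixels."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().unsqueeze(0)  # (1,H,W,2), x then y
    grid = base + disp.permute(0, 2, 3, 1)                     # add the displacement
    grid[..., 0] = 2 * grid[..., 0] / (w - 1) - 1              # normalize to [-1, 1]
    grid[..., 1] = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(img, grid, align_corners=True)

moving = torch.rand(1, 1, 64, 64)  # dummy "moving" image
fixed = torch.rand(1, 1, 64, 64)   # dummy "fixed" image

# Each stage would normally predict a small deformation from (moving, fixed);
# here we fake it with small random fields just to show the composition.
for _ in range(3):
    disp = 0.5 * torch.randn(1, 2, 64, 64)  # small, incremental deformation
    moving = warp(moving, disp)             # apply it, then refine in the next stage
print(moving.shape)
```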
- On-Device Unsupervised Image Segmentation [5.9990534851802915]
We build an HDC-based unsupervised segmentation framework, named "SegHDC".
On a standard segmentation dataset, SegHDC can achieve a 28.0% improvement in Intersection over Union (IoU) score.
SegHDC can obtain segmentation results within 3 minutes while achieving a 0.9587 IoU score.
arXiv Detail & Related papers (2023-02-24T00:51:17Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network comprising a shared encoder and two independent decoders for segmentation and lesion-region inpainting (a generic shared-encoder sketch follows this entry).
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
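As a hedged, generic sketch of the shared-encoder, dual-decoder layout described in the entry above (not the paper's actual architecture), the snippet below wires one encoder to a segmentation head and an inpainting head; layer sizes and the class count are assumptions.
```python
# Generic shared-encoder / dual-decoder sketch (segmentation + inpainting heads).
# Layer sizes and class count are arbitrary; this is not the paper's network.
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, classes=2):
        super().__init__()
        self.encoder = nn.Sequential(              # shared feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, classes, 1)  # decoder 1: segmentation logits
        self.inpaint_head = nn.Conv2d(32, 1, 1)    # decoder 2: inpainted image

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.inpaint_head(feats)

net = DualTaskNet()
masked = torch.rand(4, 1, 64, 64)  # image with a (hypothetically) masked lesion region
seg_logits, reconstruction = net(masked)
print(seg_logits.shape, reconstruction.shape)
```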
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on par on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z)
- Modeling the Probabilistic Distribution of Unlabeled Data for One-shot Medical Image Segmentation [40.41161371507547]
We develop a data augmentation method for one-shot brain magnetic resonance imaging (MRI) image segmentation.
Our method exploits only one labeled MRI image (named atlas) and a few unlabeled images.
Our method outperforms the state-of-the-art one-shot medical segmentation methods.
arXiv Detail & Related papers (2021-02-03T12:28:04Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data, and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images (a generic sketch of this feature-reuse idea follows this entry).
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
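As a hedged 2D sketch of the feature-reuse idea in the entry above (the paper works on 3D CT volumes), the snippet below concatenates encoder features from a convolutional autoencoder, assumed to be pretrained on unlabeled images, with the input of a small segmentation CNN; all architectures and sizes are illustrative.
```python
# Generic 2D sketch: reuse a convolutional autoencoder's encoder features inside a
# segmentation CNN. Architectures and sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

class SegCNN(nn.Module):
    """Segmentation network that consumes the image plus autoencoder features."""
    def __init__(self, ae_channels=8, classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + ae_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, classes, 1),
        )

    def forward(self, x, ae_feats):
        return self.net(torch.cat([x, ae_feats], dim=1))

ae = ConvAE()
# ... pretrain `ae` on unlabeled images with a reconstruction loss (omitted) ...
seg = SegCNN()
x = torch.rand(2, 1, 64, 64)
with torch.no_grad():
    feats = ae.encoder(x)  # features from the (pre)trained autoencoder
logits = seg(x, feats)
print(logits.shape)
```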
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences arising from its use.