Compositional Representation Learning for Brain Tumour Segmentation
- URL: http://arxiv.org/abs/2310.06562v1
- Date: Tue, 10 Oct 2023 12:19:39 GMT
- Title: Compositional Representation Learning for Brain Tumour Segmentation
- Authors: Xiao Liu, Antanas Kascenas, Hannah Watson, Sotirios A. Tsaftaris and
Alison Q. O'Neil
- Abstract summary: Deep learning models can achieve human expert-level performance given a large amount of data and pixel-level annotations.
We adapt a mixed supervision framework, vMFNet, to learn robust representations using unsupervised learning and weak supervision.
We show that good tumour segmentation performance can be achieved with a large amount of weakly labelled data but only a small amount of fully-annotated data.
- Score: 13.5112749699868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For brain tumour segmentation, deep learning models can achieve human
expert-level performance given a large amount of data and pixel-level
annotations. However, the expensive exercise of obtaining pixel-level
annotations for large amounts of data is not always feasible, and performance
is often heavily reduced in a low-annotated data regime. To tackle this
challenge, we adapt a mixed supervision framework, vMFNet, to learn robust
compositional representations using unsupervised learning and weak supervision
alongside non-exhaustive pixel-level pathology labels. In particular, we use
the BraTS dataset to simulate a collection of 2-point expert pathology
annotations indicating the top and bottom slice of the tumour (or tumour
sub-regions: peritumoural edema, GD-enhancing tumour, and the necrotic /
non-enhancing tumour) in each MRI volume, from which weak image-level labels
that indicate the presence or absence of the tumour (or the tumour sub-regions)
in the image are constructed. Then, vMFNet models the encoded image features
with von Mises-Fisher (vMF) distributions, via learnable and compositional vMF
kernels which capture information about structures in the images. We show that
good tumour segmentation performance can be achieved with a large amount of
weakly labelled data but only a small amount of fully-annotated data.
Interestingly, emergent learning of anatomical structures occurs in the
compositional representation even given only supervision relating to pathology
(tumour).
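The two mechanisms described in the abstract — deriving weak per-slice presence labels from 2-point (top/bottom slice) annotations, and modelling encoded features with compositional vMF kernels — can be sketched roughly as follows. This is an illustrative NumPy sketch only: the function names are hypothetical, and the softmax assignment with a fixed (omitted) concentration parameter is an assumption, not the authors' implementation.

```python
import numpy as np

def weak_labels_from_two_points(num_slices, top, bottom):
    """Build per-slice tumour presence/absence labels from a 2-point
    annotation marking the top and bottom tumour-containing slices.
    (Hypothetical interface; the paper describes the annotations,
    not this exact function.)"""
    labels = np.zeros(num_slices, dtype=np.int64)
    labels[top:bottom + 1] = 1  # tumour assumed present between the marks
    return labels

def vmf_activations(features, kernels):
    """Soft assignment of encoded features to learnable vMF kernels.

    features: (N, D) encoded image feature vectors.
    kernels:  (K, D) vMF kernel mean directions.
    Both are projected to the unit sphere, so the dot product is the
    cosine similarity (the vMF log-likelihood up to a concentration
    scale); a softmax over kernels yields compositional activations.
    Returns an (N, K) array whose rows sum to 1.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    mu = kernels / np.linalg.norm(kernels, axis=1, keepdims=True)
    logits = f @ mu.T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)
```

In the actual framework the kernels are learned jointly with the encoder, and the activations (rather than the raw features) are decoded into segmentations, which is what lets weak image-level labels supervise most of the training.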
Related papers
- Transferring Ultrahigh-Field Representations for Intensity-Guided Brain
Segmentation of Low-Field Magnetic Resonance Imaging [51.92395928517429]
The use of 7T MRI is limited by its high cost and lower accessibility compared to low-field (LF) MRI.
This study proposes a deep-learning framework that fuses the input LF magnetic resonance feature representations with the inferred 7T-like feature representations for brain image segmentation tasks.
arXiv Detail & Related papers (2024-02-13T12:21:06Z)
- Comparative Analysis of Segment Anything Model and U-Net for Breast Tumor Detection in Ultrasound and Mammography Images [0.15833270109954137]
The technique employs two advanced deep learning architectures, namely U-Net and pretrained SAM, for tumor segmentation.
The U-Net model is specifically designed for medical image segmentation.
The pretrained SAM architecture incorporates a mechanism to capture spatial dependencies and generate segmentation results.
arXiv Detail & Related papers (2023-06-21T18:49:21Z)
- Brain tumor multi classification and segmentation in MRI images using deep learning [3.1248717814228923]
The classification model is based on the EfficientNetB1 architecture and is trained to classify images into four classes: meningioma, glioma, pituitary adenoma, and no tumor.
The segmentation model is based on the U-Net architecture and is trained to accurately segment the tumor from the MRI images.
arXiv Detail & Related papers (2023-04-20T01:32:55Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - PCRLv2: A Unified Visual Information Preservation Framework for
Self-supervised Pre-training in Medical Image Analysis [56.63327669853693]
We propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics.
We also address the preservation of scale information, a powerful tool in aiding image understanding.
The proposed unified SSL framework surpasses its self-supervised counterparts on various tasks.
arXiv Detail & Related papers (2023-01-02T17:47:27Z) - Deep Superpixel Generation and Clustering for Weakly Supervised
Segmentation of Brain Tumors in MR Images [0.0]
This work proposes the use of a superpixel generation model and a superpixel clustering model to enable weakly supervised brain tumor segmentations.
We used 2D slices of magnetic resonance brain scans from the Multimodal Brain Tumor Challenge 2020 dataset and labels indicating the presence of tumors to train the pipeline.
Our method achieved a mean Dice coefficient of 0.691 and a mean 95% Hausdorff distance of 18.1, outperforming existing superpixel-based weakly supervised segmentation methods.
arXiv Detail & Related papers (2022-09-20T18:08:34Z) - Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised
Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z) - Pseudo-label refinement using superpixels for semi-supervised brain
tumour segmentation [0.6767885381740952]
Training neural networks using limited annotations is an important problem in the medical domain.
Semi-supervised learning aims to overcome this problem by learning segmentations with very little annotated data.
We propose a framework based on superpixels to improve the accuracy of the pseudo labels.
arXiv Detail & Related papers (2021-10-16T15:17:11Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Towards Robust Partially Supervised Multi-Structure Medical Image
Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of these summaries (including all information) and is not responsible for any consequences of their use.