Enhancing Cross-Modal Medical Image Segmentation through Compositionality
- URL: http://arxiv.org/abs/2408.11733v1
- Date: Wed, 21 Aug 2024 15:57:24 GMT
- Title: Enhancing Cross-Modal Medical Image Segmentation through Compositionality
- Authors: Aniek Eijpe, Valentina Corbetta, Kalina Chupetlovska, Regina Beets-Tan, Wilson Silva
- Abstract summary: We introduce compositionality as an inductive bias in a cross-modal segmentation network to improve segmentation performance and interpretability.
The proposed network enforces compositionality on the learned representations using learnable von Mises-Fisher kernels.
The experimental results demonstrate enhanced segmentation performance and reduced computational costs on multiple medical datasets.
- Score: 0.4194295877935868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-modal medical image segmentation presents a significant challenge, as different imaging modalities produce images with varying resolutions, contrasts, and appearances of anatomical structures. We introduce compositionality as an inductive bias in a cross-modal segmentation network to improve segmentation performance and interpretability while reducing complexity. The proposed network is an end-to-end cross-modal segmentation framework that enforces compositionality on the learned representations using learnable von Mises-Fisher kernels. These kernels facilitate content-style disentanglement in the learned representations, resulting in compositional content representations that are inherently interpretable and effectively disentangle different anatomical structures. The experimental results demonstrate enhanced segmentation performance and reduced computational costs on multiple medical datasets. Additionally, we demonstrate the interpretability of the learned compositional features. Code and checkpoints will be publicly available at: https://github.com/Trustworthy-AI-UU-NKI/Cross-Modal-Segmentation.
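The abstract gives no implementation details, but the core mechanism, learnable von Mises-Fisher (vMF) kernels that turn encoder features into compositional activation channels, can be sketched roughly as follows. The kernel count, feature dimension, and concentration value are illustrative assumptions, not values from the paper.
```python
# Rough sketch of vMF-kernel activations for compositional content features.
# Hypothetical hyperparameters (num_kernels, feat_dim, kappa); not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VonMisesFisherKernels(nn.Module):
    def __init__(self, feat_dim: int = 64, num_kernels: int = 12, kappa: float = 20.0):
        super().__init__()
        # Learnable kernel directions on the unit hypersphere.
        self.mu = nn.Parameter(torch.randn(num_kernels, feat_dim))
        self.kappa = kappa  # concentration parameter

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) encoder features -> (B, K, H, W) compositional activations
        f = F.normalize(feats, dim=1)                  # unit-norm feature vectors
        mu = F.normalize(self.mu, dim=1)               # unit-norm kernel directions
        sim = torch.einsum("bchw,kc->bkhw", f, mu)     # cosine similarity to each kernel
        return torch.softmax(self.kappa * sim, dim=1)  # one soft "channel" per kernel

# Example: the activations could be fed to a lightweight segmentation head.
if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    print(VonMisesFisherKernels()(x).shape)  # torch.Size([2, 12, 32, 32])
```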
Related papers
- Boosting Medical Image Segmentation Performance with Adaptive Convolution Layer [6.887244952811574]
We propose an adaptive layer placed ahead of leading deep-learning models such as UCTransNet.
Our approach enhances the network's ability to handle diverse anatomical structures and subtle image details.
It consistently outperforms traditional CNNs that use fixed kernel sizes while using a similar number of parameters.
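The summary does not specify how the adaptive layer is built; one plausible sketch, in which parallel convolutions of several kernel sizes are softly mixed by learned weights before the backbone, is given below. The kernel sizes and the mixing scheme are assumptions.
```python
# Hypothetical sketch of an adaptive convolution layer placed ahead of a backbone.
# The kernel sizes and soft mixing weights are assumptions, not the paper's design.
import torch
import torch.nn as nn

class AdaptiveConvLayer(nn.Module):
    def __init__(self, in_ch: int = 1, out_ch: int = 16, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        # Learnable logits deciding how much each kernel size contributes.
        self.mix = nn.Parameter(torch.zeros(len(kernel_sizes)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.mix, dim=0)
        return sum(wi * branch(x) for wi, branch in zip(w, self.branches))

# Usage: out = AdaptiveConvLayer()(torch.randn(1, 1, 64, 64))  # then feed to the backbone
```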
arXiv Detail & Related papers (2024-04-17T13:18:39Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Compositionally Equivariant Representation Learning [22.741376970643973]
Humans can swiftly learn to identify important anatomy in medical images like MRI and CT scans.
This recognition capability easily generalises to new images from different medical facilities and to new tasks in different settings.
We study the utilisation of compositionality in learning more interpretable and generalisable representations for medical image segmentation.
arXiv Detail & Related papers (2023-06-13T14:06:55Z)
- Implicit Anatomical Rendering for Medical Image Segmentation with Stochastic Experts [11.007092387379078]
We propose MORSE, a generic implicit neural rendering framework designed at an anatomical level to assist learning in medical image segmentation.
Our approach is to formulate medical image segmentation as a rendering problem in an end-to-end manner.
Our experiments demonstrate that MORSE can work well with different medical segmentation backbones.
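As a rough illustration of treating segmentation as a rendering problem, the sketch below refines predictions with an implicit, coordinate-conditioned MLP queried at sampled points; the point-sampling strategy and the absence of MORSE's stochastic experts are simplifying assumptions.
```python
# Hedged sketch of refining a coarse segmentation with an implicit, coordinate-based MLP
# queried at sampled point locations. The MLP shape and the way points are chosen are
# assumptions; the paper's stochastic experts are omitted entirely.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitPointHead(nn.Module):
    def __init__(self, feat_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) backbone features; coords: (B, N, 2) normalised to [-1, 1]
        sampled = F.grid_sample(feats, coords.unsqueeze(2), align_corners=False)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)          # (B, N, C) point features
        return self.mlp(torch.cat([sampled, coords], dim=-1))   # per-point class logits
```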
arXiv Detail & Related papers (2023-04-06T16:44:03Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
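The dual-task topology (a shared encoder with separate segmentation and inpainting decoders) can be sketched as follows; the block widths and depths are placeholders, not the paper's architecture.
```python
# Minimal sketch of a shared-encoder, dual-decoder network (segmentation + inpainting).
# Only the topology follows the summary; all layer sizes are placeholders.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class DualTaskNet(nn.Module):
    def __init__(self, in_ch: int = 1, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 64))
        self.seg_decoder = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, num_classes, 1))
        self.inpaint_decoder = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, in_ch, 1))

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)  # shared representation
        return self.seg_decoder(z), self.inpaint_decoder(z)

# Training idea: supervise the segmentation head on labelled images and the
# inpainting head on images with masked-out (lesion) regions.
```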
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Contrastive Image Synthesis and Self-supervised Feature Adaptation for Cross-Modality Biomedical Image Segmentation [8.772764547425291]
CISFA builds on image domain translation and unsupervised feature adaptation for cross-modality biomedical image segmentation.
We use a one-sided generative model and add a weighted patch-wise contrastive loss between sampled patches of the input image and the corresponding synthetic image.
We evaluate our method on CT and MRI segmentation tasks for abdominal cavities and whole hearts.
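The loss is described only at a high level; a simplified, weighted InfoNCE-style version over sampled patch features is sketched below as one way such a patch-wise contrastive term could look. The weighting scheme and temperature are assumptions.
```python
# Simplified, weighted patch-wise contrastive (InfoNCE-style) loss between features of
# input-image patches and the corresponding synthetic-image patches.
# The per-patch weights and the temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def weighted_patch_nce(feat_src: torch.Tensor, feat_syn: torch.Tensor,
                       weights: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    # feat_src, feat_syn: (N, D) features of N sampled patches; weights: (N,)
    src = F.normalize(feat_src, dim=1)
    syn = F.normalize(feat_syn, dim=1)
    logits = src @ syn.t() / tau                       # (N, N) patch similarities
    targets = torch.arange(src.size(0), device=src.device)  # matching patches are positives
    per_patch = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_patch).sum() / weights.sum()

# Example:
# loss = weighted_patch_nce(torch.randn(64, 128), torch.randn(64, 128), torch.ones(64))
```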
arXiv Detail & Related papers (2022-07-27T01:49:26Z)
- Cross-level Contrastive Learning and Consistency Constraint for Semi-supervised Medical Image Segmentation [46.678279106837294]
We propose a cross-level contrastive learning scheme to enhance the representation capacity of local features in semi-supervised medical image segmentation.
With the help of the cross-level contrastive learning and consistency constraint, the unlabelled data can be effectively explored to improve segmentation performance.
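A minimal version of the consistency constraint on unlabelled images is sketched below; it omits the cross-level contrastive term and uses a simple noise perturbation as an assumed augmentation.
```python
# Minimal sketch of a consistency constraint on unlabelled images: predictions for two
# perturbed views of the same image are pushed to agree. This omits the paper's
# cross-level contrastive term; the noise perturbation is an assumption.
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabelled: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    view_a = unlabelled + noise_std * torch.randn_like(unlabelled)
    view_b = unlabelled + noise_std * torch.randn_like(unlabelled)
    prob_a = torch.softmax(model(view_a), dim=1)
    with torch.no_grad():                  # treat one branch as the fixed target
        prob_b = torch.softmax(model(view_b), dim=1)
    return F.mse_loss(prob_a, prob_b)
```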
arXiv Detail & Related papers (2022-02-08T15:12:11Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
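One way to read this is a prototype-style episode with an extra term that pulls same-class features toward their prototype; the sketch below follows that reading, and its prototype construction and losses are assumptions rather than the paper's exact formulation.
```python
# Sketch of a prototype-style few-shot segmentation episode with a discriminative term
# that pulls same-class support features toward their class prototype.
# Assumes integer class ids 0..C-1 that all appear in the support mask.
import torch
import torch.nn.functional as F

def episode_losses(support_feat, support_mask, query_feat, query_mask, tau: float = 0.1):
    # support_feat/query_feat: (B, D, H, W); masks: (B, H, W) with long class ids
    feats = F.normalize(support_feat, dim=1)
    protos = []
    for c in support_mask.unique():
        m = (support_mask == c).unsqueeze(1).float()             # (B, 1, H, W)
        protos.append((feats * m).sum(dim=(0, 2, 3)) / m.sum().clamp(min=1.0))
    protos = F.normalize(torch.stack(protos), dim=1)             # (C, D) class prototypes

    q = F.normalize(query_feat, dim=1)
    logits = torch.einsum("bdhw,cd->bchw", q, protos) / tau      # cosine-similarity scores
    seg_loss = F.cross_entropy(logits, query_mask)

    # Discriminative term: support features should cluster around their own prototype.
    cluster_loss = F.cross_entropy(
        torch.einsum("bdhw,cd->bchw", feats, protos) / tau, support_mask
    )
    return seg_loss, cluster_loss
```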
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Retinal Image Segmentation with a Structure-Texture Demixing Network [62.69128827622726]
The complex structure and texture information are mixed in a retinal image, and distinguishing the information is difficult.
Existing methods handle texture and structure jointly, which may bias models toward recognizing textures and thus result in inferior segmentation performance.
We propose a segmentation strategy that separates structure and texture components and significantly improves performance.
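The demixing in the paper is learned; as a crude stand-in, the sketch below splits an image into a low-pass structure component and a residual texture component and processes them in separate branches.
```python
# Crude illustration of separating an image into structure and texture parts and
# processing them with separate branches. The real method learns the demixing; the
# low-pass/residual split and the tiny branches here are stand-in assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureTextureSeg(nn.Module):
    def __init__(self, in_ch: int = 3, num_classes: int = 2):
        super().__init__()
        self.structure_branch = nn.Conv2d(in_ch, 16, 3, padding=1)
        self.texture_branch = nn.Conv2d(in_ch, 16, 3, padding=1)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        structure = F.avg_pool2d(x, kernel_size=7, stride=1, padding=3)  # low-pass proxy
        texture = x - structure                                          # residual detail
        fused = torch.cat(
            [self.structure_branch(structure), self.texture_branch(texture)], dim=1
        )
        return self.head(F.relu(fused))
```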
arXiv Detail & Related papers (2020-07-15T12:19:03Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.