Benefits of Linear Conditioning for Segmentation using Metadata
- URL: http://arxiv.org/abs/2102.09582v1
- Date: Thu, 18 Feb 2021 19:03:58 GMT
- Title: Benefits of Linear Conditioning for Segmentation using Metadata
- Authors: Andreanne Lemay, Charley Gros, Olivier Vincent, Yaou Liu, Joseph Paul
Cohen, Julien Cohen-Adad
- Abstract summary: We adapt a linear conditioning method called FiLM for image segmentation tasks.
We observed an average Dice score increase of 5.1% on spinal cord tumor segmentation when incorporating the tumor type with FiLM.
- Score: 2.4932758829952095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical images are often accompanied by metadata describing the image
(vendor, acquisition parameters) and the patient (disease type or severity,
demographics, genomics). This metadata is usually disregarded by image
segmentation methods. In this work, we adapt a linear conditioning method
called FiLM (Feature-wise Linear Modulation) for image segmentation tasks. This
FiLM adaptation enables integrating metadata into segmentation models for
better performance. We observed an average Dice score increase of 5.1% on
spinal cord tumor segmentation when incorporating the tumor type with FiLM. The
metadata modulates the segmentation process through low-cost affine
transformations applied on feature maps which can be included in any neural
network's architecture. Additionally, we assess the relevance of segmentation
FiLM layers for tackling common challenges in medical imaging: training with a
limited or unbalanced number of annotated samples, multi-class training with
missing segmentations, and model adaptation to multiple tasks. Our results
demonstrated the following benefits of FiLM for segmentation: a FiLMed U-Net was
robust to missing labels and reached Dice scores up to 16.7% higher than a
single-task U-Net when trained with few labels. The code is open-source and available at
www.ivadomed.org.
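The low-cost affine modulation described in the abstract can be sketched as a minimal FiLM layer in NumPy. This is a generic illustration of Feature-wise Linear Modulation conditioned on a metadata one-hot vector (e.g. tumor type), not the authors' ivadomed implementation; all names, shapes, and weight initializations here are assumptions:

```python
import numpy as np

def film_generator(metadata, W_gamma, b_gamma, W_beta, b_beta):
    """Map a metadata vector (e.g. a one-hot tumor type) to per-channel
    scale (gamma) and shift (beta) parameters."""
    gamma = metadata @ W_gamma + b_gamma   # shape: (n_channels,)
    beta = metadata @ W_beta + b_beta      # shape: (n_channels,)
    return gamma, beta

def film_layer(feature_maps, gamma, beta):
    """Apply feature-wise linear modulation to a (C, H, W) feature tensor:
    channel c is scaled by gamma[c] and shifted by beta[c]."""
    return gamma[:, None, None] * feature_maps + beta[:, None, None]

# Example: condition 4 feature channels on one of 2 tumor types.
rng = np.random.default_rng(0)
n_meta, n_channels = 2, 4
W_gamma = rng.standard_normal((n_meta, n_channels))
b_gamma = np.ones(n_channels)    # bias so gamma starts near identity (~1)
W_beta = rng.standard_normal((n_meta, n_channels))
b_beta = np.zeros(n_channels)    # bias so beta starts near zero

tumor_type = np.array([1.0, 0.0])            # one-hot metadata vector
feats = rng.standard_normal((n_channels, 8, 8))
gamma, beta = film_generator(tumor_type, W_gamma, b_gamma, W_beta, b_beta)
modulated = film_layer(feats, gamma, beta)   # same shape as feats
```

In a FiLMed U-Net, such a layer would typically sit after convolution blocks at several depths of the network, with the generator's weights learned jointly with the rest of the model; since it only scales and shifts existing feature maps, it adds very few parameters.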
Related papers
- MGFI-Net: A Multi-Grained Feature Integration Network for Enhanced Medical Image Segmentation [0.3108011671896571]
A major challenge in medical image segmentation is achieving accurate delineation of regions of interest in the presence of noise, low contrast, or complex anatomical structures.
Existing segmentation models often neglect the integration of multi-grained information and fail to preserve edge details.
We propose a novel image semantic segmentation model called the Multi-Grained Feature Integration Network (MGFI-Net).
Our MGFI-Net is designed with two dedicated modules to tackle these issues.
arXiv Detail & Related papers (2025-02-19T15:24:34Z)
- I-MedSAM: Implicit Medical Image Segmentation with Segment Anything [24.04558900909617]
We propose I-MedSAM, which leverages the benefits of both continuous representations and SAM to obtain better cross-domain ability and accurate boundary delineation.
Our proposed method with only 1.6M trainable parameters outperforms existing methods including discrete and implicit methods.
arXiv Detail & Related papers (2023-11-28T00:43:52Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- CGAM: Click-Guided Attention Module for Interactive Pathology Image Segmentation via Backpropagating Refinement [8.590026259176806]
Tumor region segmentation is an essential task for the quantitative analysis of digital pathology.
Recent deep neural networks have shown state-of-the-art performance in various image-segmentation tasks.
We propose an interactive segmentation method that allows users to refine the output of deep neural networks through click-type user interactions.
arXiv Detail & Related papers (2023-07-03T13:45:24Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular Schwannoma and Cochlea with Data Augmentation and Model Ensemble [4.942327155020771]
In this paper, we propose an unsupervised learning framework to segment the vestibular schwannoma and the cochlea.
Our framework leverages information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain.
Our method is easy to build and produces promising segmentations, with mean Dice scores of 0.7930 and 0.7432 for VS and cochlea, respectively, on the validation set.
arXiv Detail & Related papers (2021-09-24T20:10:05Z)
- Anatomy-Constrained Contrastive Learning for Synthetic Segmentation without Ground-truth [8.513014699605499]
We developed an anatomy-constrained contrastive synthetic segmentation network (AccSeg-Net) to train a segmentation network for a target imaging modality.
We demonstrated successful applications on CBCT, MRI, and PET imaging data, and showed superior segmentation performances as compared to previous methods.
arXiv Detail & Related papers (2021-07-12T14:54:04Z)
- Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on-par on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z)
- Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.