CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis
Lesion Segmentation
- URL: http://arxiv.org/abs/2307.04513v2
- Date: Fri, 15 Sep 2023 01:13:37 GMT
- Title: CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis
Lesion Segmentation
- Authors: Yicheng Wu, Zhonghua Wu, Hengcan Shi, Bjoern Picker, Winston Chong,
and Jianfei Cai
- Abstract summary: The CoactSeg model is designed as a unified model, with the same three inputs (the baseline, follow-up, and their longitudinal brain differences) and the same three outputs (the corresponding all-lesion and new-lesion predictions).
Experiments demonstrate that utilizing the heterogeneous data and the proposed longitudinal relation constraint can significantly improve the performance for both new-lesion and all-lesion segmentation tasks.
- Score: 27.816276215102
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: New lesion segmentation is essential to estimate the disease progression and
therapeutic effects during multiple sclerosis (MS) clinical treatments.
However, the expensive data acquisition and expert annotation restrict the
feasibility of applying large-scale deep learning models. Since
single-time-point samples with all-lesion labels are relatively easy to
collect, exploiting them to train deep models is highly desirable to improve
new lesion segmentation. Therefore, we propose a coaction segmentation
(CoactSeg) framework to exploit the heterogeneous data (i.e., new-lesion
annotated two-time-point data and all-lesion annotated single-time-point data)
for new MS lesion segmentation. The CoactSeg model is designed as a unified
model, with the same three inputs (the baseline, follow-up, and their
longitudinal brain differences) and the same three outputs (the corresponding
all-lesion and new-lesion predictions), no matter which type of heterogeneous
data is being used. Moreover, a simple and effective relation regularization is
proposed to enforce the longitudinal relations among the three outputs and thus
improve model learning. Extensive experiments demonstrate that utilizing
the heterogeneous data and the proposed longitudinal relation constraint can
significantly improve the performance for both new-lesion and all-lesion
segmentation tasks. Meanwhile, we also introduce an in-house MS-23v1 dataset,
including 38 Oceania single-time-point samples with all-lesion labels. Codes
and the dataset are released at https://github.com/ycwu1997/CoactSeg.
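To make the unified three-input/three-output design and the relation regularization concrete, here is a minimal PyTorch sketch. It is an illustration under stated assumptions, not the authors' released code: the toy network (ToyCoactNet), the MSE form of the relation loss, and the tensor shapes are hypothetical, and the constraint shown assumes that new lesions correspond to lesions present at follow-up but absent at baseline.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCoactNet(nn.Module):
    """Illustrative stand-in for a unified model with three inputs
    (baseline, follow-up, longitudinal difference) and three outputs
    (all-lesion maps for both time points and a new-lesion map)."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head_all_base = nn.Conv3d(ch, 1, 1)    # all-lesion, baseline
        self.head_all_follow = nn.Conv3d(ch, 1, 1)  # all-lesion, follow-up
        self.head_new = nn.Conv3d(ch, 1, 1)         # new-lesion

    def forward(self, baseline, follow_up):
        diff = follow_up - baseline  # longitudinal brain difference
        feats = self.encoder(torch.cat([baseline, follow_up, diff], dim=1))
        return (torch.sigmoid(self.head_all_base(feats)),
                torch.sigmoid(self.head_all_follow(feats)),
                torch.sigmoid(self.head_new(feats)))

def relation_loss(p_all_base, p_all_follow, p_new):
    """One plausible longitudinal relation constraint (hypothetical form):
    the new-lesion map should match lesions present at follow-up but
    absent at baseline."""
    target = torch.relu(p_all_follow - p_all_base)
    return F.mse_loss(p_new, target)

# Dummy usage on a small volume.
model = ToyCoactNet()
baseline = torch.rand(1, 1, 16, 32, 32)
follow_up = torch.rand(1, 1, 16, 32, 32)
p_base, p_follow, p_new = model(baseline, follow_up)
loss_rel = relation_loss(p_base, p_follow, p_new)

For single-time-point samples carrying only all-lesion labels, one way such a unified interface could be reused is to feed the same scan twice (so the longitudinal difference is zero); this, too, is an assumption for illustration rather than the paper's stated procedure.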
Related papers
- SegHeD: Segmentation of Heterogeneous Data for Multiple Sclerosis Lesions with Anatomical Constraints [1.498084483844508]
Machine learning models have demonstrated a great potential for automated MS lesion segmentation.
SegHeD is a novel multi-dataset multi-task segmentation model that can incorporate heterogeneous data as input.
SegHeD is assessed on five MS datasets and achieves a high performance in all, new, and vanishing-lesion segmentation.
arXiv Detail & Related papers (2024-10-02T17:21:43Z) - Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
This topic is widely studied in 3D point cloud segmentation due to the difficulty of annotating point clouds densely.
Until recently, pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches can suffer heavily from noise and variation in unlabelled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z) - The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease
detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z) - Neural Collapse Terminus: A Unified Solution for Class Incremental
Learning and Its Variants [166.916517335816]
In this paper, we offer a unified solution to the misalignment dilemma in the three tasks.
We propose neural collapse terminus that is a fixed structure with the maximal equiangular inter-class separation for the whole label space.
Our method holds the neural collapse optimality in an incremental fashion regardless of data imbalance or data scarcity.
arXiv Detail & Related papers (2023-08-03T13:09:59Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - Longitudinal detection of new MS lesions using Deep Learning [0.0]
We describe a deep-learning-based pipeline addressing the task of detecting and segmenting new MS lesions.
First, we propose to use transfer-learning from a model trained on a segmentation task using single time-points.
Second, we propose a data synthesis strategy to generate realistic longitudinal time-points with new lesions.
arXiv Detail & Related papers (2022-06-16T16:09:04Z) - Rapid model transfer for medical image segmentation via iterative
human-in-the-loop update: from labelled public to unlabelled clinical
datasets for multi-organ segmentation in CT [22.411929051477912]
This paper presents a novel and generic human-in-the-loop scheme for efficiently transferring a segmentation model from a small-scale labelled dataset to a larger-scale unlabelled dataset for multi-organ segmentation in CT.
The results show that our scheme not only improves performance by 19.7% on Dice, but also reduces the manual labelling time from 13.87 min to 1.51 min per CT volume during model transfer, demonstrating its clinical usefulness and promising potential.
arXiv Detail & Related papers (2022-04-13T08:22:42Z) - Cohort Bias Adaptation in Aggregated Datasets for Lesion Segmentation [0.8466401378239363]
We propose a generalized affine conditioning framework to learn and account for cohort biases across multi-source datasets.
We show that our cohort bias adaptation method improves performance of the network on pooled datasets.
arXiv Detail & Related papers (2021-08-02T08:32:57Z) - Towards Robust Partially Supervised Multi-Structure Medical Image
Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z) - Learning from Multiple Datasets with Heterogeneous and Partial Labels
for Universal Lesion Detection in CT [25.351709433029896]
We build a simple yet effective lesion detection framework named Lesion ENSemble (LENS).
LENS can efficiently learn from multiple heterogeneous lesion datasets in a multi-task fashion.
We train our framework on four public lesion datasets and evaluate it on 800 manually-labeled sub-volumes in DeepLesion.
arXiv Detail & Related papers (2020-09-05T17:55:21Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)