Style Curriculum Learning for Robust Medical Image Segmentation
- URL: http://arxiv.org/abs/2108.00402v1
- Date: Sun, 1 Aug 2021 08:56:24 GMT
- Title: Style Curriculum Learning for Robust Medical Image Segmentation
- Authors: Zhendong Liu, Van Manh, Xin Yang, Xiaoqiong Huang, Karim Lekadir,
Víctor Campello, Nishant Ravikumar, Alejandro F Frangi, Dong Ni
- Abstract summary: Deep segmentation models often degrade due to distribution shifts in image intensities between the training and test data sets.
We propose a novel framework to ensure robust segmentation in the presence of such distribution shifts.
- Score: 62.02435329931057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of deep segmentation models often degrades due to
distribution shifts in image intensities between the training and test data
sets. This is particularly pronounced in multi-centre studies involving data
acquired using multi-vendor scanners, with variations in acquisition protocols.
It is challenging to address this degradation because the shift is often not
known a priori and hence difficult to model. We propose a novel
framework to ensure robust segmentation in the presence of such distribution
shifts. Our contribution is three-fold. First, inspired by the spirit of
curriculum learning, we design a novel style curriculum to train the
segmentation models using an easy-to-hard mode. A style transfer model with
style fusion is employed to generate the curriculum samples. Gradually focusing
on complex and adversarial style samples can significantly boost the robustness
of the models. Second, instead of subjectively defining the curriculum
complexity, we adopt an automated gradient manipulation method to control the
hard and adversarial sample generation process. Third, we propose the Local
Gradient Sign strategy to aggregate the gradient locally and stabilise training
during gradient manipulation. The proposed framework can generalise to unknown
distribution without using any target data. Extensive experiments on the public
M&Ms Challenge dataset demonstrate that our proposed framework can generalise
deep models well to unknown distributions and achieve significant improvements
in segmentation accuracy.
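The abstract describes generating curriculum samples by fusing source and target styles with increasing strength. The paper's actual style-transfer model is not given here, so the sketch below uses an assumed AdaIN-like fusion of feature statistics, with `alpha` standing in for the automatically controlled curriculum difficulty; all names are illustrative.

```python
import numpy as np

def style_fuse(content_feat, style_mean, style_std, alpha):
    """AdaIN-like style fusion (an assumed stand-in for the paper's
    style-transfer model): re-normalize content features toward a
    target style. alpha sets the curriculum difficulty: 0 keeps the
    original style, 1 fully transfers the target style."""
    c_mean = content_feat.mean()
    c_std = content_feat.std() + 1e-8
    normalized = (content_feat - c_mean) / c_std
    # Interpolate source and target feature statistics.
    fused_mean = (1.0 - alpha) * c_mean + alpha * style_mean
    fused_std = (1.0 - alpha) * c_std + alpha * style_std
    return normalized * fused_std + fused_mean

# Easy-to-hard curriculum: train first on mild style shifts, then
# gradually increase alpha toward harder, more shifted samples.
rng = np.random.default_rng(0)
feat = rng.normal(loc=2.0, scale=0.5, size=4096)
curriculum = [style_fuse(feat, style_mean=5.0, style_std=2.0, alpha=a)
              for a in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

In the framework itself, the difficulty is not a hand-set schedule: gradient manipulation (with the Local Gradient Sign strategy) drives the hard and adversarial sample generation automatically.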
Related papers
- Integrated Image-Text Based on Semi-supervised Learning for Small Sample Instance Segmentation [1.3157419797035321]
The article proposes a novel small sample instance segmentation solution that maximizes the use of existing information.
First, it helps the model fully utilize unlabeled data by learning to generate pseudo labels, increasing the number of available samples.
Second, by integrating the features of text and image, more accurate classification results can be obtained.
arXiv Detail & Related papers (2024-10-21T14:44:08Z)
- Deep ContourFlow: Advancing Active Contours with Deep Learning [3.9948520633731026]
We present a framework for both unsupervised and one-shot approaches for image segmentation.
It is capable of capturing complex object boundaries without the need for extensive labeled training data.
This is particularly required in histology, a field facing a significant shortage of annotations.
arXiv Detail & Related papers (2024-07-15T13:12:34Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Semi-supervised Medical Image Segmentation Method Based on Cross-pseudo Labeling Leveraging Strong and Weak Data Augmentation Strategies [2.8246591681333024]
This paper proposes a semi-supervised model, DFCPS, which innovatively incorporates the Fixmatch concept.
Cross-pseudo-supervision is introduced, integrating consistency learning with self-training.
Our model consistently exhibits superior performance across all four subdivisions containing different proportions of unlabeled data.
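DFCPS builds on the FixMatch idea: predictions on a weakly augmented view become pseudo-labels for the strongly augmented view only when the model is confident enough. A minimal sketch of that confidence gate, assuming a standard softmax output and an illustrative threshold (the function name and values are hypothetical):

```python
import numpy as np

def fixmatch_pseudo_labels(weak_probs, threshold=0.95):
    """FixMatch-style confidence gating: keep a pseudo-label only
    when the weak-augmentation prediction is sufficiently confident;
    the loss on the strongly augmented view is masked otherwise."""
    confidence = weak_probs.max(axis=1)
    pseudo = weak_probs.argmax(axis=1)
    mask = confidence >= threshold
    return pseudo, mask

# Two unlabelled images: only the first prediction passes the gate.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25]])
labels, mask = fixmatch_pseudo_labels(probs)
# labels -> [0, 0]; mask -> [True, False]
```

Cross-pseudo-supervision then lets two such branches supervise each other with these gated labels, combining consistency learning with self-training.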
arXiv Detail & Related papers (2024-02-17T13:07:44Z)
- Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z)
- LatentDR: Improving Model Generalization Through Sample-Aware Latent Degradation and Restoration [22.871920291497094]
We propose a novel approach for distribution-aware latent augmentation.
Our approach first degrades the samples in the latent space, mapping them to augmented labels, and then restores the samples during training.
We show that our method can be flexibly adapted to long-tail recognition tasks, demonstrating its versatility in building more generalizable models.
arXiv Detail & Related papers (2023-08-28T14:08:42Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks [75.42002070547267]
We propose a self-evolution learning (SE) based mixup approach for data augmentation in text classification.
We introduce a novel instance-specific label smoothing approach, which linearly interpolates the model's output and the one-hot labels of the original samples to generate new soft labels for mixup.
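The interpolation described in that abstract can be sketched directly: the soft label is a convex combination of the one-hot target and the model's own predicted distribution, and the result is then mixed up as usual. The smoothing weight below is an assumed value, not one taken from the paper.

```python
import numpy as np

def instance_soft_label(model_probs, one_hot, smooth=0.2):
    """Instance-specific label smoothing as described: linearly
    interpolate the model's output distribution with the one-hot
    label (smooth is an assumed interpolation weight)."""
    return (1.0 - smooth) * one_hot + smooth * model_probs

def mixup(x1, y1, x2, y2, lam=0.5):
    """Standard mixup over inputs and the soft labels above."""
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

one_hot = np.array([1.0, 0.0, 0.0])
probs = np.array([0.6, 0.3, 0.1])
soft = instance_soft_label(probs, one_hot)  # [0.92, 0.06, 0.02]
```

Because both inputs are probability distributions, the interpolated soft label still sums to one, so it can replace the one-hot target in any cross-entropy-style loss.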
arXiv Detail & Related papers (2023-05-22T23:43:23Z)
- Semi-supervised Deep Learning for Image Classification with Distribution Mismatch: A Survey [1.5469452301122175]
Deep learning models rely on the abundance of labelled observations to train a prospective model.
Gathering labelled observations is expensive, which limits the practicality of deep learning models.
In many situations different unlabelled data sources might be available.
This raises the risk of a significant distribution mismatch between the labelled and unlabelled datasets.
arXiv Detail & Related papers (2022-03-01T02:46:00Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.