Tyche: Stochastic In-Context Learning for Medical Image Segmentation
- URL: http://arxiv.org/abs/2401.13650v1
- Date: Wed, 24 Jan 2024 18:35:55 GMT
- Title: Tyche: Stochastic In-Context Learning for Medical Image Segmentation
- Authors: Marianne Rakic, Hallee E. Wong, Jose Javier Gonzalez Ortiz, Beth
Cimini, John Guttag and Adrian V. Dalca
- Abstract summary: Tyche is a model that uses a context set to generate predictions for previously unseen tasks without the need to retrain.
We introduce a novel convolution block architecture that enables interactions among predictions.
When combined with appropriate model design and loss functions, Tyche can predict a set of plausible diverse segmentation candidates for new or unseen medical images and segmentation tasks without the need to retrain.
- Score: 3.7997415514096926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing learning-based solutions to medical image segmentation have two
important shortcomings. First, for most new segmentation tasks, a new model has
to be trained or fine-tuned. This requires extensive resources and machine
learning expertise, and is therefore often infeasible for medical researchers
and clinicians. Second, most existing segmentation methods produce a single
deterministic segmentation mask for a given image. In practice, however, there
is often considerable uncertainty about what constitutes the correct
segmentation, and different expert annotators will often segment the same image
differently. We tackle both of these problems with Tyche, a model that uses a
context set to generate stochastic predictions for previously unseen tasks
without the need to retrain. Tyche differs from other in-context segmentation
methods in two important ways. (1) We introduce a novel convolution block
architecture that enables interactions among predictions. (2) We introduce
in-context test-time augmentation, a new mechanism to provide prediction
stochasticity. When combined with appropriate model design and loss functions,
Tyche can predict a set of plausible diverse segmentation candidates for new or
unseen medical images and segmentation tasks without the need to retrain.
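The abstract's key mechanism, in-context test-time augmentation, can be illustrated with a minimal sketch: augment the target image and the context set independently, run an in-context segmentation model on each augmented view, invert the target augmentation, and collect the resulting candidates. This is not Tyche's actual architecture; the `in_context_model` below is a hypothetical stand-in (a simple intensity threshold learned from the context masks), and the augmentation pool is reduced to self-inverse flips for clarity.

```python
import numpy as np

def identity(x):
    return x

def flip_h(x):
    # Horizontal flip; conveniently its own inverse.
    return x[:, ::-1]

# Self-inverse augmentations keep the invert step trivial in this sketch.
AUGMENTATIONS = [identity, flip_h]

def in_context_model(target, context_images, context_masks):
    """Hypothetical stand-in for a trained in-context segmentation network:
    threshold the target at the mean foreground intensity of the context."""
    fg = np.concatenate([img[msk > 0] for img, msk in zip(context_images, context_masks)])
    thresh = fg.mean() if fg.size else 0.5
    return (target >= thresh).astype(np.uint8)

def stochastic_predict(target, context_images, context_masks, k=4, seed=0):
    """Generate k diverse candidates via in-context test-time augmentation."""
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(k):
        # Randomly augment the target, and each context pair independently.
        target_aug = AUGMENTATIONS[rng.integers(len(AUGMENTATIONS))]
        ctx_augs = [AUGMENTATIONS[rng.integers(len(AUGMENTATIONS))] for _ in context_images]
        imgs = [a(i) for a, i in zip(ctx_augs, context_images)]
        msks = [a(m) for a, m in zip(ctx_augs, context_masks)]
        pred = in_context_model(target_aug(target), imgs, msks)
        # Map the prediction back to the original target frame.
        candidates.append(target_aug(pred))
    return candidates
```

Each call yields a set of plausible binary masks whose diversity comes entirely from the randomized augmentations of the target and context, mirroring the stochasticity mechanism the abstract describes.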
Related papers
- Multi-rater Prompting for Ambiguous Medical Image Segmentation [12.452584289825849]
Multi-rater annotations commonly occur when medical images are independently annotated by multiple experts (raters).
We propose a multi-rater prompt-based approach to address these two challenges altogether.
arXiv Detail & Related papers (2024-04-11T09:13:50Z) - GMISeg: General Medical Image Segmentation without Re-Training [6.6467547151592505]
Deep learning models often struggle to be generalisable to unknown tasks involving new anatomical structures, labels, or shapes.
Here I developed a general model that can solve unknown medical image segmentation tasks without requiring additional training.
I evaluated the performance of the proposed method on medical image datasets with different imaging modalities and anatomical structures.
arXiv Detail & Related papers (2023-11-21T11:33:15Z) - UniverSeg: Universal Medical Image Segmentation [16.19510845046103]
We present UniverSeg, a method for solving unseen medical segmentation tasks without additional training.
We have gathered and standardized a collection of 53 open-access medical segmentation datasets with over 22,000 scans.
We demonstrate that UniverSeg substantially outperforms several related methods on unseen tasks.
arXiv Detail & Related papers (2023-04-12T19:36:46Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - Learning Discriminative Representation via Metric Learning for
Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework specially to help the feature extractor learn to extract more discriminative feature representations.
Experiments on three medical image datasets show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
arXiv Detail & Related papers (2022-07-14T14:57:01Z) - Recurrent Mask Refinement for Few-Shot Medical Image Segmentation [15.775057485500348]
We propose a new framework for few-shot medical image segmentation based on prototypical networks.
Our innovation lies in the design of two key modules: 1) a context relation encoder (CRE) that uses correlation to capture local relation features between foreground and background regions.
Experiments on two abdomen CT datasets and an abdomen MRI dataset show the proposed method obtains substantial improvement over the state-of-the-art methods.
arXiv Detail & Related papers (2021-08-02T04:06:12Z) - Cascaded Robust Learning at Imperfect Labels for Chest X-ray
Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from the peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z) - A Few Guidelines for Incremental Few-Shot Segmentation [57.34237650765928]
Given a pretrained segmentation model and few images containing novel classes, our goal is to learn to segment novel classes while retaining the ability to segment previously seen ones.
We show that the main problems of end-to-end training in this scenario are:
i) the drift of the batch-normalization statistics toward novel classes, which we can fix with batch renormalization, and
ii) the forgetting of old classes, which we can fix with regularization strategies.
arXiv Detail & Related papers (2020-11-30T20:45:56Z) - CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z) - Semi-supervised few-shot learning for medical image segmentation [21.349705243254423]
Recent attempts to alleviate the need for large annotated datasets have developed training strategies under the few-shot learning paradigm.
We propose a novel few-shot learning framework for semantic segmentation, where unlabeled images are also made available at each episode.
We show that including unlabeled surrogate tasks in the episodic training leads to more powerful feature representations.
arXiv Detail & Related papers (2020-03-18T20:37:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.