Predicting Scores of Medical Imaging Segmentation Methods with
Meta-Learning
- URL: http://arxiv.org/abs/2005.08869v1
- Date: Fri, 8 May 2020 07:47:52 GMT
- Title: Predicting Scores of Medical Imaging Segmentation Methods with
Meta-Learning
- Authors: Tom van Sonsbeek and Veronika Cheplygina
- Abstract summary: We investigate meta-learning for segmentation across ten datasets of different organs and modalities.
We use support vector regression and deep neural networks to learn the relationship between the meta-features and prior model performance.
These results demonstrate the potential of meta-learning in medical imaging.
- Score: 0.30458514384586394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has led to state-of-the-art results for many medical imaging
tasks, such as segmentation of different anatomical structures. With the
increased numbers of deep learning publications and openly available code, the
approach to choosing a model for a new task becomes more complicated, while
time and (computational) resources are limited. A possible solution to choosing
a model efficiently is meta-learning, a learning method in which prior
performance of a model is used to predict the performance for new tasks. We
investigate meta-learning for segmentation across ten datasets of different
organs and modalities. We propose four ways to represent each dataset by
meta-features: one based on statistical features of the images and three
based on deep learning features. We use support vector regression and deep
neural networks to learn the relationship between the meta-features and prior
model performance. On three external test datasets these methods give Dice
scores within 0.10 of the true performance. These results demonstrate the
potential of meta-learning in medical imaging.
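The pipeline the abstract describes, representing each dataset by meta-features and regressing prior Dice scores onto them, can be sketched as follows. This is a minimal illustration: the specific statistical meta-features, the toy datasets, and the SVR hyperparameters are assumptions for demonstration, not the paper's exact choices.

```python
import numpy as np
from sklearn.svm import SVR

def statistical_meta_features(images):
    """Summarize an image dataset by simple intensity statistics.

    `images` has shape (n_images, H, W); the four statistics below are
    illustrative stand-ins for the paper's statistical meta-features.
    """
    flat = images.reshape(len(images), -1)
    return np.array([
        flat.mean(),                              # global mean intensity
        flat.std(),                               # global intensity spread
        np.mean(flat.std(axis=1)),                # average per-image variability
        np.mean(np.abs(np.diff(flat, axis=1))),   # rough texture proxy
    ])

# Toy setup: ten "datasets" of random images with known prior Dice scores.
rng = np.random.default_rng(0)
datasets = [rng.random((20, 32, 32)) * s for s in np.linspace(0.5, 1.5, 10)]
dice_scores = np.linspace(0.6, 0.9, 10)  # prior model performance per dataset

# Learn the meta-feature -> performance mapping with support vector regression.
X = np.stack([statistical_meta_features(d) for d in datasets])
reg = SVR(kernel="rbf", C=1.0).fit(X, dice_scores)

# Predict the Dice score a model would achieve on a new, unseen dataset.
new_dataset = rng.random((20, 32, 32))
predicted_dice = reg.predict(statistical_meta_features(new_dataset)[None])[0]
print(f"predicted Dice: {predicted_dice:.3f}")
```

In the paper this regression is evaluated on held-out external datasets; the toy example above only shows the mechanics of fitting and querying the meta-learner.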
Related papers
- MAP: Domain Generalization via Meta-Learning on Anatomy-Consistent
Pseudo-Modalities [12.194439938007672]
We propose Meta learning on Anatomy-consistent Pseudo-modalities (MAP).
MAP improves model generalizability by learning structural features.
We evaluate our model on seven public datasets of various retinal imaging modalities.
arXiv Detail & Related papers (2023-09-03T22:56:22Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation:
Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-source a strong MedISeg repository in which each component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based
Action Recognition [88.34182299496074]
Action labels are only available on a source dataset, but unavailable on a target dataset in the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
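A permutation-based pretext task of the kind summarized above can be sketched as follows. The segment count, the pseudo-labeling scheme, and the toy sequence shape are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def make_permutation_task(sequence, n_segments=4, rng=None):
    """Split a temporal sequence into segments, shuffle them, and return
    the shuffled sequence plus the permutation as a pseudo-label.

    A classifier trained to recover the permutation must learn temporal
    structure, which provides the self-supervision signal.
    """
    rng = rng or np.random.default_rng()
    segments = np.array_split(sequence, n_segments)
    perm = rng.permutation(n_segments)
    shuffled = np.concatenate([segments[i] for i in perm])
    return shuffled, perm

# Toy example: a "skeleton sequence" of 12 frames with 3 joint coordinates.
seq = np.arange(36).reshape(12, 3)
shuffled, perm = make_permutation_task(seq, rng=np.random.default_rng(0))
# `perm` is the classification target; `shuffled` is the network input.
```

The same recipe applies to permuting human body parts instead of temporal segments: split along the joint axis rather than the frame axis.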
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- MetaHistoSeg: A Python Framework for Meta Learning in Histopathology
Image Segmentation [3.738450972771192]
We introduce MetaHistoSeg - a Python framework that implements unique scenarios in both meta learning and instance-based transfer learning.
We also curate a histopathology meta dataset - a benchmark dataset for training and validating models on out-of-distribution performance across a range of cancer types.
In experiments we showcase the usage of MetaHistoSeg with the meta dataset and find that both meta-learning and instance-based transfer learning deliver comparable results on average.
arXiv Detail & Related papers (2021-09-29T23:05:04Z)
- Learning to Segment Human Body Parts with Synthetically Trained Deep
Convolutional Networks [58.0240970093372]
This paper presents a new framework for human body part segmentation based on Deep Convolutional Neural Networks trained using only synthetic data.
The proposed approach achieves cutting-edge results without the need to train the models on real annotated data of human body parts.
arXiv Detail & Related papers (2021-02-02T12:26:50Z)
- Learning Abstract Task Representations [0.6690874707758511]
We propose a method to induce new abstract meta-features as latent variables in a deep neural network.
We demonstrate our methodology using a deep neural network as a feature extractor.
arXiv Detail & Related papers (2021-01-19T20:31:02Z)
- Diminishing Uncertainty within the Training Pool: Active Learning for
Medical Image Segmentation [6.3858225352615285]
We explore active learning for the task of segmentation of medical imaging data sets.
We propose three new strategies for active learning: increasing the frequency of uncertain data to bias the training data set, using mutual information among the input images as a regularizer, and adapting the Dice log-likelihood for Stein variational gradient descent (SVGD).
The results indicate an improvement in terms of data reduction: full accuracy is achieved while using only 22.69% and 48.85% of the available data on the two datasets, respectively.
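The first of the three strategies, biasing the training pool toward uncertain samples, can be sketched roughly as follows. The entropy-based uncertainty measure, the top fraction, and the duplication factor are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def predictive_entropy(probs):
    """Per-sample uncertainty: mean binary entropy of pixel-wise
    foreground probabilities, shape (n_samples, n_pixels)."""
    eps = 1e-12
    p = np.clip(probs, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p)).mean(axis=1)

def bias_training_pool(indices, probs, top_frac=0.25, repeat=3):
    """Duplicate the most uncertain samples so they are drawn more often.

    `top_frac` and `repeat` are illustrative; how strongly the pool is
    biased toward uncertain data is a tunable choice.
    """
    ent = predictive_entropy(probs)
    k = max(1, int(len(indices) * top_frac))
    uncertain = np.asarray(indices)[np.argsort(ent)[-k:]]
    return np.concatenate([indices, np.tile(uncertain, repeat - 1)])

# Toy example: 8 samples, each with 100 pixel-wise foreground probabilities.
rng = np.random.default_rng(1)
probs = rng.random((8, 100))
pool = bias_training_pool(np.arange(8), probs)
print(len(pool))  # 12: the original 8 plus the top-2 uncertain samples twice more
```

A segmentation model trained on `pool` then sees the uncertain samples more frequently, which is the biasing effect the summary describes.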
arXiv Detail & Related papers (2021-01-07T01:55:48Z)
- Improving Calibration and Out-of-Distribution Detection in Medical Image
Segmentation with Convolutional Neural Networks [8.219843232619551]
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show not only that a single CNN learns to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often yields more accurate and better-calibrated predictions.
arXiv Detail & Related papers (2020-04-12T23:42:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.