Input Data Adaptive Learning (IDAL) for Sub-acute Ischemic Stroke Lesion
Segmentation
- URL: http://arxiv.org/abs/2403.07428v1
- Date: Tue, 12 Mar 2024 09:11:02 GMT
- Title: Input Data Adaptive Learning (IDAL) for Sub-acute Ischemic Stroke Lesion
Segmentation
- Authors: Michael Götz, Christian Weber, Christoph Kolb, Klaus Maier-Hein
- Abstract summary: This paper presents a method for learning from a large training base by adaptively selecting optimal training samples for given input data.
The proposed algorithm leads to a significant improvement of the classification accuracy.
- Score: 0.11976120407592658
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In machine learning, larger databases are usually associated with
higher classification accuracy due to better generalization. This
generalization may, however, lead to non-optimal classifiers in some medical
applications with highly variable expressions of pathologies. This paper
presents a method for learning from a large training base by adaptively
selecting optimal training samples for the given input data. In this way,
heterogeneous databases are supported two-fold: first, the ability to deal
with sparsely annotated data allows quick inclusion of new data sets; second,
an input-dependent classifier is trained. The proposed approach is evaluated
on the SISS challenge. The proposed algorithm leads to a significant
improvement in classification accuracy.
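The core idea, selecting an input-dependent training subset before fitting a classifier, can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the Euclidean similarity measure, the subset size `k`, and the toy data are all assumptions.

```python
import numpy as np

def select_training_subset(X_train, y_train, x_query, k=100):
    """Return the k training samples closest to the query feature
    vector (Euclidean distance is an assumed similarity measure)."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    return X_train[idx], y_train[idx]

# toy data: 1000 training samples with 5 features each
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))
y_train = rng.integers(0, 2, size=1000)
x_query = rng.normal(size=5)

X_sub, y_sub = select_training_subset(X_train, y_train, x_query, k=100)
# an input-dependent classifier would now be trained on (X_sub, y_sub)
```

In the paper's setting the "query" would be derived from the image to be segmented, so each new input gets a classifier trained on the most relevant portion of the heterogeneous database.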
Related papers
- DALSA: Domain Adaptation for Supervised Learning From Sparsely Annotated
MR Images [2.352695945685781]
We propose a new method that employs transfer learning techniques to correct sampling selection errors introduced by sparse annotations during supervised learning for automated tumor segmentation.
The proposed method derives high-quality classifiers for the different tissue classes from sparse and unambiguous annotations.
Compared to training on fully labeled data, we reduced the time for labeling and training by factors greater than 70 and 180, respectively, without sacrificing accuracy.
arXiv Detail & Related papers (2024-03-12T09:17:21Z)
- Adaptive Variance Thresholding: A Novel Approach to Improve Existing Deep Transfer Vision Models and Advance Automatic Knee-Joint Osteoarthritis Classification [0.11249583407496219]
Knee-Joint Osteoarthritis (KOA) is a prevalent cause of global disability and inherently complex to diagnose.
One promising classification avenue involves applying deep learning methods.
This study proposes a novel paradigm for improving post-training specialized classifiers.
arXiv Detail & Related papers (2023-11-10T00:17:07Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Mutual Information Learned Classifiers: an Information-theoretic Viewpoint of Training Deep Learning Classification Systems [9.660129425150926]
Cross-entropy loss can easily lead to models that exhibit severe overfitting behavior.
In this paper, we prove that the existing cross entropy loss minimization for training DNN classifiers essentially learns the conditional entropy of the underlying data distribution.
We propose a mutual information learning framework where we train DNN classifiers via learning the mutual information between the label and input.
arXiv Detail & Related papers (2022-10-03T15:09:19Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies perform better than classification based on observed data alone and maintain high accuracy even as the missing-data ratio increases.
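The first strategy, estimating the covariance from $k$-nearest-neighbor-imputed data, can be sketched in plain NumPy. This is a simplified stand-in for the paper's pipeline: the toy data, `k`, and the distance normalization are assumptions.

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs in each row with the mean of the k nearest rows,
    measuring distance only over jointly observed coordinates."""
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i in range(len(X)):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        d = np.full(len(X), np.inf)
        for j in range(len(X)):
            if j == i:
                continue
            both = obs & ~np.isnan(X[j])
            if both.any():
                # normalize by the number of shared coordinates
                d[j] = np.linalg.norm(X[i, both] - X[j, both]) / np.sqrt(both.sum())
        nn = np.argsort(d)[:k]
        for c in np.where(miss)[0]:
            vals = X[nn, c]
            vals = vals[~np.isnan(vals)]
            filled[i, c] = vals.mean() if len(vals) else np.nanmean(X[:, c])
    return filled

# toy EEG-style feature matrix with one missing entry
X = np.array([[1.0, 2.0, np.nan],
              [1.1, 1.9, 3.0],
              [0.9, 2.1, 2.8],
              [5.0, 5.0, 5.0]])
Xf = knn_impute(X, k=2)
C = np.cov(Xf, rowvar=False)  # covariance from the imputed data
```

The resulting covariance matrices would then feed the Riemannian classifier; the second (EM-based) strategy avoids imputation entirely by maximizing the observed-data likelihood.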
arXiv Detail & Related papers (2021-10-19T14:24:50Z)
- Categorical EHR Imputation with Generative Adversarial Nets [11.171712535005357]
We propose a simple and yet effective approach that is based on previous work on GANs for data imputation.
We show that our imputation approach largely improves the prediction accuracy, compared to more traditional data imputation approaches.
arXiv Detail & Related papers (2021-08-03T18:50:26Z)
- Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ unlabeled sets (U-sets) for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
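The surrogate-task construction can be illustrated with toy data. The class priors, set sizes, and one-dimensional features below are all assumptions, and the step that recovers the true binary classifier from the surrogate predictions is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# three unlabeled sets (m = 3 >= 2), each mixing the two true classes
# with a different, unknown class prior
priors = [0.2, 0.5, 0.8]
u_sets = []
for p in priors:
    y_latent = rng.random(200) < p               # never observed in training
    x = rng.normal(loc=y_latent.astype(float), scale=1.0)
    u_sets.append(x)

# surrogate set classification: label each sample with its SET INDEX
X = np.concatenate(u_sets)[:, None]
s = np.concatenate([np.full(len(u), i) for i, u in enumerate(u_sets)])
# a multi-class classifier trained on (X, s) estimates p(set | x); per the
# paper, the binary classifier can then be recovered from these estimates
```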
arXiv Detail & Related papers (2021-02-01T07:36:38Z)
- A generic ensemble based deep convolutional neural network for semi-supervised medical image segmentation [7.141405427125369]
We propose a generic semi-supervised learning framework for image segmentation based on a deep convolutional neural network (DCNN).
Our method is able to significantly improve beyond fully supervised model learning by incorporating unlabeled data.
arXiv Detail & Related papers (2020-04-16T23:41:50Z)
- Unshuffling Data for Improved Generalization [65.57124325257409]
Generalization beyond the training distribution is a core challenge in machine learning.
We show that partitioning the data into well-chosen, non-i.i.d. subsets treated as multiple training environments can guide the learning of models with better out-of-distribution generalization.
arXiv Detail & Related papers (2020-02-27T03:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.