Meta ordinal weighting net for improving lung nodule classification
- URL: http://arxiv.org/abs/2102.00456v1
- Date: Sun, 31 Jan 2021 14:00:20 GMT
- Title: Meta ordinal weighting net for improving lung nodule classification
- Authors: Yiming Lei, Hongming Shan, Junping Zhang
- Abstract summary: We propose a Meta Ordinal Weighting Network (MOW-Net) to align each training sample with a meta ordinal set (MOS) containing a few samples from all classes.
During the training process, the MOW-Net learns a mapping from samples in MOS to the corresponding class-specific weight.
The experimental results demonstrate that the MOW-Net achieves better accuracy than the state-of-the-art ordinal regression methods.
- Score: 19.61244172891081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The progression of lung cancer implies an intrinsic ordinal relationship among
lung nodules at different stages: from benign, to unsure, to malignant. This
problem can be addressed by ordinal regression, which sits between
classification and regression because its labels are ordered. However, existing
convolutional neural network (CNN)-based ordinal regression methods only focus
on modifying the classification head based on a randomly sampled mini-batch of
data, ignoring the ordinal relationship residing in the data itself. In this
paper, we propose a Meta Ordinal Weighting Network (MOW-Net) to explicitly
align each training sample with a meta ordinal set (MOS) containing a few
samples from all classes. During the training process, the MOW-Net learns a
mapping from samples in MOS to the corresponding class-specific weight. In
addition, we further propose a meta cross-entropy (MCE) loss to optimize the
network in a meta-learning scheme. The experimental results demonstrate that
the MOW-Net achieves better accuracy than the state-of-the-art ordinal
regression methods, especially for the unsure class.
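As an illustration of the weighting idea described in the abstract, the toy sketch below assigns each training sample a per-class weight from its similarity to a small meta ordinal set (one exemplar per class) and uses that weight to scale the cross-entropy loss. This is a hypothetical simplification, not the authors' learned MOW-Net mapping: the distance-based similarity, the `meta_ordinal_weight` function, and the toy data are all illustrative assumptions.

```python
import math

def softmax(zs):
    """Numerically stable softmax over a list of scores."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def meta_ordinal_weight(x, meta_set, tau=1.0):
    """Toy stand-in for the learned weight mapping: weight a sample by its
    negative-distance softmax similarity to one exemplar per class."""
    sims = [-math.dist(x, m) / tau for m in meta_set]
    return softmax(sims)  # one weight per ordinal class

def weighted_cross_entropy(logits, label, class_weights):
    """Cross-entropy loss scaled by the sample's class-specific weight."""
    p = softmax(logits)
    return -class_weights[label] * math.log(p[label] + 1e-12)

# Toy setup: 3 ordinal classes (benign, unsure, malignant), 2-D features,
# and a meta ordinal set containing one exemplar per class.
meta_set = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
x = (0.9, 1.1)  # a training sample lying closest to the "unsure" exemplar
weights = meta_ordinal_weight(x, meta_set)
loss = weighted_cross_entropy([0.2, 1.5, 0.1], label=1, class_weights=weights)
```

In the actual MOW-Net the mapping from meta-set samples to class-specific weights is learned and optimized with the meta cross-entropy loss in a meta-learning scheme; the fixed similarity above merely conveys the shape of the computation.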
Related papers
- Adaptive Margin Global Classifier for Exemplar-Free Class-Incremental Learning [3.4069627091757178]
Existing methods mainly focus on handling biased learning.
We introduce a Distribution-Based Global Classifier (DBGC) to avoid bias factors present in existing methods, such as data imbalance and sampling.
More importantly, the compromised distributions of old classes are simulated via a simple operation, variance (VE).
This loss is proven equivalent to an Adaptive Margin Softmax Cross Entropy (AMarX).
arXiv Detail & Related papers (2024-09-20T07:07:23Z)
- Meta-GCN: A Dynamically Weighted Loss Minimization Method for Dealing with the Data Imbalance in Graph Neural Networks [5.285761906707429]
We propose a meta-learning algorithm, named Meta-GCN, for adaptively learning the example weights.
We have shown that Meta-GCN outperforms state-of-the-art frameworks and other baselines in terms of accuracy.
arXiv Detail & Related papers (2024-06-24T18:59:24Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Theoretical Characterization of the Generalization Performance of Overfitted Meta-Learning [70.52689048213398]
This paper studies the performance of overfitted meta-learning under a linear regression model with Gaussian features.
We find new and interesting properties that do not exist in single-task linear regression.
Our analysis suggests that benign overfitting is more significant and easier to observe when the noise and the diversity/fluctuation of the ground truth of each training task are large.
arXiv Detail & Related papers (2023-04-09T20:36:13Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard-sample mining.
Our method significantly improves retrieval performance over the state-of-the-art methods by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47%, even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Deep Ordinal Regression with Label Diversity [19.89482062012177]
We propose that using several discrete data representations simultaneously can improve neural network learning.
Our approach is end-to-end differentiable and can be added as a simple extension to conventional learning methods.
arXiv Detail & Related papers (2020-06-29T08:23:43Z)
- A generic ensemble based deep convolutional neural network for semi-supervised medical image segmentation [7.141405427125369]
We propose a generic semi-supervised learning framework for image segmentation based on a deep convolutional neural network (DCNN)
Our method is able to significantly improve beyond fully supervised model learning by incorporating unlabeled data.
arXiv Detail & Related papers (2020-04-16T23:41:50Z)
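Several entries above, including the MOW-Net abstract and "Deep Ordinal Regression with Label Diversity", hinge on ordered class labels. One common discrete representation for such labels is the cumulative binary encoding, where class k out of K becomes K-1 threshold indicators; this is a standard ordinal regression technique, not necessarily the exact scheme used in either paper.

```python
def ordinal_binary_targets(label, num_classes):
    """Encode ordinal class `label` (0-based) as K-1 cumulative binary
    targets, where target i answers: is the label greater than i?"""
    return [1 if label > i else 0 for i in range(num_classes - 1)]

# benign=0, unsure=1, malignant=2 with K=3 ordinal classes:
print(ordinal_binary_targets(0, 3))  # -> [0, 0]
print(ordinal_binary_targets(1, 3))  # -> [1, 0]
print(ordinal_binary_targets(2, 3))  # -> [1, 1]
```

Because each indicator is a binary classification sharing the same ordering, mistakes between adjacent classes (e.g. benign vs. unsure) cost less than mistakes between distant ones, which is the property ordinal methods exploit.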
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.