Adversarial Regression Learning for Bone Age Estimation
- URL: http://arxiv.org/abs/2103.06149v1
- Date: Wed, 10 Mar 2021 15:58:26 GMT
- Title: Adversarial Regression Learning for Bone Age Estimation
- Authors: Youshan Zhang and Brian D. Davison
- Abstract summary: We propose an adversarial regression learning network (ARLNet) for bone age estimation.
Specifically, we first extract bone features from a fine-tuned Inception V3 neural network.
We then propose adversarial regression loss and feature reconstruction loss to guarantee the transition from training data to test data.
- Score: 6.942003070153651
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Estimation of bone age from hand radiographs is essential to determine
skeletal age in diagnosing endocrine disorders and depicting the growth status
of children. However, existing automatic methods apply their models directly to
test images without accounting for the discrepancy between training samples and
test samples, which limits their generalization ability. In this paper,
we propose an adversarial regression learning network (ARLNet) for bone age
estimation. Specifically, we first extract bone features from a fine-tuned
Inception V3 neural network and propose regression percentage loss for
training. To reduce the discrepancy between training and test data, we then
propose adversarial regression loss and feature reconstruction loss to
guarantee the transition from training data to test data and vice versa,
preserving invariant features from both training and test data. Experimental
results show that the proposed model outperforms state-of-the-art methods.
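The abstract names a "regression percentage loss" for training but does not define it. One plausible reading, sketched below purely for illustration, is a mean absolute error expressed as a percentage of the true bone age (the function name and formulation are assumptions, not the paper's code):

```python
import numpy as np

def regression_percentage_loss(y_pred, y_true, eps=1e-8):
    """Toy stand-in for the paper's regression percentage loss:
    mean absolute error as a fraction of the true bone age.
    (The exact formulation is not given in the abstract.)"""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true) / (np.abs(y_true) + eps)))

# Example: predicted vs. ground-truth bone ages in months
pred = [120.0, 60.0, 96.0]
true = [100.0, 60.0, 120.0]
loss = regression_percentage_loss(pred, true)  # (0.2 + 0.0 + 0.2) / 3
```

A percentage formulation has the practical property of penalizing a 12-month error more heavily for an infant than for an adolescent.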
Related papers
- When No-Rejection Learning is Consistent for Regression with Rejection [11.244583592648443]
We study a no-rejection learning strategy that uses all the data to learn the prediction.
arXiv Detail & Related papers (2023-07-06T11:43:22Z)
- Semantic Latent Space Regression of Diffusion Autoencoders for Vertebral Fracture Grading [72.45699658852304]
This paper proposes a novel approach to train a generative Diffusion Autoencoder model as an unsupervised feature extractor.
We model fracture grading as a continuous regression, which is more reflective of the smooth progression of fractures.
Importantly, the generative nature of our method allows us to visualize different grades of a given vertebra, providing interpretability and insight into the features that contribute to automated grading.
arXiv Detail & Related papers (2023-03-21T17:16:01Z)
- Reconstructing Training Data from Model Gradient, Provably [68.21082086264555]
We reconstruct the training samples from a single gradient query at a randomly chosen parameter value.
As a provable attack that reveals sensitive training data, our findings suggest potential severe threats to privacy.
arXiv Detail & Related papers (2022-12-07T15:32:22Z)
- Anatomy-guided domain adaptation for 3D in-bed human pose estimation [62.3463429269385]
3D human pose estimation is a key component of clinical monitoring systems.
We present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain.
Our method consistently outperforms various state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-11-22T11:34:51Z)
- Using machine learning on new feature sets extracted from 3D models of broken animal bones to classify fragments according to break agent [53.796331564067835]
We present a new approach to fracture pattern analysis aimed at distinguishing bone fragments resulting from hominin bone breakage and those produced by carnivores.
This new method uses 3D models of fragmentary bone to extract a much richer dataset that is more transparent and replicable than feature sets previously used in fracture pattern analysis.
Supervised machine learning algorithms classify bone fragments according to agent of breakage with a mean accuracy of 77% across tests.
arXiv Detail & Related papers (2022-05-20T20:16:21Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces the variance error into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
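The summary does not specify how the variance error enters the objective. A minimal sketch, assuming the simplest possible form (mean squared error plus a penalty on the variance of per-sample errors; the weight `lam` and the exact combination are assumptions, not the paper's formulation):

```python
import numpy as np

def variance_aware_loss(y_pred, y_true, lam=0.1):
    """Hypothetical variance-aware objective: ordinary MSE plus a
    penalty on the variance of the per-sample squared errors.
    Illustrative only; VAT's exact form is not in the abstract."""
    errs = (np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)) ** 2
    return float(errs.mean() + lam * errs.var())

# Two samples with unequal errors incur a variance penalty on top of the MSE
loss_vat = variance_aware_loss([1.0, 2.0], [0.0, 0.0], lam=0.1)
```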
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- Ridge Regression Neural Network for Pediatric Bone Age Assessment [1.1501261942096426]
Delayed or increased bone age is a serious concern for pediatricians.
We introduce a unified deep learning framework for bone age assessment using instance segmentation and ridge regression.
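The ridge regression stage of such a framework reduces to a standard closed-form solve. The snippet below is the generic textbook form, w = (XᵀX + αI)⁻¹Xᵀy, not the paper's implementation; the feature matrix here is a made-up toy:

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: solve (X^T X + alpha*I) w = X^T y.
    alpha > 0 regularizes the normal equations, keeping the solve
    well-conditioned even for correlated features."""
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Toy design matrix (e.g. two extracted bone features) and targets
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = ridge_fit(X, y, alpha=0.1)
```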
arXiv Detail & Related papers (2021-04-15T21:38:22Z)
- Unsupervised neural adaptation model based on optimal transport for spoken language identification [54.96267179988487]
Due to the mismatch of statistical distributions of acoustic speech between training and testing sets, the performance of spoken language identification (SLID) could be drastically degraded.
We propose an unsupervised neural adaptation model to deal with the distribution mismatch problem for SLID.
arXiv Detail & Related papers (2020-12-24T07:37:19Z)
- Bayesian Sampling Bias Correction: Training with the Right Loss Function [0.0]
We derive a family of loss functions to train models in the presence of sampling bias.
Examples include when the prevalence of a pathology differs from its sampling rate in the training dataset, or when a machine learning practitioner rebalances their training dataset.
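The common core of such corrections is importance weighting: reweight each sample's loss by the ratio of the deployment-time class probability to the training-time one. The sketch below is that generic idea only, not the paper's derived family of losses; the distributions are invented for illustration:

```python
import numpy as np

def bias_corrected_loss(per_sample_loss, labels, p_train, p_true):
    """Reweight per-sample losses by p_true(y) / p_train(y) so the
    expected loss matches the deployment distribution rather than the
    biased training sample. Illustrative sketch only."""
    labels = np.asarray(labels)
    w = np.array([p_true[y] / p_train[y] for y in labels])
    return float(np.average(per_sample_loss, weights=w))

# Example: a pathology is 50% of the training set but only 10% in the clinic
p_train = {0: 0.5, 1: 0.5}
p_true = {0: 0.9, 1: 0.1}
losses = np.array([0.2, 0.2, 1.0, 1.0])  # per-sample losses
labels = np.array([0, 0, 1, 1])          # 0 = healthy, 1 = pathology
corrected = bias_corrected_loss(losses, labels, p_train, p_true)
```

Here the unweighted mean loss is 0.6, but after correction the (rarer in deployment) pathology samples contribute far less, pulling the estimate down.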
arXiv Detail & Related papers (2020-06-24T15:10:43Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation [1.433758865948252]
We propose a new formalism of knowledge distillation for regression problems.
First, we propose a new loss function, teacher outlier loss rejection, which rejects outliers in training samples using teacher model predictions.
By considering the multi-task network, training of the feature extraction of student models becomes more effective.
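One plausible reading of "teacher outlier loss rejection", sketched below under stated assumptions (the margin `m` and the rejection rule are guesses, not the paper's formulation): samples where the teacher's own prediction error exceeds a margin are treated as outliers and excluded from the student's regression loss.

```python
import numpy as np

def teacher_outlier_rejection_loss(student_pred, teacher_pred, y_true, m=1.0):
    """Hypothetical sketch: drop samples where the teacher's absolute
    error exceeds margin m, then compute the student's MSE on the rest.
    (The paper's exact formulation is not reproduced here.)"""
    student_pred = np.asarray(student_pred, dtype=float)
    teacher_pred = np.asarray(teacher_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    keep = np.abs(teacher_pred - y_true) <= m  # teacher trusts these samples
    if not keep.any():
        return 0.0
    return float(np.mean((student_pred[keep] - y_true[keep]) ** 2))

preds_s = [1.5, 2.5, 10.0]
preds_t = [1.1, 2.0, 50.0]   # teacher is far off on the third sample
targets = [1.0, 2.0, 3.0]
kd_loss = teacher_outlier_rejection_loss(preds_s, preds_t, targets)
```

The third sample is rejected because the teacher itself misses it badly, so the student is never penalized on a likely-mislabeled point.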
arXiv Detail & Related papers (2020-02-28T08:46:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.