UDAMA: Unsupervised Domain Adaptation through Multi-discriminator
Adversarial Training with Noisy Labels Improves Cardio-fitness Prediction
- URL: http://arxiv.org/abs/2307.16651v1
- Date: Mon, 31 Jul 2023 13:31:53 GMT
- Title: UDAMA: Unsupervised Domain Adaptation through Multi-discriminator
Adversarial Training with Noisy Labels Improves Cardio-fitness Prediction
- Authors: Yu Wu, Dimitris Spathis, Hong Jia, Ignacio Perez-Pozuelo, Tomas
Gonzales, Soren Brage, Nicholas Wareham, Cecilia Mascolo
- Abstract summary: We introduce UDAMA, a method with two key components: Unsupervised Domain Adaptation and Multidiscriminator Adversarial Training.
In particular, we showcase the practical potential of UDAMA by applying it to Cardio-respiratory fitness (CRF) prediction.
Our results show promising performance by alleviating distribution shifts in various label shift settings.
- Score: 16.26599832125242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models have shown great promise in various healthcare
monitoring applications. However, most healthcare datasets with high-quality
(gold-standard) labels are small-scale, as directly collecting ground truth is
often costly and time-consuming. As a result, models developed and validated on
small-scale datasets often suffer from overfitting and do not generalize well
to unseen scenarios. At the same time, large amounts of imprecise
(silver-standard) labeled data, annotated by approximate methods with the help
of modern wearables and in the absence of ground truth validation, are starting
to emerge. However, due to measurement differences, this data displays
significant label distribution shifts, which motivates the use of domain
adaptation. To this end, we introduce UDAMA, a method with two key components:
Unsupervised Domain Adaptation and Multidiscriminator Adversarial Training,
where we pre-train on the silver-standard data and employ adversarial
adaptation with the gold-standard data along with two domain discriminators. In
particular, we showcase the practical potential of UDAMA by applying it to
Cardio-respiratory fitness (CRF) prediction. CRF is a crucial determinant of
metabolic disease and mortality, and it presents labels with various levels of
noise (gold- and silver-standard), making it challenging to establish an accurate
prediction model. Our results show promising performance by alleviating
distribution shifts in various label shift settings. Additionally, by using
data from two free-living cohort studies (Fenland and BBVS), we show that UDAMA
consistently outperforms competitive transfer learning and state-of-the-art
domain adaptation models by up to 12%, paving the way for leveraging
noisy labeled data to improve fitness estimation at scale.
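The abstract describes a two-stage recipe: pre-train a predictor on the plentiful silver-standard labels, then adversarially adapt it with the scarce gold-standard labels so that a domain discriminator cannot tell the two feature distributions apart. The following is a minimal, illustrative NumPy sketch of that recipe on synthetic stand-in data, not the paper's implementation: it collapses UDAMA's two discriminators into a single logistic discriminator for brevity, and all variable names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins for wearable features: silver-standard labels are
# plentiful but noisy, gold-standard labels are scarce but clean.
d_in, d_feat = 8, 4
beta = rng.normal(size=d_in)
X_s = rng.normal(size=(200, d_in))
y_s = X_s @ beta + rng.normal(scale=1.0, size=200)   # noisy silver labels
X_g = rng.normal(size=(40, d_in))
y_g = X_g @ beta + rng.normal(scale=0.1, size=40)    # clean gold labels

W = rng.normal(scale=0.1, size=(d_in, d_feat))  # shared feature extractor
w = np.zeros(d_feat)                            # fitness (regression) head
v = np.zeros(d_feat)                            # domain discriminator head

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def step(X, y, dom, lr=1e-2, lam=0.1):
    """One update: fit the regressor, fit the discriminator, and push the
    shared features to fool the discriminator (gradient reversal)."""
    global W, w, v
    n = len(y)
    Z = X @ W                      # shared features
    err = Z @ w - y                # regression residual
    p = sigmoid(Z @ v)             # P(domain = gold | features)
    gw = Z.T @ err / n             # regression-loss gradient for the head
    gv = Z.T @ (p - dom) / n       # BCE gradient for the discriminator
    # Feature extractor: minimize regression loss, MAXIMIZE discriminator
    # loss (the minus sign is the gradient reversal).
    gW = X.T @ (np.outer(err, w) - lam * np.outer(p - dom, v)) / n
    w -= lr * gw
    v -= lr * gv
    W -= lr * gW
    return float(np.mean(err ** 2))

# Stage 1: pre-train on silver-standard data only (adversary disabled).
for _ in range(200):
    loss = step(X_s, y_s, dom=np.zeros(len(y_s)), lam=0.0)

# Stage 2: adversarial adaptation on mixed gold (dom=1) and silver (dom=0).
X_all = np.vstack([X_s, X_g])
y_all = np.concatenate([y_s, y_g])
dom = np.concatenate([np.zeros(len(y_s)), np.ones(len(y_g))])
for _ in range(200):
    loss = step(X_all, y_all, dom)

pred_gold = (X_g @ W) @ w  # predictions on the gold-standard cohort
```

The design choice mirrored here is that only the shared extractor `W` receives the reversed discriminator gradient, so the regression and discriminator heads each optimize their own loss while the features drift toward domain invariance.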
Related papers
- Weakly supervised deep learning model with size constraint for prostate cancer detection in multiparametric MRI and generalization to unseen domains [0.90668179713299]
We show that the model achieves on-par performance with strong fully supervised baseline models.
We also observe a performance decrease for both fully supervised and weakly supervised models when tested on unseen data domains.
arXiv Detail & Related papers (2024-11-04T12:24:33Z)
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition [50.61991746981703]
Current state-of-the-art LTSSL approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
arXiv Detail & Related papers (2024-10-08T15:06:10Z)
- Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation [16.657929958093824]
Test-time adaptation is an approach to adjust models to a new data distribution during inference.
Test-time batch normalization is a simple and popular method that achieved compelling performance on domain shift benchmarks.
We propose to tackle this challenge by only selectively adapting channels in a deep network, minimizing drastic adaptation that is sensitive to label shifts.
arXiv Detail & Related papers (2024-02-07T15:41:01Z)
- Calibrated Adaptive Teacher for Domain Adaptive Intelligent Fault Diagnosis [7.88657961743755]
Unsupervised domain adaptation (UDA) deals with the scenario where labeled data are available in a source domain, and only unlabeled data are available in a target domain.
We propose a novel UDA method called Calibrated Adaptive Teacher (CAT), where we propose to calibrate the predictions of the teacher network throughout the self-training process.
arXiv Detail & Related papers (2023-12-05T15:19:29Z)
- Turning Silver into Gold: Domain Adaptation with Noisy Labels for Wearable Cardio-Respiratory Fitness Prediction [16.26599832125242]
We propose UDAMA, a novel model with two key components: Unsupervised Domain Adaptation and Multi-discriminator Adversarial training.
We validate our framework on the challenging task of predicting lab-measured maximal oxygen consumption.
Our experiments show that the proposed framework achieves the best performance of corr = 0.665 ± 0.04, paving the way for accurate fitness estimation at scale.
arXiv Detail & Related papers (2022-11-20T14:55:48Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER)
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Contrastive Domain Adaptation for Early Misinformation Detection: A Case Study on COVID-19 [8.828396559882954]
Early misinformation often demonstrates both conditional and label shifts against existing misinformation data.
We propose contrastive adaptation network for early misinformation detection (CANMD)
Results suggest CANMD can effectively adapt misinformation detection systems to the unseen COVID-19 target domain.
arXiv Detail & Related papers (2022-08-20T02:09:35Z)
- VisDA-2021 Competition Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Unsupervised neural adaptation model based on optimal transport for spoken language identification [54.96267179988487]
Due to the mismatch of statistical distributions of acoustic speech between training and testing sets, the performance of spoken language identification (SLID) could be drastically degraded.
We propose an unsupervised neural adaptation model to deal with the distribution mismatch problem for SLID.
arXiv Detail & Related papers (2020-12-24T07:37:19Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.