Dis-AE: Multi-domain & Multi-task Generalisation on Real-World Clinical
Data
- URL: http://arxiv.org/abs/2306.09177v1
- Date: Thu, 15 Jun 2023 14:56:37 GMT
- Title: Dis-AE: Multi-domain & Multi-task Generalisation on Real-World Clinical
Data
- Authors: Daniel Kreuter, Samuel Tull, Julian Gilbey, Jacobus Preller,
BloodCounts! Consortium, John A.D. Aston, James H.F. Rudd, Suthesh
Sivapalaratnam, Carola-Bibiane Schönlieb, Nicholas Gleadall, Michael
Roberts
- Abstract summary: We propose a novel disentangled autoencoder (Dis-AE) neural network architecture.
Dis-AE learns domain-invariant data representations for multi-label classification of medical measurements.
We evaluate the model's domain generalisation capabilities on synthetic datasets and full blood count (FBC) data from blood donors.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Clinical data is often affected by clinically irrelevant factors such as
discrepancies between measurement devices or differing processing methods
between sites. In the field of machine learning (ML), these factors are known
as domains and the distribution differences they cause in the data are known as
domain shifts. ML models trained using data from one domain often perform
poorly when applied to data from another domain, potentially leading to wrong
predictions. As such, developing machine learning models that can generalise
well across multiple domains is a challenging yet essential task in the
successful application of ML in clinical practice. In this paper, we propose a
novel disentangled autoencoder (Dis-AE) neural network architecture that can
learn domain-invariant data representations for multi-label classification of
medical measurements even when the data is influenced by multiple interacting
domain shifts at once. The model utilises adversarial training to produce data
representations from which the domain can no longer be determined. We evaluate
the model's domain generalisation capabilities on synthetic datasets and full
blood count (FBC) data from blood donors as well as primary and secondary care
patients, showing that Dis-AE improves model generalisation on multiple domains
simultaneously while preserving clinically relevant information.
Related papers
- DIGIC: Domain Generalizable Imitation Learning by Causal Discovery [69.13526582209165]
Causality has been combined with machine learning to produce robust representations for domain generalization.
We make a different attempt by leveraging the demonstration data distribution to discover causal features for a domain generalizable policy.
We design a novel framework, called DIGIC, to identify the causal features by finding the direct cause of the expert action from the demonstration data distribution.
arXiv Detail & Related papers (2024-02-29T07:09:01Z) - Generalization in medical AI: a perspective on developing scalable
models [3.003979691986621]
Many prestigious journals now require reporting results both on the local hidden test set and on external datasets.
This is because of the variability encountered in intended use and the specificities of different hospital cultures.
We establish a hierarchical three-level scale system reflecting the generalization level of a medical AI algorithm.
arXiv Detail & Related papers (2023-11-09T14:54:28Z) - Maximizing Model Generalization for Machine Condition Monitoring with
Self-Supervised Learning and Federated Learning [4.214064911004321]
Deep Learning can diagnose faults and assess machine health from raw condition monitoring data without manually designed statistical features.
Traditional supervised learning may struggle to learn compact, discriminative representations that generalize to unseen target domains.
This study proposes focusing on maximizing the feature generality on the source domain and applying transfer learning (TL) via weight transfer to copy the model to the target domain.
arXiv Detail & Related papers (2023-04-27T17:57:54Z) - Domain shifts in dermoscopic skin cancer datasets: Evaluation of
essential limitations for clinical translation [0.0]
We grouped publicly available images from the ISIC archive based on their metadata to generate meaningful domains.
We used multiple quantification measures to estimate the presence and intensity of domain shifts.
We observed that in most of our grouped domains, domain shifts in fact exist.
arXiv Detail & Related papers (2023-04-14T07:38:09Z) - Domain Generalization with Adversarial Intensity Attack for Medical
Image Segmentation [27.49427483473792]
In real-world scenarios, it is common for models to encounter data from new and different domains to which they were not exposed during training.
Domain generalization (DG) is a promising direction as it enables models to handle data from previously unseen domains.
We introduce a novel DG method called Adversarial Intensity Attack (AdverIN), which leverages adversarial training to generate training data with an infinite number of styles.
arXiv Detail & Related papers (2023-04-05T19:40:51Z) - Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D^3G to learn domain-specific models.
Our results show that D^3G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z) - Multi-Scale Multi-Target Domain Adaptation for Angle Closure
Classification [50.658613573816254]
We propose a novel Multi-scale Multi-target Domain Adversarial Network (M2DAN) for angle closure classification.
Based on these domain-invariant features at different scales, the deep model trained on the source domain is able to classify angle closure on multiple target domains.
arXiv Detail & Related papers (2022-08-25T15:27:55Z) - Embracing the Disharmony in Heterogeneous Medical Data [12.739380441313022]
Heterogeneity in medical imaging data is often tackled, in the context of machine learning, using domain invariance.
This paper instead embraces the heterogeneity and treats it as a multi-task learning problem.
We show that this approach improves classification accuracy by 5-30% across different datasets on the main classification tasks.
arXiv Detail & Related papers (2021-03-23T21:36:39Z) - Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z) - Multi-Domain Adversarial Feature Generalization for Person
Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to unseen camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z) - Domain Generalization for Medical Imaging Classification with
Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.