Robust White Matter Hyperintensity Segmentation on Unseen Domain
- URL: http://arxiv.org/abs/2102.06650v1
- Date: Fri, 12 Feb 2021 17:44:11 GMT
- Authors: Xingchen Zhao, Anthony Sicilia, Davneet Minhas, Erin O'Connor, Howard
Aizenstein, William Klunk, Dana Tudorascu, Seong Jae Hwang
- Abstract summary: We consider the challenging case of Domain Generalization (DG) where we train a model without any knowledge about the testing distribution.
We focus on the task of white matter hyperintensity (WMH) prediction using the multi-site WMH Challenge dataset and our local in-house dataset.
We identify how two mechanically distinct DG approaches, namely domain adversarial learning and mix-up, have theoretical synergy.
- Score: 5.490618192331097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Typical machine learning frameworks rely heavily on the underlying
assumption that training and test data follow the same distribution. In medical
imaging, where datasets are increasingly acquired from multiple sites or
scanners, this identical-distribution assumption often fails to hold due to
systematic variability induced by site- or scanner-dependent factors. Therefore, we cannot
simply expect a model trained on a given dataset to consistently work well, or
generalize, on a dataset from another distribution. In this work, we address
this problem, investigating the application of machine learning models to
unseen medical imaging data. Specifically, we consider the challenging case of
Domain Generalization (DG) where we train a model without any knowledge about
the testing distribution. That is, we train on samples from a set of
distributions (sources) and test on samples from a new, unseen distribution
(target). We focus on the task of white matter hyperintensity (WMH) prediction
using the multi-site WMH Segmentation Challenge dataset and our local in-house
dataset. We identify how two mechanically distinct DG approaches, namely domain
adversarial learning and mix-up, have theoretical synergy. Then, we show
drastic improvements of WMH prediction on an unseen target domain.
Related papers
- Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing [55.791818510796645]
We aim to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data.
Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge.
We adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain.
arXiv Detail & Related papers (2024-10-08T12:26:48Z)
- Class-Balancing Diffusion Models [57.38599989220613]
Class-Balancing Diffusion Models (CBDM) are trained with a distribution adjustment regularizer as a solution.
We benchmark generation on the CIFAR100/CIFAR100LT datasets and show outstanding performance on the downstream recognition task.
arXiv Detail & Related papers (2023-04-30T20:00:14Z)
- Domain Generalization with Adversarial Intensity Attack for Medical Image Segmentation [27.49427483473792]
In real-world scenarios, it is common for models to encounter data from new and different domains to which they were not exposed during training.
Domain generalization (DG) is a promising direction as it enables models to handle data from previously unseen domains.
We introduce a novel DG method called Adversarial Intensity Attack (AdverIN), which leverages adversarial training to generate training data with an infinite number of styles.
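The "infinite number of styles" idea can be approximated by randomly perturbing image intensities during training. The sketch below applies a random linear intensity transform as a stand-in for style variation; it is an assumption-laden simplification, not AdverIN's adversarial procedure, and the parameter ranges are illustrative.

```python
import numpy as np

def random_intensity_transform(img, rng,
                               scale_range=(0.8, 1.2),
                               shift_range=(-0.1, 0.1)):
    """Simulate scanner/style variation with a random linear map a*img + b.

    Hypothetical augmentation: ranges are illustrative, and the output is
    clipped back to the assumed [0, 1] intensity range.
    """
    a = rng.uniform(*scale_range)   # random contrast scaling
    b = rng.uniform(*shift_range)   # random brightness shift
    return np.clip(a * img + b, 0.0, 1.0)
```

An adversarial variant would instead choose `a` and `b` to maximize the segmentation loss, which is the direction AdverIN's training takes.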
arXiv Detail & Related papers (2023-04-05T19:40:51Z)
- Domain Adaptation and Generalization on Functional Medical Images: A Systematic Survey [2.990508892017587]
Machine learning algorithms have revolutionized different fields, including natural language processing, computer vision, signal processing, and medical data processing.
Despite the excellent capabilities of machine learning algorithms, the performance of these models deteriorates when there is a shift between the training and test data distributions.
This paper provides the first systematic review of domain generalization (DG) and domain adaptation (DA) on functional brain signals.
arXiv Detail & Related papers (2022-12-04T21:52:38Z)
- Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z)
- A Systematic Evaluation of Domain Adaptation in Facial Expression Recognition [0.0]
This paper provides a systematic evaluation of domain adaptation in facial expression recognition.
We use state-of-the-art transfer learning techniques and six commonly-used facial expression datasets.
We find the sobering result that the accuracy of transfer learning is not high and varies idiosyncratically with the target dataset.
arXiv Detail & Related papers (2021-06-29T14:41:19Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains where the problem data is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- WILDS: A Benchmark of in-the-Wild Distribution Shifts [157.53410583509924]
Distribution shifts can substantially degrade the accuracy of machine learning systems deployed in the wild.
We present WILDS, a curated collection of 8 benchmark datasets that reflect a diverse range of distribution shifts.
We show that standard training results in substantially lower out-of-distribution performance than in-distribution performance.
arXiv Detail & Related papers (2020-12-14T11:14:56Z)
- Deep Mining External Imperfect Data for Chest X-ray Disease Screening [57.40329813850719]
We argue that incorporating an external CXR dataset leads to imperfect training data, which raises challenges.
We formulate the multi-label disease classification problem as weighted independent binary tasks according to the categories.
Our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability.
arXiv Detail & Related papers (2020-06-06T06:48:40Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Learn to Expect the Unexpected: Probably Approximately Correct Domain Generalization [38.345670899258515]
Domain generalization is the problem of machine learning when the training data and the test data come from different data domains.
We present a simple theoretical model of learning to generalize across domains in which there is a meta-distribution over data distributions.
arXiv Detail & Related papers (2020-02-13T17:37:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.