Towards Measuring Domain Shift in Histopathological Stain Translation in an Unsupervised Manner
- URL: http://arxiv.org/abs/2205.04368v1
- Date: Mon, 9 May 2022 15:18:12 GMT
- Title: Towards Measuring Domain Shift in Histopathological Stain Translation in an Unsupervised Manner
- Authors: Zeeshan Nisar, Jelica Vasiljević, Pierre Gançarski, Thomas Lampert
- Abstract summary: This article demonstrates that the PixelCNN and domain shift metric can be used to detect and quantify domain shift in digital histopathology.
Findings pave the way for a mechanism to infer the average performance of a model (trained on source data) on unseen and unlabelled target data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain shift in digital histopathology can occur when different stains or
scanners are used, during stain translation, etc. A deep neural network trained
on source data may not generalise well to data that has undergone some domain
shift. An important step towards being robust to domain shift is the ability to
detect and measure it. This article demonstrates that the PixelCNN and a domain
shift metric can be used to detect and quantify domain shift in digital
histopathology, and that these measures correlate strongly with generalisation
performance. These findings pave the way for a mechanism to infer the average
performance of a model (trained on source data) on unseen and unlabelled target
data.
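As a rough illustration only (not the authors' implementation): a PixelCNN fitted on source-stain images assigns each image a likelihood, and a domain shift score can be defined by how much less likely target images are under that source model. The sketch below substitutes a per-pixel Gaussian density for the PixelCNN and uses the gap in mean negative log-likelihood as the metric; all function names, shapes, and the toy data are hypothetical.

```python
import numpy as np

def fit_density(source_patches):
    """Per-pixel Gaussian fitted on source patches (a crude stand-in for a
    PixelCNN trained on the source stain)."""
    mu = source_patches.mean(axis=0)
    sigma = source_patches.std(axis=0) + 1e-6
    return mu, sigma

def mean_nll(patches, mu, sigma):
    """Mean negative log-likelihood of a batch of patches under the density."""
    nll = 0.5 * (((patches - mu) / sigma) ** 2) + np.log(sigma * np.sqrt(2 * np.pi))
    return nll.reshape(len(patches), -1).sum(axis=1).mean()

def domain_shift_score(source_patches, target_patches):
    """One simple shift metric: how much higher the NLL of target patches is
    than that of source patches under the source-fitted density."""
    mu, sigma = fit_density(source_patches)
    return mean_nll(target_patches, mu, sigma) - mean_nll(source_patches, mu, sigma)

# Hypothetical usage on random stand-in patches (N x H x W x C, values in [0, 1]).
rng = np.random.default_rng(0)
source = rng.normal(0.5, 0.10, size=(256, 32, 32, 3))
target = rng.normal(0.6, 0.15, size=(256, 32, 32, 3))  # e.g. a different stain or scanner
print("domain shift score:", round(domain_shift_score(source, target), 1))
```

In this reading, a score near zero would suggest little shift, while larger values would indicate target data that the source-fitted model finds unlikely; the metric actually used in the paper may differ.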
Related papers
- S4DL: Shift-sensitive Spatial-Spectral Disentangling Learning for Hyperspectral Image Unsupervised Domain Adaptation [73.90209847296839]
Unsupervised domain adaptation techniques, extensively studied in hyperspectral image (HSI) classification, aim to exploit labeled source domain data together with unlabeled target domain data.
We propose a shift-sensitive spatial-spectral disentangling learning (S4DL) approach.
Experiments on several cross-scene HSI datasets consistently verified that S4DL outperforms state-of-the-art UDA methods.
arXiv Detail & Related papers (2024-08-11T15:58:24Z)
- Adversarial Learning for Feature Shift Detection and Correction [45.65548560695731]
Feature shifts can occur in many datasets, including in multi-sensor data, where some sensors are malfunctioning, or in structured data, where faulty standardization and data processing pipelines can lead to erroneous features.
In this work, we explore using the principles of adversarial learning, where the information from several discriminators trained to distinguish between two distributions is used to both detect the corrupted features and fix them in order to remove the distribution shift between datasets.
arXiv Detail & Related papers (2023-12-07T18:58:40Z)
- Dis-AE: Multi-domain & Multi-task Generalisation on Real-World Clinical Data [0.0]
We propose a novel disentangled autoencoder (Dis-AE) neural network architecture.
Dis-AE learns domain-invariant data representations for multi-label classification of medical measurements.
We evaluate the model's domain generalisation capabilities on synthetic datasets and full blood count (FBC) data from blood donors.
arXiv Detail & Related papers (2023-06-15T14:56:37Z)
- Domain shifts in dermoscopic skin cancer datasets: Evaluation of essential limitations for clinical translation [0.0]
We grouped publicly available images from the ISIC archive based on their metadata to generate meaningful domains.
We used multiple quantification measures to estimate the presence and intensity of domain shifts.
We observed that in most of our grouped domains, domain shifts in fact exist.
arXiv Detail & Related papers (2023-04-14T07:38:09Z)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Inter-Semantic Domain Adversarial in Histopathological Images [0.0]
In computer vision, data shift has proven to be a major barrier for safe and robust deep learning applications.
It is important to understand to what extent a model can be made robust against data shift using all available data.
arXiv Detail & Related papers (2022-01-22T12:55:59Z)
- Embracing the Disharmony in Heterogeneous Medical Data [12.739380441313022]
Heterogeneity in medical imaging data is often tackled, in the context of machine learning, using domain invariance.
This paper instead embraces the heterogeneity and treats it as a multi-task learning problem.
We show that this approach improves classification accuracy by 5-30 % across different datasets on the main classification tasks.
arXiv Detail & Related papers (2021-03-23T21:36:39Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance. A minimal code sketch of this gradual self-training loop is given after this entry.
arXiv Detail & Related papers (2020-02-26T08:59:40Z)
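For orientation, the following is a minimal sketch of the gradual self-training loop described in the entry above, under an assumed reading of the setting rather than the paper's exact algorithm: a classifier fitted on labelled source data pseudo-labels each successive unlabelled domain along the shift and is refit on its own hard predictions (one simple form of label sharpening). The drifting toy data and all names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gradual_self_training(X_source, y_source, intermediate_domains):
    """Fit on labelled source data, then walk the gradually shifting,
    unlabelled domains, pseudo-labelling each one and refitting on it."""
    model = LogisticRegression(max_iter=1000).fit(X_source, y_source)
    for X_t in intermediate_domains:
        pseudo = model.predict(X_t)  # hard pseudo-labels (label sharpening)
        model = LogisticRegression(max_iter=1000).fit(X_t, pseudo)
    return model

# Hypothetical toy task: two Gaussian classes whose means drift along the first feature.
rng = np.random.default_rng(0)

def make_domain(shift, n=500):
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2))
    X[:, 0] += (2.0 * y - 1.0) + shift  # class means sit at shift - 1 and shift + 1
    return X, y

X_src, y_src = make_domain(0.0)
unlabelled = [make_domain(s)[0] for s in (0.5, 1.0, 1.5, 2.0)]
model = gradual_self_training(X_src, y_src, unlabelled)

X_tgt, y_tgt = make_domain(2.0)
print("target accuracy:", round(model.score(X_tgt, y_tgt), 3))
```

Refitting on hard pseudo-labels at each step is what lets the decision boundary track a shift that would break a one-shot jump from source to target.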
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.