Domain Generalization by Learning from Privileged Medical Imaging Information
- URL: http://arxiv.org/abs/2311.05861v1
- Date: Fri, 10 Nov 2023 04:09:52 GMT
- Title: Domain Generalization by Learning from Privileged Medical Imaging Information
- Authors: Steven Korevaar, Ruwan Tennakoon, Ricky O'Brien, Dwarikanath Mahapatra, Alireza Bab-Hadiashar
- Abstract summary: We show that using some privileged information such as tumor shape or location leads to stronger domain generalization ability than current state-of-the-art techniques.
This paper provides a strong starting point for using privileged information in other medical problems requiring generalization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning the ability to generalize knowledge between similar contexts is
particularly important in medical imaging as data distributions can shift
substantially from one hospital to another, or even from one machine to
another. To strengthen generalization, most state-of-the-art techniques inject
knowledge of the data distribution shifts by enforcing constraints on learned
features or regularizing parameters. We offer an alternative approach: Learning
from Privileged Medical Imaging Information (LPMII). We show that using some
privileged information such as tumor shape or location leads to stronger domain
generalization ability than current state-of-the-art techniques. This paper
demonstrates that by using privileged information to predict the severity of
intra-layer retinal fluid in optical coherence tomography scans, the
classification accuracy of a deep learning model operating on
out-of-distribution data improves from $0.911$ to $0.934$. This paper provides
a strong starting point for using privileged information in other medical
problems requiring generalization.
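The abstract does not spell out the LPMII training recipe, but learning using privileged information is commonly realized as generalized distillation: a teacher model is fit on the privileged features (e.g. tumor shape or location), and a student that sees only the regular input is trained against a mix of hard labels and the teacher's softened predictions. Below is a minimal numpy sketch on synthetic data; the logistic models, the temperature of 2, and the 50/50 mixing weight are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x_priv is privileged information (e.g. tumor shape/location
# descriptors) available only at training time; x is the noisy regular view.
n = 200
x_priv = rng.normal(size=(n, 2))
y = (x_priv[:, 0] + x_priv[:, 1] > 0).astype(float)  # labels driven by privileged info
x = x_priv + rng.normal(scale=1.0, size=(n, 2))      # regular input = privileged + noise

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(features, targets, lr=0.5, steps=500):
    """Plain logistic regression fit by full-batch gradient descent."""
    w, b = np.zeros(features.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(features @ w + b)
        grad = p - targets                      # works for hard or soft targets
        w -= lr * features.T @ grad / len(targets)
        b -= lr * grad.mean()
    return w, b

# 1. Teacher: trained on the privileged features only.
wt, bt = train_logreg(x_priv, y)
soft = sigmoid((x_priv @ wt + bt) / 2.0)        # softened predictions, temperature T=2

# 2. Student: sees only the regular input; trained on a mix of hard and soft labels.
lam = 0.5
ws, bs = train_logreg(x, lam * y + (1 - lam) * soft)

# 3. Baseline: same inputs, hard labels only, no privileged signal.
wb, bb = train_logreg(x, y)

student_acc = ((sigmoid(x @ ws + bs) > 0.5) == y).mean()
baseline_acc = ((sigmoid(x @ wb + bb) > 0.5) == y).mean()
print(f"distilled student acc: {student_acc:.3f}, baseline acc: {baseline_acc:.3f}")
```

At test time only the student and the regular input are needed, which is what allows the privileged signal to stay train-time only.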
Related papers
- A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis [48.84443450990355]
Deep networks have achieved broad success in analyzing natural images, but when applied to medical scans they often fail in unexpected situations.
We investigate this challenge and focus on model sensitivity to domain shifts, such as data sampled from different hospitals or data confounded by demographic variables such as sex and race, in the context of chest X-rays and skin lesion images.
Taking inspiration from medical training, we propose giving deep networks a prior grounded in explicit medical knowledge communicated in natural language.
arXiv Detail & Related papers (2024-05-23T17:55:02Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
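MLIP's full objective combines several contrastive terms; the common core of such image-text methods is a symmetric InfoNCE loss that pulls paired image and text embeddings together and pushes mismatched pairs apart. A toy numpy sketch follows; the batch size, embedding dimension, and temperature are arbitrary illustrative choices, not MLIP's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(img_emb, txt_emb, temperature=0.1):
    """Symmetric InfoNCE: the i-th image should be most similar to the i-th text."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (batch, batch) similarity matrix

    def xent(lg):
        # cross-entropy with the diagonal (the true pairing) as the correct class
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.diag(logp).mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Aligned pairs should score a much lower loss than mismatched (shuffled) pairs.
img = rng.normal(size=(16, 8))
aligned = info_nce(img, img + 0.05 * rng.normal(size=(16, 8)))
shuffled = info_nce(img, img[rng.permutation(16)])
print(f"aligned loss: {aligned:.3f}, shuffled loss: {shuffled:.3f}")
```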
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- More From Less: Self-Supervised Knowledge Distillation for Routine Histopathology Data [3.93181912653522]
We show that it is possible to distil knowledge during training from information-dense data into models which only require information-sparse data for inference.
This improves downstream classification accuracy on information-sparse data, making it comparable with the fully-supervised baseline.
This approach enables the design of models which require only routine images, but contain insights from state-of-the-art data, allowing better use of the available resources.
arXiv Detail & Related papers (2023-03-19T13:41:59Z)
- From Labels to Priors in Capsule Endoscopy: A Prior Guided Approach for Improving Generalization with Few Labels [4.9136996406481135]
We propose using freely available domain knowledge as priors to learn more robust and generalizable representations.
We experimentally show that domain priors can benefit representations by acting in proxy of labels.
Our method performs better than (or closes the gap with) the state-of-the-art in the domain.
arXiv Detail & Related papers (2022-06-10T12:35:49Z)
- Evaluation of Complexity Measures for Deep Learning Generalization in Medical Image Analysis [77.34726150561087]
PAC-Bayes flatness-based and path norm-based measures produce the most consistent explanation for the combination of models and data.
We also investigate the use of multi-task classification and segmentation approach for breast images.
arXiv Detail & Related papers (2021-03-04T20:58:22Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- Medical Image Harmonization Using Deep Learning Based Canonical Mapping: Toward Robust and Generalizable Learning in Imaging [4.396671464565882]
We propose a new paradigm in which data from a diverse range of acquisition conditions are "harmonized" to a common reference domain.
We test this approach on two example problems, namely MRI-based brain age prediction and classification of schizophrenia.
arXiv Detail & Related papers (2020-10-11T22:01:37Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on the same dataset it was trained on starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
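The adversarial strategy above is typically implemented with a gradient-reversal setup: a domain classifier learns to identify the source from the shared features, while the feature extractor receives that gradient with its sign flipped, so it learns to discard source-specific cues. A small self-contained numpy sketch with linear maps and two synthetic "hospitals" follows; the architecture, learning rate, and reversal weight lam are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "hospitals": the same diagnostic signal (dim 0),
# plus a scanner offset (dim 1) that identifies the source.
n = 200
def make_domain(offset):
    y = rng.integers(0, 2, n).astype(float)
    x = np.zeros((n, 2))
    x[:, 0] = 2 * y - 1 + rng.normal(scale=0.5, size=n)  # task signal
    x[:, 1] = offset + rng.normal(scale=0.5, size=n)     # source artefact
    return x, y

x0, y0 = make_domain(-1.0)
x1, y1 = make_domain(+1.0)
x = np.vstack([x0, x1])
y = np.concatenate([y0, y1])
d = np.concatenate([np.zeros(n), np.ones(n)])   # domain (source) labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

k = 2                                    # feature dimension
W = rng.normal(scale=0.1, size=(2, k))   # shared linear feature extractor
u, c = np.zeros(k), 0.0                  # task head
v, e = np.zeros(k), 0.0                  # domain head (the adversary)
lr, lam = 0.1, 1.0                       # lam scales the reversed gradient

for _ in range(2000):
    h = x @ W
    g_task = (sigmoid(h @ u + c) - y) / len(y)   # dL_task / dlogit
    g_dom = (sigmoid(h @ v + e) - d) / len(y)    # dL_dom / dlogit
    # Both heads descend their own loss.
    u -= lr * (h.T @ g_task); c -= lr * g_task.sum()
    v -= lr * (h.T @ g_dom);  e -= lr * g_dom.sum()
    # The extractor descends the task loss but ASCENDS the domain loss:
    # this sign flip is the gradient reversal.
    W -= lr * x.T @ (np.outer(g_task, u) - lam * np.outer(g_dom, v))

h = x @ W
task_acc = ((sigmoid(h @ u + c) > 0.5) == y).mean()
dom_acc = ((sigmoid(h @ v + e) > 0.5) == d).mean()
print(f"task accuracy: {task_acc:.3f}, domain accuracy: {dom_acc:.3f}")
```

Success looks like high task accuracy with domain accuracy pushed toward chance, i.e. the shared features no longer reveal which hospital a sample came from.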