Learning Cross-domain Generalizable Features by Representation
Disentanglement
- URL: http://arxiv.org/abs/2003.00321v1
- Date: Sat, 29 Feb 2020 17:53:16 GMT
- Title: Learning Cross-domain Generalizable Features by Representation
Disentanglement
- Authors: Qingjie Meng and Daniel Rueckert and Bernhard Kainz
- Abstract summary: Deep learning models exhibit limited generalizability across different domains.
We propose Mutual-Information-based Disentangled Neural Networks (MIDNet) to extract generalizable features that enable transferring knowledge to unseen categorical features in target domains.
We demonstrate our method on handwritten digits datasets and a fetal ultrasound dataset for image classification tasks.
- Score: 11.74643883335152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models exhibit limited generalizability across different
domains. Specifically, transferring knowledge from available entangled domain
features (source/target domain) and categorical features to new unseen
categorical features in a target domain is an interesting and difficult problem
that is rarely discussed in the current literature. This problem is essential
for many real-world applications such as improving diagnostic classification or
prediction in medical imaging. To address this problem, we propose
Mutual-Information-based Disentangled Neural Networks (MIDNet) to extract
generalizable features that enable transferring knowledge to unseen categorical
features in target domains. The proposed MIDNet is developed as a
semi-supervised learning paradigm to alleviate the dependency on labeled data.
This is important for practical applications where data annotation requires
rare expertise as well as intense time and labor. We demonstrate our method on
handwritten digits datasets and a fetal ultrasound dataset for image
classification tasks. Experiments show that our method outperforms the
state-of-the-art and achieves the expected performance with sparsely labeled data.
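The disentanglement idea in the abstract — keeping categorical features statistically independent of domain features, measured via mutual information — can be illustrated with a toy sketch. This is not the authors' MIDNet implementation; it only assumes discrete feature codes and computes empirical mutual information, showing that a code that simply copies the domain label has high MI with it, while an independent (disentangled) code has MI near zero:

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete variables."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))  # joint probability
            px = np.mean(x == xv)                 # marginal of x
            py = np.mean(y == yv)                 # marginal of y
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
domain = rng.integers(0, 2, size=10_000)         # domain label (source/target)
entangled = domain                                # feature code that copies the domain
disentangled = rng.integers(0, 2, size=10_000)    # feature code independent of the domain

print(mutual_information(entangled, domain))      # high, close to log 2
print(mutual_information(disentangled, domain))   # near zero
```

In MIDNet the analogous quantity is estimated on continuous latent features and minimized during training, so that the categorical representation carries no domain information and transfers to unseen categories.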
Related papers
- Domain Generalization through Meta-Learning: A Survey [6.524870790082051]
Deep neural networks (DNNs) have revolutionized artificial intelligence but often underperform when faced with out-of-distribution (OOD) data.
This survey paper delves into the realm of meta-learning with a focus on its contribution to domain generalization.
arXiv Detail & Related papers (2024-04-03T14:55:17Z)
- Multi-scale Feature Alignment for Continual Learning of Unlabeled Domains [3.9498537297431167]
Generative feature-driven image replay in conjunction with a dual-purpose discriminator enables the generation of images with realistic features for replay.
We present detailed ablation experiments studying our proposed method components and demonstrate a possible use-case of our continual UDA method for an unsupervised patch-based segmentation task.
arXiv Detail & Related papers (2023-02-02T18:19:01Z)
- Coarse-to-fine Knowledge Graph Domain Adaptation based on Distantly-supervised Iterative Training [12.62127290494378]
We propose an integrated framework for adapting and re-learning knowledge graphs.
No manual data annotation is required to train the model.
We introduce a novel iterative training strategy to facilitate the discovery of domain-specific named entities and triples.
arXiv Detail & Related papers (2022-11-05T08:16:38Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Unsupervised Domain Adaption of Object Detectors: A Survey [87.08473838767235]
Recent advances in deep learning have led to the development of accurate and efficient models for various computer vision applications.
Learning highly accurate models relies on the availability of datasets with a large number of annotated images.
As a result, model performance drops drastically when models are evaluated on label-scarce datasets with visually distinct images.
arXiv Detail & Related papers (2021-05-27T23:34:06Z)
- Streaming Self-Training via Domain-Agnostic Unlabeled Images [62.57647373581592]
We present streaming self-training (SST) that aims to democratize the process of learning visual recognition models.
Key to SST are two crucial observations: (1) domain-agnostic unlabeled images enable us to learn better models with a few labeled examples without any additional knowledge or supervision; and (2) learning is a continuous process and can be done by constructing a schedule of learning updates.
arXiv Detail & Related papers (2021-04-07T17:58:39Z)
- DomainMix: Learning Generalizable Person Re-Identification Without Human Annotations [89.78473564527688]
This paper shows how to use labeled synthetic dataset and unlabeled real-world dataset to train a universal model.
In this way, human annotations are no longer required, and it is scalable to large and diverse real-world datasets.
Experimental results show that the proposed annotation-free method is comparable to the counterpart trained with full human annotations.
arXiv Detail & Related papers (2020-11-24T08:15:53Z)
- Mutual Information-based Disentangled Neural Networks for Classifying Unseen Categories in Different Domains: Application to Fetal Ultrasound Imaging [10.504733425082335]
Deep neural networks exhibit limited generalizability across images with different entangled domain features and categorical features.
We propose Mutual Information-based Disentangled Neural Networks (MIDNet), which extract generalizable categorical features to transfer knowledge to unseen categories in a target domain.
We extensively evaluate the proposed method on fetal ultrasound datasets for two different image classification tasks.
arXiv Detail & Related papers (2020-10-30T17:32:18Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- Self-Path: Self-supervision for Classification of Pathology Images with Limited Annotations [4.713391888104896]
We propose a self-supervised CNN approach to leverage unlabeled data for learning generalizable and domain invariant representations in pathology images.
We introduce novel domain specific self-supervision tasks that leverage contextual, multi-resolution and semantic features in pathology images.
arXiv Detail & Related papers (2020-08-12T21:02:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.