What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic
Lesions Segmentation
- URL: http://arxiv.org/abs/2004.11500v1
- Date: Fri, 24 Apr 2020 00:57:05 GMT
- Title: What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic
Lesions Segmentation
- Authors: Jiahua Dong, Yang Cong, Gan Sun, Bineng Zhong, Xiaowei Xu
- Abstract summary: We develop a new unsupervised semantic transfer model including two complementary modules for endoscopic lesions segmentation.
Specifically, T_D focuses on where to translate transferable visual information of medical lesions via a residual transferability-aware bottleneck.
T_F highlights how to augment transferable semantic features of various lesions and automatically ignore untransferable representations.
- Score: 51.7837386041158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation has attracted growing research attention in
semantic segmentation. However, 1) most existing models cannot be directly
applied to lesion transfer in medical images, because the same lesion can appear
very differently across datasets; and 2) equal attention is paid to all semantic
representations rather than neglecting irrelevant knowledge, which leads to
negative transfer of untransferable knowledge. To address these challenges, we
develop a new unsupervised semantic transfer model including two complementary
modules (i.e., T_D and T_F) for endoscopic lesions segmentation, which
alternately determine where and how to explore transferable domain-invariant
knowledge between a labeled source lesions dataset (e.g., gastroscope) and an
unlabeled target diseases dataset (e.g., enteroscopy). Specifically, T_D focuses
on where to translate transferable visual information of medical lesions via a
residual transferability-aware bottleneck, while neglecting untransferable
visual characterizations. Furthermore, T_F highlights how to augment
transferable semantic features of various lesions and automatically ignore
untransferable representations, which explores domain-invariant knowledge and in
return improves the performance of T_D. Finally, theoretical analysis and
extensive experiments on a medical endoscopic dataset and several non-medical
public datasets demonstrate the superiority of our proposed model.
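The transferability-aware idea behind T_F can be illustrated with a small feature-gating module: a learned soft gate suppresses feature channels that do not transfer across domains while a residual path preserves the original representation. This is only a minimal sketch of the general mechanism, not the authors' exact residual transferability-aware bottleneck; the module and parameter names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransferabilityGate(nn.Module):
    """Minimal sketch: learn a soft gate in [0, 1] over feature channels and
    keep a residual path, so untransferable channels can be down-weighted
    without discarding the original representation entirely."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),             # global context per channel
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                        # per-channel transferability score
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        gate = self.score(feat)                  # (N, C, 1, 1) soft gate
        return feat + gate * feat                # residual plus gated features


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)               # dummy encoder features
    print(TransferabilityGate(64)(x).shape)      # torch.Size([2, 64, 32, 32])
```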
Related papers
- Unsupervised Domain Adaptation for Brain Vessel Segmentation through
Transwarp Contrastive Learning [46.248404274124546]
Unsupervised domain adaptation (UDA) aims to align the labelled source distribution with the unlabelled target distribution to obtain domain-invariant predictive models.
This paper proposes a simple yet potent contrastive learning framework for UDA to narrow the inter-domain gap between labelled source and unlabelled target distribution.
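As a rough illustration of how a contrastive objective can pull matched source and target features together while pushing unrelated ones apart, the snippet below implements a generic InfoNCE-style loss. It is a sketch of the general idea only, not the Transwarp Contrastive Learning formulation of the paper; the pairing of source and target features is assumed to be given.

```python
import torch
import torch.nn.functional as F

def info_nce(source_feats: torch.Tensor,
             target_feats: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """Generic InfoNCE loss: row i of `source_feats` and row i of
    `target_feats` form a positive pair; all other rows are negatives.
    Shapes: (batch, dim)."""
    s = F.normalize(source_feats, dim=1)
    t = F.normalize(target_feats, dim=1)
    logits = s @ t.T / temperature               # (batch, batch) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)

# Example: 8 paired source/target embeddings of dimension 128.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```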
arXiv Detail & Related papers (2024-02-23T10:01:22Z)
- Domain-invariant Clinical Representation Learning by Bridging Data Distribution Shift across EMR Datasets [16.317118701435742]
An effective prognostic model is expected to assist doctors in making the right diagnosis and designing a personalized treatment plan.
In the early stage of a disease, limited data collection and clinical experience, together with privacy and ethical concerns, may restrict the data available for reference.
This article introduces a domain-invariant representation learning method to build a transition model from a source dataset to a target dataset.
arXiv Detail & Related papers (2023-10-11T18:32:21Z)
- Deep Angiogram: Trivializing Retinal Vessel Segmentation [1.8479315677380455]
We propose a contrastive variational auto-encoder that can filter out irrelevant features and synthesize a latent image, named deep angiogram.
The generalizability of the synthetic network is improved by the contrastive loss that makes the model less sensitive to variations of image contrast and noisy features.
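For readers unfamiliar with the variational auto-encoder backbone such a model builds on, a bare-bones VAE objective (reconstruction plus KL term) is sketched below; the contrastive term and the angiogram-specific design of the paper are not reproduced here, and all layer sizes are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Bare-bones VAE on flattened images; placeholder sizes throughout."""

    def __init__(self, in_dim: int = 64 * 64, latent: int = 32):
        super().__init__()
        self.enc = nn.Linear(in_dim, 256)
        self.mu = nn.Linear(256, latent)
        self.logvar = nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="mean")                    # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return rec + kld

x = torch.rand(8, 64 * 64)                                          # dummy flattened images
recon, mu, logvar = TinyVAE()(x)
loss = vae_loss(x, recon, mu, logvar)
```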
arXiv Detail & Related papers (2023-07-01T06:13:10Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention, and, in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that, while ViTs and CNNs perform on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
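Independent of the backbone choice (ViT, DeiT, or CNN), multi-label disease classification on chest radiographs is typically trained with one sigmoid per label and binary cross-entropy rather than a softmax. The sketch below shows that generic setup with a torchvision ViT backbone; the number of labels and the data are placeholders, and this is not the paper's exact training configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

NUM_LABELS = 14                                   # placeholder number of disease labels

# ViT-B/16 backbone with its classification head swapped for a multi-label head.
model = vit_b_16(weights=None)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_LABELS)

criterion = nn.BCEWithLogitsLoss()                # independent sigmoid per disease label

images = torch.randn(4, 3, 224, 224)              # dummy batch of radiographs
targets = torch.randint(0, 2, (4, NUM_LABELS)).float()

logits = model(images)                            # (4, NUM_LABELS) raw scores
loss = criterion(logits, targets)
probs = torch.sigmoid(logits)                     # per-label probabilities
```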
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- AlignTransformer: Hierarchical Alignment of Visual Regions and Disease Tags for Medical Report Generation [50.21065317817769]
We propose an AlignTransformer framework, which includes the Align Hierarchical Attention (AHA) and the Multi-Grained Transformer (MGT) modules.
Experiments on the public IU-Xray and MIMIC-CXR datasets show that the AlignTransformer can achieve results competitive with state-of-the-art methods on the two datasets.
arXiv Detail & Related papers (2022-03-18T13:43:53Z)
- FetReg: Placental Vessel Segmentation and Registration in Fetoscopy Challenge Dataset [57.30136148318641]
Fetoscopy laser photocoagulation is a widely used procedure for the treatment of Twin-to-Twin Transfusion Syndrome (TTTS).
The limited field of view of the fetoscope may lead to increased procedural time and incomplete ablation, resulting in persistent TTTS.
Computer-assisted intervention may help overcome these challenges by expanding the fetoscopic field of view through video mosaicking and providing better visualization of the vessel network.
We present a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms for the fetal environment with a focus on creating drift-free mosaics from long duration fetoscopy videos.
arXiv Detail & Related papers (2021-06-10T17:14:27Z)
- Weakly-Supervised Cross-Domain Adaptation for Endoscopic Lesions Segmentation [79.58311369297635]
We propose a new weakly-supervised lesions transfer framework, which can explore transferable domain-invariant knowledge across different datasets.
A Wasserstein quantified transferability framework is developed to highlight wide-range transferable contextual dependencies.
A novel self-supervised pseudo label generator is designed to equally provide confident pseudo pixel labels for both hard-to-transfer and easy-to-transfer target samples.
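The general idea of a confidence-based pseudo-label generator can be sketched as follows: target-domain predictions above a probability threshold become pixel labels, and uncertain pixels are marked as ignore. This is a generic illustration only, not the paper's self-supervised generator, which additionally balances hard-to-transfer and easy-to-transfer samples.

```python
import torch

IGNORE_INDEX = 255  # label value skipped by the segmentation loss

def generate_pseudo_labels(logits: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """Turn target-domain segmentation logits (N, C, H, W) into pixel-wise
    pseudo labels, keeping only confident predictions."""
    probs = torch.softmax(logits, dim=1)
    confidence, labels = probs.max(dim=1)          # (N, H, W) each
    labels[confidence < threshold] = IGNORE_INDEX  # drop uncertain pixels
    return labels

# Example: 2 target images, 5 lesion classes, 64x64 resolution.
pseudo = generate_pseudo_labels(torch.randn(2, 5, 64, 64))
```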
arXiv Detail & Related papers (2020-12-08T02:26:03Z)
- CSCL: Critical Semantic-Consistent Learning for Unsupervised Domain Adaptation [42.226842513334184]
We develop a new Critical Semantic-Consistent Learning model, which mitigates the discrepancy of both domain-wise and category-wise distributions.
Specifically, a critical transfer based adversarial framework is designed to highlight transferable domain-wise knowledge while neglecting untransferable knowledge.
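A common way to realize such adversarial domain-wise alignment is a domain discriminator trained through a gradient reversal layer; the sketch below shows that generic building block, not the specific critical-transfer weighting of CSCL, and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, flips the gradient sign in the backward
    pass, so the feature extractor learns to fool the domain discriminator."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feats: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        return self.net(GradReverse.apply(feats, lam))  # logits: source vs. target

# Example: domain loss on pooled source (label 1) and target (label 0) features.
disc = DomainDiscriminator()
feats = torch.randn(8, 256)
domain_labels = torch.cat([torch.ones(4, 1), torch.zeros(4, 1)])
loss = nn.BCEWithLogitsLoss()(disc(feats), domain_labels)
```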
arXiv Detail & Related papers (2020-08-24T14:12:04Z)
- Manifolds for Unsupervised Visual Anomaly Detection [79.22051549519989]
Unsupervised learning methods that don't necessarily encounter anomalies in training would be immensely useful.
We develop a novel hyperspherical Variational Auto-Encoder (VAE) via stereographic projections with a gyroplane layer.
We present state-of-the-art results on visual anomaly benchmarks in precision manufacturing and inspection, demonstrating real-world utility in industrial AI scenarios.
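The hyperspherical latent space mentioned here relies on stereographic projections between Euclidean space and the sphere; a minimal version of the inverse projection (mapping R^n onto the unit sphere S^n minus the north pole) is shown below. This is standard geometry only, not the paper's full VAE or its gyroplane layer.

```python
import torch

def inverse_stereographic(x: torch.Tensor) -> torch.Tensor:
    """Map Euclidean latent codes x in R^n onto the unit sphere S^n in
    R^(n+1) via inverse stereographic projection from the north pole.
    Input: (batch, n). Output: (batch, n + 1) with unit norm."""
    s = (x * x).sum(dim=1, keepdim=True)           # squared norm per sample
    first = 2.0 * x / (s + 1.0)                    # first n coordinates
    last = (s - 1.0) / (s + 1.0)                   # last coordinate
    return torch.cat([first, last], dim=1)

z = inverse_stereographic(torch.randn(4, 16))
print(z.norm(dim=1))                               # ~1.0 for every sample
```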
arXiv Detail & Related papers (2020-06-19T20:41:58Z)