Multi-scale Feature Alignment for Continual Learning of Unlabeled
Domains
- URL: http://arxiv.org/abs/2302.01287v1
- Date: Thu, 2 Feb 2023 18:19:01 GMT
- Title: Multi-scale Feature Alignment for Continual Learning of Unlabeled
Domains
- Authors: Kevin Thandiackal, Luigi Piccinelli, Pushpak Pati, Orcun Goksel
- Abstract summary: Generative feature-driven image replay, in conjunction with a dual-purpose discriminator, enables the generation of images with realistic features for replay.
We present detailed ablation experiments studying the components of our proposed method and demonstrate a possible use-case of our continual UDA method for an unsupervised patch-based segmentation task.
- Score: 3.9498537297431167
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Methods for unsupervised domain adaptation (UDA) help to improve the
performance of deep neural networks on unseen domains without any labeled data.
Especially in medical disciplines such as histopathology, this is crucial since
large datasets with detailed annotations are scarce. While the majority of
existing UDA methods focus on the adaptation from a labeled source to a single
unlabeled target domain, many real-world applications with a long life cycle
involve more than one target domain. Thus, the ability to sequentially adapt to
multiple target domains becomes essential. In settings where the data from
previously seen domains cannot be stored, e.g., due to data protection
regulations, the above becomes a challenging continual learning problem. To
this end, we propose to use generative feature-driven image replay in
conjunction with a dual-purpose discriminator that not only enables the
generation of images with realistic features for replay, but also promotes
feature alignment during domain adaptation. We evaluate our approach
extensively on a sequence of three histopathological datasets for tissue-type
classification, achieving state-of-the-art results. We present detailed
ablation experiments studying our proposed method components and demonstrate a
possible use-case of our continual UDA method for an unsupervised patch-based
segmentation task given high-resolution tissue images.
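The dual-purpose discriminator above scores features for two binary tasks at once: real vs. replayed features (for replay quality) and source vs. target features (for alignment). The paper's architecture is not reproduced here; the following is a minimal NumPy sketch with an illustrative linear discriminator, where all names, shapes, and the shared-weight choice are assumptions.

```python
import numpy as np

def bce(logits, labels):
    """Numerically stable binary cross-entropy on raw logits."""
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

def dual_discriminator_losses(w, real_feats, replay_feats,
                              source_feats, target_feats):
    """One (linear) discriminator, two roles:
    (1) replay realism: real features vs. generated replay features,
    (2) feature alignment: source-domain vs. target-domain features."""
    d = lambda f: f @ w  # one logit per feature vector
    realism_loss = bce(
        np.concatenate([d(real_feats), d(replay_feats)]),
        np.concatenate([np.ones(len(real_feats)), np.zeros(len(replay_feats))]))
    align_loss = bce(
        np.concatenate([d(source_feats), d(target_feats)]),
        np.concatenate([np.ones(len(source_feats)), np.zeros(len(target_feats))]))
    return realism_loss, align_loss

rng = np.random.default_rng(0)
w = rng.normal(size=8)                              # toy discriminator weights
feats = [rng.normal(size=(16, 8)) for _ in range(4)]  # toy feature batches
realism, align = dual_discriminator_losses(w, *feats)
```

In practice the generator (replay network) and the feature extractor would each be trained against the respective loss with flipped labels; sharing one discriminator couples the two objectives.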
Related papers
- Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning [57.43322536718131]
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce a bi-directional transport scheme to align the target features with the class prototypes by minimizing the expected transport cost.
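Illustratively, prototype-anchored alignment can be read as softly assigning each target feature to the class prototypes and minimizing the expected cost of that assignment. The sketch below is a hypothetical NumPy rendering with a softmax assignment and squared-distance cost, not the paper's actual bi-directional transport formulation.

```python
import numpy as np

def expected_transport_cost(target_feats, prototypes, temperature=1.0):
    """Soft-assign each target feature to class prototypes (e.g. rows of the
    pre-trained classifier's weight matrix) and return the expected
    squared-distance cost under that assignment."""
    # pairwise squared distances, shape (N, C)
    d2 = ((target_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    # softmax assignment: closer prototypes receive higher probability
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return float((p * d2).sum(axis=1).mean())

protos = np.array([[0.0, 0.0], [10.0, 0.0]])        # two toy class prototypes
cost = expected_transport_cost(np.array([[5.0, 0.0]]), protos)
```

Features sitting on a prototype incur near-zero cost, while ambiguous features (equidistant from several prototypes) incur the full distance, so minimizing this loss pushes target features toward class prototypes.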
arXiv Detail & Related papers (2023-07-19T06:07:12Z)
- Generative appearance replay for continual unsupervised domain adaptation [4.623578780480946]
GarDA is a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data.
We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
arXiv Detail & Related papers (2023-01-03T17:04:05Z)
- LE-UDA: Label-efficient unsupervised domain adaptation for medical image segmentation [24.655779957716558]
We propose a novel and generic framework called "Label-Efficient Unsupervised Domain Adaptation" (LE-UDA).
In LE-UDA, we construct self-ensembling consistency for knowledge transfer between both domains, as well as a self-ensembling adversarial learning module to achieve better feature alignment for UDA.
Experimental results demonstrate that the proposed LE-UDA can efficiently leverage limited source labels to improve cross-domain segmentation performance, outperforming state-of-the-art UDA approaches in the literature.
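Self-ensembling consistency of the kind LE-UDA builds on is commonly implemented with an EMA teacher whose predictions the student is pushed to match on unlabeled data. A minimal NumPy sketch, assuming an EMA weight update and an MSE consistency loss (both illustrative choices, not taken from the paper):

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    """Teacher weights are an exponential moving average of the student's."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def consistency_loss(student_logits, teacher_logits):
    """MSE between the softmax predictions of student and EMA teacher."""
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)
    return float(((softmax(student_logits) - softmax(teacher_logits)) ** 2).mean())
```

Because the teacher changes slowly, its predictions act as a stable target, which is what makes the consistency term usable on unlabeled target-domain images.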
arXiv Detail & Related papers (2022-12-05T07:47:35Z)
- AADG: Automatic Augmentation for Domain Generalization on Retinal Image Segmentation [1.0452185327816181]
We propose a data-manipulation-based domain generalization method called Automated Augmentation for Domain Generalization (AADG).
Our AADG framework can effectively sample data augmentation policies that generate novel domains.
Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches.
arXiv Detail & Related papers (2022-07-27T02:26:01Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective with wide applicability.
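Disentanglement objectives such as the GTD module are typically driven by triplet-style losses that pull domain-invariant features together and push domain-specific ones apart. The following is a generic triplet margin loss in NumPy, shown only to illustrate the mechanism; the DDF paper's exact formulation may differ.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: the anchor should be closer to the positive
    than to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

With anchor/positive drawn from the domain-invariant branch and the negative from the domain-specific branch, minimizing this loss encourages the two branches to encode separate information.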
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Self-Rule to Adapt: Generalized Multi-source Feature Learning Using Unsupervised Domain Adaptation for Colorectal Cancer Tissue Detection [9.074125289002911]
Supervised learning is constrained by the availability of labeled data.
We propose Self-Rule to Adapt (SRA), which takes advantage of self-supervised learning to perform domain adaptation.
arXiv Detail & Related papers (2021-08-20T13:52:33Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
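Domain-conditioned channel attention can be pictured as squeeze-and-excitation-style gating with a separate gating network per domain. The sketch below shows the generic gating step in NumPy; the per-domain weight pair (w1, w2), the dimensions, and the two-layer form are assumptions for illustration only.

```python
import numpy as np

def channel_attention(feat_maps, w1, w2):
    """Squeeze-and-excitation-style gating: global-average-pool each channel,
    pass the pooled vector through a small two-layer net, and rescale the
    channels by a sigmoid gate. Using a distinct (w1, w2) pair per domain
    makes the gating domain-conditioned."""
    # feat_maps: (C, H, W)
    squeeze = feat_maps.mean(axis=(1, 2))            # (C,) pooled descriptor
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate, (C,)
    return feat_maps * gate[:, None, None]           # rescale each channel

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3, 3))                       # 4 channels, 3x3 maps
w1 = rng.normal(size=(2, 4))                         # squeeze: 4 -> 2
w2 = rng.normal(size=(4, 2))                         # excite: 2 -> 4
gated = channel_attention(x, w1, w2)
```

Because the gate lies in (0, 1), each channel is attenuated rather than amplified; the domain-specific gating network decides which channels to emphasize for a given domain.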
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Domain Adaptive Medical Image Segmentation via Adversarial Learning of Disease-Specific Spatial Patterns [6.298270929323396]
We propose an unsupervised domain adaptation framework for boosting image segmentation performance across multiple domains.
We enforce architectures to be adaptive to new data by rejecting improbable segmentation patterns and implicitly learning through semantic and boundary information.
We demonstrate that recalibrating the deep networks on a few unlabeled images from the target domain improves the segmentation accuracy significantly.
arXiv Detail & Related papers (2020-01-25T13:48:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.