Active Learning Guided Federated Online Adaptation: Applications in
Medical Image Segmentation
- URL: http://arxiv.org/abs/2312.05407v1
- Date: Fri, 8 Dec 2023 23:43:17 GMT
- Authors: Md Shazid Islam, Sayak Nag, Arindam Dutta, Miraj Ahmed, Fahim Faisal
Niloy, Amit K. Roy-Chowdhury
- Abstract summary: We propose a method for medical image segmentation that adapts to each incoming data batch (online adaptation) and incorporates physician feedback through active learning.
Our experiments on publicly available datasets show that the proposed distributed active learning-based online adaptation method outperforms unsupervised online adaptation methods.
- Score: 17.91288898488217
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data privacy, storage, and distribution shifts are major bottlenecks in
medical image analysis. Data cannot be shared across patients, physicians, and
facilities due to privacy concerns, usually requiring each patient's data to be
analyzed in a discreet setting at a near real-time pace. However, one would
like to take advantage of the accumulated knowledge across healthcare
facilities as the computational systems analyze data of more and more patients
while incorporating feedback provided by physicians to improve accuracy.
Motivated by these, we propose a method for medical image segmentation that
adapts to each incoming data batch (online adaptation), incorporates physician
feedback through active learning, and assimilates knowledge across facilities
in a federated setup. Combining an online test-time adaptation scheme with an
efficient, budget-constrained annotation sampling strategy helps bridge the gap
between the source and the incoming stream of target domain data. A federated
setup allows collaborative aggregation of knowledge across distinct distributed
models without needing to share the data across different models. This
facilitates the improvement of performance over time by accumulating knowledge
across users. Towards achieving these goals, we propose DrFRODA, a
computationally lightweight, privacy-preserving image segmentation technique
that uses federated learning to adapt the model online with feedback
from doctors in the loop. Our experiments on publicly available datasets show
that the proposed distributed active learning-based online adaptation method
outperforms unsupervised online adaptation methods and shows competitive
results with offline active learning-based adaptation methods.
Related papers
- Unsupervised domain adaptation by learning using privileged information [6.748420131629902]
We show that training-time access to side information in the form of auxiliary variables can help relax restrictions on input variables.
We propose a simple two-stage learning algorithm, inspired by our analysis of the expected error in the target domain, and a practical end-to-end variant for image classification.
arXiv Detail & Related papers (2023-03-16T14:31:50Z) - Self-Supervised Pretraining for 2D Medical Image Segmentation [0.0]
Self-supervised learning offers a way to lower the need for manually annotated data by pretraining models for a specific domain on unlabelled data.
We find that self-supervised pretraining on natural images and target-domain-specific images leads to the fastest and most stable downstream convergence.
In low-data scenarios, supervised ImageNet pretraining achieves the best accuracy, requiring less than 100 annotated samples to realise close to minimal error.
arXiv Detail & Related papers (2022-09-01T09:25:22Z) - Domain-invariant Prototypes for Semantic Segmentation [30.932130453313537]
We present an easy-to-train framework that learns domain-invariant prototypes for domain adaptive semantic segmentation.
Our method involves only one-stage training and does not need to be trained on large-scale un-annotated target images.
arXiv Detail & Related papers (2022-08-12T02:21:05Z) - DecoupleNet: Decoupled Network for Domain Adaptive Semantic Segmentation [78.30720731968135]
Unsupervised domain adaptation in semantic segmentation has been raised to alleviate the reliance on expensive pixel-wise annotations.
We propose DecoupleNet that alleviates source domain overfitting and enables the final model to focus more on the segmentation task.
We also put forward Self-Discrimination (SD) and introduce an auxiliary classifier to learn more discriminative target domain features with pseudo labels.
arXiv Detail & Related papers (2022-07-20T15:47:34Z) - HYLDA: End-to-end Hybrid Learning Domain Adaptation for LiDAR Semantic
Segmentation [13.87939140266266]
This paper addresses the problem of training a LiDAR semantic segmentation network using a fully-labeled source dataset and a target dataset that only has a small number of labels.
We develop a novel image-to-image translation engine, and couple it with a LiDAR semantic segmentation network, resulting in an integrated domain adaptation architecture we call HYLDA.
arXiv Detail & Related papers (2022-01-14T18:13:09Z) - Towards Fewer Annotations: Active Learning via Region Impurity and
Prediction Uncertainty for Domain Adaptive Semantic Segmentation [19.55572909866489]
We propose a region-based active learning approach for semantic segmentation under a domain shift.
Our algorithm, Active Learning via Region Impurity and Prediction Uncertainty (AL-RIPU), introduces a novel acquisition strategy characterizing the spatial adjacency of image regions.
Our method only requires very few annotations to almost reach the supervised performance and substantially outperforms state-of-the-art methods.
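The acquisition idea behind AL-RIPU can be illustrated with a small sketch: score each pixel by the label impurity of its local neighborhood times its prediction uncertainty, and spend the annotation budget on the highest-scoring regions. The window size and the multiplicative combination here are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative region-impurity x uncertainty acquisition score.
import numpy as np

def region_impurity(label_map, k=1):
    """Entropy of the class histogram in a (2k+1)x(2k+1) window per pixel."""
    h, w = label_map.shape
    classes = np.unique(label_map)
    score = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = label_map[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1]
            p = np.array([(win == c).mean() for c in classes])
            p = p[p > 0]
            score[i, j] = -(p * np.log(p)).sum()
    return score

def acquisition(probs, label_map, k=1):
    """High score = mixed-label region AND uncertain prediction."""
    uncertainty = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
    return region_impurity(label_map, k) * uncertainty
```

On a predicted segmentation map, interior pixels of a homogeneous region score zero impurity, so the budget naturally concentrates on class boundaries, where segmentation errors tend to live.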
arXiv Detail & Related papers (2021-11-25T06:40:58Z) - WEDGE: Web-Image Assisted Domain Generalization for Semantic
Segmentation [72.88657378658549]
We propose a WEb-image assisted Domain GEneralization scheme, which is the first to exploit the diversity of web-crawled images for generalizable semantic segmentation.
We also present a method which injects styles of the web-crawled images into training images on-the-fly during training, which enables the network to experience images of diverse styles with reliable labels for effective training.
arXiv Detail & Related papers (2021-09-29T05:19:58Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are aplenty, but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
arXiv Detail & Related papers (2020-04-10T06:58:03Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z) - Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance.
arXiv Detail & Related papers (2020-02-26T08:59:40Z)
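The gradual self-training idea above can be shown on a one-dimensional toy problem: at each step the classifier pseudo-labels the next, slightly shifted unlabeled batch with hard labels (the "label sharpening" the analysis highlights) and refits on them. The threshold classifier and the data are illustrative assumptions.

```python
# Hedged sketch of gradual self-training under a slowly drifting domain.
import numpy as np

def fit_threshold(x, y):
    """A 1-D 'classifier': decision threshold at the midpoint of class means."""
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

def gradual_self_train(x0, y0, shifted_batches):
    theta = fit_threshold(x0, y0)          # train on the labeled source
    for x in shifted_batches:
        pseudo = (x > theta).astype(int)   # hard pseudo-labels (sharpening)
        theta = fit_threshold(x, pseudo)   # refit on the intermediate domain
    return theta

# Source data: class 0 near 0, class 1 near 2.5; then three batches each
# shifted a further 0.5. The threshold tracks the drift step by step.
x0 = np.array([-1.0, -0.5, 0.0, 2.0, 2.5, 3.0])
y0 = np.array([0, 0, 0, 1, 1, 1])
theta = gradual_self_train(x0, y0, [x0 + d for d in (0.5, 1.0, 1.5)])
```

Adapting directly to the final batch with the source threshold would mislabel points near the moved boundary; following the intermediate shifts keeps each pseudo-labeling step accurate, which is the regime the paper's bound covers.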
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.