A robust multi-domain network for short-scanning amyloid PET
reconstruction
- URL: http://arxiv.org/abs/2305.09986v1
- Date: Wed, 17 May 2023 06:31:10 GMT
- Title: A robust multi-domain network for short-scanning amyloid PET
reconstruction
- Authors: Hyoung Suk Park and Young Jin Jeong and Kiwan Jeon
- Abstract summary: This paper presents a robust multi-domain network designed to restore low-quality amyloid PET images acquired in a short period of time.
The proposed method is trained on pairs of PET images from short (2 minutes) and standard (20 minutes) scanning times, sourced from multiple domains.
- Score: 0.18750851274087485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a robust multi-domain network designed to restore
low-quality amyloid PET images acquired in a short period of time. The proposed
method is trained on pairs of PET images from short (2 minutes) and standard
(20 minutes) scanning times, sourced from multiple domains. Learning relevant
image features between these domains with a single network is challenging. Our
key contribution is the introduction of a mapping label, which enables
effective learning of specific representations between different domains. The
network, trained with various mapping labels, can efficiently correct amyloid
PET datasets in multiple training domains and unseen domains, such as those
obtained with new radiotracers, acquisition protocols, or PET scanners.
Internal, temporal, and external validations demonstrate the effectiveness of
the proposed method. Notably, for external validation datasets from unseen
domains, the proposed method achieved comparable or superior results relative
to methods trained with these datasets, in terms of quantitative metrics such
as normalized root mean-square error (NRMSE) and the structural similarity index measure (SSIM).
Two nuclear medicine physicians evaluated the amyloid status as positive or
negative for the external validation datasets, with accuracies of 0.970 and
0.930 for readers 1 and 2, respectively.
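The abstract does not spell out how the mapping label enters the network. A minimal sketch of one plausible realization, assuming the label is a one-hot map concatenated to the input channels (the function name, argument names, and layout are hypothetical, not taken from the paper):

```python
import numpy as np

def add_mapping_label(image, domain_idx, num_domains):
    """Concatenate a one-hot 'mapping label' as extra input channels.

    image: (H, W) short-scan PET slice.
    domain_idx: index of the source domain (tracer/protocol/scanner).
    Returns an array of shape (num_domains + 1, H, W) that a
    label-conditioned restoration network could take as input.
    """
    h, w = image.shape
    # One constant plane per domain; only the active domain is 1.
    label_planes = np.zeros((num_domains, h, w), dtype=image.dtype)
    label_planes[domain_idx] = 1.0
    return np.concatenate([image[None], label_planes], axis=0)
```

Conditioning via constant label planes is a common way to let a single network learn domain-specific representations; an unseen domain can then be handled by choosing (or interpolating) a label at inference time.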
Related papers
- Weakly-Supervised PET Anomaly Detection using Implicitly-Guided Attention-Conditional Counterfactual Diffusion Modeling: a Multi-Center, Multi-Cancer, and Multi-Tracer Study [0.391955592784358]
We present a weakly-supervised Implicitly guided COuNterfactual diffusion model for Detecting Anomalies in PET images (IgCONDA-PET).
The training is conditioned on image class labels (healthy vs. unhealthy) via attention modules.
We perform counterfactual generation which facilitates "unhealthy-to-healthy" domain translation by generating a synthetic, healthy version of an unhealthy input image.
arXiv Detail & Related papers (2024-04-30T23:09:54Z) - Self-supervised Domain-agnostic Domain Adaptation for Satellite Images [18.151134198549574]
We propose a self-supervised domain-agnostic domain adaptation (SS(DA)2) method to perform domain adaptation without such a domain definition.
We first design a contrastive generative adversarial loss to train a generative network to perform image-to-image translation between any two satellite image patches.
Then, we improve the generalizability of the downstream models by augmenting the training data with different testing spectral characteristics.
arXiv Detail & Related papers (2023-09-20T07:37:23Z) - Unsupervised Domain Adaptation for Anatomical Landmark Detection [5.070344284426738]
We propose a novel framework for anatomical landmark detection under the setting of unsupervised domain adaptation (UDA).
The framework leverages self-training and domain adversarial learning to address the domain gap during adaptation.
Our experiments on cephalometric and lung landmark detection show the effectiveness of the method, which reduces the domain gap by a large margin and outperforms other UDA methods consistently.
arXiv Detail & Related papers (2023-08-25T10:22:13Z) - Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy
Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z) - Cross-Domain Similarity Learning for Face Recognition in Unseen Domains [90.35908506994365]
We introduce a novel cross-domain metric learning loss, which we dub Cross-Domain Triplet (CDT) loss, to improve face recognition in unseen domains.
The CDT loss encourages learning semantically meaningful features by enforcing compact feature clusters of identities from one domain.
Our method does not require careful hard-pair sample mining and filtering strategy during training.
arXiv Detail & Related papers (2021-03-12T19:48:01Z) - Effective Label Propagation for Discriminative Semi-Supervised Domain
Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z) - Unsupervised learning of multimodal image registration using domain
adaptation with projected Earth Move's discrepancies [8.88841928746097]
Unsupervised domain adaptation can be beneficial in overcoming the current limitations of multimodal registration.
We propose the first use of unsupervised domain adaptation for discrete multimodal registration.
Our proof-of-concept demonstrates the applicability of domain transfer from mono- to multimodal (multi-contrast) 2D registration of canine MRI scans.
arXiv Detail & Related papers (2020-05-28T15:57:21Z)
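Several of the related papers above (notably the contrastive learning and self-training entry) generate pseudo-labels from a memory-efficient temporal ensemble of predictions. A minimal numpy sketch of that idea, with all function names, the EMA weight, and the confidence threshold chosen for illustration rather than taken from any of the papers:

```python
import numpy as np

def update_ensemble(ensemble_probs, current_probs, alpha=0.9):
    # Exponential moving average over training iterations:
    # older predictions decay geometrically, smoothing out noisy
    # per-iteration outputs without storing past predictions.
    return alpha * ensemble_probs + (1.0 - alpha) * current_probs

def make_pseudo_labels(ensemble_probs, threshold=0.8):
    # ensemble_probs: (num_classes, H, W) softmax probabilities.
    # Pixels whose top ensemble probability falls below the
    # threshold are marked -1 and ignored by the self-training loss.
    confidence = ensemble_probs.max(axis=0)
    labels = ensemble_probs.argmax(axis=0)
    labels[confidence < threshold] = -1
    return labels
```

Thresholding on the ensembled (rather than instantaneous) confidence is what makes the pseudo-labels "consistent and reliable" in the sense the abstract describes: a pixel must be predicted confidently across many iterations before it supervises the target-domain model.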
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.