Feedback Assisted Adversarial Learning to Improve the Quality of Cone-beam CT Images
- URL: http://arxiv.org/abs/2210.12578v1
- Date: Sun, 23 Oct 2022 00:31:51 GMT
- Title: Feedback Assisted Adversarial Learning to Improve the Quality of Cone-beam CT Images
- Authors: Takumi Hase, Megumi Nakao, Mitsuhiro Nakamura, Tetsuya Matsuda
- Abstract summary: We propose adversarial learning with a feedback mechanism from a discriminator to improve the quality of CBCT images.
This framework employs a U-net as the discriminator, which outputs a probability map representing the local discrimination results.
- Score: 2.179313476241343
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised image translation using adversarial learning has been attracting attention as a way to improve the quality of medical images. However, adversarial training based on a discriminator's global evaluation values does not provide sufficient translation performance for locally varying image features. We propose adversarial learning with a feedback mechanism from a
discriminator to improve the quality of CBCT images. This framework employs a U-net as the discriminator, which outputs a probability map representing the local discrimination results. The probability map is fed back to the generator and
used for training to improve the image translation. Our experiments using 76
corresponding CT-CBCT images confirmed that the proposed framework could
capture more diverse image features than conventional adversarial learning
frameworks and produced synthetic images with pixel values close to the
reference image and a correlation coefficient of 0.93.
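To make the feedback idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' code): a small convolutional discriminator stands in for the paper's U-net and outputs a per-pixel probability map, and one plausible feedback pathway re-weights the generator's adversarial loss so regions judged fake receive more attention. The exact way the paper feeds the map back to the generator may differ; all names here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalProbDiscriminator(nn.Module):
    """Stand-in for the paper's U-net discriminator: maps a 1-channel image
    to a probability map of the same spatial size (1 = judged real)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 3, padding=1),  # per-pixel logits
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # local discrimination results

def generator_loss_with_feedback(prob_map):
    """Adversarial term re-weighted by the discriminator's local feedback:
    regions confidently flagged as fake (low probability) get more weight.
    This weighting is an assumption about the feedback mechanism."""
    adv = F.binary_cross_entropy(
        prob_map, torch.ones_like(prob_map), reduction="none")
    weight = (1.0 - prob_map).detach()  # feedback map, frozen for G's update
    return (weight * adv).mean()

# Usage sketch (names hypothetical): G is any CBCT-to-CT translator.
# fake_ct = G(cbct)                   # cbct: (N, 1, H, W) tensor
# prob_map = D(fake_ct)               # D = LocalProbDiscriminator()
# g_loss = generator_loss_with_feedback(prob_map)
```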
Related papers
- Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement [1.8339026473337505]
This paper proposes a context-informed optimal transport (OT) learning framework for tackling unpaired fundus image enhancement.
We derive the proposed context-aware OT using the earth mover's distance and show that it has a solid theoretical guarantee (a minimal earth mover's distance computation is sketched after this list).
Experimental results on a large-scale dataset demonstrate the superiority of the proposed method over several state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2024-09-12T09:14:37Z)
- Local Manifold Learning for No-Reference Image Quality Assessment [68.9577503732292]
We propose an innovative framework that integrates local manifold learning with contrastive learning for No-Reference Image Quality Assessment (NR-IQA).
Our approach demonstrates better performance than state-of-the-art methods on 7 standard datasets.
arXiv Detail & Related papers (2024-06-27T15:14:23Z)
- Transformer-based Clipped Contrastive Quantization Learning for Unsupervised Image Retrieval [15.982022297570108]
Unsupervised image retrieval aims to learn the important visual characteristics without any given labels in order to retrieve similar images for a given query image.
In this paper, we propose a TransClippedCLR model that encodes the global context of an image using a Transformer and captures local context through patch-based processing.
Results using the proposed clipped contrastive learning are greatly improved on all datasets compared to the same backbone network with vanilla contrastive learning.
arXiv Detail & Related papers (2024-01-27T09:39:11Z)
- OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing [4.951748109810726]
Optimal retinal image quality is mandated for accurate medical diagnoses and automated analyses.
We propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts.
We validated the integrated framework, OTRE, on three publicly available retinal image datasets.
arXiv Detail & Related papers (2023-02-06T18:39:40Z)
- Deep Semantic Statistics Matching (D2SM) Denoising Network [70.01091467628068]
We introduce the Deep Semantic Statistics Matching (D2SM) Denoising Network.
It exploits the semantic features of pretrained classification networks and implicitly matches the probabilistic distribution of clear images in the semantic feature space (a feature-statistics matching sketch appears after this list).
By learning to preserve the semantic distribution of denoised images, we empirically find our method significantly improves the denoising capabilities of networks.
arXiv Detail & Related papers (2022-07-19T14:35:42Z)
- ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification [12.109442912963969]
We propose to leverage saliency-based explanation methods to create content-preserving masked augmentations for contrastive learning.
Our novel explanation-driven supervised contrastive learning (ExCon) methodology critically serves the dual goals of encouraging nearby image embeddings to have similar content and explanation.
We demonstrate that ExCon outperforms vanilla supervised contrastive learning in terms of classification, explanation quality, and adversarial robustness, as well as calibration of the model's probabilistic predictions under distributional shift.
arXiv Detail & Related papers (2021-11-28T23:15:26Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem (a generic contrastive objective is sketched after this list).
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of image restoration (IR) models, i.e., the degraded images, as references.
Our results can even approach the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Supervised and Unsupervised Learning of Parameterized Color Enhancement [112.88623543850224]
We tackle the problem of color enhancement as an image translation task using both supervised and unsupervised learning.
We achieve state-of-the-art results compared to both supervised (paired data) and unsupervised (unpaired data) image enhancement methods on the MIT-Adobe FiveK benchmark.
We show the generalization capability of our method by applying it to photos from the early 20th century and to dark video frames.
arXiv Detail & Related papers (2019-12-30T13:57:06Z)
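The context-aware OT entry above builds on the earth mover's (Wasserstein-1) distance. As a minimal, hedged illustration of that building block (not the paper's method), SciPy computes the 1-D case directly; the data here are hypothetical stand-ins:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical stand-ins for intensity distributions of low- and
# high-quality fundus images; the real method works on far richer,
# context-aware features.
rng = np.random.default_rng(0)
low_quality = rng.normal(loc=0.4, scale=0.10, size=10_000)
high_quality = rng.normal(loc=0.5, scale=0.15, size=10_000)

# Minimal cost of "moving" one empirical distribution onto the other.
print(wasserstein_distance(low_quality, high_quality))
```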
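For the D2SM entry, here is a rough sketch of matching distributions in a semantic feature space, assuming a small stand-in extractor in place of the pretrained classification network the paper uses, and matching only first and second moments as a crude proxy for full distribution matching:

```python
import torch
import torch.nn as nn

# Stand-in extractor; D2SM uses features from a pretrained classifier.
extractor = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

def feature_stats_matching_loss(denoised, clean):
    """Match per-channel mean/variance of semantic features, a simple
    proxy for matching their probabilistic distributions (an assumption;
    D2SM's actual matching is implicit and richer)."""
    f_d = extractor(denoised)
    f_c = extractor(clean).detach()  # clean statistics are the fixed target
    mean_term = (f_d.mean(dim=(0, 2, 3)) - f_c.mean(dim=(0, 2, 3))).pow(2).mean()
    var_term = (f_d.var(dim=(0, 2, 3)) - f_c.var(dim=(0, 2, 3))).pow(2).mean()
    return mean_term + var_term
```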
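Several entries above (local manifold learning, TransClippedCLR, ExCon, CONTRIQUE) rest on a contrastive objective. A generic InfoNCE-style loss is sketched here as an illustration of the family of losses, not any one paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views of the same N images.
    Matching rows are positive pairs; every other row is a negative."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (N, N) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # diagonal positives
    return F.cross_entropy(logits, targets)
```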
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.