TagGAN: A Generative Model for Data Tagging
- URL: http://arxiv.org/abs/2502.17836v1
- Date: Tue, 25 Feb 2025 04:29:18 GMT
- Title: TagGAN: A Generative Model for Data Tagging
- Authors: Muhammad Nawaz, Basma Nasir, Tehseen Zia, Zawar Hussain, Catarina Moreira,
- Abstract summary: We propose a novel Generative Adversarial Networks (GANs)-based framework, TagGAN. TagGAN is tailored for weakly-supervised fine-grained disease map generation from purely image-level labeled data. Our method is the first to generate fine-grained disease maps that visualize disease lesions in a weakly supervised setting without requiring pixel-level annotations.
- Score: 1.820857020024539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Precise identification and localization of disease-specific features at the pixel level are particularly important for early diagnosis, disease progression monitoring, and effective treatment in medical image analysis. However, conventional diagnostic AI systems lack decision transparency and cannot operate well in environments that lack pixel-level annotations. In this study, we propose a novel Generative Adversarial Networks (GANs)-based framework, TagGAN, which is tailored for weakly-supervised fine-grained disease map generation from purely image-level labeled data. TagGAN generates a pixel-level disease map during domain translation from an abnormal image to a normal representation. This map is then subtracted from the input abnormal image to convert it into its normal counterpart while preserving all the critical anatomical details. Our method is the first to generate fine-grained disease maps that visualize disease lesions in a weakly supervised setting without requiring pixel-level annotations. This development enhances the interpretability of diagnostic AI by providing precise visualizations of disease-specific regions. It also introduces automated binary mask generation to assist radiologists. Empirical evaluations carried out on the benchmark datasets CheXpert, TBX11K, and COVID-19 demonstrate the capability of TagGAN to outperform current top models in accurately identifying disease-specific pixels. This outcome highlights the capability of the proposed model to tag medical images, significantly reducing the workload for radiologists by eliminating the need for binary masks during training.
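The core decomposition the abstract describes — subtracting a generated pixel-level disease map from the abnormal image to recover a pseudo-normal counterpart, then thresholding the map into a binary mask — can be sketched as follows. This is a minimal illustration with a toy array standing in for an X-ray; the function names and the hand-crafted "perfect" disease map are assumptions for demonstration, not the paper's actual generator output.

```python
import numpy as np

def remove_disease_map(abnormal: np.ndarray, disease_map: np.ndarray) -> np.ndarray:
    """Illustrative sketch of TagGAN's additive decomposition:
    pseudo-normal = abnormal - disease_map (values kept in [0, 1]).
    In the paper the map comes from a GAN generator; here it is given."""
    return np.clip(abnormal - disease_map, 0.0, 1.0)

# Toy 4x4 "X-ray" with uniform background and a bright lesion patch.
abnormal = np.full((4, 4), 0.3)
abnormal[0:2, 0:2] += 0.5  # lesion region in the top-left corner

# Assume (for illustration only) a map that perfectly isolates the lesion.
disease_map = np.zeros((4, 4))
disease_map[0:2, 0:2] = 0.5

# Subtracting the map recovers the normal counterpart...
normal = remove_disease_map(abnormal, disease_map)

# ...and thresholding the map yields the automated binary mask
# mentioned in the abstract (threshold value is a free choice here).
binary_mask = (disease_map > 0.1).astype(np.uint8)
```

In the actual framework, the generator learns the map from image-level labels alone during abnormal-to-normal domain translation; the sketch only shows the arithmetic relationship between the three images.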
Related papers
- RadGazeGen: Radiomics and Gaze-guided Medical Image Generation using Diffusion Models [11.865553250973589]
RadGazeGen is a framework for integrating experts' eye gaze patterns and radiomic feature maps as controls to text-to-image diffusion models.
arXiv Detail & Related papers (2024-10-01T01:10:07Z) - Spatial-aware Attention Generative Adversarial Network for Semi-supervised Anomaly Detection in Medical Image [63.59114880750643]
We introduce a novel Spatial-aware Attention Generative Adversarial Network (SAGAN) for one-class semi-supervised generation of health images.
SAGAN generates high-quality health images corresponding to unlabeled data, guided by the reconstruction of normal images and restoration of pseudo-anomaly images.
Extensive experiments on three medical datasets demonstrate that the proposed SAGAN outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-05-21T15:41:34Z) - MS-Twins: Multi-Scale Deep Self-Attention Networks for Medical Image Segmentation [6.6467547151592505]
This paper proposes MS-Twins (Multi-Scale Twins) as a powerful segmentation model on account of the bond of self-attention and convolution.
Compared with existing network structures, MS-Twins improves on previous transformer-based methods on two commonly used datasets, Synapse and ACDC.
arXiv Detail & Related papers (2023-12-12T10:04:11Z) - Anatomy-Guided Weakly-Supervised Abnormality Localization in Chest
X-rays [17.15666977702355]
We propose an Anatomy-Guided chest X-ray Network (AGXNet) to address weak annotation issues.
Our framework consists of a cascade of two networks, one responsible for identifying anatomical abnormalities and the second responsible for pathological observations.
Our results on the MIMIC-CXR dataset demonstrate the effectiveness of AGXNet in disease and anatomical abnormality localization.
arXiv Detail & Related papers (2022-06-25T18:33:27Z) - AlignTransformer: Hierarchical Alignment of Visual Regions and Disease
Tags for Medical Report Generation [50.21065317817769]
We propose an AlignTransformer framework, which includes the Align Hierarchical Attention (AHA) and the Multi-Grained Transformer (MGT) modules.
Experiments on the public IU-Xray and MIMIC-CXR datasets show that the AlignTransformer can achieve results competitive with state-of-the-art methods on the two datasets.
arXiv Detail & Related papers (2022-03-18T13:43:53Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using a conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Weakly Supervised Thoracic Disease Localization via Disease Masks [29.065791290544983]
Weakly supervised localization methods have been proposed that use only image-level annotation.
We propose a spatial attention method using disease masks that describe the areas where diseases mainly occur.
We show that the proposed method results in superior localization performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-01-25T06:52:57Z) - Image-based Plant Disease Diagnosis with Unsupervised Anomaly Detection
Based on Reconstructability of Colors [0.0]
We propose an unsupervised anomaly detection technique for image-based plant disease diagnosis.
Our proposed method includes a new image-based framework for plant disease detection that utilizes a conditional adversarial network called pix2pix.
Experiments with PlantVillage dataset demonstrated the superiority of our proposed method compared to an existing anomaly detector.
arXiv Detail & Related papers (2020-11-29T07:44:05Z) - Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to help the learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z) - Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report
Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.