XReal: Realistic Anatomy and Pathology-Aware X-ray Generation via Controllable Diffusion Model
- URL: http://arxiv.org/abs/2403.09240v2
- Date: Tue, 22 Oct 2024 20:26:09 GMT
- Title: XReal: Realistic Anatomy and Pathology-Aware X-ray Generation via Controllable Diffusion Model
- Authors: Anees Ur Rehman Hashmi, Ibrahim Almakky, Mohammad Areeb Qazi, Santosh Sanjeev, Vijay Ram Papineni, Jagalpathy Jagdish, Mohammad Yaqub
- Abstract summary: Large-scale generative models have demonstrated impressive capabilities in producing visually compelling images.
However, they continue to grapple with hallucination challenges and the generation of anatomically inaccurate outputs.
We present XReal, a novel controllable diffusion model for generating realistic chest X-ray images.
- Score: 0.7381551917607596
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large-scale generative models have demonstrated impressive capabilities in producing visually compelling images, with increasing applications in medical imaging. However, they continue to grapple with hallucination challenges and the generation of anatomically inaccurate outputs. These limitations are mainly due to the reliance on textual inputs and lack of spatial control over the generated images, hindering the potential usefulness of such models in real-life settings. In this work, we present XReal, a novel controllable diffusion model for generating realistic chest X-ray images through precise anatomy and pathology location control. Our lightweight method comprises an Anatomy Controller and a Pathology Controller to introduce spatial control over anatomy and pathology in a pre-trained Text-to-Image Diffusion Model, respectively, without fine-tuning the model. XReal outperforms state-of-the-art X-ray diffusion models in quantitative metrics and radiologists' ratings, showing significant gains in anatomy and pathology realism. Our model holds promise for advancing generative models in medical imaging, offering greater precision and adaptability while inviting further exploration in this evolving field. The code and pre-trained model weights are publicly available at https://github.com/BioMedIA-MBZUAI/XReal.
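The abstract's central idea, steering a frozen, pre-trained text-to-image diffusion model with spatial masks for anatomy and pathology rather than fine-tuning it, can be illustrated with a minimal sketch. The snippet below is a hypothetical toy example and not the XReal implementation from the linked repository: `toy_denoiser`, `anatomy_mask`, `pathology_mask`, and the mask-blending rule are assumptions introduced only to show how mask-guided control over a frozen denoiser could look, loosely in the spirit of inpainting-style guidance.

```python
# Minimal sketch of mask-guided spatial control during diffusion sampling.
# NOT the official XReal implementation; all names and dynamics here are
# hypothetical placeholders used to illustrate the general idea of steering
# a frozen (pre-trained) denoiser with spatial masks, without fine-tuning.

import torch


def toy_denoiser(x_t, t):
    """Stand-in for a frozen pre-trained denoising network (no fine-tuning)."""
    return x_t * (1.0 - 1.0 / (t + 1))  # placeholder dynamics only


def sample_with_spatial_control(anatomy_ref, anatomy_mask, pathology_mask,
                                num_steps=50, seed=0):
    """Toy reverse-diffusion loop that re-injects reference anatomy inside
    `anatomy_mask` at every step (preserving spatial layout) while leaving
    `pathology_mask` regions free for the model to synthesize."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(anatomy_ref.shape, generator=g)   # start from pure noise
    for step in reversed(range(1, num_steps + 1)):
        x = toy_denoiser(x, step)                      # frozen model step
        # Spatial control: keep generated content where pathology is allowed;
        # elsewhere, blend the reference anatomy back into the sample.
        x = pathology_mask * x + (1 - pathology_mask) * (
            anatomy_mask * anatomy_ref + (1 - anatomy_mask) * x
        )
    return x


if __name__ == "__main__":
    H = W = 64
    anatomy_ref = torch.rand(1, 1, H, W)         # reference layout (e.g., lungs/heart)
    anatomy_mask = torch.zeros(1, 1, H, W)
    anatomy_mask[..., 8:56, 8:56] = 1.0          # region whose anatomy must be preserved
    pathology_mask = torch.zeros(1, 1, H, W)
    pathology_mask[..., 20:32, 36:48] = 1.0      # region where a pathology may appear
    out = sample_with_spatial_control(anatomy_ref, anatomy_mask, pathology_mask)
    print(out.shape)
```

Under these assumptions, the anatomy and pathology masks play roles analogous to the paper's Anatomy Controller and Pathology Controller: the pre-trained denoiser is never updated, and all spatial constraints enter only through the sampling loop.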
Related papers
- Reference-Guided Diffusion Inpainting For Multimodal Counterfactual Generation [55.2480439325792]
Safety-critical applications, such as autonomous driving and medical image analysis, require extensive multimodal data for rigorous testing. This work introduces two novel methods for synthetic data generation in autonomous driving and medical image analysis, namely MObI and AnydoorMed, respectively.
arXiv Detail & Related papers (2025-07-30T19:43:47Z) - X-GRM: Large Gaussian Reconstruction Model for Sparse-view X-rays to Computed Tomography [89.84588038174721]
Computed Tomography serves as an indispensable tool in clinical practice, providing non-invasive visualization of internal anatomical structures. Existing CT reconstruction works are limited to small-capacity model architectures and inflexible volume representations. We present X-GRM, a large feedforward model for reconstructing 3D CT volumes from sparse-view 2D X-ray projections.
arXiv Detail & Related papers (2025-05-21T08:14:10Z) - CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning [76.98039909663756]
We present CheXWorld, the first effort towards a self-supervised world model for radiographic images.
Our work develops a unified framework that simultaneously models three aspects of medical knowledge essential for qualified radiologists.
arXiv Detail & Related papers (2025-04-18T17:50:43Z) - Interactive Tumor Progression Modeling via Sketch-Based Image Editing [54.47725383502915]
We propose SkEditTumor, a sketch-based diffusion model for controllable tumor progression editing.
By leveraging sketches as structural priors, our method enables precise modifications of tumor regions while maintaining structural integrity and visual realism.
Our contributions include a novel integration of sketches with diffusion models for medical image editing, fine-grained control over tumor progression visualization, and extensive validation across multiple datasets, setting a new benchmark in the field.
arXiv Detail & Related papers (2025-03-10T00:04:19Z) - Unraveling Normal Anatomy via Fluid-Driven Anomaly Randomization [3.513196894656874]
We introduce UNA (Unraveling Normal Anatomy), the first modality-agnostic learning approach for normal brain anatomy reconstruction.
We propose a fluid-driven anomaly randomization method that generates an unlimited number of realistic pathology profiles on-the-fly.
We demonstrate UNA's effectiveness in reconstructing healthy brain anatomy and showcase its direct application to anomaly detection.
arXiv Detail & Related papers (2025-01-23T04:17:20Z) - Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned on medical images, fitting them to the complex task of counterfactual image generation.
We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z) - RadGazeGen: Radiomics and Gaze-guided Medical Image Generation using Diffusion Models [11.865553250973589]
RadGazeGen is a framework for integrating experts' eye gaze patterns and radiomic feature maps as controls to text-to-image diffusion models.
arXiv Detail & Related papers (2024-10-01T01:10:07Z) - Multi-Conditioned Denoising Diffusion Probabilistic Model (mDDPM) for Medical Image Synthesis [22.0080610434872]
We propose a controlled generation framework for synthetic images with annotations.
We show that our approach can produce annotated lung CT images that faithfully represent the anatomy.
Our experiments demonstrate that controlled generative frameworks of this nature can surpass nearly every state-of-the-art image generative model.
arXiv Detail & Related papers (2024-09-07T01:19:02Z) - Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models [11.835841459200632]
We propose a diffusion model-based method that supports anatomically-controllable medical image generation.
We additionally introduce a random mask ablation training algorithm to enable conditioning on a selected combination of anatomical constraints.
SegGuidedDiff reaches a new state-of-the-art in the faithfulness of generated images to input anatomical masks.
arXiv Detail & Related papers (2024-02-07T19:35:09Z) - XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models [60.437091462613544]
We introduce XrayGPT, a novel conversational medical vision-language model.
It can analyze and answer open-ended questions about chest radiographs.
We generate 217k interactive and high-quality summaries from free-text radiology reports.
arXiv Detail & Related papers (2023-06-13T17:59:59Z) - Trade-offs in Fine-tuned Diffusion Models Between Accuracy and Interpretability [5.865936619867771]
We unravel a consequential trade-off between image fidelity as gauged by conventional metrics and model interpretability in generative diffusion models.
We present a set of design principles for the development of truly interpretable generative models.
arXiv Detail & Related papers (2023-03-31T09:11:26Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - Multi-Domain Balanced Sampling Improves Out-of-Distribution Generalization of Chest X-ray Pathology Prediction Models [67.2867506736665]
We propose a simple balanced batch sampling technique to improve out-of-distribution generalization of chest X-ray pathology prediction models.
We observed that balanced sampling between the multiple training datasets improves the performance over baseline models trained without balancing.
arXiv Detail & Related papers (2021-12-27T15:28:01Z) - SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns; at inference, it can identify anomalies (unseen/modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Evaluating the Clinical Realism of Synthetic Chest X-Rays Generated Using Progressively Growing GANs [0.0]
Chest X-rays are a vital tool in the workup of many patients.
There is an ever-pressing need for greater quantities of labelled data to develop new diagnostic tools.
Previous work has sought to address these concerns by creating class-specific GANs that synthesise images to augment training data.
arXiv Detail & Related papers (2020-10-07T11:47:22Z)