SyntheX: Scaling Up Learning-based X-ray Image Analysis Through In
Silico Experiments
- URL: http://arxiv.org/abs/2206.06127v1
- Date: Mon, 13 Jun 2022 13:08:41 GMT
- Authors: Cong Gao, Benjamin D. Killeen, Yicheng Hu, Robert B. Grupp, Russell H.
Taylor, Mehran Armand, Mathias Unberath
- Score: 12.019996672009375
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial intelligence (AI) now enables automated interpretation of medical
images for clinical use. However, AI's potential use for interventional images
(versus those involved in triage or diagnosis), such as for guidance during
surgery, remains largely untapped. This is because surgical AI systems are
currently trained using post hoc analysis of data collected during live
surgeries, which has fundamental and practical limitations, including ethical
considerations, expense, scalability, data integrity, and a lack of ground
truth. Here, we demonstrate that creating realistic simulated images from human
models is a viable alternative and complement to large-scale in situ data
collection. We show that training AI image analysis models on realistically
synthesized data, combined with contemporary domain generalization or
adaptation techniques, results in models that on real data perform comparably
to models trained on a precisely matched real data training set. Because
synthetic generation of training data from human-based models scales easily, we
find that our model transfer paradigm for X-ray image analysis, which we refer
to as SyntheX, can even outperform real data-trained models due to the
effectiveness of training on a larger dataset. We demonstrate the potential of
SyntheX on three clinical tasks: hip image analysis, surgical robotic tool
detection, and COVID-19 lung lesion segmentation. SyntheX provides an
opportunity to drastically accelerate the conception, design, and evaluation of
intelligent systems for X-ray-based medicine. In addition, simulated image
environments provide the opportunity to test novel instrumentation, design
complementary surgical approaches, and envision novel techniques that improve
outcomes, save time, or mitigate human error, freed from the ethical and
practical considerations of live human data collection.
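As an illustration of the synthetic-to-real transfer idea described in the abstract (not the authors' actual implementation), domain randomization perturbs the appearance of simulated images during training so that a model does not overfit to the simulator's rendering style. The sketch below applies randomized gamma, brightness, noise, and blur perturbations to a stand-in synthetic radiograph; all function names and parameter ranges are illustrative assumptions.

```python
import numpy as np

def domain_randomize(img, rng):
    """Apply randomized intensity perturbations to a synthetic image
    so that downstream training is robust to real-image appearance shifts."""
    # Random gamma shift: varies global contrast.
    gamma = rng.uniform(0.7, 1.4)
    out = np.clip(img, 0.0, 1.0) ** gamma
    # Random brightness/contrast jitter.
    out = out * rng.uniform(0.8, 1.2) + rng.uniform(-0.05, 0.05)
    # Additive Gaussian noise as a stand-in for detector noise.
    out = out + rng.normal(0.0, rng.uniform(0.0, 0.05), size=out.shape)
    # Occasionally apply a 3x3 box blur as a stand-in for scatter/blur effects.
    if rng.random() < 0.5:
        padded = np.pad(out, 1, mode="edge")
        out = sum(
            padded[i:i + out.shape[0], j:j + out.shape[1]] / 9.0
            for i in range(3) for j in range(3)
        )
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
synthetic_drr = rng.random((64, 64))  # stand-in for a simulated radiograph
augmented = domain_randomize(synthetic_drr, rng)
```

In a full pipeline of this kind, each training batch of simulated images would pass through such perturbations before being fed to the segmentation or detection model, while evaluation uses unmodified real images.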
Related papers
- Towards Virtual Clinical Trials of Radiology AI with Conditional Generative Modeling [10.014130930114172]
We introduce a conditional generative AI model designed for virtual clinical trials (VCTs) of radiology AI.
By learning the joint distribution of images and anatomical structures, our model enables precise replication of real-world patient populations.
We demonstrate meaningful evaluation of radiology AI models through VCTs powered by our synthetic CT study populations.
arXiv Detail & Related papers (2025-02-13T15:53:52Z)
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Scaling by training on large datasets has been shown to enhance the quality and fidelity of image generation and manipulation with diffusion models.
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
Our results demonstrate significant performance gains in various scenarios when combined with different fine-tuning schemes.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- SimuScope: Realistic Endoscopic Synthetic Dataset Generation through Surgical Simulation and Diffusion Models [1.28795255913358]
We introduce a fully-fledged surgical simulator that automatically produces all necessary annotations for modern CAS systems.
It offers a more complex and realistic simulation of surgical interactions, including the dynamics between surgical instruments and deformable anatomical environments.
We propose a lightweight and flexible image-to-image translation method based on Stable Diffusion and Low-Rank Adaptation.
arXiv Detail & Related papers (2024-12-03T09:49:43Z)
- Embryo 2.0: Merging Synthetic and Real Data for Advanced AI Predictions [69.07284335967019]
We train two generative models using two datasets, one created and made publicly available, and one existing public dataset.
We generate synthetic embryo images at various cell stages, including 2-cell, 4-cell, 8-cell, morula, and blastocyst.
These were combined with real images to train classification models for embryo cell stage prediction.
arXiv Detail & Related papers (2024-12-02T08:24:49Z)
- Leveraging Computational Pathology AI for Noninvasive Optical Imaging Analysis Without Retraining [3.6835809728620634]
Noninvasive optical imaging modalities can probe a patient's tissue in 3D and, over time, generate gigabytes of clinically relevant data per sample.
There is a need for AI models to analyze this data and assist clinical workflow.
In this paper we introduce FoundationShift, a method to apply any AI model from computational pathology without retraining.
arXiv Detail & Related papers (2024-11-18T14:35:01Z)
- Realistic Surgical Image Dataset Generation Based On 3D Gaussian Splatting [3.5351922399745166]
This research introduces a novel method that employs 3D Gaussian Splatting to generate synthetic surgical datasets.
We developed a data recording system capable of acquiring images alongside tool and camera poses in a surgical scene.
Using this pose data, we synthetically replicate the scene, thereby enabling direct comparisons of the synthetic image quality.
arXiv Detail & Related papers (2024-07-20T11:20:07Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- UAV-Sim: NeRF-based Synthetic Data Generation for UAV-based Perception [62.71374902455154]
We leverage recent advancements in neural rendering to improve static and dynamic novel-view UAV-based image rendering.
We demonstrate a considerable performance boost when a state-of-the-art detection model is optimized primarily on hybrid sets of real and synthetic data.
arXiv Detail & Related papers (2023-10-25T00:20:37Z)
- SYNTA: A novel approach for deep learning-based image analysis in muscle histopathology using photo-realistic synthetic data [2.1616289178832666]
We introduce SYNTA (synthetic data) as a novel approach for the generation of synthetic, photo-realistic, and highly complex biomedical images as training data.
We demonstrate that it is possible to perform robust and expert-level segmentation tasks on previously unseen real-world data, without the need for manual annotations.
arXiv Detail & Related papers (2022-07-29T12:50:32Z)
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.