Text-Driven Tumor Synthesis
- URL: http://arxiv.org/abs/2412.18589v1
- Date: Tue, 24 Dec 2024 18:43:09 GMT
- Title: Text-Driven Tumor Synthesis
- Authors: Xinran Li, Yi Shuai, Chen Liu, Qi Chen, Qilong Wu, Pengfei Guo, Dong Yang, Can Zhao, Pedro R. A. S. Bassi, Daguang Xu, Kang Wang, Yang Yang, Alan Yuille, Zongwei Zhou
- Abstract summary: Tumor synthesis can generate examples that AI often misses or over-detects.
Existing synthesis methods lack controllability over specific tumor characteristics.
We propose a new text-driven tumor synthesis approach, called TextoMorph.
- Score: 28.654516965292444
- Abstract: Tumor synthesis can generate examples that AI often misses or over-detects, improving AI performance by training on these challenging cases. However, existing synthesis methods, which are typically unconditional -- generating images from random variables -- or conditioned only on tumor shapes, lack controllability over specific tumor characteristics such as texture, heterogeneity, boundaries, and pathology type. As a result, the generated tumors may be overly similar to, or duplicates of, existing training data, failing to effectively address AI's weaknesses. We propose a new text-driven tumor synthesis approach, termed TextoMorph, that provides textual control over tumor characteristics. This is particularly beneficial for the tasks that confuse the AI the most, such as early tumor detection (increasing Sensitivity by +8.5%), tumor segmentation for precise radiotherapy (increasing DSC by +6.3%), and classification between benign and malignant tumors (improving Sensitivity by +8.2%). By incorporating text mined from radiology reports into the synthesis process, we increase the variability and controllability of the synthetic tumors to target AI's failure cases more precisely. Moreover, TextoMorph uses contrastive learning across different texts and CT scans, significantly reducing dependence on scarce image-report pairs (only 141 pairs used in this study) by leveraging a large corpus of 34,035 radiology reports. Finally, we have developed rigorous tests to evaluate synthetic tumors, including a Text-Driven Visual Turing Test and Radiomics Pattern Analysis, showing that our synthetic tumors are realistic and diverse in texture, heterogeneity, boundaries, and pathology.
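The abstract mentions contrastive learning across texts and CT scans to reduce dependence on paired data. The paper's exact architecture is not given here, so the sketch below is only a generic CLIP-style symmetric InfoNCE loss over paired (CT, report) embeddings; all names and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce(img_emb: np.ndarray, txt_emb: np.ndarray, tau: float = 0.07) -> float:
    """Symmetric InfoNCE loss over a batch of paired embeddings.
    Row i of img_emb and row i of txt_emb are assumed to be a matching
    (CT scan, radiology report) pair; all other rows act as negatives."""
    # L2-normalize so dot products become cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau                    # (B, B) similarity matrix
    labels = np.arange(len(logits))               # matching pairs on the diagonal

    def xent(l: np.ndarray) -> float:
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()       # cross-entropy on diagonal

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned embeddings (each pair identical and orthogonal to the others) the loss approaches zero; shuffling the pairing drives it up, which is the signal that lets unpaired reports still shape the embedding space.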
Related papers
- Analyzing Tumors by Synthesis [11.942932753828854]
Tumor synthesis generates numerous tumor examples in medical images, aiding AI training for tumor detection and segmentation.
This chapter reviews AI development on real and synthetic data.
Case studies show that AI trained on synthetic tumors can achieve performance comparable to, or better than, AI only trained on real data.
arXiv Detail & Related papers (2024-09-09T19:51:44Z) - FreeTumor: Advance Tumor Segmentation via Large-Scale Tumor Synthesis [7.064154713491736]
FreeTumor is a robust solution for tumor synthesis and segmentation.
It uses an adversarial training strategy to leverage large-scale, diversified unlabeled data during synthesis training.
In FreeTumor, we investigate the data scaling law in tumor segmentation by scaling up the dataset to 11k cases.
arXiv Detail & Related papers (2024-06-03T12:27:29Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top-ranked team had a lesion-wise median Dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
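The lesion-wise DSC reported above measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch of the standard per-lesion computation (function name and the toy 1-D masks are illustrative):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

# Example: two 1-D masks overlapping in two of three foreground voxels.
a = np.array([0, 1, 1, 1, 0])
b = np.array([0, 0, 1, 1, 1])
print(round(dice_similarity(a, b), 3))  # 2*2 / (3+3) = 0.667
```

A lesion-wise score, as in the challenge, applies this per connected component rather than over the whole volume, so small missed lesions are not hidden by large correctly segmented ones.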
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - From Pixel to Cancer: Cellular Automata in Computed Tomography [12.524228287083888]
Tumor synthesis seeks to create artificial tumors in medical images.
This paper establishes a set of generic rules to simulate tumor development.
We integrate the tumor state into the original computed tomography (CT) images to generate synthetic tumors across different organs.
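The Pixel-to-Cancer summary describes generic rules for simulating tumor development before compositing the result into CT images. The paper's actual rules are not listed here, so the following is only a toy cellular-automaton growth sketch under assumed rules (probabilistic spread to 4-neighbours on a 2-D grid; the real method operates on 3-D CT volumes):

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_tumor(grid: np.ndarray, steps: int, p: float = 0.5) -> np.ndarray:
    """Toy cellular-automaton growth: at each step, every cell adjacent to a
    tumor cell becomes tumorous with probability p (4-neighbourhood).
    Note: np.roll wraps at the edges, which is acceptable for a small toy grid."""
    g = grid.copy()
    for _ in range(steps):
        neighbors = (
            np.roll(g, 1, 0) | np.roll(g, -1, 0) |
            np.roll(g, 1, 1) | np.roll(g, -1, 1)
        )
        frontier = neighbors & ~g                       # healthy cells touching tumor
        g = g | (frontier & (rng.random(g.shape) < p))  # stochastic invasion
    return g

# Seed a single tumor cell in the center and grow it.
seed = np.zeros((32, 32), dtype=bool)
seed[16, 16] = True
tumor = grow_tumor(seed, steps=10)
```

In a pipeline like the one described, the resulting binary mask would then be blended into the CT volume, e.g. by locally adjusting Hounsfield-unit intensities inside the mask.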
arXiv Detail & Related papers (2024-03-11T06:46:31Z) - Towards Generalizable Tumor Synthesis [48.45704270448412]
Tumor synthesis enables the creation of artificial tumors in medical images, facilitating the training of AI models for tumor detection and segmentation.
This paper makes a stride toward generalizable tumor synthesis by leveraging a critical observation.
We have ascertained that generative AI models, e.g., Diffusion Models, can create realistic tumors generalized to a range of organs even when trained on a limited number of tumor examples from only one organ.
arXiv Detail & Related papers (2024-02-29T18:57:39Z) - Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z) - Label-Free Liver Tumor Segmentation [10.851067782021902]
We show that AI models can accurately segment liver tumors without the need for manual annotation by using synthetic tumors in CT scans.
Our synthetic tumors have two intriguing advantages: they are realistic in shape and texture, to the point that even medical professionals can confuse them with real tumors, and they can be generated automatically in large numbers, including many small (or even tiny) examples.
arXiv Detail & Related papers (2023-03-27T01:22:12Z) - CancerUniT: Towards a Single Unified Model for Effective Detection, Segmentation, and Diagnosis of Eight Major Cancers Using a Large Collection of CT Scans [45.83431075462771]
Human readers or radiologists routinely perform full-body multi-organ multi-disease detection and diagnosis in clinical practice.
Most medical AI systems are built to focus on single organs with a narrow list of a few diseases.
CancerUniT is a query-based Mask Transformer model with the output of multi-tumor prediction.
arXiv Detail & Related papers (2023-01-28T20:09:34Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
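The accuracy, sensitivity, and specificity figures above are standard confusion-matrix metrics. A minimal sketch of how they are computed (the counts in the example are made up for illustration, not taken from the paper):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity (recall), and specificity from confusion counts:
    tp/fp/tn/fn = true/false positives and negatives."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # fraction of all correct calls
    sensitivity = tp / (tp + fn)                # fraction of positives caught
    specificity = tn / (tn + fp)                # fraction of negatives rejected
    return accuracy, sensitivity, specificity

# Hypothetical counts, purely for illustration.
acc, sens, spec = classification_metrics(tp=8, fp=2, tn=9, fn=1)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
# accuracy=0.850 sensitivity=0.889 specificity=0.818
```

Sensitivity is the metric TextoMorph reports improving for detection and benign/malignant classification, since missed tumors (false negatives) are the costliest error in screening.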
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Triplet Contrastive Learning for Brain Tumor Classification [99.07846518148494]
We present a novel approach of directly learning deep embeddings for brain tumor types, which can be used for downstream tasks such as classification.
We evaluate our method on an extensive brain tumor dataset which consists of 27 different tumor classes, out of which 13 are defined as rare.
arXiv Detail & Related papers (2021-08-08T11:26:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.