ONCOPILOT: A Promptable CT Foundation Model For Solid Tumor Evaluation
- URL: http://arxiv.org/abs/2410.07908v3
- Date: Thu, 24 Oct 2024 15:35:58 GMT
- Title: ONCOPILOT: A Promptable CT Foundation Model For Solid Tumor Evaluation
- Authors: Léo Machado, Hélène Philippe, Élodie Ferreres, Julien Khlaut, Julie Dupuis, Korentin Le Floch, Denis Habip Gatenyo, Pascal Roux, Jules Grégory, Maxime Ronot, Corentin Dancette, Daniel Tordjman, Pierre Manceron, Paul Hérent
- Abstract summary: ONCOPILOT is an interactive radiological foundation model trained on approximately 7,500 CT scans covering the whole body.
It performs 3D tumor segmentation using visual prompts like point-click and bounding boxes, outperforming state-of-the-art models.
ONCOPILOT also accelerates measurement processes and reduces inter-reader variability.
- Score: 3.956274064760269
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Carcinogenesis is a proteiform phenomenon, with tumors emerging in various locations and displaying complex, diverse shapes. At the crucial intersection of research and clinical practice, it demands precise and flexible assessment. However, current biomarkers, such as RECIST 1.1's long and short axis measurements, fall short of capturing this complexity, offering an approximate estimate of tumor burden and a simplistic representation of a more intricate process. Additionally, existing supervised AI models face challenges in addressing the variability in tumor presentations, limiting their clinical utility. These limitations arise from the scarcity of annotations and the models' focus on narrowly defined tasks. To address these challenges, we developed ONCOPILOT, an interactive radiological foundation model trained on approximately 7,500 CT scans covering the whole body, from both normal anatomy and a wide range of oncological cases. ONCOPILOT performs 3D tumor segmentation using visual prompts like point-click and bounding boxes, outperforming state-of-the-art models (e.g., nnUnet) and achieving radiologist-level accuracy in RECIST 1.1 measurements. The key advantage of this foundation model is its ability to surpass state-of-the-art performance while keeping the radiologist in the loop, a capability that previous models could not achieve. When radiologists interactively refine the segmentations, accuracy improves further. ONCOPILOT also accelerates measurement processes and reduces inter-reader variability, facilitating volumetric analysis and unlocking new biomarkers for deeper insights. This AI assistant is expected to enhance the precision of RECIST 1.1 measurements, unlock the potential of volumetric biomarkers, and improve patient stratification and clinical care, while seamlessly integrating into the radiological workflow.
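To make the RECIST 1.1 angle concrete, the sketch below shows one plausible way to derive the longest axial diameter from a 3D binary segmentation mask. It is a minimal NumPy/SciPy illustration under assumed (z, y, x) axis ordering, not ONCOPILOT's actual measurement code.

```python
import numpy as np
from scipy import ndimage

def recist_long_axis_mm(mask: np.ndarray, spacing_yx: tuple[float, float]) -> float:
    """Longest in-plane tumor diameter in mm, following RECIST's axial-plane convention.

    mask: binary 3D array ordered (z, y, x); spacing_yx: in-plane voxel size in mm.
    For each axial slice, distances are computed between boundary pixels only,
    and the maximum chord over all slices is returned.
    """
    longest = 0.0
    for axial_slice in mask.astype(bool):
        # Keep only the boundary pixels to avoid an O(n^2) blow-up on large lesions.
        boundary = axial_slice & ~ndimage.binary_erosion(axial_slice)
        ys, xs = np.nonzero(boundary)
        if len(ys) < 2:
            continue
        pts = np.column_stack([ys * spacing_yx[0], xs * spacing_yx[1]])
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        longest = max(longest, float(dists.max()))
    return longest
```

Deriving the measurement from a volumetric mask in this way is what lets interactive refinement of the segmentation propagate directly into the RECIST readout.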
Related papers
- Optimizing Synthetic Data for Enhanced Pancreatic Tumor Segmentation [1.6321136843816972]
This study critically evaluates the limitations of existing generative-AI based frameworks for pancreatic tumor segmentation.
We conduct a series of experiments to investigate the impact of synthetic tumor size and boundary definition precision on model performance.
Our findings demonstrate that: (1) strategically selecting a combination of synthetic tumor sizes is crucial for optimal segmentation outcomes, and (2) generating synthetic tumors with precise boundaries significantly improves model accuracy.
arXiv Detail & Related papers (2024-07-27T15:38:07Z)
- Potential of Multimodal Large Language Models for Data Mining of Medical Images and Free-text Reports [51.45762396192655]
Multimodal large language models (MLLMs) have recently transformed many domains, significantly affecting the medical field. Notably, Gemini-Vision-series (Gemini) and GPT-4-series (GPT-4) models have epitomized a paradigm shift in Artificial General Intelligence for computer vision.
This study exhaustively evaluated the performance of Gemini, GPT-4, and four other popular large models across 14 medical imaging datasets.
arXiv Detail & Related papers (2024-07-08T09:08:42Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are especially common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
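The summary does not detail the paper's dual-attention design; as a rough illustration of the general idea of one MRI sequence guiding another, here is a minimal cross-attention fusion block in PyTorch. All layer sizes and the modality pairing are assumptions for the example, not the cited architecture.

```python
import torch
import torch.nn as nn

class CrossModalityGuidance(nn.Module):
    """Toy cross-attention block: features from a guiding MRI sequence
    (e.g., T1ce) attend over features from a second sequence (e.g., FLAIR).
    Illustrative sketch only, not the architecture from the cited paper."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, guide: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # guide, other: (batch, tokens, dim) feature sequences from two modalities.
        fused, _ = self.attn(query=guide, key=other, value=other)
        return self.norm(guide + fused)  # residual fusion of the two modalities
```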
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, with unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- Segmentation-based Assessment of Tumor-Vessel Involvement for Surgical Resectability Prediction of Pancreatic Ductal Adenocarcinoma [1.880228463170355]
Pancreatic ductal adenocarcinoma (PDAC) is a highly aggressive cancer with limited treatment options.
This research proposes a workflow and deep learning-based segmentation models to automatically assess tumor-vessel involvement.
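As one hedged sketch of how segmentation masks can be turned into a tumor-vessel involvement score (the paper's own criteria are not described in this summary), the function below reports the fraction of vessel-surface voxels lying within a small margin of the tumor. Clinical resectability criteria are typically phrased as degrees of circumferential contact, which this crude proxy does not compute.

```python
import numpy as np
from scipy import ndimage

def vessel_contact_fraction(tumor: np.ndarray, vessel: np.ndarray,
                            margin_vox: int = 2) -> float:
    """Fraction of the vessel surface within `margin_vox` voxels of the tumor.

    tumor, vessel: binary 3D masks on the same voxel grid. An involvement
    proxy for illustration, not a validated resectability criterion.
    """
    vessel = vessel.astype(bool)
    surface = vessel & ~ndimage.binary_erosion(vessel)  # vessel boundary voxels
    near_tumor = ndimage.binary_dilation(tumor.astype(bool), iterations=margin_vox)
    n_surface = int(surface.sum())
    return float((surface & near_tumor).sum()) / n_surface if n_surface else 0.0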
arXiv Detail & Related papers (2023-10-01T10:39:38Z)
- Automated ensemble method for pediatric brain tumor segmentation [0.0]
This study introduces a novel ensemble approach using ONet and modified versions of UNet.
Data augmentation ensures robustness and accuracy across different scanning protocols.
Results indicate that this advanced ensemble approach offers promising prospects for enhanced diagnostic accuracy.
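The summary does not spell out how the ONet and modified UNet outputs are combined; a common and simple choice, shown below as an assumption rather than the paper's method, is to average the models' softmax probability maps before taking the per-voxel argmax.

```python
import numpy as np

def ensemble_segmentation(prob_maps: list[np.ndarray]) -> np.ndarray:
    """Combine per-model class-probability maps of shape (classes, z, y, x)
    by simple averaging, then return the per-voxel argmax label map."""
    mean_probs = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return mean_probs.argmax(axis=0)
```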
arXiv Detail & Related papers (2023-08-14T15:29:32Z)
- CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting [46.45578907156356]
We set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition.
We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue.
Our findings suggest that nuclei and eosinophils play an important role in the tumour microenvironment.
arXiv Detail & Related papers (2023-03-11T01:21:13Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
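In this family of methods the anomaly score is typically the residual between the input and a pseudo-healthy reconstruction, assembled patch by patch. The sketch below illustrates that idea with a stand-in `reconstruct_patch` function, which is hypothetical since the paper's diffusion model is not reproduced here.

```python
import numpy as np
from typing import Callable

def patched_anomaly_map(image: np.ndarray, patch: int,
                        reconstruct_patch: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    """Slide non-overlapping patches over a 2D slice, estimate healthy anatomy
    for each patch, and score anomalies as |input - estimate|.
    Edge remainders that don't fit a full patch are left unscored for brevity."""
    out = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0] - patch + 1, patch):
        for x in range(0, image.shape[1] - patch + 1, patch):
            region = image[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = np.abs(region - reconstruct_patch(region))
    return out
```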
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder [48.2010192865749]
Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out-of-distribution samples.
This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA).
The proposed pipeline achieved a Dice score of 0.642 ± 0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859 ± 0.112 while detecting artificially induced anomalies.
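For reference, the Dice score quoted above measures overlap between a predicted and a reference mask, 2|A∩B| / (|A| + |B|); a minimal implementation:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks;
    returns 1.0 when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    total = pred.sum() + target.sum()
    return 2.0 * float(np.logical_and(pred, target).sum()) / total if total else 1.0
```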
arXiv Detail & Related papers (2022-01-31T14:27:35Z)
- Deep Learning for Reaction-Diffusion Glioma Growth Modelling: Towards a Fully Personalised Model? [0.2609639566830968]
Reaction-diffusion models have been proposed for decades to capture the growth of gliomas.
Deep convolutional neural networks (DCNNs) can address the pitfalls commonly encountered in the field.
This approach may open the perspective of a clinical application of reaction-diffusion growth models for tumour prognosis and treatment planning.
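The reaction-diffusion glioma models referenced here commonly take the Fisher-KPP form, with a diffusion term for infiltration and a logistic term for proliferation (the paper may use a variant; this is the canonical formulation):

```latex
\frac{\partial u}{\partial t}
  = \nabla \cdot \big( D(\mathbf{x}) \, \nabla u \big)
  + \rho \, u \, (1 - u)
```

where u(x, t) is the normalized tumor cell density, D(x) the tissue-dependent diffusivity (higher in white than grey matter), and ρ the proliferation rate. A common use of DCNNs in this line of work is to estimate D and ρ, or the density field u itself, directly from imaging, sidestepping expensive PDE-constrained optimization.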
arXiv Detail & Related papers (2021-11-26T10:16:57Z)
- Harvesting, Detecting, and Characterizing Liver Lesions from Large-scale Multi-phase CT Data via Deep Dynamic Texture Learning [24.633802585888812]
We propose a fully automated, multi-stage liver tumor characterization framework for dynamic contrast-enhanced computed tomography (CT).
Our system comprises four sequential processes of tumor proposal detection, tumor harvesting, primary tumor site selection, and deep texture-based tumor characterization.
arXiv Detail & Related papers (2020-06-28T19:55:34Z)