Curia: A Multi-Modal Foundation Model for Radiology
- URL: http://arxiv.org/abs/2509.06830v1
- Date: Mon, 08 Sep 2025 16:04:12 GMT
- Title: Curia: A Multi-Modal Foundation Model for Radiology
- Authors: Corentin Dancette, Julien Khlaut, Antoine Saporta, Helene Philippe, Elodie Ferreres, Baptiste Callard, Théo Danielou, Léo Alberge, Léo Machado, Daniel Tordjman, Julie Dupuis, Korentin Le Floch, Jean Du Terrail, Mariam Moshiri, Laurent Dercle, Tom Boeken, Jules Gregory, Maxime Ronot, François Legou, Pascal Roux, Marc Sapoval, Pierre Manceron, Paul Hérent
- Abstract summary: We introduce Curia, a foundation model trained on the entire cross-sectional imaging output of a major hospital. Curia accurately identifies organs, detects conditions like brain hemorrhages and myocardial infarctions, and predicts outcomes in tumor staging.
- Score: 3.5025024631649857
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI-assisted radiological interpretation relies on predominantly narrow, single-task models. This approach is impractical for covering the vast spectrum of imaging modalities, diseases, and radiological findings. Foundation models (FMs) hold the promise of broad generalization across modalities and in low-data settings, but this potential has remained largely unrealized in radiology. We introduce Curia, a foundation model trained on the entire cross-sectional imaging output of a major hospital over several years, which to our knowledge is the largest such corpus of real-world data, encompassing 150,000 exams (130 TB). On a newly curated 19-task external validation benchmark, Curia accurately identifies organs, detects conditions like brain hemorrhages and myocardial infarctions, and predicts outcomes in tumor staging. Curia meets or surpasses the performance of radiologists and recent foundation models, and exhibits clinically significant emergent properties in cross-modality and low-data regimes. To accelerate progress, we release our base model's weights at https://huggingface.co/raidium/curia.
Related papers
- A generalizable large-scale foundation model for musculoskeletal radiographs [6.440881664328117]
We present SKELEX, a large-scale foundation model for musculoskeletal radiographs trained using self-supervised learning. The model was evaluated on 12 downstream diagnostic tasks and generally outperformed baselines in fracture detection, osteoarthritis grading, and bone tumor classification. We developed an interpretable, region-guided model for predicting bone tumors, which maintained robust performance on independent external datasets.
arXiv Detail & Related papers (2026-02-03T04:04:45Z) - A multimodal vision foundation model for generalizable knee pathology [40.03838145472935]
Musculoskeletal disorders represent an urgent demand for precise interpretation of medical imaging. Current artificial intelligence approaches in orthopedics rely on task-specific, supervised learning paradigms. We introduce OrthoFoundation, a multimodal vision foundation model optimized for musculoskeletal pathology.
arXiv Detail & Related papers (2026-01-26T08:14:51Z) - X-ray Insights Unleashed: Pioneering the Enhancement of Multi-Label Long-Tail Data [86.52299247918637]
Long-tailed pulmonary anomalies in chest radiography present formidable diagnostic challenges. Despite recent strides in diffusion-based methods for enhancing the representation of tail lesions, the paucity of rare lesion exemplars curtails the generative capabilities of these approaches. We propose a novel data synthesis pipeline designed to augment tail lesions utilizing a copious supply of conventional normal X-rays.
arXiv Detail & Related papers (2025-12-24T06:14:55Z) - Multi Anatomy X-Ray Foundation Model [7.079609136804425]
We introduce XR-0, a multi-anatomy X-ray foundation model using self-supervised learning. XR-0 achieves state-of-the-art performance on most multi-anatomy tasks and remains competitive on chest-specific benchmarks.
arXiv Detail & Related papers (2025-09-15T17:12:26Z) - Fast-staged CNN Model for Accurate pulmonary diseases and Lung cancer detection [0.0]
This research evaluates a deep learning model designed to detect lung cancer, specifically pulmonary nodules, along with eight other lung pathologies, using chest radiographs. A two-stage classification system, utilizing ensemble methods and transfer learning, is employed to first triage images into Normal or Abnormal. The model achieves notable results in classification, with a top-performing accuracy of 77%, a sensitivity of 0.713, a specificity of 0.776 during external validation, and an AUC score of 0.888.
arXiv Detail & Related papers (2024-12-16T11:47:07Z) - MGH Radiology Llama: A Llama 3 70B Model for Radiology [50.42811030970618]
This paper presents an advanced radiology-focused large language model: MGH Radiology Llama. It is developed using the Llama 3 70B model, building upon previous domain-specific models like Radiology-GPT and Radiology-Llama2. Our evaluation, incorporating both traditional metrics and a GPT-4-based assessment, highlights the enhanced performance of this work over general-purpose LLMs.
arXiv Detail & Related papers (2024-08-13T01:30:03Z) - Generative models of MRI-derived neuroimaging features and associated dataset of 18,000 samples [17.576301478946775]
GenMIND is a collection of generative models of normative regional volumetric features derived from structural brain imaging.
We offer 18,000 synthetic samples spanning the adult lifespan (ages 22-90 years), alongside the model's capability to generate unlimited data.
arXiv Detail & Related papers (2024-07-17T15:33:10Z) - Advancing human-centric AI for robust X-ray analysis through holistic self-supervised learning [33.9544297423474]
We present RayDINO, a large visual encoder trained by self-supervision on 873k chest X-rays.
We compare RayDINO to previous state-of-the-art models across nine radiology tasks, from classification and dense segmentation to text generation.
Our findings suggest that self-supervision enables patient-centric AI that proves useful in clinical tasks and in interpreting X-rays holistically.
arXiv Detail & Related papers (2024-05-02T16:59:10Z) - Uncertainty Quantification in Detecting Choroidal Metastases on MRI via Evolutionary Strategies [0.0]
Uncertainty quantification plays a vital role in facilitating the practical implementation of AI in radiology.
We employed DNE to train a simple Convolutional Neural Network (CNN) with MRI images of the eyes for binary classification.
We found that subjective features appreciated by human radiologists explained images for which uncertainty was high.
arXiv Detail & Related papers (2024-04-12T23:49:37Z) - ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a remarkable total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z) - Revisiting Computer-Aided Tuberculosis Diagnosis [56.80999479735375]
Tuberculosis (TB) is a major global health threat, causing millions of deaths annually.
Computer-aided tuberculosis diagnosis (CTD) using deep learning has shown promise, but progress is hindered by limited training data.
We establish a large-scale dataset, namely the Tuberculosis X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with corresponding bounding box annotations for TB areas.
This dataset enables the training of sophisticated detectors for high-quality CTD.
arXiv Detail & Related papers (2023-07-06T08:27:48Z) - CoRSAI: A System for Robust Interpretation of CT Scans of COVID-19 Patients Using Deep Learning [133.87426554801252]
We adopted an approach based on an ensemble of deep convolutional neural networks for segmentation of lung CT scans.
Using our models, we are able to segment lesions, evaluate patient dynamics, estimate the relative volume of lungs affected by lesions, and evaluate the lung damage stage.
arXiv Detail & Related papers (2021-05-25T12:06:55Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Attention U-Net Based Adversarial Architectures for Chest X-ray Lung Segmentation [0.0]
We present a novel deep learning approach for lung segmentation, a basic, but arduous task in the diagnostic pipeline.
Our method uses state-of-the-art fully convolutional neural networks in conjunction with an adversarial critic model.
It generalized well to CXR images of unseen datasets with different patient profiles, achieving a final Dice similarity coefficient (DSC) of 97.5% on the JSRT dataset.
arXiv Detail & Related papers (2020-03-23T14:45:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.