Shap-MeD
- URL: http://arxiv.org/abs/2503.15562v1
- Date: Wed, 19 Mar 2025 00:40:14 GMT
- Title: Shap-MeD
- Authors: Nicolás Laverde, Melissa Robles, Johan Rodríguez,
- Abstract summary: We present Shap-MeD, a text-to-3D object generative model specialized in the biomedical domain. We leverage Shap-e, an open-source text-to-3D generative model developed by OpenAI, and fine-tune it using a dataset of biomedical objects. Our results indicate that Shap-MeD demonstrates higher structural accuracy in biomedical object generation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Shap-MeD, a text-to-3D object generative model specialized in the biomedical domain. The objective of this study is to develop an assistant that facilitates the 3D modeling of medical objects, thereby reducing development time. 3D modeling in medicine has various applications, including surgical procedure simulation and planning, the design of personalized prosthetic implants, medical education, the creation of anatomical models, and the development of research prototypes. To achieve this, we leverage Shap-e, an open-source text-to-3D generative model developed by OpenAI, and fine-tune it using a dataset of biomedical objects. Our model achieved a mean squared error (MSE) of 0.089 in latent generation on the evaluation set, compared to Shap-e's MSE of 0.147. Additionally, we conducted a qualitative evaluation, comparing our model with others in the generation of biomedical objects. Our results indicate that Shap-MeD demonstrates higher structural accuracy in biomedical object generation.
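The evaluation above compares models by the mean squared error between generated and reference latents (0.089 for Shap-MeD vs. 0.147 for Shap-e). As a hedged illustration of that metric, the sketch below computes latent MSE on toy numpy arrays standing in for Shap-e latents; the array shapes, noise scales, and variable names are assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

def latent_mse(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error between generated and ground-truth latent vectors."""
    return float(np.mean((pred - target) ** 2))

# Toy stand-ins for Shap-e latents (real latents are far larger and come
# from the model's encoder; shapes and values here are illustrative only).
rng = np.random.default_rng(0)
target = rng.normal(size=(4, 1024))                              # ground truth
pred_base = target + rng.normal(scale=0.4, size=target.shape)    # base model
pred_tuned = target + rng.normal(scale=0.3, size=target.shape)   # fine-tuned

# A lower latent MSE means closer agreement with the reference latents,
# which is the sense in which Shap-MeD's 0.089 improves on Shap-e's 0.147.
print(latent_mse(pred_tuned, target), latent_mse(pred_base, target))
```

Fine-tuning would then minimize this quantity over the biomedical training set; the numbers printed here are synthetic and only demonstrate that lower noise yields lower MSE.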
Related papers
- A Statistical 3D Stomach Shape Model for Anatomical Analysis [0.0]
We propose a novel pipeline for the generation of synthetic 3D stomach models. We develop a 3D statistical shape model of the stomach, trained to capture natural anatomical variability. This work introduces the first statistical 3D shape model of the stomach, with applications ranging from surgical simulation and pre-operative planning to medical education and computational modeling.
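Statistical shape models of the kind this entry describes are commonly built by running PCA on aligned point sets, then synthesizing new shapes as the mean plus weighted variation modes. A minimal numerical sketch under that assumption (purely synthetic data, not the paper's stomach dataset or pipeline):

```python
import numpy as np

# Toy training set: N roughly aligned shapes, each P surface points in 3D,
# flattened to vectors (synthetic stand-in data for illustration).
rng = np.random.default_rng(1)
N, P = 20, 50
shapes = rng.normal(size=(N, P * 3))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# PCA via SVD: rows of Vt are the principal modes of shape variation.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

def synthesize(coeffs: np.ndarray) -> np.ndarray:
    """New shape = mean shape + weighted sum of the leading variation modes."""
    k = coeffs.shape[0]
    return (mean_shape + coeffs @ Vt[:k]).reshape(P, 3)

print(synthesize(np.array([2.0, -1.0, 0.5])).shape)  # (50, 3)
```

Sampling the mode coefficients from the distribution observed in training data is what lets such a model capture "natural anatomical variability" and generate plausible synthetic anatomy.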
arXiv Detail & Related papers (2025-09-08T09:23:11Z) - Biomedical Foundation Model: A Survey [84.26268124754792]
Foundation models are large-scale pre-trained models that learn from extensive unlabeled datasets. These models can be adapted to various applications such as question answering and visual understanding. This survey explores the potential of foundation models across diverse domains within biomedical fields.
arXiv Detail & Related papers (2025-03-03T22:42:00Z) - Applications of Large Models in Medicine [1.7326218418566917]
Medical Large Models (MedLMs) are revolutionizing healthcare by enhancing disease prediction, diagnostic assistance, personalized treatment planning, and drug discovery.
This paper aims to provide a comprehensive overview of the current state and future directions of large models in medicine, underscoring their significance in advancing global health.
arXiv Detail & Related papers (2025-02-24T13:21:30Z) - Point Cloud Upsampling as Statistical Shape Model for Pelvic [1.4045865137356779]
We propose a novel framework that integrates medical image segmentation and point cloud upsampling for accurate shape reconstruction of pelvic models. Using the SAM-Med3D model for segmentation and a point cloud upsampling network trained on the MedShapeNet dataset, our method transforms sparse medical imaging data into high-resolution 3D bone models.
arXiv Detail & Related papers (2025-01-28T05:47:50Z) - Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, noted as the informed slice to serve the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z) - Endora: Video Generation Models as Endoscopy Simulators [53.72175969751398]
This paper introduces Endora, an innovative approach to generating medical videos that simulate clinical endoscopy scenes.
We also pioneer the first public benchmark for endoscopy simulation with video generation models.
Endora marks a notable breakthrough in the deployment of generative AI for clinical endoscopy research.
arXiv Detail & Related papers (2024-03-17T00:51:59Z) - Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation [113.5002649181103]
We train open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology.
For training, we assemble a large dataset of over 697 thousand radiology image-text pairs.
For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation.
Inference with LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
arXiv Detail & Related papers (2024-03-12T18:12:02Z) - SAM-Med3D: Towards General-purpose Segmentation Models for Volumetric Medical Images [35.83393121891959]
We introduce SAM-Med3D for general-purpose segmentation on volumetric medical images.
SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities.
Our approach demonstrates that substantial medical resources can be utilized to develop a general-purpose medical AI.
arXiv Detail & Related papers (2023-10-23T17:57:36Z) - MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision [119.29105800342779]
MedShapeNet was created to facilitate the translation of data-driven vision algorithms to medical applications.
As a unique feature, we directly model the majority of shapes on the imaging data of real patients.
Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks.
arXiv Detail & Related papers (2023-08-30T16:52:20Z) - Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data [66.9359934608229]
This study aims to initiate the development of a Radiology Foundation Model, termed RadFM.
To the best of our knowledge, this is the first large-scale, high-quality, medical visual-language dataset, with both 2D and 3D scans.
We propose a new evaluation benchmark, RadBench, that comprises five tasks, including modality recognition, disease diagnosis, visual question answering, report generation and rationale diagnosis.
arXiv Detail & Related papers (2023-08-04T17:00:38Z) - BOSS: Bones, Organs and Skin Shape Model [10.50175010474078]
We propose a deformable human shape and pose model that combines skin, internal organs, and bones, learned from CT images.
By modeling the statistical variations in a pose-normalized space using probabilistic PCA, our approach offers a holistic representation of the body.
arXiv Detail & Related papers (2023-03-08T22:31:24Z) - A Point Cloud Generative Model via Tree-Structured Graph Convolutions for 3D Brain Shape Reconstruction [31.436531681473753]
Obtaining intraoperative 3D shape information with physical methods such as sensor scanning is nearly impossible.
In this paper, a general generative adversarial network (GAN) architecture is proposed to reconstruct the 3D point clouds (PCs) of brains by using one single 2D image.
arXiv Detail & Related papers (2021-07-21T07:57:37Z) - Benchmarking off-the-shelf statistical shape modeling tools in clinical applications [53.47202621511081]
We systematically assess the outcome of widely used, state-of-the-art SSM tools.
We propose validation frameworks for anatomical landmark/measurement inference and lesion screening.
ShapeWorks and Deformetrica shape models are found to capture clinically relevant population-level variability.
arXiv Detail & Related papers (2020-09-07T03:51:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.