AI-Driven Three-Dimensional Reconstruction and Quantitative Analysis for Burn Injury Assessment
- URL: http://arxiv.org/abs/2602.00113v1
- Date: Tue, 27 Jan 2026 01:24:53 GMT
- Title: AI-Driven Three-Dimensional Reconstruction and Quantitative Analysis for Burn Injury Assessment
- Authors: S. Kalaycioglu, C. Hong, K. Zhai, H. Xie, J. N. Wong
- Abstract summary: This paper presents an AI-enabled burn assessment and management platform that integrates photogrammetry, 3D surface reconstruction, and deep learning-based segmentation. The system reconstructs patient-specific 3D burn surfaces and maps burn regions onto anatomy to compute objective metrics in real-world units. The platform also supports structured patient intake, guided image capture, 3D analysis and visualization, treatment recommendations, and automated report generation.
- Score: 0.20308459813360544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate, reproducible burn assessment is critical for treatment planning, healing monitoring, and medico-legal documentation, yet conventional visual inspection and 2D photography are subjective and limited for longitudinal comparison. This paper presents an AI-enabled burn assessment and management platform that integrates multi-view photogrammetry, 3D surface reconstruction, and deep learning-based segmentation within a structured clinical workflow. Using standard multi-angle images from consumer-grade cameras, the system reconstructs patient-specific 3D burn surfaces and maps burn regions onto anatomy to compute objective metrics in real-world units, including surface area, TBSA, depth-related geometric proxies, and volumetric change. Successive reconstructions are spatially aligned to quantify healing progression over time, enabling objective tracking of wound contraction and depth reduction. The platform also supports structured patient intake, guided image capture, 3D analysis and visualization, treatment recommendations, and automated report generation. Simulation-based evaluation demonstrates stable reconstructions, consistent metric computation, and clinically plausible longitudinal trends, supporting a scalable, non-invasive approach to objective, geometry-aware burn assessment and decision support in acute and outpatient care.
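The surface-area and TBSA metrics described in the abstract reduce to standard triangle-mesh geometry: sum the per-triangle areas of the burn region and express them as a fraction of total body surface area. A minimal sketch in Python/NumPy (the function name, the toy one-triangle mesh, and the assumed 1.9 m² total body surface area are illustrative, not from the paper):

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Total area of a triangle mesh: sum of per-triangle areas
    via the cross-product formula |(b - a) x (c - a)| / 2."""
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    cross = np.cross(b - a, c - a)
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

# Toy burn patch: a unit right triangle in the XY plane (area 0.5 m^2).
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])

burn_area = mesh_surface_area(verts, tris)   # burn surface area in m^2
body_area = 1.9                              # assumed total body surface area, m^2
tbsa_pct = 100.0 * burn_area / body_area     # burn extent as % TBSA
```

Because the reconstruction is metrically scaled, the same area sum applied to successive, spatially aligned reconstructions yields the longitudinal wound-contraction trend the paper describes.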
Related papers
- Non-Invasive 3D Wound Measurement with RGB-D Imaging [6.009571668786525]
This paper presents a fast, non-invasive 3D wound measurement algorithm based on RGB-D imaging. The method combines RGB-D odometry with B-spline surface reconstruction to generate detailed 3D wound meshes.
arXiv Detail & Related papers (2026-01-26T23:03:24Z)
- Medical Scene Reconstruction and Segmentation based on 3D Gaussian Representation [6.980731532480765]
3D reconstruction of medical images is a key technology in medical image analysis and clinical diagnosis. Traditional methods are computationally expensive and prone to structural discontinuities and loss of detail in sparse slices. We propose an efficient 3D reconstruction method based on 3D Gaussian and tri-plane representations.
arXiv Detail & Related papers (2025-12-28T06:18:11Z)
- Accelerating 3D Photoacoustic Computed Tomography with End-to-End Physics-Aware Neural Operators [74.65171736966131]
Photoacoustic computed tomography (PACT) combines optical contrast with ultrasonic resolution, achieving deep-tissue imaging beyond the optical diffusion limit. Current implementations require dense transducer arrays and prolonged acquisition times, limiting clinical translation. We introduce Pano, an end-to-end physics-aware model that directly learns the inverse acoustic mapping from sensor measurements to volumetric reconstructions.
arXiv Detail & Related papers (2025-09-11T23:12:55Z)
- Wound3DAssist: A Practical Framework for 3D Wound Assessment [24.184493298243392]
We present Wound3DAssist, a framework for 3D wound assessment using monocular consumer-grade videos. Our framework generates accurate 3D models from short handheld smartphone video recordings. We integrate 3D reconstruction, wound segmentation, tissue classification, and periwound analysis into a modular workflow.
arXiv Detail & Related papers (2025-08-25T03:50:04Z)
- SPIDER: Structure-Preferential Implicit Deep Network for Biplanar X-ray Reconstruction [30.432335038130866]
SPIDER is a supervised framework designed to reconstruct CT volumes from biplanar X-ray images. It embeds anatomical constraints into the reconstruction process, thereby enhancing structural continuity and reducing soft-tissue artifacts. Our approach demonstrates strong potential in downstream segmentation tasks, underscoring its utility in personalized treatment planning and image-guided surgical navigation.
arXiv Detail & Related papers (2025-07-07T06:06:28Z)
- Are Pixel-Wise Metrics Reliable for Sparse-View Computed Tomography Reconstruction? [61.48804987263701]
We propose a suite of anatomy-aware evaluation metrics to assess structural completeness across anatomical structures. CARE incorporates structural penalties during training to encourage anatomical preservation of significant structures. CARE substantially improves structural completeness in CT reconstructions, achieving up to +32% improvement for large organs, +22% for small organs, +40% for intestines, and +36% for vessels.
arXiv Detail & Related papers (2025-06-02T17:07:10Z)
- A 3D Facial Reconstruction Evaluation Methodology: Comparing Smartphone Scans with Deep Learning Based Methods Using Geometry and Morphometry Criteria [60.865754842465684]
Three-dimensional (3D) facial shape analysis has gained interest due to its potential clinical applications. The high cost of advanced 3D facial acquisition systems limits their widespread use, driving the development of low-cost acquisition and reconstruction methods. This study introduces a novel evaluation methodology that goes beyond traditional geometry-based benchmarks by integrating morphometric shape analysis techniques.
arXiv Detail & Related papers (2025-02-13T15:47:45Z)
- FLex: Joint Pose and Dynamic Radiance Fields Optimization for Stereo Endoscopic Videos [79.50191812646125]
Reconstruction of endoscopic scenes is an important asset for various medical applications, from post-surgery analysis to educational training.
We address the challenging setup of a moving endoscope within a highly dynamic environment of deforming tissue.
We propose an implicit scene separation into multiple overlapping 4D neural radiance fields (NeRFs) and a progressive optimization scheme jointly optimizing for reconstruction and camera poses from scratch.
This improves ease of use and allows reconstruction to scale in time, processing surgical videos of 5,000 frames and more: an improvement of more than ten times over the state of the art, while remaining agnostic to external tracking information.
arXiv Detail & Related papers (2024-03-18T19:13:02Z)
- A Quantitative Evaluation of Dense 3D Reconstruction of Sinus Anatomy from Monocular Endoscopic Video [8.32570164101507]
We perform a quantitative analysis of a self-supervised approach for sinus reconstruction using endoscopic sequences and optical tracking.
Our results show that the generated reconstructions are in high agreement with the anatomy, yielding an average point-to-mesh error of 0.91 mm.
We identify that pose and depth estimation inaccuracies contribute equally to this error and that locally consistent sequences with shorter trajectories generate more accurate reconstructions.
arXiv Detail & Related papers (2023-10-22T17:11:40Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches, selected according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern [49.240017254888336]
Photoacoustic tomography (PAT) is a novel imaging technique that can resolve both morphological and functional tissue properties.
A current drawback is the limited field-of-view provided by the conventionally applied 2D probes.
We present a novel approach to 3D reconstruction of PAT data that does not require an external tracking system.
arXiv Detail & Related papers (2020-11-10T09:27:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.