Monitoring of Pigmented Skin Lesions Using 3D Whole Body Imaging
- URL: http://arxiv.org/abs/2205.07085v1
- Date: Sat, 14 May 2022 15:24:06 GMT
- Title: Monitoring of Pigmented Skin Lesions Using 3D Whole Body Imaging
- Authors: David Ahmedt-Aristizabal, Chuong Nguyen, Lachlan Tychsen-Smith, Ashley
Stacey, Shenghong Li, Joseph Pathikulangara, Lars Petersson, Dadong Wang
- Abstract summary: We propose a 3D whole body imaging prototype to enable rapid evaluation and mapping of skin lesions.
A modular camera rig is designed to automatically capture synchronised images from multiple angles for entire body scanning.
We develop algorithms for 3D body image reconstruction, data processing and skin lesion detection based on deep convolutional neural networks.
- Score: 14.544274849288952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern data-driven machine learning research that enables revolutionary
advances in image analysis has now become a critical tool to redefine how skin
lesions are documented, mapped, and tracked. We propose a 3D whole body imaging
prototype to enable rapid evaluation and mapping of skin lesions. A modular
camera rig arranged in a cylindrical configuration is designed to automatically
capture synchronised images from multiple angles for entire body scanning. We
develop algorithms for 3D body image reconstruction, data processing and skin
lesion detection based on deep convolutional neural networks. We also propose a
customised, intuitive and flexible interface that allows the user to interact
and collaborate with the machine to understand the data. This human-machine
collaboration spans 2D lesion detection, 3D mapping, and data management. The
experimental results using synthetic and real
images demonstrate the effectiveness of the proposed solution by providing
multiple views of the target skin lesion, enabling further 3D geometry
analysis. Skin lesions are identified as outliers that deserve closer attention
from a skin cancer physician. Our detector identifies lesions at a performance
level comparable to that of a physician. The proposed 3D whole body imaging
system can be used in dermatological clinics, allowing fast documentation of
lesions and quick, accurate analysis of the entire body to detect suspicious
ones. Because examinations are fast, the method could also be used for
screening or epidemiological investigations. 3D data analysis has the potential to change
the paradigm of total-body photography with many applications in skin diseases,
including inflammatory and pigmentary disorders.
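The abstract frames suspicious lesions as outliers within a patient's own lesion population. As an illustrative sketch only (not the authors' method), assuming the detector emits a simple per-lesion feature such as diameter, outliers can be flagged with a robust z-score based on the median absolute deviation; the feature choice, values, and threshold below are hypothetical.

```python
# Illustrative sketch: flag lesions that are outliers relative to the
# patient's other lesions. Feature and threshold are assumptions, not the
# paper's actual detector.

def robust_outliers(scores, threshold=3.5):
    """Flag values far from the median using the median absolute deviation."""
    n = len(scores)
    ordered = sorted(scores)
    median = ordered[n // 2] if n % 2 else 0.5 * (ordered[n // 2 - 1] + ordered[n // 2])
    deviations = sorted(abs(s - median) for s in scores)
    mad = deviations[n // 2] if n % 2 else 0.5 * (deviations[n // 2 - 1] + deviations[n // 2])
    if mad == 0:
        return [False] * n
    # 0.6745 rescales MAD so the score is comparable to a standard z-score
    return [abs(s - median) * 0.6745 / mad > threshold for s in scores]

# Hypothetical per-lesion diameters (mm); the 9.8 mm lesion stands out.
diameters = [2.1, 2.4, 1.9, 2.2, 2.0, 9.8, 2.3]
flags = robust_outliers(diameters)  # only the sixth entry is flagged
```

A robust statistic is used here because a single large lesion would inflate a mean-and-standard-deviation estimate enough to mask itself.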
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of human vision system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- Creating a Digital Twin of Spinal Surgery: A Proof of Concept [68.37190859183663]
Surgery digitalization is the process of creating a virtual replica of real-world surgery.
We present a proof of concept (PoC) for surgery digitalization that is applied to an ex-vivo spinal surgery.
We employ five RGB-D cameras for dynamic 3D reconstruction of the surgeon, a high-end camera for 3D reconstruction of the anatomy, an infrared stereo camera for surgical instrument tracking, and a laser scanner for 3D reconstruction of the operating room and data fusion.
arXiv Detail & Related papers (2024-03-25T13:09:40Z)
- Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, noted as the informed slice to serve the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z)
- Dense 3D Reconstruction Through Lidar: A Comparative Study on Ex-vivo Porcine Tissue [16.786601606755013]
Researchers are actively investigating depth sensing and 3D reconstruction for vision-based surgical assistance.
It remains difficult to achieve real-time, accurate, and robust 3D representations of the abdominal cavity for minimally invasive surgery.
This work uses quantitative testing on fresh ex-vivo porcine tissue to thoroughly characterize the quality with which a 3D laser-based time-of-flight sensor can perform anatomical surface reconstruction.
arXiv Detail & Related papers (2024-01-19T14:14:26Z)
- A Temporal Learning Approach to Inpainting Endoscopic Specularities and Its Effect on Image Correspondence [13.25903945009516]
We propose using a temporal generative adversarial network (GAN) to inpaint the hidden anatomy under specularities.
This is achieved using in-vivo data of gastric endoscopy (Hyper-Kvasir) in a fully unsupervised manner.
We also assess the effect of our method in computer vision tasks that underpin 3D reconstruction and camera motion estimation.
arXiv Detail & Related papers (2022-03-31T13:14:00Z)
- A Point Cloud Generative Model via Tree-Structured Graph Convolutions for 3D Brain Shape Reconstruction [31.436531681473753]
It is almost impossible to obtain the intraoperative 3D shape information by using physical methods such as sensor scanning.
In this paper, a general generative adversarial network (GAN) architecture is proposed to reconstruct the 3D point clouds (PCs) of brains by using one single 2D image.
arXiv Detail & Related papers (2021-07-21T07:57:37Z)
- Comprehensive Validation of Automated Whole Body Skeletal Muscle, Adipose Tissue, and Bone Segmentation from 3D CT images for Body Composition Analysis: Towards Extended Body Composition [0.6176955945418618]
Powerful artificial intelligence tools such as deep learning now make it feasible to segment the entire 3D image and generate accurate measurements of all internal anatomy.
This overcomes the severe bottleneck that existed previously, namely the need for manual segmentation.
Such measurements were hitherto unavailable, limiting the field to a small and restricted subset.
arXiv Detail & Related papers (2021-06-01T17:30:45Z)
- Detection and Longitudinal Tracking of Pigmented Skin Lesions in 3D Total-Body Skin Textured Meshes [13.93503694899408]
We present an automated approach to detect and longitudinally track skin lesions on 3D total-body skin surfaces scans.
The acquired 3D mesh of the subject is unwrapped to a 2D texture image, where a trained region convolutional neural network (R-CNN) localizes the lesions within the 2D domain.
Our results, on test subjects annotated by three human annotators, suggest that the trained R-CNN detects lesions at a similar performance level as the human annotators.
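The unwrap-and-detect pipeline described above implies a mapping from 2D texture detections back onto the 3D mesh. A minimal sketch of that lift, assuming per-triangle UV and XYZ coordinates are available (the toy triangle below is hypothetical, not data from the paper):

```python
# Illustrative sketch: lift a lesion detected at texture coordinate (u, v)
# back to a 3D point on the mesh via barycentric interpolation.

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w0, w1, 1.0 - w0 - w1

def uv_to_3d(uv, triangles):
    """Find the triangle containing uv and interpolate its 3D vertices."""
    for uvs, xyzs in triangles:
        w0, w1, w2 = barycentric(uv, *uvs)
        if min(w0, w1, w2) >= 0.0:  # inside (or on the edge of) this triangle
            return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(*xyzs))
    return None  # uv falls outside the unwrapped surface

# One hypothetical triangle: unit UV triangle mapped onto a slanted 3D patch.
tri = (((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
       ((0.0, 0.0, 0.0), (10.0, 0.0, 1.0), (0.0, 10.0, 2.0)))
point = uv_to_3d((0.25, 0.25), [tri])
```

In practice a spatial index over the UV triangles would replace the linear scan, but the barycentric lift is the core of mapping 2D detections back to the textured mesh.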
arXiv Detail & Related papers (2021-05-02T01:52:28Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without groundtruth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.