Detection and Longitudinal Tracking of Pigmented Skin Lesions in 3D
Total-Body Skin Textured Meshes
- URL: http://arxiv.org/abs/2105.00374v1
- Date: Sun, 2 May 2021 01:52:28 GMT
- Title: Detection and Longitudinal Tracking of Pigmented Skin Lesions in 3D
Total-Body Skin Textured Meshes
- Authors: Mengliu Zhao, Jeremy Kawahara, Sajjad Shamanian, Kumar Abhishek,
Priyanka Chandrashekar, Ghassan Hamarneh
- Abstract summary: We present an automated approach to detect and longitudinally track skin lesions on 3D total-body skin surface scans.
The acquired 3D mesh of the subject is unwrapped to a 2D texture image, where a trained region-based convolutional neural network (R-CNN) localizes the lesions within the 2D domain.
Our results, on test subjects annotated by three human annotators, suggest that the trained R-CNN detects lesions at a performance level similar to that of the human annotators.
- Score: 13.93503694899408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an automated approach to detect and longitudinally track skin
lesions on 3D total-body skin surface scans. The acquired 3D mesh of the
subject is unwrapped to a 2D texture image, where a trained region-based
convolutional neural network (R-CNN) localizes the lesions within the 2D
domain. These detected skin lesions are mapped back to the 3D surface of the
subject and, for subjects imaged multiple times, the anatomical correspondences
among pairs of meshes and the geodesic distances among lesions are leveraged in
our longitudinal lesion tracking algorithm.
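As a concrete illustration of the map-back step, the sketch below converts 2D detections in the texture image into 3D surface positions via per-vertex UV coordinates. It is a minimal sketch under stated assumptions, not the authors' implementation: the function name, the nearest-vertex lookup (rather than barycentric interpolation over textured faces), and the array layout are all assumptions.

```python
import numpy as np

def detections_to_3d(boxes_px, uv, vertices, tex_w, tex_h):
    """Map 2D lesion detections (pixel-space bounding boxes) back to 3D.

    boxes_px : (N, 4) array of [x_min, y_min, x_max, y_max] in texture pixels.
    uv       : (V, 2) per-vertex texture coordinates in [0, 1].
    vertices : (V, 3) mesh vertex positions.
    Returns the index and 3D position of the vertex nearest (in UV space)
    to each detection's box centre.
    """
    boxes_px = np.asarray(boxes_px, dtype=float)
    centres_px = np.stack([(boxes_px[:, 0] + boxes_px[:, 2]) / 2.0,
                           (boxes_px[:, 1] + boxes_px[:, 3]) / 2.0], axis=1)
    # Convert pixel centres to normalized UV coordinates; the v axis is
    # flipped because image rows grow downward while UV v grows upward.
    centres_uv = np.stack([centres_px[:, 0] / tex_w,
                           1.0 - centres_px[:, 1] / tex_h], axis=1)
    # Nearest-vertex lookup in UV space for each detection centre.
    d2 = ((centres_uv[:, None, :] - uv[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return nearest, vertices[nearest]
```

With a textured scan loaded through a library such as trimesh, `uv` would typically come from `mesh.visual.uv` and `vertices` from `mesh.vertices`, though the exact attributes depend on how the scan is stored.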
We evaluated the proposed approach using three sources of data. Firstly, we
augmented the 3D meshes of human subjects from the public FAUST dataset with a
variety of poses, textures, and images of lesions. Secondly, using a handheld
structured light 3D scanner, we imaged a mannequin with multiple synthetic skin
lesions at selected locations and with varying shapes, sizes, and colours.
Finally, we used 3DBodyTex, a publicly available dataset composed of 3D scans
imaging the colored (textured) skin of 200 human subjects. We manually
annotated locations that appeared to the human eye to contain a pigmented skin
lesion as well as tracked a subset of lesions occurring on the same subject
imaged in different poses.
Our results, on test subjects annotated by three human annotators, suggest
that the trained R-CNN detects lesions at a performance level similar to that of
the human annotators. Our lesion tracking algorithm achieves an average accuracy of
80% when identifying corresponding pairs of lesions across subjects imaged in
different poses. As there currently is no other large-scale publicly available
dataset of 3D total-body skin lesions, we publicly release the 10 mannequin
meshes and over 25,000 3DBodyTex manual annotations, which we hope will further
research on total-body skin lesion analysis.
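The abstract does not spell out the tracking algorithm beyond its use of anatomical correspondences between meshes and geodesic distances between lesions, so the following sketch is only one plausible instantiation: lesions from a source scan are transferred onto the target mesh through a precomputed vertex correspondence, and pairs are matched with a Hungarian assignment over a geodesic cost matrix. The function, its argument names, and the optional distance threshold are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_lesions(src_lesion_verts, tgt_lesion_verts, src_to_tgt_vertex,
                  geo_tgt, max_geodesic=None):
    """Match lesions between two scans of the same subject.

    src_lesion_verts  : (M,) vertex indices of lesions on the source mesh.
    tgt_lesion_verts  : (N,) vertex indices of lesions on the target mesh.
    src_to_tgt_vertex : (V_src,) anatomical correspondence mapping each
                        source vertex to its corresponding target vertex.
    geo_tgt           : (V_tgt, V_tgt) geodesic distance matrix on the target mesh.
    max_geodesic      : optional distance threshold above which a pair is rejected.

    Returns a list of (source_index, target_index) matches into the input arrays.
    """
    # Transfer each source lesion onto the target mesh via the correspondence.
    mapped = src_to_tgt_vertex[np.asarray(src_lesion_verts)]
    # Cost of assigning source lesion i to target lesion j: geodesic distance
    # on the target mesh between the mapped location and the target lesion.
    cost = geo_tgt[np.ix_(mapped, np.asarray(tgt_lesion_verts))]
    rows, cols = linear_sum_assignment(cost)
    matches = []
    for i, j in zip(rows, cols):
        if max_geodesic is None or cost[i, j] <= max_geodesic:
            matches.append((int(i), int(j)))
    return matches
```

A bipartite assignment keeps each lesion matched at most once, and thresholding by `max_geodesic` lets new or missed lesions remain unmatched rather than being forced into an implausible pair.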
Related papers
- Occlusion-Aware 3D Motion Interpretation for Abnormal Behavior Detection [10.782354892545651]
We present OAD2D, which detects motion abnormalities by reconstructing 3D coordinates of mesh vertices and human joints from monocular videos.
We reformulate abnormal posture estimation by coupling it with a Motion-to-Text (M2T) model, in which a VQVAE is employed to quantize motion features.
Our approach demonstrates robust abnormal behavior detection under severe occlusions and self-occlusions, as it reconstructs human motion trajectories in global coordinates.
arXiv Detail & Related papers (2024-07-23T18:41:16Z) - Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z) - Skin Lesion Correspondence Localization in Total Body Photography [4.999387255024588]
We propose a novel framework combining geometric and texture information to localize skin lesion correspondence from a source scan to a target scan in total body photography (TBP).
As full-body 3D capture becomes more prevalent and has higher quality, we expect the proposed method to constitute a valuable step in the longitudinal tracking of skin lesions.
arXiv Detail & Related papers (2023-07-18T21:10:59Z) - Automatic 3D Registration of Dental CBCT and Face Scan Data using 2D
Projection Images [0.9226931037259524]
This paper presents a fully automatic registration method of dental cone-beam computed tomography (CBCT) and face scan data.
It can be used for a digital platform of 3D jaw-teeth-face models in a variety of applications, including 3D digital treatment planning and orthognathic surgery.
arXiv Detail & Related papers (2023-05-17T11:26:43Z) - Zolly: Zoom Focal Length Correctly for Perspective-Distorted Human Mesh
Reconstruction [66.10717041384625]
Zolly is the first 3DHMR method focusing on perspective-distorted images.
We propose a new camera model and a novel 2D representation, termed distortion image, which describes the 2D dense distortion scale of the human body.
We extend two real-world datasets tailored for this task, both containing perspective-distorted human images.
arXiv Detail & Related papers (2023-03-24T04:22:41Z) - Decomposing 3D Neuroimaging into 2+1D Processing for Schizophrenia
Recognition [25.80846093248797]
We propose to process the 3D data with a 2+1D framework so that we can exploit powerful 2D convolutional neural networks (CNNs) pre-trained on the large ImageNet dataset for 3D neuroimaging recognition.
Specifically, 3D volumes of Magnetic Resonance Imaging (MRI) metrics are decomposed to 2D slices according to neighboring voxel positions.
Global pooling is applied to remove redundant information as the activation patterns are sparsely distributed over feature maps.
Channel-wise and slice-wise convolutions are proposed to aggregate the contextual information in the third dimension unprocessed by the 2D CNN model.
arXiv Detail & Related papers (2022-11-21T15:22:59Z) - Monitoring of Pigmented Skin Lesions Using 3D Whole Body Imaging [14.544274849288952]
We propose a 3D whole body imaging prototype to enable rapid evaluation and mapping of skin lesions.
A modular camera rig is designed to automatically capture synchronised images from multiple angles for entire body scanning.
We develop algorithms for 3D body image reconstruction, data processing and skin lesion detection based on deep convolutional neural networks.
arXiv Detail & Related papers (2022-05-14T15:24:06Z) - PONet: Robust 3D Human Pose Estimation via Learning Orientations Only [116.1502793612437]
We propose a novel Pose Orientation Net (PONet) that is able to robustly estimate 3D pose by learning orientations only.
PONet estimates the 3D orientation of limbs by leveraging local image evidence to recover the 3D pose.
We evaluate our method on multiple datasets, including Human3.6M, MPII, MPI-INF-3DHP, and 3DPW.
arXiv Detail & Related papers (2021-12-21T12:48:48Z) - 3D Convolutional Neural Networks for Stalled Brain Capillary Detection [72.21315180830733]
Brain vasculature dysfunctions such as stalled blood flow in cerebral capillaries are associated with cognitive decline and pathogenesis in Alzheimer's disease.
Here, we describe a deep learning-based approach for automatic detection of stalled capillaries in brain images based on 3D convolutional neural networks.
In this setting, our approach outperformed other methods and demonstrated state-of-the-art results, achieving a Matthews correlation coefficient of 0.85, 85% sensitivity, and 99.3% specificity.
arXiv Detail & Related papers (2021-04-04T20:30:14Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - Towards Generalization of 3D Human Pose Estimation In The Wild [73.19542580408971]
3DBodyTex.Pose is a dataset that addresses the task of 3D human pose estimation in-the-wild.
3DBodyTex.Pose offers high quality and rich data containing 405 different real subjects in various clothing and poses, and 81k image samples with ground-truth 2D and 3D pose annotations.
arXiv Detail & Related papers (2020-04-21T13:31:58Z)