Predicting Anthropometric Body Composition Variables Using 3D Optical Imaging and Machine Learning
- URL: http://arxiv.org/abs/2506.14815v1
- Date: Sun, 08 Jun 2025 03:42:56 GMT
- Title: Predicting Anthropometric Body Composition Variables Using 3D Optical Imaging and Machine Learning
- Authors: Gyaneshwar Agrahari, Kiran Bist, Monika Pandey, Jacob Kapita, Zachary James, Jackson Knox, Steven Heymsfield, Sophia Ramirez, Peter Wolenski, Nadejda Drenska
- Abstract summary: This work proposes an alternative to DXA scans by applying statistical and machine learning models on biomarkers obtained from 3D optical images. Extracting patients' data in healthcare faces many technical challenges and legal restrictions. To overcome these limitations, we implemented a semi-supervised model, the $p$-Laplacian regression model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate prediction of anthropometric body composition variables, such as Appendicular Lean Mass (ALM), Body Fat Percentage (BFP), and Bone Mineral Density (BMD), is essential for early diagnosis of several chronic diseases. Currently, researchers rely on Dual-Energy X-ray Absorptiometry (DXA) scans to measure these metrics; however, DXA scans are costly and time-consuming. This work proposes an alternative to DXA scans by applying statistical and machine learning models on biomarkers (height, volume, left calf circumference, etc.) obtained from 3D optical images. The dataset consists of 847 patients and was sourced from Pennington Biomedical Research Center. Extracting patients' data in healthcare faces many technical challenges and legal restrictions. However, most supervised machine learning algorithms are inherently data-intensive, requiring a large amount of training data. To overcome these limitations, we implemented a semi-supervised model, the $p$-Laplacian regression model. This paper is the first to demonstrate the application of a $p$-Laplacian model for regression. Our $p$-Laplacian model yielded errors of $\sim13\%$ for ALM, $\sim10\%$ for BMD, and $\sim20\%$ for BFP when the training data accounted for 10 percent of all data. Among the supervised algorithms we implemented, Support Vector Regression (SVR) performed the best for ALM and BMD, yielding errors of $\sim 8\%$ for both, while Least Squares SVR performed the best for BFP with $\sim 11\%$ error when trained on 80 percent of the data. Our findings position the $p$-Laplacian model as a promising tool for healthcare applications, particularly in a data-constrained environment.
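To make the setup concrete: a common formulation of graph-based $p$-Laplacian semi-supervised regression (the paper's exact variant may differ) builds a similarity graph over all patients from their optical biomarkers and minimizes the discrete $p$-Dirichlet energy $\sum_{i,j} w_{ij} |u_i - u_j|^p$ subject to $u_i = y_i$ on the labeled (DXA-measured) patients. The sketch below is illustrative only, not the authors' code: the synthetic feature table, the k-NN graph with Gaussian weights, the value of $p$, and all hyperparameters are assumptions, and an off-the-shelf SVR is fit on the same labels for comparison.

```python
"""Illustrative sketch: semi-supervised graph p-Laplacian regression vs. an
SVR baseline. All data, graph parameters, and p are assumptions."""
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR


def p_laplacian_regression(X, y, labeled, p=3.0, k=10, n_iter=300, eps=1e-8):
    """Extend labeled targets to unlabeled nodes with a fixed-point scheme for
    the graph p-Laplace equation, holding labeled values fixed."""
    Xs = StandardScaler().fit_transform(X)
    # k-NN graph with Gaussian edge weights (an assumed, common construction).
    D = kneighbors_graph(Xs, n_neighbors=k, mode="distance", include_self=False)
    D = 0.5 * (D + D.T)                       # symmetrize
    W = D.copy()
    W.data = np.exp(-(D.data ** 2) / (np.median(D.data) ** 2 + eps))
    W = W.toarray()

    u = np.full(len(y), y[labeled].mean())    # initialize with labeled mean
    u[labeled] = y[labeled]
    unlabeled = np.where(~labeled)[0]
    for _ in range(n_iter):
        for i in unlabeled:
            nbrs = np.nonzero(W[i])[0]
            if nbrs.size == 0:
                continue
            # u_i <- sum_j w_ij |u_i-u_j|^{p-2} u_j / sum_j w_ij |u_i-u_j|^{p-2}
            g = W[i, nbrs] * (np.abs(u[i] - u[nbrs]) + eps) ** (p - 2.0)
            u[i] = g @ u[nbrs] / g.sum()
    return u


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 847, 8                              # stand-in for the biomarker table
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    labeled = np.zeros(n, dtype=bool)
    labeled[rng.choice(n, size=int(0.10 * n), replace=False)] = True  # 10% labels

    u = p_laplacian_regression(X, y, labeled)
    denom = np.abs(y[~labeled]) + 1e-6
    print("p-Laplacian mean relative error:",
          np.mean(np.abs(u[~labeled] - y[~labeled]) / denom))

    svr = SVR(kernel="rbf", C=10.0).fit(X[labeled], y[labeled])
    print("SVR mean relative error:",
          np.mean(np.abs(svr.predict(X[~labeled]) - y[~labeled]) / denom))
```

For $p = 2$ this fixed-point scheme reduces to the familiar harmonic/label-propagation interpolation; choosing $p > 2$ is commonly motivated in the $p$-Laplacian learning literature by better-behaved interpolation when very few labels are available, which matches the low-label regime reported above.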
Related papers
- Geodesic Optimization for Predictive Shift Adaptation on EEG data [53.58711912565724]
Domain adaptation methods struggle when distribution shifts occur simultaneously in $X$ and $y$.
This paper proposes a novel method termed Geodesic Optimization for Predictive Shift Adaptation (GOPSA) to address test-time multi-source DA.
GOPSA has the potential to combine the advantages of mixed-effects modeling with machine learning for biomedical applications of EEG.
arXiv Detail & Related papers (2024-07-04T12:15:42Z)
- Swin UNETR++: Advancing Transformer-Based Dense Dose Prediction Towards Fully Automated Radiation Oncology Treatments [0.0]
We propose Swin UNETR++, that contains a lightweight 3D Dual Cross-Attention (DCA) module to capture the intra and inter-volume relationships of each patient's anatomy.
Our model was trained, validated, and tested on the Open Knowledge-Based Planning dataset.
arXiv Detail & Related papers (2023-11-11T13:52:59Z)
- Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.
arXiv Detail & Related papers (2023-06-05T04:34:54Z)
- Multidimensional analysis using sensor arrays with deep learning for high-precision and high-accuracy diagnosis [0.0]
We demonstrate that it becomes possible to significantly improve the measurements' precision and accuracy by feeding a deep neural network (DNN) with the data from a low-cost and low-accuracy sensor array.
The data collection is done with an array composed of 32 temperature sensors, including 16 analog and 16 digital sensors.
arXiv Detail & Related papers (2022-11-30T16:14:55Z)
- StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder [48.2010192865749]
Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out of distribution samples.
This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA).
The proposed pipeline achieved a Dice score of 0.642$\pm$0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859$\pm$0.112 while detecting artificially induced anomalies.
arXiv Detail & Related papers (2022-01-31T14:27:35Z)
- FedMed-ATL: Misaligned Unpaired Brain Image Synthesis via Affine Transform Loss [58.58979566599889]
We propose a novel self-supervised learning method (FedMed) for brain image synthesis.
An affine transform loss (ATL) was formulated to make use of severely distorted images without violating privacy legislation.
The proposed method demonstrates strong performance in the quality of synthesized results under a severely misaligned and unpaired data setting.
arXiv Detail & Related papers (2022-01-29T13:45:39Z)
- Predicting Knee Osteoarthritis Progression from Structural MRI using Deep Learning [2.9822184411723645]
Prior art focused on manually designed imaging biomarkers, which may not fully exploit all disease-related information present in the MRI scan.
In contrast, our method learns relevant representations from raw data end-to-end using Deep Learning.
The method employs a 2D CNN to process the data slice-wise and aggregate the extracted features using a Transformer.
arXiv Detail & Related papers (2022-01-26T10:17:41Z)
- SANSformers: Self-Supervised Forecasting in Electronic Health Records with Attention-Free Models [48.07469930813923]
This work aims to forecast the demand for healthcare services by predicting the number of patient visits to healthcare facilities.
We introduce SANSformer, an attention-free sequential model designed with specific inductive biases to cater for the unique characteristics of EHR data.
Our results illuminate the promising potential of tailored attention-free models and self-supervised pretraining in refining healthcare utilization predictions across various patient demographics.
arXiv Detail & Related papers (2021-08-31T08:23:56Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z)
- Limited Angle Tomography for Transmission X-Ray Microscopy Using Deep Learning [12.991428974915795]
Deep learning is applied to limited angle reconstruction in X-ray microscopy for the first time.
The U-Net, the state-of-the-art neural network in biomedical imaging, is trained from synthetic data.
The proposed method remarkably improves the 3-D visualization of the subcellular structures in the chlorella cell.
arXiv Detail & Related papers (2020-01-08T12:11:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.