Advancing Smart Malnutrition Monitoring: A Multi-Modal Learning Approach for Vital Health Parameter Estimation
- URL: http://arxiv.org/abs/2307.16745v1
- Date: Mon, 31 Jul 2023 15:08:02 GMT
- Title: Advancing Smart Malnutrition Monitoring: A Multi-Modal Learning Approach for Vital Health Parameter Estimation
- Authors: Ashish Marisetty, Prathistith Raj M, Praneeth Nemani, Venkanna Udutalapally and Debanjan Das
- Abstract summary: This study presents a groundbreaking, scalable, and robust smart malnutrition-monitoring system.
It uses a single full-body image of an individual to estimate height, weight, and other crucial health parameters.
Our model achieves a low Mean Absolute Error (MAE) of $\pm$ 4.7 cm and $\pm$ 5.3 kg in estimating height and weight.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Malnutrition poses a significant threat to global health, resulting from an
inadequate intake of essential nutrients that adversely impacts vital organs
and overall bodily functioning. Periodic examinations and mass screenings,
incorporating both conventional and non-invasive techniques, have been employed
to combat this challenge. However, these approaches suffer from critical
limitations, such as the need for additional equipment, lack of comprehensive
feature representation, absence of suitable health indicators, and the
unavailability of smartphone implementations for precise estimations of Body
Fat Percentage (BFP), Basal Metabolic Rate (BMR), and Body Mass Index (BMI) to
enable efficient smart-malnutrition monitoring. To address these constraints,
this study presents a groundbreaking, scalable, and robust smart
malnutrition-monitoring system that leverages a single full-body image of an
individual to estimate height, weight, and other crucial health parameters
within a multi-modal learning framework. Our proposed methodology involves the
reconstruction of a highly precise 3D point cloud, from which 512-dimensional
feature embeddings are extracted using a headless-3D classification network.
Concurrently, facial and body embeddings are also extracted, and through the
application of learnable parameters, these features are then utilized to
estimate weight accurately. Furthermore, essential health metrics, including
BMR, BFP, and BMI, are computed to conduct a comprehensive analysis of the
subject's health, subsequently facilitating the provision of personalized
nutrition plans. While being robust to a wide range of lighting conditions
across multiple devices, our model achieves a low Mean Absolute Error (MAE) of
$\pm$ 4.7 cm and $\pm$ 5.3 kg in estimating height and weight.
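The abstract describes the fusion step only at a high level. As a minimal sketch (not the authors' implementation; the face/body embedding sizes, the projection layers, and the softmax gating are assumptions), the 512-dimensional point-cloud embedding and the facial and body embeddings could be combined through learnable per-modality weights to regress weight:

```python
# Minimal sketch (assumed architecture, not the authors' code): fusing
# point-cloud, face, and body embeddings with learnable weights to regress weight.
import torch
import torch.nn as nn

class MultiModalWeightRegressor(nn.Module):
    def __init__(self, cloud_dim=512, face_dim=128, body_dim=256, hidden=256):
        super().__init__()
        # Project each modality into a shared feature space before fusion.
        self.cloud_proj = nn.Linear(cloud_dim, hidden)
        self.face_proj = nn.Linear(face_dim, hidden)
        self.body_proj = nn.Linear(body_dim, hidden)
        # Learnable fusion parameters: one scalar weight per modality
        # (a simple stand-in for the paper's "learnable parameters").
        self.modality_logits = nn.Parameter(torch.zeros(3))
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted weight in kg
        )

    def forward(self, cloud_emb, face_emb, body_emb):
        feats = torch.stack(
            [self.cloud_proj(cloud_emb),
             self.face_proj(face_emb),
             self.body_proj(body_emb)], dim=1)           # (B, 3, hidden)
        w = torch.softmax(self.modality_logits, dim=0)    # (3,)
        fused = (w.view(1, 3, 1) * feats).sum(dim=1)      # weighted sum over modalities
        return self.head(fused).squeeze(-1)

# Example: a batch of 4 subjects with pre-extracted embeddings.
model = MultiModalWeightRegressor()
weight_kg = model(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 256))
```

The paper also reports computing BMR, BFP, and BMI from the estimated height and weight but does not state which equations it uses; the helper below applies commonly used formulas (the BMI definition, the Deurenberg BFP estimate, and the Mifflin-St Jeor BMR equation) purely for illustration:

```python
def health_metrics(height_cm: float, weight_kg: float, age: int, is_male: bool) -> dict:
    """Standard formulas, assumed here; the paper does not specify its equations."""
    height_m = height_cm / 100.0
    bmi = weight_kg / (height_m ** 2)
    # Deurenberg estimate of body fat percentage from BMI, age, and sex.
    bfp = 1.20 * bmi + 0.23 * age - 10.8 * (1 if is_male else 0) - 5.4
    # Mifflin-St Jeor basal metabolic rate (kcal/day).
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if is_male else -161)
    return {"BMI": round(bmi, 1), "BFP": round(bfp, 1), "BMR": round(bmr)}

print(health_metrics(height_cm=175, weight_kg=70, age=30, is_male=True))
# {'BMI': 22.9, 'BFP': 18.1, 'BMR': 1649}
```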
Related papers
- Estimating Body Volume and Height Using 3D Data [0.0]
This paper presents a non-invasive method for estimating body weight using 3D imaging technology.
A RealSense D415 camera is employed to capture high-resolution depth maps of the patient.
The height is derived from the 3D model by identifying the distance between key points on the body.
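As a hedged micro-example of the keypoint-distance idea in this entry (the key-point names, coordinate frame, and metric scaling are assumptions, not the paper's method):

```python
import numpy as np

def height_from_keypoints(head_top: np.ndarray, heel: np.ndarray) -> float:
    """Hypothetical helper: Euclidean distance (in metres) between the
    top-of-head and heel key points of a metrically scaled 3D body model."""
    return float(np.linalg.norm(head_top - heel))

# Toy example with key points expressed in metres (prints roughly 1.74).
print(height_from_keypoints(np.array([0.02, 1.74, 0.10]), np.array([0.00, 0.00, 0.08])))
```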
arXiv Detail & Related papers (2024-09-18T16:20:46Z)
- NutritionVerse-Direct: Exploring Deep Neural Networks for Multitask Nutrition Prediction from Food Images [63.314702537010355]
Self-reporting methods are often inaccurate and suffer from substantial bias.
Recent work has explored using computer vision prediction systems to predict nutritional information from food images.
This paper aims to enhance the efficacy of dietary intake estimation by leveraging various neural network architectures.
arXiv Detail & Related papers (2024-05-13T14:56:55Z)
- Super-resolution of biomedical volumes with 2D supervision [84.5255884646906]
Masked slice diffusion for super-resolution exploits the inherent equivalence in the data-generating distribution across all spatial dimensions of biological specimens.
We focus on the application of SliceR to stimulated Raman histology (SRH), characterized by its rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning.
arXiv Detail & Related papers (2024-04-15T02:41:55Z)
- PatchBMI-Net: Lightweight Facial Patch-based Ensemble for BMI Prediction [3.9440964696313485]
Self-diagnostic facial image-based BMI prediction methods are proposed for healthy weight monitoring.
These methods have mostly used convolutional neural network (CNN) based regression baselines, such as VGG19, ResNet50, and EfficientNet-B0.
This paper aims to develop a lightweight facial patch-based ensemble (PatchBMI-Net) for BMI prediction to facilitate the deployment and weight monitoring using smartphones.
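A hedged sketch of the patch-ensemble idea described in this entry (not the PatchBMI-Net architecture itself; the shared backbone, patch count, and simple averaging are assumptions):

```python
import torch
import torch.nn as nn

class PatchEnsembleBMI(nn.Module):
    """Toy patch-based ensemble: one small shared CNN scores each facial
    patch, and the per-patch BMI predictions are averaged."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, patches):           # patches: (B, N, 3, H, W)
        b, n = patches.shape[:2]
        preds = self.backbone(patches.flatten(0, 1)).view(b, n)
        return preds.mean(dim=1)           # ensemble average -> one BMI per image

model = PatchEnsembleBMI()
bmi = model(torch.randn(2, 5, 3, 64, 64))  # 2 face images, 5 patches each
```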
arXiv Detail & Related papers (2023-11-29T21:39:24Z)
- OBESEYE: Interpretable Diet Recommender for Obesity Management using Machine Learning and Explainable AI [0.0]
Obesity, the leading cause of many non-communicable diseases, occurs mainly from eating more than the body requires.
It is difficult to determine the exact quantity of each nutrient, because nutrient requirements vary with physical and disease conditions.
We propose a novel machine-learning-based system to predict the amount of each nutrient an individual requires to stay healthy.
arXiv Detail & Related papers (2023-08-05T06:02:28Z)
- Body Fat Estimation from Surface Meshes using Graph Neural Networks [48.85291874087541]
We show that triangulated body surface meshes can be used to accurately predict visceral adipose tissue (VAT) and abdominal subcutaneous adipose tissue (ASAT) volumes using graph neural networks.
Our methods achieve high performance while reducing training time and required resources compared to state-of-the-art convolutional neural networks in this area.
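As a rough illustration of regressing a volume from a body surface mesh with a graph network (plain mean-aggregation message passing; not the authors' model, and all layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class MeshGNNRegressor(nn.Module):
    """Toy graph network: two rounds of neighbour averaging over mesh
    vertices followed by global pooling and a volume regression head."""
    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 1)    # e.g. predicted VAT volume

    @staticmethod
    def aggregate(x, adj):
        # Mean of neighbour features; adj is a dense (V, V) adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return adj @ x / deg

    def forward(self, verts, adj):
        h = torch.relu(self.lin1(self.aggregate(verts, adj)))
        h = torch.relu(self.lin2(self.aggregate(h, adj)))
        return self.head(h.mean(dim=0))     # global mean pooling over vertices

# Example: a mesh with 500 vertices (xyz coordinates) and a sparse random adjacency.
verts, adj = torch.randn(500, 3), (torch.rand(500, 500) < 0.01).float()
volume = MeshGNNRegressor()(verts, adj)
```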
arXiv Detail & Related papers (2023-07-13T10:21:34Z)
- Vision-Based Food Analysis for Automatic Dietary Assessment [49.32348549508578]
This review presents one unified Vision-Based Dietary Assessment (VBDA) framework, which generally consists of three stages: food image analysis, volume estimation and nutrient derivation.
Deep learning makes VBDA gradually move to an end-to-end implementation, which feeds food images to a single network that directly estimates their nutritional content.
arXiv Detail & Related papers (2021-08-06T05:46:01Z)
- 3D Human Body Reshaping with Anthropometric Modeling [59.51820187982793]
Reshaping accurate and realistic 3D human bodies from anthropometric parameters poses a fundamental challenge for person identification, online shopping and virtual reality.
Existing approaches for creating such 3D shapes often require complex measurements with range cameras or high-end scanners.
This paper proposes a novel feature-selection-based local mapping technique, which enables automatic anthropometric parameter modeling for each body facet.
arXiv Detail & Related papers (2021-04-05T04:09:39Z)
- An Artificial Intelligence-Based System to Assess Nutrient Intake for Hospitalised Patients [4.048427587958764]
Regular monitoring of nutrient intake in hospitalised patients plays a critical role in reducing the risk of disease-related malnutrition.
We propose a novel system based on artificial intelligence (AI) to accurately estimate nutrient intake.
arXiv Detail & Related papers (2020-03-18T15:28:51Z)
- HEMlets PoSh: Learning Part-Centric Heatmap Triplets for 3D Human Pose and Shape Estimation [60.35776484235304]
This work attempts to address the uncertainty of lifting detected 2D joints to 3D space by introducing an intermediate state, Part-Centric Heatmap Triplets (HEMlets).
The HEMlets utilize three joint-heatmaps to represent the relative depth information of the end-joints for each skeletal body part.
A Convolutional Network (ConvNet) is first trained to predict HEMlets from the input image, followed by a volumetric joint-heatmap regression.
arXiv Detail & Related papers (2020-03-10T04:03:45Z)