SnappyMeal: Design and Longitudinal Evaluation of a Multimodal AI Food Logging Application
- URL: http://arxiv.org/abs/2511.03907v1
- Date: Wed, 05 Nov 2025 23:14:22 GMT
- Title: SnappyMeal: Design and Longitudinal Evaluation of a Multimodal AI Food Logging Application
- Authors: Liam Bakar, Zachary Englhardt, Vidya Srinivas, Girish Narayanswamy, Dilini Nissanka, Shwetak Patel, Vikram Iyer,
- Abstract summary: Food logging plays a critical role in uncovering correlations between diet, medical, fitness, and health outcomes. Current methods, such as handwritten and app-based journaling, are inflexible and result in low adherence and potentially inaccurate nutritional summaries. We propose SnappyMeal, an AI-powered dietary tracking system that leverages multimodal inputs to enable users to more flexibly log their food intake.
- Score: 7.101331257479778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Food logging, both self-directed and prescribed, plays a critical role in uncovering correlations between diet, medical, fitness, and health outcomes. Through conversations with nutritional experts and individuals who practice dietary tracking, we find current logging methods, such as handwritten and app-based journaling, are inflexible and result in low adherence and potentially inaccurate nutritional summaries. These findings, corroborated by prior literature, emphasize the urgent need for improved food logging methods. In response, we propose SnappyMeal, an AI-powered dietary tracking system that leverages multimodal inputs to enable users to more flexibly log their food intake. SnappyMeal introduces goal-dependent follow-up questions to intelligently seek missing context from the user, and information retrieval from user grocery receipts and nutritional databases to improve accuracy. We evaluate SnappyMeal through publicly available nutrition benchmarks and a multi-user, 3-week, in-the-wild deployment capturing over 500 logged food instances. Users strongly praised the multiple available input methods and reported high perceived accuracy. These insights suggest that multimodal AI systems can be leveraged to significantly improve dietary tracking flexibility and context-awareness, laying the groundwork for a new class of intelligent self-tracking applications.
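The abstract's core mechanism, logging a meal from a multimodal input, retrieving missing nutrient data from grocery receipts or databases, and asking goal-dependent follow-up questions only for fields still unknown, can be sketched as below. All names and data are illustrative; the paper does not publish this API.

```python
# Hypothetical sketch of the SnappyMeal-style logging loop: retrieval first,
# then goal-dependent follow-up questions for whatever is still missing.
from dataclasses import dataclass, field

@dataclass
class FoodLog:
    description: str                                  # from photo, voice, or text
    nutrients: dict = field(default_factory=dict)     # e.g. {"kcal": 320}
    source: str = "user"                              # "user", "receipt", "database"

# Fields each tracking goal needs (illustrative): a calorie goal cares about
# energy and portion size, a protein goal about protein and portion size.
GOAL_FIELDS = {
    "calorie_tracking": ["kcal", "portion_g"],
    "protein_intake": ["protein_g", "portion_g"],
}

def retrieve_context(log: FoodLog, receipt_items: dict) -> FoodLog:
    """Fill nutrient fields from a (mock) grocery-receipt lookup."""
    match = receipt_items.get(log.description)
    if match:
        # Retrieval never overwrites information the user already supplied.
        log.nutrients.update({k: v for k, v in match.items()
                              if k not in log.nutrients})
        log.source = "receipt"
    return log

def follow_up_questions(log: FoodLog, goal: str) -> list:
    """Ask only about fields the goal needs and retrieval could not fill."""
    missing = [f for f in GOAL_FIELDS[goal] if f not in log.nutrients]
    return [f"Can you tell me the {f} for '{log.description}'?" for f in missing]

receipts = {"greek yogurt": {"kcal": 150, "protein_g": 15}}
entry = retrieve_context(FoodLog("greek yogurt"), receipts)
print(follow_up_questions(entry, "protein_intake"))
# The receipt supplied protein, so only portion size is still asked about.
```

The design point this illustrates is that follow-up questions are a function of both the user's goal and what retrieval already recovered, which is how the system avoids interrogating the user about every field on every log.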
Related papers
- A Closed-Loop Multi-Agent System Driven by LLMs for Meal-Level Personalized Nutrition Management [0.0]
We present a next-generation mobile nutrition assistant that combines image-based meal logging with an LLM-driven multi-agent controller to provide meal-level closed-loop support. The system coordinates vision, dialogue, and state management agents to estimate nutrients from photos and update a daily intake budget.
arXiv Detail & Related papers (2026-01-08T01:51:37Z) - Exploring approaches to computational representation and classification of user-generated meal logs [6.888077368936294]
This study examined the use of machine learning and domain-specific enrichment on patient-generated health data to classify meals on alignment with different nutritional goals. We used a dataset of over 3000 meal records collected by 114 individuals from a diverse, low-income community in a major US city using a mobile app.
arXiv Detail & Related papers (2025-09-08T04:23:48Z) - Advancing Food Nutrition Estimation via Visual-Ingredient Feature Fusion [69.84988999191343]
We introduce FastFood, a dataset with 84,446 images across 908 fast food categories, featuring ingredient and nutritional annotations. We propose a new model-agnostic Visual-Ingredient Feature Fusion (VIF$^2$) method to enhance nutrition estimation.
arXiv Detail & Related papers (2025-05-13T17:01:21Z) - DietGlance: Dietary Monitoring and Personalized Analysis at a Glance with Knowledge-Empowered AI Assistant [36.806619917276414]
We present DietGlance, a system that automatically monitors diet in daily routines and delivers personalized analysis from knowledge sources. DietGlance first detects ingestive episodes from multimodal inputs using eyeglasses, capturing privacy-preserving meal images of various dishes being consumed. Based on the food items and consumed quantities inferred from these images, DietGlance further provides nutritional analysis and personalized dietary suggestions.
arXiv Detail & Related papers (2025-02-03T12:46:37Z) - MOPI-HFRS: A Multi-objective Personalized Health-aware Food Recommendation System with LLM-enhanced Interpretation [50.309987904297415]
Major food recommendation platforms such as Yelp prioritize users' dietary preferences over the healthiness of their choices. We develop a novel framework, Multi-Objective Personalized Interpretable Health-aware Food Recommendation System (MOPI-HFRS). It provides food recommendations by jointly optimizing three objectives: user preference, personalized healthiness, and nutritional diversity, along with a large language model (LLM)-enhanced reasoning module.
arXiv Detail & Related papers (2024-12-12T01:02:09Z) - NutrifyAI: An AI-Powered System for Real-Time Food Detection, Nutritional Analysis, and Personalized Meal Recommendations [14.036206693783198]
This paper introduces a comprehensive system that combines advanced computer vision techniques with nutritional analysis, implemented in a versatile mobile and web application.
Preliminary results showcase the system's effectiveness by providing immediate, accurate dietary insights, with a demonstrated food recognition accuracy of nearly 80%.
arXiv Detail & Related papers (2024-08-20T04:18:53Z) - NutritionVerse-Direct: Exploring Deep Neural Networks for Multitask Nutrition Prediction from Food Images [63.314702537010355]
Self-reporting methods are often inaccurate and suffer from substantial bias.
Recent work has explored using computer vision prediction systems to predict nutritional information from food images.
This paper aims to enhance the efficacy of dietary intake estimation by leveraging various neural network architectures.
arXiv Detail & Related papers (2024-05-13T14:56:55Z) - How Much You Ate? Food Portion Estimation on Spoons [63.611551981684244]
Current image-based food portion estimation algorithms assume that users take images of their meals one or two times.
We introduce an innovative solution that utilizes stationary user-facing cameras to track food items on utensils.
The system is reliable for estimating the nutritional content of liquid-solid heterogeneous mixtures such as soups and stews.
arXiv Detail & Related papers (2024-05-12T00:16:02Z) - NutritionVerse: Empirical Study of Various Dietary Intake Estimation Approaches [59.38343165508926]
Accurate dietary intake estimation is critical for informing policies and programs to support healthy eating.
Recent work has focused on using computer vision and machine learning to automatically estimate dietary intake from food images.
We introduce NutritionVerse-Synth, the first large-scale dataset of 84,984 synthetic 2D food images with associated dietary information.
We also collect a real image dataset, NutritionVerse-Real, containing 889 images of 251 dishes to evaluate realism.
arXiv Detail & Related papers (2023-09-14T13:29:41Z) - Food Recognition and Nutritional Apps [0.0]
Food recognition and nutritional apps are trending technologies that may revolutionise the way people with diabetes manage their diet.
These apps offer a promising solution for managing diabetes, but are rarely used by patients.
This chapter aims to provide an in-depth assessment of the current status of apps for food recognition and nutrition, to identify factors that may inhibit or facilitate their use.
arXiv Detail & Related papers (2023-06-20T13:23:59Z) - Vision-Based Food Analysis for Automatic Dietary Assessment [49.32348549508578]
This review presents one unified Vision-Based Dietary Assessment (VBDA) framework, which generally consists of three stages: food image analysis, volume estimation and nutrient derivation.
Deep learning is gradually moving VBDA toward end-to-end implementations, which feed food images into a single network that directly estimates nutrition.
arXiv Detail & Related papers (2021-08-06T05:46:01Z)
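The three-stage VBDA pipeline described above (food image analysis, volume estimation, nutrient derivation) can be made concrete with stub stages. The stage implementations and all numbers here are placeholders, shown only to illustrate the data flow; real systems use trained classifiers and depth or 3D cues.

```python
# Schematic of the three-stage Vision-Based Dietary Assessment pipeline.
# Stage outputs are stubbed with illustrative values.

NUTRIENT_DB_PER_100G = {                 # illustrative per-100 g nutrient values
    "rice": {"kcal": 130, "protein_g": 2.7},
}
DENSITY_G_PER_ML = {"rice": 0.9}         # illustrative density, converts ml to g

def analyze_image(image) -> str:
    """Stage 1: recognize the food in the image (stub classifier)."""
    return "rice"

def estimate_volume(image, label: str) -> float:
    """Stage 2: estimate portion volume in ml (stub; real systems use 3D cues)."""
    return 200.0

def derive_nutrients(label: str, volume_ml: float) -> dict:
    """Stage 3: volume -> mass via density, then nutrient database lookup."""
    grams = volume_ml * DENSITY_G_PER_ML[label]
    per_100g = NUTRIENT_DB_PER_100G[label]
    return {k: round(v * grams / 100, 2) for k, v in per_100g.items()}

def vbda(image) -> dict:
    """Run the full pipeline on one meal image."""
    label = analyze_image(image)
    return derive_nutrients(label, estimate_volume(image, label))

print(vbda(None))   # {'kcal': 234.0, 'protein_g': 4.86}
```

The end-to-end alternative the review mentions collapses these three stages into a single network that maps the image directly to the nutrient dictionary, trading the interpretability of intermediate labels and volumes for a simpler training objective.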
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.