Personal Food Model
- URL: http://arxiv.org/abs/2008.12855v1
- Date: Fri, 28 Aug 2020 21:36:09 GMT
- Title: Personal Food Model
- Authors: Ali Rostami, Vaibhav Pandey, Nitish Nag, Vesper Wang, Ramesh Jain
- Abstract summary: We adopt a person-centric multimedia and multimodal perspective on food computing.
The Personal Food Model is the digitized representation of the food-related characteristics of an individual.
We use event mining approaches to relate food with other life and biological events to build a predictive model.
- Score: 4.093166743990079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Food is central to life. Food provides us with energy and foundational
building blocks for our body and is also a major source of joy and new
experiences. A significant part of the overall economy is related to food. Food
science, distribution, processing, and consumption have been addressed by
different communities using silos of computational approaches. In this paper,
we adopt a person-centric multimedia and multimodal perspective on food
computing and show how multimedia and food computing are synergistic and
complementary.
Enjoying food is a truly multimedia experience involving sight, taste, smell,
and even sound, which can be captured using a multimedia food logger. The
biological response to food can be captured as multimodal data streams from
available wearable devices. Central to this approach is the Personal Food
Model. The Personal Food Model is the digitized representation of the
food-related characteristics of an individual. It is designed to be used in food
recommendation systems to provide eating-related recommendations that improve
the user's quality of life. To model the food-related characteristics of each
person, it is essential to capture their food-related enjoyment using a
Preferential Personal Food Model and their biological response to food using
their Biological Personal Food Model. Inspired by the power of 3-dimensional
color models for visual processing, we introduce a 6-dimensional taste-space
for capturing culinary characteristics as well as personal preferences. We use
event mining approaches to relate food with other life and biological events to
build a predictive model that could also be used effectively in emerging food
recommendation systems.
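The two personal models and the 6-dimensional taste-space lend themselves to a simple data representation. The sketch below is a minimal illustration in Python, assuming placeholder taste axes (sweet, sour, salty, bitter, umami, piquancy) and invented class and field names; it is not the authors' implementation, only one way the Preferential and Biological Personal Food Models could be organized around a 6-dimensional taste vector.

    from dataclasses import dataclass, field
    from typing import Dict

    # Placeholder names for the six taste axes; the paper introduces a
    # 6-dimensional taste-space, but these particular labels are an assumption.
    TASTE_AXES = ("sweet", "sour", "salty", "bitter", "umami", "piquancy")

    @dataclass
    class TasteVector:
        # A point in the 6-dimensional taste-space; missing axes default to 0.0.
        values: Dict[str, float]

        def __post_init__(self):
            self.values = {axis: float(self.values.get(axis, 0.0)) for axis in TASTE_AXES}

    @dataclass
    class PreferentialPersonalFoodModel:
        # Per-person enjoyment weights over the taste axes (hypothetical form).
        weights: Dict[str, float] = field(default_factory=lambda: {a: 0.0 for a in TASTE_AXES})

        def predicted_enjoyment(self, dish: TasteVector) -> float:
            # Weighted sum as a stand-in for a learned preference model.
            return sum(self.weights.get(a, 0.0) * dish.values[a] for a in TASTE_AXES)

    @dataclass
    class BiologicalPersonalFoodModel:
        # Per-person biological responses, e.g. expected glucose change per dish tag.
        glucose_response: Dict[str, float] = field(default_factory=dict)

    @dataclass
    class PersonalFoodModel:
        preferential: PreferentialPersonalFoodModel
        biological: BiologicalPersonalFoodModel

    # Usage: score one dish for one user.
    dish = TasteVector({"sweet": 0.7, "umami": 0.4, "salty": 0.2})
    user = PersonalFoodModel(
        preferential=PreferentialPersonalFoodModel(
            weights=dict(zip(TASTE_AXES, (0.9, 0.1, 0.3, -0.2, 0.6, 0.4)))),
        biological=BiologicalPersonalFoodModel(),
    )
    print(user.preferential.predicted_enjoyment(dish))

In such a sketch, a recommendation system would rank candidate dishes by predicted enjoyment while consulting the biological model for constraints such as expected glucose response.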
Related papers
- FoodSky: A Food-oriented Large Language Model that Passes the Chef and Dietetic Examination [37.11551779015218]
We introduce Food-oriented Large Language Models (LLMs) to comprehend food data through perception and reasoning.
Considering the complexity and typicality of Chinese cuisine, we first construct a comprehensive Chinese food corpus, FoodEarth.
We then propose the Topic-based Selective State Space Model (TS3M) and the Hierarchical Topic Retrieval Augmented Generation (HTRAG) mechanism to enhance FoodSky.
arXiv Detail & Related papers (2024-06-11T01:27:00Z)
- ChatDiet: Empowering Personalized Nutrition-Oriented Food Recommender Chatbots through an LLM-Augmented Framework [2.3221599497640915]
ChatDiet is a novel framework designed specifically for personalized nutrition-oriented food recommendation chatbots.
ChatDiet integrates personal and population models, complemented by an orchestrator, to seamlessly retrieve and process pertinent information.
Our evaluation of ChatDiet includes a compelling case study, where we establish a causal personal model to estimate individual nutrition effects.
arXiv Detail & Related papers (2024-02-18T06:07:17Z)
- FoodFusion: A Latent Diffusion Model for Realistic Food Image Generation [69.91401809979709]
Current state-of-the-art image generation models such as Latent Diffusion Models (LDMs) have demonstrated the capacity to produce visually striking food-related images.
We introduce FoodFusion, a Latent Diffusion model engineered specifically for the faithful synthesis of realistic food images from textual descriptions.
The development of the FoodFusion model involves harnessing an extensive array of open-source food datasets, resulting in over 300,000 curated image-caption pairs.
arXiv Detail & Related papers (2023-12-06T15:07:12Z)
- NutritionVerse-Real: An Open Access Manually Collected 2D Food Scene Dataset for Dietary Intake Estimation [68.49526750115429]
We introduce NutritionVerse-Real, an open access manually collected 2D food scene dataset for dietary intake estimation.
The NutritionVerse-Real dataset was created by manually collecting images of food scenes in real life, measuring the weight of every ingredient, and computing the associated dietary content of each dish (a minimal sketch of this computation appears after this list).
arXiv Detail & Related papers (2023-11-20T11:05:20Z)
- NutritionVerse: Empirical Study of Various Dietary Intake Estimation Approaches [59.38343165508926]
Accurate dietary intake estimation is critical for informing policies and programs to support healthy eating.
Recent work has focused on using computer vision and machine learning to automatically estimate dietary intake from food images.
We introduce NutritionVerse-Synth, the first large-scale dataset of 84,984 synthetic 2D food images with associated dietary information.
We also collect a real image dataset, NutritionVerse-Real, containing 889 images of 251 dishes to evaluate realism.
arXiv Detail & Related papers (2023-09-14T13:29:41Z)
- UMDFood: Vision-language models boost food composition compilation [26.5694236976957]
We propose a novel vision-language model, UMDFood-VL, using front-of-package labeling and product images to accurately estimate food composition profiles.
For up to 82.2% of the selected products, the error between chemical analysis results and model estimates is less than 10%.
This performance suggests the approach generalizes to other food- and nutrition-related data compilation and catalyzation tasks.
arXiv Detail & Related papers (2023-05-18T03:18:12Z)
- NutritionVerse-3D: A 3D Food Model Dataset for Nutritional Intake Estimation [65.47310907481042]
One in four older adults is malnourished.
Machine learning and computer vision show promise for automated nutrition tracking of food.
NutritionVerse-3D is a large-scale high-resolution dataset of 105 3D food models.
arXiv Detail & Related papers (2023-04-12T05:27:30Z)
- Towards the Creation of a Nutrition and Food Group Based Image Database [58.429385707376554]
We propose a framework to create a nutrition and food group based image database.
We design a protocol for linking food group based food codes in the U.S. Department of Agriculture's (USDA) Food and Nutrient Database for Dietary Studies (FNDDS)
Our proposed method is used to build a nutrition and food group based image database including 16,114 food datasets.
arXiv Detail & Related papers (2022-06-05T02:41:44Z)
- A Mobile Food Recognition System for Dietary Assessment [6.982738885923204]
We focus on developing a mobile-friendly, Middle Eastern cuisine-focused food recognition application for assisted living purposes.
Using the MobileNet-v2 architecture for this task is beneficial in terms of both accuracy and memory usage.
The developed mobile application has potential to serve the visually impaired in automatic food recognition via images.
arXiv Detail & Related papers (2022-04-20T12:49:36Z)
- Towards Building a Food Knowledge Graph for Internet of Food [66.57235827087092]
We review the evolution of food knowledge organization, from food classification and food ontology to food knowledge graphs.
Food knowledge graphs play an important role in food search and Question Answering (QA), personalized dietary recommendation, food analysis and visualization.
Future directions for food knowledge graphs cover several fields such as multimodal food knowledge graphs and food intelligence.
arXiv Detail & Related papers (2021-07-13T06:26:53Z)
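As noted in the NutritionVerse-Real entry above, the dietary content of a dish is computed from the measured weight of every ingredient. The sketch below illustrates that arithmetic only; the per-100 g nutrient table, its values, and the function name are made-up placeholders, not data or code from the dataset.

    from typing import Dict

    # Hypothetical per-100 g nutrient table; values are placeholders,
    # not data from NutritionVerse-Real.
    NUTRIENTS_PER_100G: Dict[str, Dict[str, float]] = {
        "rice":    {"calories": 130.0, "protein": 2.7, "carbs": 28.0, "fat": 0.3},
        "chicken": {"calories": 165.0, "protein": 31.0, "carbs": 0.0, "fat": 3.6},
    }

    def dish_dietary_content(ingredient_weights_g: Dict[str, float]) -> Dict[str, float]:
        # Sum each weighed ingredient's contribution, scaled from per-100 g values.
        totals = {"calories": 0.0, "protein": 0.0, "carbs": 0.0, "fat": 0.0}
        for ingredient, grams in ingredient_weights_g.items():
            per_100g = NUTRIENTS_PER_100G[ingredient]
            scale = grams / 100.0
            for nutrient, value in per_100g.items():
                totals[nutrient] += value * scale
        return totals

    # Example dish: 150 g rice and 120 g chicken.
    print(dish_dietary_content({"rice": 150.0, "chicken": 120.0}))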