Food4All: A Multi-Agent Framework for Real-time Free Food Discovery with Integrated Nutritional Metadata
- URL: http://arxiv.org/abs/2510.18289v1
- Date: Tue, 21 Oct 2025 04:35:02 GMT
- Title: Food4All: A Multi-Agent Framework for Real-time Free Food Discovery with Integrated Nutritional Metadata
- Authors: Zhengqing Yuan, Yiyang Li, Weixiang Sun, Zheyuan Zhang, Kaiwen Shi, Keerthiram Murugesan, Yanfang Ye,
- Abstract summary: Food4All is the first multi-agent framework explicitly designed for real-time, context-aware free food retrieval. Food4All unifies three innovations: 1) heterogeneous data aggregation across official databases, community platforms, and social media to provide a continuously updated pool of food resources; 2) a lightweight reinforcement learning algorithm trained on curated cases to optimize for both geographic accessibility and nutritional correctness; and 3) an online feedback loop that dynamically adapts retrieval policies to evolving user needs.
- Score: 27.735512297142623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Food insecurity remains a persistent public health emergency in the United States, tightly interwoven with chronic disease, mental illness, and opioid misuse. Yet despite the existence of thousands of food banks and pantries, access remains fragmented: 1) current retrieval systems depend on static directories or generic search engines, which provide incomplete and geographically irrelevant results; 2) LLM-based chatbots offer only vague nutritional suggestions and fail to adapt to real-world constraints such as time, mobility, and transportation; and 3) existing food recommendation systems optimize for culinary diversity but overlook survival-critical needs of food-insecure populations, including immediate proximity, verified availability, and contextual barriers. These limitations risk leaving the most vulnerable individuals, those experiencing homelessness, addiction, or digital illiteracy, unable to access urgently needed resources. To address this, we introduce Food4All, the first multi-agent framework explicitly designed for real-time, context-aware free food retrieval. Food4All unifies three innovations: 1) heterogeneous data aggregation across official databases, community platforms, and social media to provide a continuously updated pool of food resources; 2) a lightweight reinforcement learning algorithm trained on curated cases to optimize for both geographic accessibility and nutritional correctness; and 3) an online feedback loop that dynamically adapts retrieval policies to evolving user needs. By bridging information acquisition, semantic analysis, and decision support, Food4All delivers nutritionally annotated recommendations and guidance at the point of need. This framework establishes an urgent step toward scalable, equitable, and intelligent systems that directly support populations facing food insecurity and its compounding health risks.
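The abstract's second and third innovations (reward-weighted scoring over accessibility and nutrition, plus an online feedback loop) can be illustrated with a minimal sketch. This is not the paper's actual method: the agents, reward design, and training procedure are not specified in the abstract, and all names below (`FoodResource`, `score_resource`, `update_weights`) are hypothetical.

```python
import math

class FoodResource:
    """Hypothetical candidate from the aggregated resource pool."""
    def __init__(self, name, distance_km, nutrition_score, verified):
        self.name = name
        self.distance_km = distance_km          # geographic accessibility proxy
        self.nutrition_score = nutrition_score  # 0..1, higher is better
        self.verified = verified                # availability confirmed?

def score_resource(r, w_geo=0.5, w_nut=0.5):
    """Combine geographic accessibility and nutritional correctness.
    Unverified listings are penalized rather than dropped."""
    geo = math.exp(-r.distance_km / 2.0)        # decays with distance
    base = w_geo * geo + w_nut * r.nutrition_score
    return base * (1.0 if r.verified else 0.5)

def update_weights(w_geo, w_nut, feedback, lr=0.1):
    """Toy stand-in for the online feedback loop: shift weight toward
    the objective the user favored (+1 = wanted closer, -1 = healthier)."""
    w_geo = min(max(w_geo + lr * feedback, 0.0), 1.0)
    return w_geo, 1.0 - w_geo

candidates = [
    FoodResource("Pantry A", 0.8, 0.6, True),
    FoodResource("Pantry B", 3.5, 0.9, True),
    FoodResource("Pop-up C", 0.4, 0.5, False),
]
best = max(candidates, key=score_resource)
print(best.name)  # the nearby, verified option wins under equal weights
```

Under these assumed weights, the nearby verified pantry outranks both the distant high-nutrition option and the closer but unverified pop-up, mirroring the abstract's emphasis on immediate proximity and verified availability.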
Related papers
- GLEN-Bench: A Graph-Language based Benchmark for Nutritional Health [48.94971812317643]
We introduce GLEN-Bench, the first comprehensive graph-language based benchmark for nutritional health assessment. GLEN-Bench includes three linked tasks: risk detection identifies at-risk individuals from dietary and socioeconomic patterns; recommendation suggests personalized foods that meet clinical needs within resource constraints. Our analysis identifies clear dietary patterns linked to health risks, providing insights that can guide practical interventions.
arXiv Detail & Related papers (2026-01-26T03:32:46Z)
- An Integrated Framework for Contextual Personalized LLM-Based Food Recommendation [1.4957306171002251]
This thesis identifies and analyzes the essential components for effective Food-RecSys. We introduce two key innovations: a multimedia food logging platform for rich contextual data acquisition and the World Food Atlas. We pioneer the Food Recommendation as Language Processing framework - a novel, integrated approach specifically architected for the food domain.
arXiv Detail & Related papers (2025-04-25T22:20:33Z)
- A monthly sub-national Harmonized Food Insecurity Dataset for comprehensive analysis and predictive modeling [0.11292693568898363]
This paper introduces the Harmonized Food Insecurity Dataset (HFID), an open-source resource consolidating four key data sources. The HFID serves as a vital tool for food security experts and humanitarian agencies, providing a unified resource for analyzing food security conditions. The scientific community can also leverage the HFID to develop data-driven predictive models, enhancing the capacity to forecast and prevent future food crises.
arXiv Detail & Related papers (2025-01-10T16:13:57Z)
- MOPI-HFRS: A Multi-objective Personalized Health-aware Food Recommendation System with LLM-enhanced Interpretation [50.309987904297415]
Major food recommendation platforms such as Yelp prioritize users' dietary preferences over the healthiness of their choices. We develop a novel framework, the Multi-Objective Personalized Interpretable Health-aware Food Recommendation System (MOPI-HFRS). It provides food recommendations by jointly optimizing three objectives: user preference, personalized healthiness, and nutritional diversity, along with a large language model (LLM)-enhanced reasoning module.
arXiv Detail & Related papers (2024-12-12T01:02:09Z)
- RoDE: Linear Rectified Mixture of Diverse Experts for Food Large Multi-Modal Models [96.43285670458803]
Uni-Food is a unified food dataset that comprises over 100,000 images with various food labels. Uni-Food is designed to provide a more holistic approach to food data analysis. We introduce a novel Linear Rectification Mixture of Diverse Experts (RoDE) approach to address the inherent challenges of food-related multitasking.
arXiv Detail & Related papers (2024-07-17T16:49:34Z)
- NutritionVerse-Direct: Exploring Deep Neural Networks for Multitask Nutrition Prediction from Food Images [63.314702537010355]
Self-reporting methods are often inaccurate and suffer from substantial bias.
Recent work has explored using computer vision prediction systems to predict nutritional information from food images.
This paper aims to enhance the efficacy of dietary intake estimation by leveraging various neural network architectures.
arXiv Detail & Related papers (2024-05-13T14:56:55Z)
- Revolutionizing Global Food Security: Empowering Resilience through Integrated AI Foundation Models and Data-Driven Solutions [8.017557640367938]
This paper explores the integration of AI foundation models across various food security applications.
We investigate their utilization in crop type mapping, cropland mapping, field delineation and crop yield prediction.
arXiv Detail & Related papers (2023-10-31T09:15:35Z)
- Fine-grained prediction of food insecurity using news streams [9.04748106111465]
We leverage recent advances in deep learning to extract high-frequency precursors to food crises from news articles published between 1980 and 2020.
Our text features are causally grounded, interpretable, validated by existing data, and allow us to predict 32% more food crises than existing models.
arXiv Detail & Related papers (2021-11-17T17:35:00Z)
- Vision-Based Food Analysis for Automatic Dietary Assessment [49.32348549508578]
This review presents one unified Vision-Based Dietary Assessment (VBDA) framework, which generally consists of three stages: food image analysis, volume estimation and nutrient derivation.
Deep learning makes VBDA gradually move to an end-to-end implementation, which applies food images to a single network to directly estimate the nutrition.
arXiv Detail & Related papers (2021-08-06T05:46:01Z)
- Cross-Modal Food Retrieval: Learning a Joint Embedding of Food Images and Recipes with Semantic Consistency and Attention Mechanism [70.85894675131624]
We learn an embedding of images and recipes in a common feature space, such that the corresponding image-recipe embeddings lie close to one another.
We propose Semantic-Consistent and Attention-based Networks (SCAN), which regularize the embeddings of the two modalities through aligning output semantic probabilities.
We show that we can outperform several state-of-the-art cross-modal retrieval strategies for food images and cooking recipes by a significant margin.
arXiv Detail & Related papers (2020-03-09T07:41:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.