Modeling Urban Food Insecurity with Google Street View Images
- URL: http://arxiv.org/abs/2507.02924v1
- Date: Wed, 25 Jun 2025 21:42:21 GMT
- Title: Modeling Urban Food Insecurity with Google Street View Images
- Authors: David Li
- Abstract summary: Existing approaches to identifying food insecurity rely primarily on qualitative and quantitative survey data. This project seeks to explore the effectiveness of using street-level images in modeling food insecurity at the census tract level.
- Score: 2.6563139755809364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Food insecurity is a significant social and public health issue that plagues many urban metropolitan areas around the world. Existing approaches to identifying food insecurity rely primarily on qualitative and quantitative survey data, which is difficult to scale. This project seeks to explore the effectiveness of using street-level images in modeling food insecurity at the census tract level. To do so, we propose a two-step process of feature extraction and gated attention for image aggregation. We evaluate the effectiveness of our model by comparing against other model architectures, interpreting our learned weights, and performing a case study. While our model falls slightly short in terms of its predictive power, we believe our approach still has the potential to supplement existing methods of identifying food insecurity for urban planners and policymakers.
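The abstract's two-step pipeline (per-image feature extraction, then gated attention to aggregate image features into one census-tract representation) can be sketched as follows. This is a minimal NumPy sketch of gated attention pooling in the style of Ilse et al. (2018); all dimensions, variable names, and the random inputs are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def gated_attention_pool(features, V, U, w):
    """Aggregate per-image feature vectors into a single tract-level
    vector using gated attention pooling.

    features : (n_images, d) array of extracted image features
    V, U     : (hidden, d) projection matrices for the gated attention
    w        : (hidden,) attention scoring vector
    Returns the (d,) aggregated feature and the (n_images,) weights.
    """
    h = features.T                                   # (d, n_images)
    sigmoid = 1.0 / (1.0 + np.exp(-(U @ h)))         # gate, (hidden, n_images)
    gate = np.tanh(V @ h) * sigmoid                  # gated activations
    scores = w @ gate                                # (n_images,)
    scores -= scores.max()                           # softmax stability
    weights = np.exp(scores) / np.exp(scores).sum()  # attention weights
    return weights @ features, weights

# Toy example: 5 street-view images with 8-dim features for one tract.
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))
V = rng.standard_normal((4, 8))
U = rng.standard_normal((4, 8))
w = rng.standard_normal(4)
pooled, attn = gated_attention_pool(feats, V, U, w)
```

Because the attention weights are learned per image, they can also be inspected to see which street scenes drive a tract's prediction, which is presumably what the abstract's "interpreting our learned weights" refers to.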
Related papers
- Anticipatory Understanding of Resilient Agriculture to Climate [66.008020515555]
We present a framework to better identify food security hotspots using a combination of remote sensing, deep learning, crop yield modeling, and causal modeling of the food distribution system.
We focus our analysis on the wheat breadbasket of northern India, which feeds a large percentage of the world's population.
arXiv Detail & Related papers (2024-11-07T22:29:05Z)
- How Much You Ate? Food Portion Estimation on Spoons [63.611551981684244]
Current image-based food portion estimation algorithms assume that users take images of their meals one or two times.
We introduce an innovative solution that utilizes stationary user-facing cameras to track food items on utensils.
The system reliably estimates the nutritional content of liquid-solid heterogeneous mixtures such as soups and stews.
arXiv Detail & Related papers (2024-05-12T00:16:02Z)
- From Canteen Food to Daily Meals: Generalizing Food Recognition to More Practical Scenarios [92.58097090916166]
We present two new benchmarks, namely DailyFood-172 and DailyFood-16, designed to curate food images from everyday meals.
These two datasets are used to evaluate the transferability of approaches from the well-curated food image domain to the everyday-life food image domain.
arXiv Detail & Related papers (2024-03-12T08:32:23Z)
- Forecasting trends in food security with real time data [0.0]
We present a quantitative methodology to forecast levels of food consumption for 60 consecutive days, at the sub-national level, in four countries: Mali, Nigeria, Syria, and Yemen.
The methodology is built on publicly available data from the World Food Programme's global hunger monitoring system.
arXiv Detail & Related papers (2023-12-01T14:42:37Z)
- Revolutionizing Global Food Security: Empowering Resilience through Integrated AI Foundation Models and Data-Driven Solutions [8.017557640367938]
This paper explores the integration of AI foundation models across various food security applications.
We investigate their utilization in crop type mapping, cropland mapping, field delineation and crop yield prediction.
arXiv Detail & Related papers (2023-10-31T09:15:35Z)
- Diffusion Model with Clustering-based Conditioning for Food Image Generation [22.154182296023404]
Deep learning-based techniques are commonly used to perform image analysis such as food classification, segmentation, and portion size estimation.
One potential solution is to use synthetic food images for data augmentation.
In this paper, we propose an effective clustering-based training framework, named ClusDiff, for generating high-quality and representative food images.
arXiv Detail & Related papers (2023-09-01T01:40:39Z)
- Transferring Knowledge for Food Image Segmentation using Transformers and Convolutions [65.50975507723827]
Food image segmentation is an important task that has ubiquitous applications, such as estimating the nutritional value of a plate of food.
One challenge is that food items can overlap and mix, making them difficult to distinguish.
Two models are trained and compared, one based on convolutional neural networks and the other on Bidirectional Encoder representation from Image Transformers (BEiT).
The BEiT model outperforms the previous state-of-the-art model by achieving a mean intersection over union of 49.4 on FoodSeg103.
arXiv Detail & Related papers (2023-06-15T15:38:10Z)
- A Framework for Evaluating the Impact of Food Security Scenarios [0.0]
The case study is based on a proprietary time series food security database created using data from the Food and Agriculture Organization of the United Nations (FAOSTAT), the World Bank, and the United States Department of Agriculture (USDA).
The proposed approach, supported by this database, can be used to predict the potential impacts of scenarios on food security.
arXiv Detail & Related papers (2023-01-23T08:41:46Z)
- Simulating Personal Food Consumption Patterns using a Modified Markov Chain [5.874935571318868]
We propose a novel framework to simulate personal food consumption data patterns, leveraging the use of a modified Markov chain model and self-supervised learning.
Our experimental results demonstrate promising performance compared with random simulation and the original Markov chain method.
arXiv Detail & Related papers (2022-08-13T18:50:23Z)
- Predicting Livelihood Indicators from Community-Generated Street-Level Imagery [70.5081240396352]
We propose an inexpensive, scalable, and interpretable approach to predict key livelihood indicators from public crowd-sourced street-level imagery.
By comparing our results against ground data collected in nationally-representative household surveys, we demonstrate the performance of our approach in accurately predicting indicators of poverty, population, and health.
arXiv Detail & Related papers (2020-06-15T18:12:12Z)
- Cross-Modal Food Retrieval: Learning a Joint Embedding of Food Images and Recipes with Semantic Consistency and Attention Mechanism [70.85894675131624]
We learn an embedding of images and recipes in a common feature space, such that the corresponding image-recipe embeddings lie close to one another.
We propose Semantic-Consistent and Attention-based Networks (SCAN), which regularize the embeddings of the two modalities through aligning output semantic probabilities.
We show that we can outperform several state-of-the-art cross-modal retrieval strategies for food images and cooking recipes by a significant margin.
arXiv Detail & Related papers (2020-03-09T07:41:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.