CookingSense: A Culinary Knowledgebase with Multidisciplinary Assertions
- URL: http://arxiv.org/abs/2405.00523v1
- Date: Wed, 1 May 2024 13:58:09 GMT
- Title: CookingSense: A Culinary Knowledgebase with Multidisciplinary Assertions
- Authors: Donghee Choi, Mogan Gim, Donghyeon Park, Mujeen Sung, Hyunjae Kim, Jaewoo Kang, Jihun Choi
- Abstract summary: CookingSense is a descriptive collection of knowledge assertions in the culinary domain extracted from various sources.
CookingSense is constructed through a series of dictionary-based filtering and language model-based semantic filtering techniques.
We present FoodBench, a novel benchmark to evaluate culinary decision support systems.
- Score: 23.21190348451355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces CookingSense, a descriptive collection of knowledge assertions in the culinary domain extracted from various sources, including web data, scientific papers, and recipes, from which knowledge covering a broad range of aspects is acquired. CookingSense is constructed through a series of dictionary-based filtering and language model-based semantic filtering techniques, which results in a rich knowledgebase of multidisciplinary food-related assertions. Additionally, we present FoodBench, a novel benchmark to evaluate culinary decision support systems. From evaluations with FoodBench, we empirically prove that CookingSense improves the performance of retrieval augmented language models. We also validate the quality and variety of assertions in CookingSense through qualitative analysis.
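As a rough sketch of the two-stage filtering pipeline described in the abstract (the toy dictionary, embedding model, and threshold below are illustrative assumptions, not the authors' actual configuration), candidate assertions could be filtered along these lines:
```python
# Illustrative two-stage filter in the spirit of the described pipeline:
# dictionary-based filtering followed by language-model-based semantic filtering.
# Dictionary, model choice, and threshold are assumptions, not the paper's setup.
from sentence_transformers import SentenceTransformer, util

CULINARY_TERMS = {"garlic", "simmer", "umami", "fermentation", "broth"}  # toy dictionary

def dictionary_filter(assertions):
    """Keep assertions that mention at least one culinary dictionary term."""
    return [a for a in assertions if any(t in a.lower() for t in CULINARY_TERMS)]

def semantic_filter(assertions, query="culinary and food science knowledge", threshold=0.3):
    """Keep assertions whose embedding lies close to a food-domain anchor query."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    anchor = model.encode(query, convert_to_tensor=True)
    embs = model.encode(assertions, convert_to_tensor=True)
    scores = util.cos_sim(embs, anchor).squeeze(-1)
    return [a for a, s in zip(assertions, scores) if s.item() >= threshold]

raw = [
    "Garlic develops a milder, sweeter flavor when slowly roasted.",
    "The stock market closed higher on Tuesday.",
    "Fermentation of cabbage produces lactic acid, which preserves it.",
]
print(semantic_filter(dictionary_filter(raw)))
```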
Related papers
- Building FKG.in: a Knowledge Graph for Indian Food [2.339288371903242]
We build an automated system for assimilating culinary information for Indian food in the form of a knowledge graph.
We present a novel workflow that uses AI, LLM, and language technology to curate information from recipe blog sites in the public domain.
The design is application-agnostic and can be used for AI-driven smart analysis, building recommendation systems for Personalized Digital Health.
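A minimal sketch of assembling curated recipe facts into a knowledge graph, assuming rdflib and a made-up namespace rather than FKG.in's actual ontology:
```python
# Toy knowledge-graph assembly with rdflib; the namespace, schema, and
# triples are placeholders, not FKG.in's actual design.
from rdflib import Graph, Literal, Namespace

FOOD = Namespace("http://example.org/food/")
g = Graph()
g.bind("food", FOOD)

# Facts that an LLM-based extractor might pull from a recipe blog post.
g.add((FOOD.Biryani, FOOD.hasIngredient, FOOD.BasmatiRice))
g.add((FOOD.Biryani, FOOD.hasIngredient, FOOD.Saffron))
g.add((FOOD.Biryani, FOOD.cookingTimeMinutes, Literal(60)))

print(g.serialize(format="turtle"))
```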
arXiv Detail & Related papers (2024-09-01T20:18:36Z)
- A topological analysis of the space of recipes [0.0]
We introduce the use of topological data analysis, especially persistent homology, in order to study the space of culinary recipes.
In particular, persistent homology analysis provides a set of recipes surrounding the multiscale "holes" in the space of existing recipes.
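A toy persistent-homology computation over a recipe space, assuming recipes encoded as binary ingredient vectors and the ripser library; the paper's actual representation may differ:
```python
# Toy persistent-homology computation over a recipe feature space.
# The binary ingredient encoding and the ripser library are illustrative
# assumptions, not the paper's actual pipeline.
import numpy as np
from ripser import ripser

# Each row is a recipe encoded as a binary ingredient vector.
recipes = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
], dtype=float)

# Compute persistence diagrams up to H1; 1-dimensional classes ("holes")
# point at regions not covered by existing recipes.
diagrams = ripser(recipes, maxdim=1)["dgms"]
for dim, dgm in enumerate(diagrams):
    print(f"H{dim}: {len(dgm)} features")
```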
arXiv Detail & Related papers (2024-06-12T01:28:16Z)
- FoodLMM: A Versatile Food Assistant using Large Multi-modal Model [96.76271649854542]
Large Multi-modal Models (LMMs) have made impressive progress in many vision-language tasks.
This paper proposes FoodLMM, a versatile food assistant based on LMMs with various capabilities.
We introduce a series of novel task-specific tokens and heads, enabling the model to predict food nutritional values and multiple segmentation masks.
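A generic illustration of attaching task-specific heads to shared hidden states; the dimensions and head designs are assumptions rather than FoodLMM's architecture:
```python
# Generic task-specific heads on top of shared LMM hidden states;
# sizes and pooling are assumptions, not FoodLMM's actual design.
import torch
import torch.nn as nn

class TaskHeads(nn.Module):
    def __init__(self, hidden_size=768, num_nutrients=5, mask_embed_dim=256):
        super().__init__()
        # Regression head for nutritional values (e.g. kcal, fat, protein).
        self.nutrition_head = nn.Linear(hidden_size, num_nutrients)
        # Projection whose output would condition a segmentation decoder.
        self.mask_head = nn.Linear(hidden_size, mask_embed_dim)

    def forward(self, token_states):
        pooled = token_states.mean(dim=1)          # (batch, hidden)
        return self.nutrition_head(pooled), self.mask_head(pooled)

heads = TaskHeads()
states = torch.randn(2, 16, 768)                   # fake LMM hidden states
nutrition, mask_embedding = heads(states)
print(nutrition.shape, mask_embedding.shape)       # (2, 5) (2, 256)
```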
arXiv Detail & Related papers (2023-12-22T11:56:22Z)
- FIRE: Food Image to REcipe generation [10.45344523054623]
Food computing aims to develop end-to-end intelligent systems capable of autonomously producing recipe information for a food image.
This paper proposes FIRE, a novel methodology tailored to recipe generation in the food computing domain.
We showcase two practical applications that can benefit from integrating FIRE with large language model prompting.
arXiv Detail & Related papers (2023-08-28T08:14:20Z)
- Food Ingredients Recognition through Multi-label Learning [0.0]
The ability to recognize various food items in a generic food plate is a key determinant for an automated diet assessment system.
We employ a deep multi-label learning approach and evaluate several state-of-the-art neural networks for their ability to detect an arbitrary number of ingredients in a dish image.
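A minimal multi-label setup of this kind pairs sigmoid outputs with binary cross-entropy so any number of ingredients can be active per image; the backbone and label count below are placeholders, not the networks evaluated in the paper:
```python
# Minimal multi-label classification: independent sigmoid output per ingredient,
# trained with binary cross-entropy. Backbone and label set are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_INGREDIENTS = 300
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_INGREDIENTS)

criterion = nn.BCEWithLogitsLoss()                 # one binary task per ingredient
images = torch.randn(4, 3, 224, 224)
targets = torch.zeros(4, NUM_INGREDIENTS)
targets[0, [3, 17, 42]] = 1.0                      # image 0 contains three ingredients

logits = backbone(images)
loss = criterion(logits, targets)
predicted = (torch.sigmoid(logits) > 0.5)          # per-ingredient threshold at inference
print(loss.item(), predicted.sum().item())
```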
arXiv Detail & Related papers (2022-10-24T10:18:26Z)
- Learning Structural Representations for Recipe Generation and Food Retrieval [101.97397967958722]
We propose a novel framework of Structure-aware Generation Network (SGN) to tackle the food recipe generation task.
Our proposed model can produce high-quality and coherent recipes, and achieve the state-of-the-art performance on the benchmark Recipe1M dataset.
arXiv Detail & Related papers (2021-10-04T06:36:31Z)
- Towards Building a Food Knowledge Graph for Internet of Food [66.57235827087092]
We review the evolution of food knowledge organization, from food classification to food knowledge graphs.
Food knowledge graphs play an important role in food search and Question Answering (QA), personalized dietary recommendation, food analysis and visualization.
Future directions for food knowledge graphs cover several fields such as multimodal food knowledge graphs and food intelligence.
arXiv Detail & Related papers (2021-07-13T06:26:53Z)
- Multi-modal Cooking Workflow Construction for Food Recipes [147.4435186953995]
We build MM-ReS, the first large-scale dataset for cooking workflow construction.
We propose a neural encoder-decoder model that utilizes both visual and textual information to construct the cooking workflow.
arXiv Detail & Related papers (2020-08-20T18:31:25Z)
- Decomposing Generation Networks with Structure Prediction for Recipe Generation [142.047662926209]
We propose a novel framework: Decomposing Generation Networks (DGN) with structure prediction.
Specifically, we split each cooking instruction into several phases, and assign different sub-generators to each phase.
Our approach includes two novel ideas: (i) learning the recipe structures with the global structure prediction component and (ii) producing recipe phases in the sub-generator output component based on the predicted structure.
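A schematic, non-learned stand-in for this decomposition, where a keyword-based phase predictor replaces the global structure predictor and templates replace the sub-generators:
```python
# Schematic decomposition: split instructions into phases and route each phase
# to its own sub-generator. The keyword rules and templates are stand-ins for
# the learned components described in the paper.
PHASE_KEYWORDS = {
    "preparation": ("chop", "slice", "marinate"),
    "cooking": ("saute", "boil", "bake", "simmer"),
    "plating": ("garnish", "serve"),
}

def predict_phase(step: str) -> str:
    step = step.lower()
    for phase, words in PHASE_KEYWORDS.items():
        if any(w in step for w in words):
            return phase
    return "cooking"

def sub_generator(phase: str, ingredients: list[str]) -> str:
    # Each phase would have its own trained decoder; a template stands in here.
    return f"[{phase}] use {', '.join(ingredients)}"

steps = ["Chop the onions finely.", "Simmer with lentils.", "Garnish with coriander."]
structure = [predict_phase(s) for s in steps]
print([sub_generator(p, ["onion", "lentils"]) for p in structure])
```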
arXiv Detail & Related papers (2020-07-27T08:47:50Z)
- Classification of Cuisines from Sequentially Structured Recipes [8.696042114987966]
Classification of cuisines based on their culinary features is an outstanding problem.
We have implemented a range of classification techniques by accounting for this information on the RecipeDB dataset.
The state-of-the-art RoBERTa model achieved the highest accuracy, 73.30%, among a range of classification models.
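A hedged sketch of cuisine classification with a RoBERTa encoder via Hugging Face Transformers; the checkpoint and label set are assumptions, and the classification head must be fine-tuned on RecipeDB-style data to approach the reported accuracy:
```python
# Cuisine classification with a RoBERTa encoder; toy label set and an
# untrained classification head, for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CUISINES = ["Indian", "Italian", "Mexican", "Japanese"]          # toy label set
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(CUISINES)
)

recipe = "Saute cumin seeds, add onions, turmeric, and lentils; simmer until soft."
inputs = tokenizer(recipe, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(CUISINES[logits.argmax(dim=-1).item()])    # meaningful only after fine-tuning
```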
arXiv Detail & Related papers (2020-04-26T05:40:36Z)
- Cross-Modal Food Retrieval: Learning a Joint Embedding of Food Images and Recipes with Semantic Consistency and Attention Mechanism [70.85894675131624]
We learn an embedding of images and recipes in a common feature space, such that the corresponding image-recipe embeddings lie close to one another.
We propose Semantic-Consistent and Attention-based Networks (SCAN), which regularize the embeddings of the two modalities through aligning output semantic probabilities.
We show that we can outperform several state-of-the-art cross-modal retrieval strategies for food images and cooking recipes by a significant margin.
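A simplified joint-embedding sketch that projects image and recipe features into one space and pulls matching pairs together; the encoder shapes and the in-batch triplet loss are assumptions, not SCAN's exact design:
```python
# Simplified cross-modal joint embedding: project image and recipe features
# into a shared space and train matched pairs to lie close together.
import torch
import torch.nn as nn
import torch.nn.functional as F

img_proj = nn.Linear(2048, 512)    # image features -> shared space
txt_proj = nn.Linear(768, 512)     # recipe text features -> shared space

img_feat = torch.randn(8, 2048)    # fake image encoder outputs
txt_feat = torch.randn(8, 768)     # fake recipe encoder outputs

img_emb = F.normalize(img_proj(img_feat), dim=-1)
txt_emb = F.normalize(txt_proj(txt_feat), dim=-1)

# In-batch triplet loss: each image's matching recipe is the positive,
# a shifted recipe in the batch serves as the negative.
pos = (img_emb * txt_emb).sum(dim=-1)
neg = (img_emb * txt_emb.roll(1, dims=0)).sum(dim=-1)
loss = F.relu(0.2 + neg - pos).mean()
print(loss.item())
```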
arXiv Detail & Related papers (2020-03-09T07:41:17Z)