RecipeMind: Guiding Ingredient Choices from Food Pairing to Recipe Completion using Cascaded Set Transformer
- URL: http://arxiv.org/abs/2210.10628v1
- Date: Fri, 14 Oct 2022 06:35:49 GMT
- Title: RecipeMind: Guiding Ingredient Choices from Food Pairing to Recipe Completion using Cascaded Set Transformer
- Authors: Mogan Gim, Donghee Choi, Kana Maruyama, Jihun Choi, Hajung Kim, Donghyeon Park and Jaewoo Kang
- Abstract summary: RecipeMind is a food affinity score prediction model that quantifies the suitability of adding an ingredient to a set of other ingredients.
We constructed a large-scale dataset containing ingredient co-occurrence-based scores to train and evaluate RecipeMind on food affinity score prediction.
- Score: 15.170251924099807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a computational approach for recipe ideation, a downstream task
that helps users select and gather ingredients for creating dishes. To perform
this task, we developed RecipeMind, a food affinity score prediction model that
quantifies the suitability of adding an ingredient to a set of other ingredients.
We constructed a large-scale dataset containing ingredient co-occurrence-based
scores to train and evaluate RecipeMind on food affinity score prediction.
Deployed in recipe ideation, RecipeMind helps the user expand an initial set of
ingredients by suggesting additional ingredients. Experiments and qualitative
analysis show RecipeMind's potential in fulfilling its assistive role in
the cuisine domain.
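As a concrete illustration of how co-occurrence-based affinity scores can be derived, the sketch below computes a PMI-style score between a candidate ingredient and an existing ingredient set. The scoring function and toy corpus are assumptions for illustration, not the paper's exact formulation.

```python
from collections import Counter
from math import log

# Toy recipe corpus; the paper builds its scores from a large-scale
# recipe dataset, and its exact affinity function may differ from PMI.
recipes = [
    {"flour", "butter", "sugar", "egg"},
    {"flour", "butter", "milk"},
    {"garlic", "onion", "tomato"},
    {"flour", "sugar", "milk", "egg"},
]

n = len(recipes)
item_counts = Counter(i for r in recipes for i in r)

def cooccurrence(ingredients):
    """Number of recipes containing every ingredient in the given set."""
    return sum(1 for r in recipes if set(ingredients) <= r)

def affinity(candidate, ingredient_set):
    """PMI-style score: log P(set + candidate) / (P(set) * P(candidate))."""
    joint = cooccurrence(ingredient_set | {candidate}) / n
    p_set = cooccurrence(ingredient_set) / n
    p_cand = item_counts[candidate] / n
    if joint == 0:
        return float("-inf")
    return log(joint / (p_set * p_cand))

print(affinity("egg", {"flour", "sugar"}))     # positive: co-occur often
print(affinity("garlic", {"flour", "sugar"}))  # -inf: never co-occur
```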
Related papers
- KitchenScale: Learning to predict ingredient quantities from recipe contexts [13.001618172288198]
KitchenScale is a model that predicts a target ingredient's quantity and measurement unit given its recipe context.
We formulate an ingredient quantity prediction task consisting of three sub-tasks: measurement type classification, unit classification, and quantity regression.
Experiments with our newly constructed dataset and recommendation examples demonstrate KitchenScale's understanding of various recipe contexts.
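A minimal sketch of how those three sub-tasks could share one recipe-context encoder in a multi-task setup; the encoder choice, layer sizes, and head designs here are illustrative assumptions, not KitchenScale's actual architecture.

```python
import torch
import torch.nn as nn

class IngredientQuantityModel(nn.Module):
    """Shared encoder feeding the three stated sub-task heads (sketch)."""
    def __init__(self, hidden=256, n_types=4, n_units=20):
        super().__init__()
        self.encoder = nn.LSTM(input_size=128, hidden_size=hidden,
                               batch_first=True)       # stand-in encoder
        self.type_head = nn.Linear(hidden, n_types)    # measurement type
        self.unit_head = nn.Linear(hidden, n_units)    # measurement unit
        self.quantity_head = nn.Linear(hidden, 1)      # quantity regression

    def forward(self, context_embeddings):
        _, (h, _) = self.encoder(context_embeddings)
        h = h[-1]                                      # final hidden state
        return (self.type_head(h),
                self.unit_head(h),
                self.quantity_head(h).squeeze(-1))

model = IngredientQuantityModel()
ctx = torch.randn(8, 30, 128)        # batch of encoded recipe contexts
type_logits, unit_logits, quantity = model(ctx)
```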
arXiv Detail & Related papers (2023-04-21T04:28:16Z)
- Learning to Substitute Ingredients in Recipes [15.552549060863523]
Recipe personalization through ingredient substitution has the potential to help people meet their dietary needs and preferences, avoid potential allergens, and ease culinary exploration in everyone's kitchen.
We build a benchmark, composed of a dataset of substitution pairs with standardized splits, evaluation metrics, and baselines.
We introduce Graph-based Ingredient Substitution Module (GISMo), a novel model that leverages the context of a recipe as well as generic ingredient relational information encoded within a graph to rank plausible substitutions.
We show through comprehensive experimental validation that GISMo surpasses the best performing baseline by a large margin in terms of mean reciprocal rank.
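For reference, mean reciprocal rank scores each query by the inverse rank of its first correct prediction; a standard implementation (not the paper's code):

```python
def mean_reciprocal_rank(ranked_lists, gold):
    """MRR over queries: average of 1/rank of the first correct answer."""
    total = 0.0
    for preds, answer in zip(ranked_lists, gold):
        rank = next((i + 1 for i, p in enumerate(preds) if p == answer), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(ranked_lists)

# e.g. two substitution queries, gold answers at ranks 1 and 3:
print(mean_reciprocal_rank(
    [["margarine", "oil"], ["honey", "syrup", "agave"]],
    ["margarine", "agave"]))   # (1/1 + 1/3) / 2 ≈ 0.667
```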
arXiv Detail & Related papers (2023-02-15T21:49:23Z)
- Counterfactual Recipe Generation: Exploring Compositional Generalization in a Realistic Scenario [60.20197771545983]
We design the counterfactual recipe generation task, which asks models to modify a base recipe according to the change of an ingredient.
We collect a large-scale recipe dataset in Chinese for models to learn culinary knowledge.
Results show that existing models have difficulties in modifying the ingredients while preserving the original text style, and often miss actions that need to be adjusted.
arXiv Detail & Related papers (2022-10-20T17:21:46Z)
- Cross-lingual Adaptation for Recipe Retrieval with Mixup [56.79360103639741]
Cross-modal recipe retrieval has attracted research attention in recent years, thanks to the availability of large-scale paired data for training.
This paper studies unsupervised domain adaptation for image-to-recipe retrieval, where recipes in source and target domains are in different languages.
A novel recipe mixup method is proposed to learn transferable embedding features between the two domains.
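Mixup generally interpolates training examples with a Beta-distributed weight; a generic embedding-level sketch follows, noting that the paper's recipe mixup is its own formulation and may operate differently.

```python
import numpy as np

# Generic embedding-level mixup, shown only as an illustration of the
# underlying idea; the paper's "recipe mixup" method is its own design.
def mixup_embeddings(source_emb, target_emb, alpha=0.2, rng=None):
    """Interpolate source- and target-domain embeddings with a Beta weight."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * source_emb + (1.0 - lam) * target_emb

src = np.random.randn(16, 512)   # e.g. source-language recipe embeddings
tgt = np.random.randn(16, 512)   # e.g. target-language recipe embeddings
mixed = mixup_embeddings(src, tgt)
```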
arXiv Detail & Related papers (2022-05-08T15:04:39Z)
- Assistive Recipe Editing through Critiquing [34.1050269670062]
RecipeCrit is a hierarchical denoising auto-encoder that edits recipes given ingredient-level critiques.
Our work's main innovation is our unsupervised critiquing module that allows users to edit recipes by interacting with the predicted ingredients.
arXiv Detail & Related papers (2022-05-05T05:52:27Z)
- Learning Structural Representations for Recipe Generation and Food Retrieval [101.97397967958722]
We propose a novel framework of Structure-aware Generation Network (SGN) to tackle the food recipe generation task.
Our proposed model can produce high-quality and coherent recipes, and achieve the state-of-the-art performance on the benchmark Recipe1M dataset.
arXiv Detail & Related papers (2021-10-04T06:36:31Z)
- SHARE: a System for Hierarchical Assistive Recipe Editing [5.508365014509761]
We introduce SHARE: a System for Hierarchical Assistive Recipe Editing to assist home cooks with dietary restrictions.
Our hierarchical recipe editor makes necessary substitutions to a recipe's ingredients list and rewrites the directions to make use of the new ingredients.
We introduce the novel RecipePairs dataset of 84K pairs of similar recipes in which one recipe satisfies one of seven dietary constraints.
arXiv Detail & Related papers (2021-05-17T22:38:07Z)
- Revamping Cross-Modal Recipe Retrieval with Hierarchical Transformers and Self-supervised Learning [17.42688184238741]
Cross-modal recipe retrieval has recently gained substantial attention due to the importance of food in people's lives.
We propose a simplified end-to-end model based on well established and high performing encoders for text and images.
Our proposed method achieves state-of-the-art performance in the cross-modal recipe retrieval task on the Recipe1M dataset.
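One common way to train such paired encoders is a symmetric contrastive objective over matched image-recipe pairs; the InfoNCE-style sketch below is an assumption standing in for the paper's actual training losses.

```python
import torch
import torch.nn.functional as F

# Symmetric contrastive loss: matched image/recipe pairs sit on the
# diagonal of the similarity matrix. A common retrieval objective,
# not necessarily the one used in the paper.
def retrieval_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature      # pairwise similarities
    labels = torch.arange(len(img))           # matching pairs on diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

loss = retrieval_loss(torch.randn(32, 512), torch.randn(32, 512))
```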
arXiv Detail & Related papers (2021-03-24T10:17:09Z)
- Multi-modal Cooking Workflow Construction for Food Recipes [147.4435186953995]
We build MM-ReS, the first large-scale dataset for cooking workflow construction.
We propose a neural encoder-decoder model that utilizes both visual and textual information to construct the cooking workflow.
arXiv Detail & Related papers (2020-08-20T18:31:25Z)
- Decomposing Generation Networks with Structure Prediction for Recipe Generation [142.047662926209]
We propose a novel framework: Decomposing Generation Networks (DGN) with structure prediction.
Specifically, we split each cooking instruction into several phases and assign a different sub-generator to each phase.
Our approach includes two novel ideas: (i) learning the recipe structures with the global structure prediction component and (ii) producing recipe phases in the sub-generator output component based on the predicted structure.
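The decomposition can be pictured as a structure predictor routing each phase to its own sub-generator; the sketch below is schematic, with a made-up phase inventory and trivial stand-in generators rather than DGN's actual components.

```python
# Schematic sketch of the decomposition idea only; phase names and the
# stand-in generators are hypothetical, not DGN's real modules.
PHASES = ["preparation", "cooking", "finishing"]

def predict_structure(recipe_context):
    """Stand-in for the global structure prediction component."""
    return PHASES                       # e.g. a predicted phase sequence

def generate(recipe_context, sub_generators):
    instructions = []
    for phase in predict_structure(recipe_context):
        # each phase is produced by its own sub-generator
        instructions.append(sub_generators[phase](recipe_context, phase))
    return instructions

sub_generators = {p: (lambda ctx, ph: f"<{ph} step for {ctx}>") for p in PHASES}
print(generate("tomato soup", sub_generators))
```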
arXiv Detail & Related papers (2020-07-27T08:47:50Z)
- Cross-Modal Food Retrieval: Learning a Joint Embedding of Food Images and Recipes with Semantic Consistency and Attention Mechanism [70.85894675131624]
We learn an embedding of images and recipes in a common feature space, such that the corresponding image-recipe embeddings lie close to one another.
We propose Semantic-Consistent and Attention-based Networks (SCAN), which regularize the embeddings of the two modalities through aligning output semantic probabilities.
We show that we can outperform several state-of-the-art cross-modal retrieval strategies for food images and cooking recipes by a significant margin.
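The semantic-consistency regularization can be sketched as pulling the two branches' predicted semantic distributions together, e.g. with a symmetric KL term; the exact SCAN loss may differ from this assumption.

```python
import torch
import torch.nn.functional as F

# Symmetric KL between the image and recipe branches' predicted semantic
# (class) distributions; an illustrative stand-in for SCAN's regularizer.
def semantic_consistency(img_logits, txt_logits):
    p_img = F.log_softmax(img_logits, dim=-1)
    p_txt = F.log_softmax(txt_logits, dim=-1)
    kl_it = F.kl_div(p_img, p_txt, log_target=True, reduction="batchmean")
    kl_ti = F.kl_div(p_txt, p_img, log_target=True, reduction="batchmean")
    return (kl_it + kl_ti) / 2

reg = semantic_consistency(torch.randn(32, 1024), torch.randn(32, 1024))
```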
arXiv Detail & Related papers (2020-03-09T07:41:17Z)