PizzaCommonSense: Learning to Model Commonsense Reasoning about Intermediate Steps in Cooking Recipes
- URL: http://arxiv.org/abs/2401.06930v2
- Date: Thu, 10 Oct 2024 11:03:20 GMT
- Title: PizzaCommonSense: Learning to Model Commonsense Reasoning about Intermediate Steps in Cooking Recipes
- Authors: Aissatou Diallo, Antonis Bikakis, Luke Dickens, Anthony Hunter, Rob Miller
- Abstract summary: A model to effectively reason about cooking recipes must accurately discern and understand the inputs and outputs of intermediate steps within the recipe.
We present a new corpus of cooking recipes enriched with descriptions of intermediate steps that describe the input and output for each step.
- Abstract: Understanding procedural texts, such as cooking recipes, is essential for enabling machines to follow instructions and reason about tasks, a key aspect of intelligent reasoning. In cooking, these instructions can be interpreted as a series of modifications to a food preparation. For a model to effectively reason about cooking recipes, it must accurately discern and understand the inputs and outputs of intermediate steps within the recipe. We present a new corpus of cooking recipes enriched with descriptions of intermediate steps that describe the input and output for each step. PizzaCommonsense serves as a benchmark for the reasoning capabilities of LLMs because it demands rigorous explicit input-output descriptions to demonstrate the acquisition of implicit commonsense knowledge, which is unlikely to be easily memorized. GPT-4 achieves only 26% preference in human evaluation of its generations, leaving substantial room for improvement.
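To make the task format concrete, here is a minimal Python sketch of how step-level input-output annotations of this kind could be represented and turned into a query for a model. The field names, example steps, and prompt wording are illustrative assumptions, not the released PizzaCommonsense format.

```python
from dataclasses import dataclass

@dataclass
class RecipeStep:
    """One recipe instruction with explicit input and output comestibles."""
    instruction: str  # e.g. "knead the dough for 10 minutes"
    inp: str          # comestible(s) the step consumes
    out: str          # comestible(s) the step produces

# Illustrative annotations in the spirit of the corpus (not actual data):
steps = [
    RecipeStep("mix flour, water and yeast", "flour; water; yeast", "shaggy dough"),
    RecipeStep("knead for 10 minutes", "shaggy dough", "smooth elastic dough"),
]

def make_query(steps, target):
    """Build a prompt asking a model to state one step's input and output."""
    context = "\n".join(
        f"{i + 1}. {s.instruction} | in: {s.inp} | out: {s.out}"
        for i, s in enumerate(steps) if i != target
    )
    return (f"Recipe steps with explicit inputs and outputs:\n{context}\n"
            f"For the step '{steps[target].instruction}', "
            f"state its input and its output.")

print(make_query(steps, 1))
```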
Related papers
- Large Language Models as Sous Chefs: Revising Recipes with GPT-3 [56.7155146252028]
We focus on recipes as an example of complex, diverse, and widely used instructions.
We develop a prompt grounded in the original recipe and ingredients list that breaks recipes down into simpler steps.
We also contribute an Amazon Mechanical Turk task that is carefully designed to reduce fatigue while collecting human judgment of the quality of recipe revisions.
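As an illustration of this idea, a prompt grounded in the original recipe and its ingredient list might look like the following sketch; the function, wording, and example are hypothetical, not the paper's actual prompt.

```python
def simplification_prompt(title, ingredients, instructions):
    """Hypothetical prompt grounded in the original recipe and its
    ingredient list, asking a model to break the steps down."""
    ing = "\n".join(f"- {i}" for i in ingredients)
    return (f"Recipe: {title}\nIngredients:\n{ing}\n"
            f"Original instructions:\n{instructions}\n\n"
            "Rewrite the instructions as short numbered steps, each "
            "describing a single action, without adding or removing "
            "ingredients.")

print(simplification_prompt(
    "Margherita pizza",
    ["pizza dough", "tomato sauce", "mozzarella", "fresh basil"],
    "Stretch the dough, top with sauce and cheese, bake, then add basil.",
))
```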
arXiv Detail & Related papers (2023-06-24T14:42:43Z)
- A Graphical Formalism for Commonsense Reasoning with Recipes [3.271550784789976]
We propose a graphical formalization that captures the comestibles of a recipe (ingredients, intermediate food items, and final products) and the actions performed on them.
We then propose formal definitions for comparing recipes, for composing recipes from subrecipes, and for deconstructing recipes into subrecipes.
We also introduce and compare two formal definitions of substitution into recipes, which is required when ingredients are missing, some actions are not possible, or the final product needs to be changed.
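A toy rendering of such a formalization, assuming a labelled directed bipartite graph with comestible and action nodes; the node sets, labels, and the networkx encoding are illustrative, not the paper's definitions.

```python
import networkx as nx

# Comestible nodes feed action nodes, which produce new comestibles.
G = nx.DiGraph()
for c in ["flour", "water", "dough", "flat dough"]:
    G.add_node(c, kind="comestible")
for a in ["mix", "roll out"]:
    G.add_node(a, kind="action")
G.add_edges_from([("flour", "mix"), ("water", "mix"), ("mix", "dough"),
                  ("dough", "roll out"), ("roll out", "flat dough")])

# Deconstructing a subrecipe: keep only the part that yields one comestible.
sub = G.subgraph(nx.ancestors(G, "dough") | {"dough"})
print(sorted(sub.nodes))  # ['dough', 'flour', 'mix', 'water']
```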
arXiv Detail & Related papers (2023-06-15T11:04:30Z)
- KitchenScale: Learning to predict ingredient quantities from recipe contexts [13.001618172288198]
KitchenScale is a model that predicts a target ingredient's quantity and measurement unit given its recipe context.
We formulate ingredient quantity prediction as three sub-tasks: measurement type classification, unit classification, and quantity regression.
Experiments with our newly constructed dataset and recommendation examples demonstrate KitchenScale's understanding of various recipe contexts.
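A minimal PyTorch sketch of that three-headed multi-task formulation; the encoder, dimensions, and label sets are placeholder assumptions rather than the KitchenScale architecture.

```python
import torch
import torch.nn as nn

class QuantityPredictor(nn.Module):
    """Three prediction heads over a shared recipe-context encoder."""
    def __init__(self, emb=128, hidden=256, n_types=3, n_units=20):
        super().__init__()
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.type_head = nn.Linear(hidden, n_types)  # e.g. volume/weight/count
        self.unit_head = nn.Linear(hidden, n_units)  # e.g. cup, gram, tsp, ...
        self.qty_head = nn.Linear(hidden, 1)         # quantity regression

    def forward(self, context):            # context: (batch, tokens, emb)
        _, (h, _) = self.encoder(context)
        h = h[-1]                          # final hidden state per example
        return self.type_head(h), self.unit_head(h), self.qty_head(h)

model = QuantityPredictor()
t, u, q = model(torch.randn(4, 12, 128))  # 4 recipe contexts, 12 tokens each
print(t.shape, u.shape, q.shape)          # (4, 3) (4, 20) (4, 1)
```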
arXiv Detail & Related papers (2023-04-21T04:28:16Z)
- Counterfactual Recipe Generation: Exploring Compositional Generalization in a Realistic Scenario [60.20197771545983]
We design the counterfactual recipe generation task, which asks models to modify a base recipe according to the change of an ingredient.
We collect a large-scale recipe dataset in Chinese for models to learn culinary knowledge.
Results show that existing models have difficulties in modifying the ingredients while preserving the original text style, and often miss actions that need to be adjusted.
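An invented toy instance makes the task concrete; the fields and the English example are illustrative only (the actual dataset is in Chinese).

```python
# Hypothetical instance of the counterfactual recipe generation task.
instance = {
    "base_recipe": ["dice the chicken", "stir-fry with peppers",
                    "season and serve"],
    "edit": {"replace": "chicken", "with": "tofu"},
    # A good rewrite swaps the ingredient AND adjusts dependent actions
    # (pressing tofu dry matters; dicing raw chicken no longer applies):
    "target_recipe": ["cube the tofu and press it dry",
                      "stir-fry with peppers", "season and serve"],
}
print("\n".join(instance["target_recipe"]))
```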
arXiv Detail & Related papers (2022-10-20T17:21:46Z)
- A Rich Recipe Representation as Plan to Support Expressive Multi Modal Queries on Recipe Content and Preparation Process [24.94173789568803]
We discuss the construction of a machine-understandable rich recipe representation (R3).
R3 is infused with additional knowledge such as information about allergens and images of ingredients.
We also present TREAT, a tool for recipe retrieval which uses R3 to perform multi-modal reasoning on the recipe's content.
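A hypothetical sketch of what such a structured representation could look like, and how an allergen-aware query could be answered from it; all field names are assumptions, not the R3 schema.

```python
from dataclasses import dataclass, field

@dataclass
class Ingredient:
    name: str
    allergens: list = field(default_factory=list)
    image_url: str = ""  # R3 links ingredient images; this field is a guess

@dataclass
class Recipe:
    title: str
    ingredients: list
    steps: list  # ordered preparation actions, treated as a plan

pizza = Recipe("margherita",
               [Ingredient("mozzarella", allergens=["milk"]),
                Ingredient("flour", allergens=["gluten"])],
               ["make dough", "top", "bake"])

# A query such as "does this recipe contain milk allergens?" then becomes
# a filter over structured fields rather than free-text matching:
print(any("milk" in i.allergens for i in pizza.ingredients))  # True
```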
arXiv Detail & Related papers (2022-03-31T15:29:38Z)
- Multi-modal Cooking Workflow Construction for Food Recipes [147.4435186953995]
We build MM-ReS, the first large-scale dataset for cooking workflow construction.
We propose a neural encoder-decoder model that utilizes both visual and textual information to construct the cooking workflow.
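A toy PyTorch stand-in for the fusion-and-linking idea: fuse a text and an image embedding per step, then score ordered step pairs as candidate workflow edges. The architecture and dimensions are invented for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class WorkflowEdgeScorer(nn.Module):
    """Fuse per-step text and image embeddings, then score ordered
    step pairs; a high score for (i, j) means step i precedes step j."""
    def __init__(self, txt_dim=128, img_dim=64, hidden=96):
        super().__init__()
        self.fuse = nn.Linear(txt_dim + img_dim, hidden)
        self.score = nn.Bilinear(hidden, hidden, 1)

    def forward(self, txt, img):          # (steps, txt_dim), (steps, img_dim)
        h = torch.tanh(self.fuse(torch.cat([txt, img], dim=-1)))
        n = h.size(0)
        left = h.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1)
        right = h.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1)
        return self.score(left, right).view(n, n)  # dependency logits

scorer = WorkflowEdgeScorer()
print(scorer(torch.randn(5, 128), torch.randn(5, 64)).shape)  # (5, 5)
```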
arXiv Detail & Related papers (2020-08-20T18:31:25Z)
- A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks [48.39191088844315]
In the cooking domain, the web offers many partially-overlapping text and video recipes that describe how to make the same dish.
We use an unsupervised alignment algorithm that learns pairwise alignments between instructions of different recipes for the same dish.
We then use a graph algorithm to derive a joint alignment between multiple text and multiple video recipes for the same dish.
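A generic sketch of the pairwise-alignment step, assuming step-to-step similarity scores are already available: a monotone one-to-one alignment computed by dynamic programming. This is a standard aligner, not the paper's learned algorithm.

```python
import numpy as np

def align(sim, skip=0.0):
    """Monotone one-to-one alignment of two instruction lists from a
    similarity matrix (rows = recipe A steps, cols = recipe B steps)."""
    n, m = sim.shape
    best = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best[i, j] = max(best[i - 1, j - 1] + sim[i - 1, j - 1],
                             best[i - 1, j] + skip,  # leave A's step unmatched
                             best[i, j - 1] + skip)  # leave B's step unmatched
    pairs, i, j = [], n, m
    while i > 0 and j > 0:                # backtrack the best-scoring path
        if best[i, j] == best[i - 1, j - 1] + sim[i - 1, j - 1]:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif best[i, j] == best[i - 1, j] + skip:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.3],
                [0.0, 0.2, 0.7]])
print(align(sim))  # [(0, 0), (1, 1), (2, 2)]
```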
arXiv Detail & Related papers (2020-05-19T17:27:00Z)
- A Benchmark for Structured Procedural Knowledge Extraction from Cooking Videos [126.66212285239624]
We propose a benchmark of structured procedural knowledge extracted from cooking videos.
Our manually annotated open-vocabulary resource includes 356 instructional cooking videos and 15,523 video clip/sentence-level annotations.
arXiv Detail & Related papers (2020-05-02T05:15:20Z)
- Cross-Modal Food Retrieval: Learning a Joint Embedding of Food Images and Recipes with Semantic Consistency and Attention Mechanism [70.85894675131624]
We learn an embedding of images and recipes in a common feature space, such that the corresponding image-recipe embeddings lie close to one another.
We propose Semantic-Consistent and Attention-based Networks (SCAN), which regularize the embeddings of the two modalities through aligning output semantic probabilities.
We show that we can outperform several state-of-the-art cross-modal retrieval strategies for food images and cooking recipes by a significant margin.
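A sketch of such an objective: a bidirectional contrastive retrieval loss plus a KL term aligning the semantic class distributions predicted from each modality. The loss form, temperature, and equal weighting are assumptions in the spirit of the summary, not SCAN's exact formulation.

```python
import torch
import torch.nn.functional as F

def scan_style_loss(img_emb, txt_emb, img_sem_logits, txt_sem_logits, tau=0.07):
    """Bidirectional retrieval loss plus a semantic-consistency term."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / tau                     # pairwise similarities
    labels = torch.arange(img.size(0))
    retrieval = (F.cross_entropy(logits, labels) +
                 F.cross_entropy(logits.t(), labels)) / 2
    # Both modalities should predict similar semantic class distributions
    # for the same recipe.
    consistency = F.kl_div(F.log_softmax(img_sem_logits, dim=-1),
                           F.softmax(txt_sem_logits, dim=-1),
                           reduction="batchmean")
    return retrieval + consistency

loss = scan_style_loss(torch.randn(8, 128), torch.randn(8, 128),
                       torch.randn(8, 10), torch.randn(8, 10))
print(loss.item())
```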
arXiv Detail & Related papers (2020-03-09T07:41:17Z)