Food Recipe Recommendation Based on Ingredients Detection Using Deep Learning
- URL: http://arxiv.org/abs/2203.06721v1
- Date: Sun, 13 Mar 2022 17:42:38 GMT
- Title: Food Recipe Recommendation Based on Ingredients Detection Using Deep Learning
- Authors: Md. Shafaat Jamil Rokon, Md Kishor Morol, Ishra Binte Hasan, A. M. Saif, and Rafid Hussain Khan
- Abstract summary: Knowing which ingredients can be mixed to make a delicious food recipe is essential.
We implemented a model for food ingredients recognition and designed an algorithm for recommending recipes based on recognised ingredients.
We achieved an accuracy of 94 percent.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Food is essential for human survival, and people enjoy tasting many
different kinds of recipes. Frequently, people choose food ingredients without
knowing their names, or pick up ingredients at a grocery store that they do not
recognise. Knowing which ingredients can be combined to make a delicious recipe
is essential, and selecting the right recipe from a list of ingredients is very
difficult for a beginner cook; it can be a problem even for experts.
Recognising ingredients from images is one such challenge: because food
ingredients vary widely in appearance, traditional image-processing approaches
are inaccurate. These problems can be addressed with machine learning and deep
learning. In this paper, we implemented a model for food ingredient recognition
and designed an algorithm for recommending recipes based on the recognised
ingredients. We built a custom dataset of 9,856 images belonging to 32 food
ingredient classes. A Convolutional Neural Network (CNN) was used to identify
the ingredients, and machine learning was used for the recipe recommendation.
We achieved an accuracy of 94 percent.
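The abstract states that a CNN recognises the ingredients and that recipes are then recommended from the recognised set, but it does not specify the recommendation algorithm. A minimal sketch of that second step, assuming recipes are ranked by Jaccard overlap with the detected ingredients (the recipe database and class names below are hypothetical, for illustration only):

```python
def recommend_recipes(detected, recipes, top_k=3):
    """Rank recipes by Jaccard overlap between the detected ingredient set
    and each recipe's ingredient list.  `recipes` maps name -> ingredient set.
    This is an assumed scoring rule; the paper does not publish its algorithm."""
    detected = set(detected)
    scored = []
    for name, ingredients in recipes.items():
        ingredients = set(ingredients)
        union = detected | ingredients
        score = len(detected & ingredients) / len(union) if union else 0.0
        scored.append((score, name))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored[:top_k]]

# Hypothetical recipe database for illustration.
RECIPES = {
    "omelette": {"egg", "onion", "tomato", "salt"},
    "tomato soup": {"tomato", "onion", "garlic", "salt"},
    "fruit salad": {"apple", "banana", "orange"},
}

print(recommend_recipes({"egg", "tomato", "onion"}, RECIPES, top_k=2))
# → ['omelette', 'tomato soup']
```

Any set-similarity measure (Dice, plain intersection count) would slot into the same ranking loop.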
Related papers
- Large Language Models as Sous Chefs: Revising Recipes with GPT-3 [56.7155146252028]
We focus on recipes as an example of complex, diverse, and widely used instructions.
We develop a prompt grounded in the original recipe and ingredients list that breaks recipes down into simpler steps.
We also contribute an Amazon Mechanical Turk task that is carefully designed to reduce fatigue while collecting human judgment of the quality of recipe revisions.
arXiv Detail & Related papers (2023-06-24T14:42:43Z)
- Assorted, Archetypal and Annotated Two Million (3A2M) Cooking Recipes Dataset based on Active Learning [2.40907745415345]
We present a novel dataset of two million culinary recipes labeled in respective categories.
To construct the dataset, we collect the recipes from the RecipeNLG dataset.
There are more than two million recipes in our dataset, each of which is categorized and has a confidence score linked with it.
arXiv Detail & Related papers (2023-03-27T07:53:18Z)
- Counterfactual Recipe Generation: Exploring Compositional Generalization in a Realistic Scenario [60.20197771545983]
We design the counterfactual recipe generation task, which asks models to modify a base recipe according to the change of an ingredient.
We collect a large-scale recipe dataset in Chinese for models to learn culinary knowledge.
Results show that existing models have difficulties in modifying the ingredients while preserving the original text style, and often miss actions that need to be adjusted.
arXiv Detail & Related papers (2022-10-20T17:21:46Z)
- Assistive Recipe Editing through Critiquing [34.1050269670062]
RecipeCrit is a hierarchical denoising auto-encoder that edits recipes given ingredient-level critiques.
Our work's main innovation is our unsupervised critiquing module that allows users to edit recipes by interacting with the predicted ingredients.
arXiv Detail & Related papers (2022-05-05T05:52:27Z)
- SHARE: a System for Hierarchical Assistive Recipe Editing [5.508365014509761]
We introduce SHARE: a System for Hierarchical Assistive Recipe Editing to assist home cooks with dietary restrictions.
Our hierarchical recipe editor makes necessary substitutions to a recipe's ingredients list and re-writes the directions to make use of the new ingredients.
We introduce the novel RecipePairs dataset of 84K pairs of similar recipes in which one recipe satisfies one of seven dietary constraints.
arXiv Detail & Related papers (2021-05-17T22:38:07Z)
- A Large-Scale Benchmark for Food Image Segmentation [62.28029856051079]
We build a new food image dataset FoodSeg103 (and its extension FoodSeg154) containing 9,490 images.
We annotate these images with 154 ingredient classes and each image has an average of 6 ingredient labels and pixel-wise masks.
We propose a multi-modality pre-training approach called ReLeM that explicitly equips a segmentation model with rich and semantic food knowledge.
arXiv Detail & Related papers (2021-05-12T03:00:07Z)
- Revamping Cross-Modal Recipe Retrieval with Hierarchical Transformers and Self-supervised Learning [17.42688184238741]
Cross-modal recipe retrieval has recently gained substantial attention due to the importance of food in people's lives.
We propose a simplified end-to-end model based on well established and high performing encoders for text and images.
Our proposed method achieves state-of-the-art performance in the cross-modal recipe retrieval task on the Recipe1M dataset.
arXiv Detail & Related papers (2021-03-24T10:17:09Z)
- Structure-Aware Generation Network for Recipe Generation from Images [142.047662926209]
We investigate an open research task of generating cooking instructions based on only food images and ingredients.
Target recipes are long-length paragraphs and do not have annotations on structure information.
We propose a novel framework of Structure-aware Generation Network (SGN) to tackle the food recipe generation task.
arXiv Detail & Related papers (2020-09-02T10:54:25Z)
- Multi-modal Cooking Workflow Construction for Food Recipes [147.4435186953995]
We build MM-ReS, the first large-scale dataset for cooking workflow construction.
We propose a neural encoder-decoder model that utilizes both visual and textual information to construct the cooking workflow.
arXiv Detail & Related papers (2020-08-20T18:31:25Z)
- A Named Entity Based Approach to Model Recipes [9.18959130745234]
We propose a structure that can accurately represent the recipe as well as a pipeline to infer the best representation of the recipe in this uniform structure.
The ingredients section of a recipe typically lists the required ingredients and corresponding attributes such as quantity, temperature, and processing state.
The instruction section lists a series of events in which a cooking technique or process is applied to these utensils and ingredients.
arXiv Detail & Related papers (2020-04-25T16:37:26Z)
- Cross-Modal Food Retrieval: Learning a Joint Embedding of Food Images and Recipes with Semantic Consistency and Attention Mechanism [70.85894675131624]
We learn an embedding of images and recipes in a common feature space, such that the corresponding image-recipe embeddings lie close to one another.
We propose Semantic-Consistent and Attention-based Networks (SCAN), which regularize the embeddings of the two modalities through aligning output semantic probabilities.
We show that we can outperform several state-of-the-art cross-modal retrieval strategies for food images and cooking recipes by a significant margin.
arXiv Detail & Related papers (2020-03-09T07:41:17Z)
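Several of the retrieval papers above (SCAN, the hierarchical-transformer retriever) embed images and recipes into a common feature space so that matching pairs lie close together; retrieval then reduces to ranking recipe embeddings by similarity to an image embedding. A minimal sketch of that ranking step, assuming the embeddings are already computed (the vectors and recipe names below are hypothetical):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(image_vec, recipe_vecs):
    """Rank recipe embeddings by cosine similarity to an image embedding."""
    ranked = sorted(recipe_vecs.items(),
                    key=lambda kv: cosine(image_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked]

# Hypothetical 3-d embeddings for illustration; a real system would use
# encoder outputs of a few hundred dimensions.
recipes = {
    "pasta": [0.9, 0.1, 0.0],
    "salad": [0.0, 1.0, 0.2],
    "cake":  [0.2, 0.2, 0.9],
}
print(retrieve([1.0, 0.0, 0.1], recipes))
# → ['pasta', 'cake', 'salad']
```

The cited methods differ in how the encoders are trained (semantic-consistency regularisation, self-supervision), not in this ranking step itself.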
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.