Online Continual Learning For Visual Food Classification
- URL: http://arxiv.org/abs/2108.06781v1
- Date: Sun, 15 Aug 2021 17:48:03 GMT
- Title: Online Continual Learning For Visual Food Classification
- Authors: Jiangpeng He and Fengqing Zhu
- Abstract summary: Existing methods require static datasets for training and are not capable of learning from sequentially available new food images.
We introduce a novel clustering based exemplar selection algorithm to store the most representative data belonging to each learned food.
Our results show significant improvements compared with existing state-of-the-art online continual learning methods.
- Score: 7.704949298975352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Food image classification is challenging for real-world applications since
existing methods require static datasets for training and are not capable of
learning from sequentially available new food images. Online continual learning
aims to learn new classes from a data stream by using each new data point only once
without forgetting previously learned knowledge. However, none of the
existing works target food image analysis, which is more difficult to learn
incrementally due to its high intra-class variation combined with the unbalanced and
unpredictable distribution of future food classes. In this paper,
we address these issues by introducing (1) a novel clustering based exemplar
selection algorithm to store the most representative data belonging to each
learned food class for knowledge replay, and (2) an effective online learning
regime that uses balanced training batches along with knowledge distillation on
augmented exemplars to maintain model performance on all learned classes.
Our method is evaluated on a challenging large-scale food image database,
Food-1K, by varying the number of newly added food classes. Our results show
significant improvements compared with existing state-of-the-art online
continual learning methods, showing great potential to achieve lifelong
learning for food image classification in the real world.
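The abstract names the two components without implementation details, so the sketch below is only one plausible reading, not the authors' released code: it assumes exemplars for each learned food class are chosen by running k-means over deep features and keeping the sample nearest each centroid, and that replayed exemplars are regularized with a temperature-scaled distillation loss against the previous model's outputs. All function names and defaults are hypothetical.

```python
# Illustrative sketch only (assumed interfaces, not the paper's code).
import numpy as np
import torch.nn.functional as F
from sklearn.cluster import KMeans


def select_exemplars(features: np.ndarray, budget: int) -> list[int]:
    """Clustering-based exemplar selection for one learned food class.

    features: (N, D) deep features of all stored images of the class.
    Returns indices of the `budget` samples kept for knowledge replay,
    here the sample closest to each k-means centroid (one cluster per slot).
    """
    budget = min(budget, len(features))
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(features)
    dists = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
    selected = []
    for c in range(budget):
        members = np.where(km.labels_ == c)[0]
        if len(members):
            selected.append(int(members[np.argmin(dists[members])]))
    return selected


def replay_loss(new_logits, new_labels, exemplar_logits, teacher_logits,
                temperature: float = 2.0, alpha: float = 0.5):
    """Cross-entropy on incoming samples plus a distillation term that keeps the
    current model's outputs on (augmented) exemplars close to the previous
    model's outputs, so earlier food classes are not forgotten."""
    ce = F.cross_entropy(new_logits, new_labels)
    kd = F.kl_div(
        F.log_softmax(exemplar_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return (1.0 - alpha) * ce + alpha * kd
```

In the regime described above, each training step would draw a batch that balances new-class samples with replayed (augmented) exemplars before applying these two losses.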
Related papers
- Learning to Classify New Foods Incrementally Via Compressed Exemplars [8.277136664415513]
Food image classification systems play a crucial role in health monitoring and diet tracking through image-based dietary assessment techniques.
Existing food recognition systems rely on static datasets characterized by a pre-defined fixed number of food classes.
We introduce the concept of continuously learning a neural compression model to adaptively improve the quality of compressed data.
arXiv Detail & Related papers (2024-04-11T06:55:44Z)
- From Canteen Food to Daily Meals: Generalizing Food Recognition to More Practical Scenarios [92.58097090916166]
We present two new benchmarks, namely DailyFood-172 and DailyFood-16, designed to curate food images from everyday meals.
These two datasets are used to evaluate the transferability of approaches from the well-curated food image domain to the everyday-life food image domain.
arXiv Detail & Related papers (2024-03-12T08:32:23Z)
- Food Image Classification and Segmentation with Attention-based Multiple Instance Learning [51.279800092581844]
The paper presents a weakly supervised methodology for training food image classification and semantic segmentation models.
The proposed methodology is based on a multiple instance learning approach in combination with an attention-based mechanism.
We conduct experiments on two meta-classes within the FoodSeg103 data set to verify the feasibility of the proposed approach.
arXiv Detail & Related papers (2023-08-22T13:59:47Z)
- Long-Tailed Continual Learning For Visual Food Recognition [5.377869029561348]
The distribution of food images in real life is usually long-tailed as a small number of popular food types are consumed more frequently than others.
We propose a novel end-to-end framework for long-tailed continual learning, which effectively addresses catastrophic forgetting.
We also introduce a novel data augmentation technique by integrating class-activation-map (CAM) and CutMix.
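The entry only names the two ingredients, so purely as an illustration of how they might be combined (not the authors' actual method), the hypothetical helper below pastes the most class-discriminative region of a source image, located at the peak of its class activation map, onto a target image and mixes one-hot labels by area:

```python
# Hypothetical CAM-guided CutMix sketch; array shapes and names are assumptions.
import numpy as np


def cam_guided_cutmix(img_a, label_a, img_b, label_b, cam_b, patch_frac=0.4):
    """img_a, img_b: (H, W, C) float arrays of the same shape; label_a, label_b:
    one-hot vectors; cam_b: (H, W) class activation map for img_b."""
    h, w = cam_b.shape
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    # Centre the patch on the CAM peak of the source image, clamped to bounds.
    cy, cx = np.unravel_index(np.argmax(cam_b), cam_b.shape)
    y0 = int(np.clip(cy - ph // 2, 0, h - ph))
    x0 = int(np.clip(cx - pw // 2, 0, w - pw))
    mixed = img_a.copy()
    mixed[y0:y0 + ph, x0:x0 + pw] = img_b[y0:y0 + ph, x0:x0 + pw]
    lam = 1.0 - (ph * pw) / (h * w)  # fraction of img_a that remains
    mixed_label = lam * label_a + (1.0 - lam) * label_b
    return mixed, mixed_label
```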
arXiv Detail & Related papers (2023-07-01T00:55:05Z)
- Transferring Knowledge for Food Image Segmentation using Transformers and Convolutions [65.50975507723827]
Food image segmentation is an important task that has ubiquitous applications, such as estimating the nutritional value of a plate of food.
One challenge is that food items can overlap and mix, making them difficult to distinguish.
Two models are trained and compared: one based on convolutional neural networks and the other on Bidirectional Encoder representation from Image Transformers (BEiT).
The BEiT model outperforms the previous state-of-the-art model by achieving a mean intersection over union of 49.4 on FoodSeg103.
arXiv Detail & Related papers (2023-06-15T15:38:10Z)
- Harnessing the Power of Text-image Contrastive Models for Automatic Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning in the domain of misinformation identification.
Our model shows superior performance in detecting non-matched image-text pairs when training data is insufficient.
arXiv Detail & Related papers (2023-04-19T02:53:59Z)
- Self-Supervised Visual Representation Learning on Food Images [6.602838826255494]
Existing deep learning-based methods learn the visual representation for downstream tasks based on human annotation of each food image.
Most food images in real life are obtained without labels, and data annotation requires plenty of time and human effort.
In this paper, we focus on the implementation and analysis of existing representative self-supervised learning methods on food images.
arXiv Detail & Related papers (2023-03-16T02:31:51Z)
- Online Class-Incremental Learning For Real-World Food Image Classification [8.438092346233054]
Real-world food consumption patterns, shaped by cultural, economic, and personal influences, involve dynamic and evolving data.
Online Class Incremental Learning (OCIL) addresses the challenge of learning continuously from a single-pass data stream.
We present an attachable Dynamic Model Update (DMU) module designed for existing ER methods, which enables the selection of relevant images for model training.
arXiv Detail & Related papers (2023-01-12T19:00:27Z)
- Long-tailed Food Classification [5.874935571318868]
We introduce two new benchmark datasets for long-tailed food classification including Food101-LT and VFN-LT.
We propose a novel 2-Phase framework to address the problem of class imbalance by (1) undersampling the head classes to remove redundant samples while maintaining the learned information through knowledge distillation.
We show the effectiveness of our method by comparing with existing state-of-the-art long-tailed classification methods and show improved performance on both Food101-LT and VFN-LT benchmarks.
arXiv Detail & Related papers (2022-10-26T14:29:30Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
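The test-then-train protocol from the last entry is simple to state concretely; below is a minimal sketch under assumed PyTorch-style model, optimizer, and data-stream interfaces (the names are illustrative, not taken from the paper):

```python
# Minimal test-then-train loop: each batch is evaluated first, then trained on once.
import torch


def run_online_stream(model, optimizer, loss_fn, stream):
    """`stream` yields (inputs, labels) mini-batches exactly once, in order."""
    correct, seen = 0, 0
    for inputs, labels in stream:
        # 1) Evaluate on the incoming batch before the model has seen it.
        model.eval()
        with torch.no_grad():
            preds = model(inputs).argmax(dim=1)
        correct += (preds == labels).sum().item()
        seen += labels.numel()
        # 2) Then use the same batch (only once) for a training update.
        model.train()
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()
    # Online accuracy: every prediction was made before training on that batch.
    return correct / max(seen, 1)
```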