Incremental Learning on Food Instance Segmentation
- URL: http://arxiv.org/abs/2306.15910v1
- Date: Wed, 28 Jun 2023 04:17:16 GMT
- Title: Incremental Learning on Food Instance Segmentation
- Authors: Huu-Thanh Nguyen, Yu Cao, Chong-Wah Ngo, Wing-Kwong Chan
- Abstract summary: This paper proposes an incremental learning framework to optimize the model performance given a limited data labelling budget.
The power of the framework is a novel difficulty assessment model, which forecasts how challenging an unlabelled sample is to the latest trained instance segmentation model.
On four large-scale food datasets, our proposed framework outperforms current incremental learning benchmarks and achieves competitive performance with the model trained on fully annotated samples.
- Score: 31.60448022949561
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Food instance segmentation is essential to estimate the serving size of
dishes in a food image. The recent cutting-edge techniques for instance
segmentation are deep learning networks with impressive segmentation quality
and fast computation. Nonetheless, they are data-hungry and expensive to
annotate. This paper proposes an incremental learning framework to optimize
the model performance given a limited data labelling budget. The power of the
framework is a novel difficulty assessment model, which forecasts how
challenging an unlabelled sample is to the latest trained instance segmentation
model. The data collection procedure is divided into several stages, in each of
which a new package of samples is collected. The framework allocates the labelling
budget to the most difficult samples. The unlabelled samples that meet a
certain qualification from the assessment model are used to generate
pseudo-labels. Eventually, the manual labels and pseudo-labels are added to the
training data to improve the instance segmentation model. On four large-scale
food datasets, our proposed framework outperforms current incremental learning
benchmarks and achieves competitive performance with the model trained on fully
annotated samples.
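The staged collection loop described in the abstract can be summarised in a short sketch. The Python fragment below is only an illustrative outline under assumed interfaces (the difficulty scorer, annotator callback, segmenter prediction, budget and threshold are all hypothetical placeholders), not the authors' implementation.

```python
from typing import Any, Callable, Iterable, List, Tuple

def collection_stage(
    images: Iterable[Any],
    difficulty: Callable[[Any], float],    # forecasts how hard a sample is for the current model
    annotate: Callable[[Any], Any],        # manual annotation (spends the labelling budget)
    pseudo_label: Callable[[Any], Any],    # prediction from the latest trained segmenter
    label_budget: int,
    easy_threshold: float,
) -> List[Tuple[Any, Any]]:
    """One data-collection stage: spend the manual budget on the most difficult
    samples and pseudo-label the samples the assessment model rates easy enough."""
    # Rank the new package of unlabelled samples from most to least difficult.
    scored = sorted(((difficulty(img), img) for img in images),
                    key=lambda pair: pair[0], reverse=True)

    # Manual labels for the hardest samples, up to the labelling budget.
    labelled = [(img, annotate(img)) for _, img in scored[:label_budget]]

    # Pseudo-labels for the remaining samples that qualify as easy enough.
    labelled += [(img, pseudo_label(img))
                 for score, img in scored[label_budget:]
                 if score <= easy_threshold]

    return labelled  # merged into the training data for the next stage
```

In each stage, the merged manual and pseudo-labels would then be used to retrain the instance segmentation model (and the difficulty assessment model) before the next package of samples is collected.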
Related papers
- Integrated Image-Text Based on Semi-supervised Learning for Small Sample Instance Segmentation [1.3157419797035321]
The article proposes a novel small-sample instance segmentation solution from the perspective of maximizing the utilization of existing information.
First, it helps the model fully utilize unlabeled data by learning to generate pseudo labels, increasing the number of available samples.
Second, by integrating the features of text and image, more accurate classification results can be obtained.
arXiv Detail & Related papers (2024-10-21T14:44:08Z)
- Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets [51.74296438621836]
We introduce Scribbles for All, a label and training data generation algorithm for semantic segmentation trained on scribble labels.
The main limitation of scribbles as a source of weak supervision is the lack of challenging datasets for scribble segmentation.
Scribbles for All provides scribble labels for several popular segmentation datasets and provides an algorithm to automatically generate scribble labels for any dataset with dense annotations.
arXiv Detail & Related papers (2024-08-22T15:29:08Z)
- Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks [75.42002070547267]
We propose a self-evolution learning (SE) based mixup approach for data augmentation in text classification.
We introduce a novel instance-specific label smoothing approach, which linearly interpolates the model's output and the one-hot labels of the original samples to generate new soft labels for mixup, as sketched below.
arXiv Detail & Related papers (2023-05-22T23:43:23Z)
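A minimal sketch of the interpolation and mixup steps summarised above; the function names, the weighting direction and the value of lam are assumptions for illustration, not the paper's code.

```python
import numpy as np

def instance_soft_label(model_probs: np.ndarray, one_hot: np.ndarray, lam: float) -> np.ndarray:
    # Instance-specific label smoothing: linearly interpolate the model's
    # predicted distribution with the sample's one-hot label (lam is assumed).
    return lam * one_hot + (1.0 - lam) * model_probs

def mixup_pair(x_a: np.ndarray, x_b: np.ndarray,
               y_a: np.ndarray, y_b: np.ndarray, alpha: float = 0.2):
    # Standard mixup: convex combination of two inputs and their (soft) labels.
    lam = np.random.beta(alpha, alpha)
    return lam * x_a + (1.0 - lam) * x_b, lam * y_a + (1.0 - lam) * y_b
```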
- Active Pointly-Supervised Instance Segmentation [106.38955769817747]
We present an economic active learning setting, named active pointly-supervised instance segmentation (APIS).
APIS starts with box-level annotations and iteratively samples a point within the box and asks if it falls on the object.
The model developed with these strategies yields consistent performance gains on the challenging MS-COCO dataset (see the sketch below).
arXiv Detail & Related papers (2022-07-23T11:25:24Z)
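A minimal sketch of the box-then-point query interaction described above; the names, the uniform point sampler and the annotator callback are assumptions for illustration, not the APIS implementation.

```python
import random
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]   # (x1, y1, x2, y2)
Point = Tuple[float, float]

def query_points_in_box(box: Box,
                        on_object: Callable[[Point], bool],  # annotator's yes/no answer
                        n_queries: int) -> List[Tuple[Point, bool]]:
    # Iteratively sample points inside a box-level annotation and record
    # whether each point falls on the object (cheap point-level supervision).
    x1, y1, x2, y2 = box
    answers: List[Tuple[Point, bool]] = []
    for _ in range(n_queries):
        point = (random.uniform(x1, x2), random.uniform(y1, y2))
        answers.append((point, on_object(point)))
    return answers
```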
- Mask-guided sample selection for Semi-Supervised Instance Segmentation [13.091166009687058]
We propose a sample selection approach to decide which samples to annotate for semi-supervised instance segmentation.
Our method consists in first predicting pseudo-masks for the unlabeled pool of samples, together with a score predicting the quality of the mask.
We study which samples are better to annotate given the quality score, and show how our approach outperforms a random selection.
arXiv Detail & Related papers (2020-08-25T14:44:58Z)
- The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation [93.17367076148348]
We investigate the performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset.
We unveil that a major cause is the inaccurate classification of object proposals.
We propose a simple calibration framework to more effectively alleviate classification head bias with a bi-level class balanced sampling approach.
arXiv Detail & Related papers (2020-07-23T12:49:07Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
- Efficient Deep Representation Learning by Adaptive Latent Space Sampling [16.320898678521843]
Supervised deep learning requires a large amount of training samples with annotations, which are expensive and time-consuming to obtain.
We propose a novel training framework which adaptively selects informative samples that are fed to the training process.
arXiv Detail & Related papers (2020-03-19T22:17:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.