Towards Label-Efficient Incremental Learning: A Survey
- URL: http://arxiv.org/abs/2302.00353v2
- Date: Thu, 2 Feb 2023 08:57:01 GMT
- Title: Towards Label-Efficient Incremental Learning: A Survey
- Authors: Mert Kilickaya, Joost van de Weijer and Yuki M. Asano
- Abstract summary: We study incremental learning, where a learner is required to adapt to an incoming stream of data with a varying distribution.
We identify three subdivisions, namely semi-, few-shot- and self-supervised learning to reduce labeling efforts.
- Score: 42.603603392991715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current dominant paradigm when building a machine learning model is to
iterate over a dataset over and over until convergence. Such an approach is
non-incremental, as it assumes access to all images of all categories at once.
However, for many applications, non-incremental learning is unrealistic. To
address this, researchers study incremental learning, where a learner is required
to adapt to an incoming stream of data with a varying distribution while
preventing forgetting of past knowledge. Significant progress has been made;
however, the vast majority of works focus on the fully supervised setting,
making these algorithms label-hungry and thus limiting their real-life deployment.
In this paper, we make the first attempt to survey the recently
growing interest in label-efficient incremental learning. We identify three
subdivisions, namely semi-, few-shot- and self-supervised learning to reduce
labeling efforts. Finally, we identify novel directions that can further
enhance label-efficiency and improve incremental learning scalability. Project
website: https://github.com/kilickaya/label-efficient-il.
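To make the semi-supervised branch of the taxonomy above concrete, below is a minimal illustrative sketch of one incremental training step that combines a handful of labels, confidence-thresholded pseudo-labels on unlabeled stream data, and a small replay buffer against forgetting. It is a generic example rather than any specific surveyed method; the function name, threshold, and loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def semi_supervised_incremental_step(model, optimizer, labeled, unlabeled,
                                     replay, conf_threshold=0.95, replay_weight=1.0):
    """One illustrative semi-supervised incremental update (hypothetical names).

    labeled:   (x_l, y_l) few labeled examples from the current task
    unlabeled: x_u        unlabeled data drawn from the same incoming stream
    replay:    (x_m, y_m) small exemplar memory of past tasks (anti-forgetting)
    """
    x_l, y_l = labeled
    x_m, y_m = replay
    x_u = unlabeled

    # Pseudo-label the unlabeled stream, keeping only confident predictions.
    with torch.no_grad():
        probs = F.softmax(model(x_u), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > conf_threshold

    # Supervised loss on the few available labels.
    loss = F.cross_entropy(model(x_l), y_l)
    # Add the pseudo-labeled term when any prediction passes the threshold.
    if keep.any():
        loss = loss + F.cross_entropy(model(x_u[keep]), pseudo[keep])
    # Replay past exemplars to mitigate forgetting.
    loss = loss + replay_weight * F.cross_entropy(model(x_m), y_m)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same skeleton could plausibly be adapted to the few-shot or self-supervised subdivisions by replacing the pseudo-label term with, for example, a prototype-based or contrastive objective.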
Related papers
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by one-hot labels hampers effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
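As a rough illustration of that idea (not the paper's exact recipe), the sketch below trains an image model against frozen per-class embeddings produced offline by a pretrained language model; the projection head, cosine-similarity logits, and temperature are assumptions added for the example.

```python
import torch.nn as nn
import torch.nn.functional as F

class LanguageGuidedHead(nn.Module):
    """Projects image features onto frozen PLM class embeddings (illustrative).

    `class_embeddings` is a (num_classes, dim) tensor produced offline by a
    pretrained language model run on the class names; it is registered as a
    buffer so it stays frozen during continual training.
    """
    def __init__(self, feat_dim, class_embeddings, temperature=0.07):
        super().__init__()
        self.proj = nn.Linear(feat_dim, class_embeddings.shape[1])
        self.register_buffer("targets", F.normalize(class_embeddings, dim=1))
        self.temperature = temperature

    def forward(self, image_features, labels):
        z = F.normalize(self.proj(image_features), dim=1)
        # Cosine similarity to every frozen class target, scaled by temperature.
        logits = z @ self.targets.t() / self.temperature
        # The frozen semantic targets replace one-hot supervision as the signal.
        return F.cross_entropy(logits, labels)
```

Because related classes receive nearby embeddings, such targets carry more semantic structure than one-hot labels, which matches the motivation stated in the summary above.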
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- COOLer: Class-Incremental Learning for Appearance-Based Multiple Object Tracking [32.47215340215641]
This paper extends the scope of continual learning research to class-incremental learning for multiple object tracking (MOT).
Previous solutions for continual learning of object detectors do not address the data association stage of appearance-based trackers.
We introduce COOLer, a COntrastive- and cOntinual-Learning-based tracker, which incrementally learns to track new categories while preserving past knowledge.
arXiv Detail & Related papers (2023-10-04T17:49:48Z)
- A Survey of Label-Efficient Deep Learning for 3D Point Clouds [109.07889215814589]
This paper presents the first comprehensive survey of label-efficient learning of point clouds.
We propose a taxonomy that organizes label-efficient learning methods based on the data prerequisites provided by different types of labels.
For each approach, we outline the problem setup and provide an extensive literature review that showcases relevant progress and challenges.
arXiv Detail & Related papers (2023-05-31T12:54:51Z)
- Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach [95.74102207187545]
We show that a prior-free autonomous data augmentation's objective can be derived from a representation learning principle.
We then propose a practical surrogate to the objective that can be efficiently optimized and integrated seamlessly into existing methods.
arXiv Detail & Related papers (2022-11-02T02:02:51Z)
- From Weakly Supervised Learning to Active Learning [1.52292571922932]
This thesis is motivated by the question: can we derive a more generic framework than that of supervised learning?
We model weak supervision as giving, rather than a unique target, a set of target candidates.
We argue that one should look for an "optimistic" function that matches most of the observations. This allows us to derive a principle to disambiguate partial labels.
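As a concrete (and simplified) reading of this disambiguation principle, the snippet below treats weak supervision as a set of candidate labels per example and trains against the candidate the current model already scores highest, i.e. the optimistic match; the function name and masking scheme are assumptions.

```python
import torch.nn.functional as F

def optimistic_partial_label_loss(logits, candidate_mask):
    """Disambiguate partial labels optimistically (illustrative sketch).

    logits:         (batch, num_classes) model scores
    candidate_mask: (batch, num_classes) bool, True where a class is a candidate
    """
    # Restrict attention to the candidate set for each example.
    masked = logits.masked_fill(~candidate_mask, float("-inf"))
    # The "optimistic" choice: the candidate the model currently matches best.
    optimistic_target = masked.argmax(dim=1)
    # Train as if that candidate were the true label.
    return F.cross_entropy(logits, optimistic_target)
```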
arXiv Detail & Related papers (2022-09-23T14:55:43Z)
- Learning to Predict Gradients for Semi-Supervised Continual Learning [36.715712711431856]
A key challenge for machine intelligence is to learn new visual concepts without forgetting previously acquired knowledge.
There is a gap between existing supervised continual learning and human-like intelligence, where humans are able to learn from both labeled and unlabeled data.
We formulate a new semi-supervised continual learning method, which can be generically applied to existing continual learning models.
arXiv Detail & Related papers (2022-01-23T06:45:47Z)
- Understanding the World Through Action [91.3755431537592]
I will argue that a general, principled, and powerful framework for utilizing unlabeled data can be derived from reinforcement learning.
I will discuss how such a procedure is more closely aligned with potential downstream tasks.
arXiv Detail & Related papers (2021-10-24T22:33:52Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)