Online Continual Learning on Hierarchical Label Expansion
- URL: http://arxiv.org/abs/2308.14374v1
- Date: Mon, 28 Aug 2023 07:42:26 GMT
- Title: Online Continual Learning on Hierarchical Label Expansion
- Authors: Byung Hyun Lee, Okchul Jung, Jonghyun Choi, Se Young Chun
- Abstract summary: We propose a novel multi-level hierarchical class incremental task configuration with an online learning constraint, called hierarchical label expansion (HLE).
Our configuration allows a network to first learn coarse-grained classes, with data labels continually expanding to more fine-grained classes in various hierarchy depths.
Our experiments demonstrate that our proposed method can effectively use hierarchy on our HLE setup to improve classification accuracy across all levels of hierarchies.
- Score: 28.171890301966616
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Continual learning (CL) enables models to adapt to new tasks and environments
without forgetting previously learned knowledge. While current CL setups have
ignored the relationship between labels in the past task and the new task with
or without small task overlaps, real-world scenarios often involve hierarchical
relationships between old and new tasks, posing another challenge for
traditional CL approaches. To address this challenge, we propose a novel
multi-level hierarchical class incremental task configuration with an online
learning constraint, called hierarchical label expansion (HLE). Our
configuration allows a network to first learn coarse-grained classes, with data
labels continually expanding to more fine-grained classes in various hierarchy
depths. To tackle this new setup, we propose a rehearsal-based method that
utilizes hierarchy-aware pseudo-labeling to incorporate hierarchical class
information. Additionally, we propose a simple yet effective memory management
and sampling strategy that selectively adopts samples of newly encountered
classes. Our experiments demonstrate that our proposed method can effectively
use hierarchy on our HLE setup to improve classification accuracy across all
levels of hierarchies, regardless of depth and class imbalance ratio,
outperforming prior state-of-the-art works by significant margins while also
outperforming them on the conventional disjoint, blurry and i-Blurry CL setups.
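The abstract does not specify how hierarchy-aware pseudo-labeling is implemented; as a rough illustration of the general idea only, the sketch below assigns a buffered sample that carries only a coarse label a pseudo fine-grained label, restricting the choice to that coarse class's children in the hierarchy. The two-level hierarchy, class names, and `pseudo_label` function are all hypothetical, not the paper's API.

```python
# Minimal sketch of hierarchy-aware pseudo-labeling (illustrative only;
# the hierarchy and function names are assumptions, not from the paper).

# Coarse class -> its fine-grained children.
HIERARCHY = {
    "vehicle": ["car", "truck"],
    "animal": ["cat", "dog"],
}
FINE_CLASSES = [f for kids in HIERARCHY.values() for f in kids]

def pseudo_label(coarse_label, fine_scores):
    """Pick the highest-scoring fine class that is consistent with the
    known coarse label, i.e. restrict the argmax over the model's
    fine-grained scores to the coarse class's children."""
    children = HIERARCHY[coarse_label]
    return max(children, key=lambda c: fine_scores[FINE_CLASSES.index(c)])

# A rehearsal-buffer sample labeled only "animal"; the model outputs
# scores over all fine classes in FINE_CLASSES order.
scores = [0.1, 0.2, 0.6, 0.1]  # car, truck, cat, dog
print(pseudo_label("animal", scores))  # -> cat
```

The key design point is that the hierarchy acts as a hard constraint: even if an inconsistent fine class (here, "truck") scored highest overall, the pseudo-label is drawn only from classes compatible with the known coarse label.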
Related papers
- Hierarchical Semantic Tree Anchoring for CLIP-Based Class-Incremental Learning [11.82771798674077]
Class-Incremental Learning (CIL) enables models to learn new classes continually while preserving past knowledge. However, real-world visual and linguistic concepts are inherently hierarchical. We propose HASTEN, which anchors hierarchical information into CIL to reduce catastrophic forgetting.
arXiv Detail & Related papers (2025-11-19T17:14:47Z) - Minimizing Hyperbolic Embedding Distortion with LLM-Guided Hierarchy Restructuring [19.895748346987435]
The quality of hyperbolic embeddings is tightly coupled to the structure of the input hierarchy. This paper investigates whether Large Language Models (LLMs) have the ability to automatically restructure hierarchies to meet these criteria. Experiments on 16 diverse hierarchies show that LLM-restructured hierarchies consistently yield higher-quality hyperbolic embeddings.
arXiv Detail & Related papers (2025-11-16T18:10:20Z) - Teaching Prompts to Coordinate: Hierarchical Layer-Grouped Prompt Tuning for Continual Learning [69.17264556340244]
We propose a novel hierarchical layer-grouped prompt tuning method for continual learning. It improves model stability in two ways: (i) layers in the same group share roughly the same prompts, which are adjusted by position encoding; (ii) a single task-specific root prompt learns to generate sub-prompts for each layer group.
arXiv Detail & Related papers (2025-11-15T08:15:51Z) - A class-driven hierarchical ResNet for classification of multispectral remote sensing images [12.282079123411947]
We present a class-driven hierarchical Residual Neural Network (ResNet) for classifying Time Series (TS) of multispectral images at different semantic class levels. We leverage hierarchy-penalty maps to discourage incoherent hierarchical transitions within the classification. The experimental results, obtained on two tiles of the Amazonian Forest using 12 monthly composites of Sentinel-2 images, demonstrate the effectiveness of the hierarchical approach.
arXiv Detail & Related papers (2025-10-09T10:47:52Z) - Hierarchical Representation Matching for CLIP-based Class-Incremental Learning [80.2317078787969]
Class-Incremental Learning (CIL) aims to endow models with the ability to continuously adapt to evolving data streams. Recent advances in pre-trained vision-language models (e.g., CLIP) provide a powerful foundation for this task. We introduce HiErarchical Representation MAtchiNg (HERMAN) for CLIP-based CIL.
arXiv Detail & Related papers (2025-09-26T17:59:51Z) - Hierarchical Multi-Label Generation with Probabilistic Level-Constraint [3.1427813443719868]
Hierarchical Extreme Multi-Label Classification poses greater difficulties than traditional multi-label classification. We employ a generative framework with Probabilistic Level Constraints (PLC) to generate hierarchical labels within a specific taxonomy. Our approach not only achieves new SOTA performance on the HMG task, but also constrains the model's output far better than previous work.
arXiv Detail & Related papers (2025-04-30T07:56:53Z) - Class-Independent Increment: An Efficient Approach for Multi-label Class-Incremental Learning [49.65841002338575]
This paper focuses on the challenging yet practical multi-label class-incremental learning (MLCIL) problem.
We propose a novel class-independent incremental network (CINet) to extract multiple class-level embeddings for multi-label samples.
It learns and preserves the knowledge of different classes by constructing class-specific tokens.
arXiv Detail & Related papers (2025-03-01T14:40:52Z) - Versatile Incremental Learning: Towards Class and Domain-Agnostic Incremental Learning [16.318126586825734]
Incremental Learning (IL) aims to accumulate knowledge from sequential input tasks.
We consider a more challenging and realistic but under-explored IL scenario, named Versatile Incremental Learning (VIL).
We propose a simple yet effective IL framework, named Incremental with Shift cONtrol (ICON).
arXiv Detail & Related papers (2024-09-17T07:44:28Z) - Low-Rank Mixture-of-Experts for Continual Medical Image Segmentation [18.984447545932706]
The "catastrophic forgetting" problem occurs when a model forgets previously learned features as it is extended to new categories or tasks.
We propose a network that introduces a data-specific Mixture-of-Experts structure to handle new tasks or categories.
We validate our method on both class-level and task-level continual learning challenges.
arXiv Detail & Related papers (2024-06-19T14:19:50Z) - Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
arXiv Detail & Related papers (2024-03-18T17:58:13Z) - Reinforcement Learning with Options and State Representation [105.82346211739433]
This thesis aims to explore the reinforcement learning field and build on existing methods to produce improved ones.
It addresses such goals by decomposing learning tasks in a hierarchical fashion known as Hierarchical Reinforcement Learning.
arXiv Detail & Related papers (2024-03-16T08:30:55Z) - A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring [57.5099555438223]
We study multi-label classification in the continual scenario for the first time.
We propose an efficient approach that has a logarithmic complexity with regard to the number of tasks.
We validate our approach on a real-world multi-label forecasting problem from the packaging industry.
arXiv Detail & Related papers (2022-08-08T15:58:39Z) - Use All The Labels: A Hierarchical Multi-Label Contrastive Learning Framework [75.79736930414715]
We present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes.
We introduce novel hierarchy preserving losses, which jointly apply a hierarchical penalty to the contrastive loss, and enforce the hierarchy constraint.
arXiv Detail & Related papers (2022-04-27T21:41:44Z) - Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT)
arXiv Detail & Related papers (2022-03-31T13:46:41Z) - Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z) - Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning [36.050432925402845]
We present HiDe, a novel hierarchical reinforcement learning architecture that successfully solves long horizon control tasks.
We experimentally show that our method generalizes across unseen test environments and can scale to 3x horizon length compared to both learning and non-learning based methods.
arXiv Detail & Related papers (2020-02-14T10:19:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.