Online Boosting Adaptive Learning under Concept Drift for Multistream
Classification
- URL: http://arxiv.org/abs/2312.10841v2
- Date: Mon, 1 Jan 2024 09:39:03 GMT
- Title: Online Boosting Adaptive Learning under Concept Drift for Multistream
Classification
- Authors: En Yu, Jie Lu, Bin Zhang, Guangquan Zhang
- Abstract summary: Multistream classification poses significant challenges due to the necessity for rapid adaptation in dynamic streaming processes with concept drift.
We propose a novel Online Boosting Adaptive Learning (OBAL) method that adaptively learns the dynamic correlation among different streams.
- Score: 34.64751041290346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multistream classification poses significant challenges due to the necessity
for rapid adaptation in dynamic streaming processes with concept drift. Despite
the growing research outcomes in this area, there has been a notable oversight
regarding the temporal dynamic relationships between these streams, leading to
the issue of negative transfer arising from irrelevant data. In this paper, we
propose a novel Online Boosting Adaptive Learning (OBAL) method that
effectively addresses this limitation by adaptively learning the dynamic
correlation among different streams. Specifically, OBAL operates in two
phases. In the first phase, we design an Adaptive COvariate Shift Adaptation
(AdaCOSA) algorithm that constructs an initial ensemble model from archived
data of the various source streams, mitigating covariate shift while learning
the dynamic correlations via an adaptive re-weighting strategy. In the second,
online phase, we employ a Gaussian Mixture Model-based weighting mechanism,
seamlessly integrated with the correlations acquired by AdaCOSA, to handle
asynchronous drift. This
approach significantly improves the predictive performance and stability of the
target stream. We conduct comprehensive experiments on several synthetic and
real-world data streams, encompassing various drifting scenarios and types. The
results clearly demonstrate that OBAL achieves remarkable advancements in
addressing multistream classification problems by effectively leveraging
positive knowledge derived from multiple sources.
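The abstract gives enough structure for a toy rendering of the two phases. The sketch below is a minimal illustration, not the authors' implementation: the AdaCOSA covariate-shift correction is approximated by a simple accuracy-driven re-weighting against an archived target batch, the GMM mechanism is reduced to per-source density scoring, and all names (e.g. `OBALSketch`, `fit_offline`, `predict_online`) are our own.

```python
# Minimal sketch of the dual-phase idea, NOT the paper's implementation.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.mixture import GaussianMixture

class OBALSketch:
    def __init__(self, n_rounds=5, alpha=0.5):
        self.n_rounds = n_rounds  # re-weighting iterations (stand-in for AdaCOSA)
        self.alpha = alpha        # blend between offline and online weights

    def fit_offline(self, sources, X_tgt, y_tgt):
        """Phase 1: build an ensemble from archived source data and learn
        source-target correlation weights by iterative re-weighting."""
        self.models, self.gmms, w = [], [], []
        for X_src, y_src in sources:
            self.models.append(SGDClassifier(loss="log_loss").fit(X_src, y_src))
            # One density model per source, used later for online weighting.
            self.gmms.append(GaussianMixture(n_components=3).fit(X_src))
            w.append(1.0)
        w = np.array(w)
        for _ in range(self.n_rounds):
            # Re-weight each source by agreement with the archived target
            # batch: a crude proxy for covariate-shift-corrected relevance.
            acc = np.array([m.score(X_tgt, y_tgt) for m in self.models])
            w *= np.exp(acc)
            w /= w.sum()
        self.corr_w = w
        return self

    def predict_online(self, x):
        """Phase 2: blend learned correlations with per-source GMM
        likelihoods so sources still matching x keep influence under drift."""
        x = x.reshape(1, -1)
        dens = np.array([g.score_samples(x)[0] for g in self.gmms])
        dens = np.exp(dens - dens.max())  # numerically stable likelihoods
        dens /= dens.sum()
        w = self.alpha * self.corr_w + (1 - self.alpha) * dens
        votes = np.array([m.predict(x)[0] for m in self.models])
        # Weighted vote over the source ensemble.
        return max(set(votes), key=lambda c: w[votes == c].sum())
```

Note that real asynchronous-drift handling (detection and model updates during the online phase) is omitted here for brevity.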
Related papers
- Online Relational Inference for Evolving Multi-agent Interacting Systems [14.275434303742328]
Online Relational Inference (ORI) is designed to efficiently identify hidden interaction graphs in evolving multi-agent interacting systems.
Unlike traditional offline methods that rely on a fixed training set, ORI employs online backpropagation, updating the model with each new data point.
A key innovation is the use of an adjacency matrix as a trainable parameter, optimized through a new adaptive learning technique called AdaRelation.
arXiv Detail & Related papers (2024-11-03T05:43:55Z)
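The trainable-adjacency idea admits a very small illustration. The snippet below is a hedged sketch assuming a linear interaction model, with a plain SGD step standing in for AdaRelation (which the summary names but does not specify); `online_adjacency_update` is our own function, not the paper's API.

```python
# Illustrative only: an adjacency matrix treated as a trainable parameter
# and updated by online backpropagation with each new observation.
import numpy as np

def online_adjacency_update(A, x_t, x_next, lr=1e-2):
    """One online step: predict next agent states as A @ x_t and push the
    squared prediction error back into the adjacency matrix A."""
    pred = A @ x_t
    grad = np.outer(pred - x_next, x_t)  # d(0.5*||A x - y||^2)/dA
    return A - lr * grad

# Toy usage on a stream of agent-state vectors.
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(4, 4))   # current estimate of the hidden graph
for _ in range(100):
    x_t = rng.normal(size=4)
    x_next = x_t[::-1].copy()            # stand-in interaction dynamics
    A = online_adjacency_update(A, x_t, x_next)
```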
- AiGAS-dEVL: An Adaptive Incremental Neural Gas Model for Drifting Data Streams under Extreme Verification Latency [6.7236795813629]
In streaming setups, data flows are affected by factors that yield non-stationarities in their patterns (concept drift).
We propose a novel approach, AiGAS-dEVL, which relies on growing neural gas to characterize the distributions of all concepts detected within the stream over time.
Our approach shows that analyzing the behavior of these points online over time facilitates characterizing how concepts evolve in the feature space.
arXiv Detail & Related papers (2024-07-07T14:04:57Z)
- Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via Optimization Trajectory Distillation [73.83178465971552]
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has emerged as a promising approach to alleviate the burden of labeled data collection.
We propose optimization trajectory distillation, a unified approach to address the two technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Continual Learning with Optimal Transport based Mixture Model [17.398605698033656]
We propose an online mixture model learning approach that builds on well-established properties of optimal transport theory (OT-MM).
Our proposed method can significantly outperform the current state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-30T06:40:29Z)
- Bilevel Online Deep Learning in Non-stationary Environment [4.565872584112864]
The Bilevel Online Deep Learning (BODL) framework combines a bilevel optimization strategy with an online ensemble classifier.
When concept drift is detected, the BODL algorithm adaptively updates the model parameters via bilevel optimization, thereby circumventing large drift and encouraging positive transfer.
arXiv Detail & Related papers (2022-01-25T11:05:51Z)
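The BODL summary above suggests a drift-triggered update loop. The skeleton below is a strongly simplified sketch: a windowed error-rate threshold stands in for the drift detector, and a plain refit replaces the paper's bilevel optimization over ensemble and model parameters; all names are ours.

```python
# Schematic drift-triggered loop, not BODL's actual bilevel procedure.
import numpy as np
from sklearn.linear_model import SGDClassifier

def bodl_like_loop(stream, threshold=0.35, window=50):
    """`stream` yields (X_batch, y_batch) pairs."""
    clf = SGDClassifier(loss="log_loss")
    X0, y0 = next(stream)
    clf.fit(X0, y0)
    errors = []
    for X, y in stream:
        errors.append(1.0 - clf.score(X, y))   # batch error before updating
        if np.mean(errors[-window:]) > threshold:
            # Drift detected: reset aggressively (the paper instead updates
            # parameters via bilevel optimization to encourage positive transfer).
            clf = SGDClassifier(loss="log_loss").fit(X, y)
            errors.clear()
        else:
            clf.partial_fit(X, y)              # routine online update
    return clf
```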
- Contrastively Disentangled Sequential Variational Autoencoder [20.75922928324671]
We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE).
We use a novel evidence lower bound which maximizes the mutual information between the input and the latent factors, while penalizing the mutual information between the static and dynamic factors.
Our experiments show that C-DSVAE significantly outperforms the previous state-of-the-art methods on multiple metrics.
arXiv Detail & Related papers (2021-10-22T23:00:32Z)
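Read literally, the C-DSVAE objective described above combines a sequential ELBO with two mutual-information terms. Schematically, in our notation (the weights α, β and the estimators are not specified by the summary):

$$\mathcal{L}(x) \;=\; \mathrm{ELBO}(x) \;+\; \alpha\, I(x;\, z) \;-\; \beta\, I(z_s;\, z_d), \qquad z = (z_s, z_d),$$

where $z_s$ and $z_d$ are the static and dynamic latent factors, presumably with the $I(\cdot;\cdot)$ terms estimated contrastively, per the method's name.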
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques that model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
- Adaptive Gradient Method with Resilience and Momentum [120.83046824742455]
We propose an Adaptive Gradient Method with Resilience and Momentum (AdaRem).
AdaRem adjusts the parameter-wise learning rate according to whether the direction of a parameter's past changes is aligned with the direction of the current gradient.
Our method outperforms previous adaptive learning-rate-based algorithms in terms of training speed and test error.
arXiv Detail & Related papers (2020-10-21T14:49:00Z)
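The rule described in the AdaRem entry translates naturally into a per-parameter step-size modulation. The sketch below is our minimal reading: it tracks an exponential moving average of past updates and compares its sign with the current gradient; the paper's exact update rule may differ.

```python
# Hedged sketch of a sign-alignment rule in the spirit of AdaRem.
import numpy as np

def adarem_like_step(theta, grad, m, lr=0.1, beta=0.9, gamma=0.5):
    """m is an EMA of past parameter updates; agreement between sign(m) and
    the current descent direction scales each parameter's effective step."""
    m = beta * m + (1 - beta) * (-grad)   # EMA of past update directions
    agree = np.sign(m) * np.sign(-grad)   # +1 aligned, -1 opposed, per parameter
    scale = 1.0 + gamma * agree           # boost aligned, damp opposed parameters
    theta = theta - lr * scale * grad
    return theta, m
```

In effect, parameters whose updates keep pointing the same way take larger steps, while oscillating parameters are damped.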
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stable and effective training and provably avoids vanishing and exploding gradients.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
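A standard way to realize "a matrix flow on the group O(d)" is to drive the parameter matrix with a skew-symmetric generator. Schematically, in our notation (a plausible reading of the summary, not necessarily the paper's exact equations):

$$\dot{x}(t) = f\big(W(t)\,x(t)\big), \qquad \dot{W}(t) = W(t)\, b\big(W(t)\big), \qquad b(\cdot)^\top = -\,b(\cdot),$$

so that $W(0) \in O(d)$ implies $W(t) \in O(d)$ for all $t$. Keeping $W(t)$ orthogonal keeps the flow's Jacobian well conditioned, which is the mechanism behind the vanishing/exploding-gradient claim.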
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
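In its simplest form, the setup described in the entry above is partial-participation federated averaging. The toy sketch below uses a linear least-squares model and our own parameter choices to illustrate a random subset of agents performing local updates each round.

```python
# Toy partial-participation federated averaging, illustrative only.
import numpy as np

def dynamic_fl(agents_data, rounds=100, frac=0.3, lr=0.1, seed=0):
    """`agents_data` is a list of (X_i, y_i) arrays, one pair per agent."""
    rng = np.random.default_rng(seed)
    d = agents_data[0][0].shape[1]
    w = np.zeros(d)                             # shared linear model
    for _ in range(rounds):
        k = max(1, int(frac * len(agents_data)))
        idx = rng.choice(len(agents_data), size=k, replace=False)
        updates = []
        for i in idx:
            X, y = agents_data[i]
            grad = X.T @ (X @ w - y) / len(y)   # local least-squares gradient
            updates.append(w - lr * grad)       # one local step per sampled agent
        w = np.mean(updates, axis=0)            # server-side aggregation
    return w
```

The learning rate `lr` here plays the role highlighted by the tracking term above: smaller steps average out noise better but track a drifting minimizer more slowly.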
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.