Diverse Instance-Weighting Ensemble based on Region Drift Disagreement
for Concept Drift Adaptation
- URL: http://arxiv.org/abs/2004.05810v1
- Date: Mon, 13 Apr 2020 07:59:25 GMT
- Title: Diverse Instance-Weighting Ensemble based on Region Drift Disagreement
for Concept Drift Adaptation
- Authors: Anjin Liu, Jie Lu, Guangquan Zhang
- Abstract summary: We propose a diversity measurement based on whether the ensemble members agree on the probability of a regional distribution change.
An instance-based ensemble learning algorithm, called the diverse instance weighting ensemble (DiwE), is developed to address concept drift in data stream classification problems.
- Score: 40.77597229122878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Concept drift refers to changes in the distribution of underlying data and is
an inherent property of evolving data streams. Ensemble learning, with dynamic
classifiers, has proved to be an efficient method of handling concept drift.
However, the best way to create and maintain ensemble diversity with evolving
streams is still a challenging problem. In contrast to estimating diversity via
inputs, outputs, or classifier parameters, we propose a diversity measurement
based on whether the ensemble members agree on the probability of a regional
distribution change. In our method, estimations over regional distribution
changes are used as instance weights. Constructing different region sets
through different schemes will lead to different drift estimation results,
thereby creating diversity. The classifiers that disagree the most are selected
to maximize diversity. Accordingly, an instance-based ensemble learning
algorithm, called the diverse instance weighting ensemble (DiwE), is developed
to address concept drift for data stream classification problems. Evaluations
of various synthetic and real-world data stream benchmarks show the
effectiveness and advantages of the proposed algorithm.
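To make the idea concrete, the sketch below is a minimal Python illustration, not the authors' implementation: several region schemes each produce per-instance drift probabilities over a pair of sliding windows, those probabilities serve as instance weights, and the members whose estimates disagree most are kept. The function names (make_region_scheme, regional_drift_weights, select_diverse_members), the rectangle-based region construction, and the density-difference drift score are all assumptions standing in for the paper's regional density-discrepancy estimator.
```python
# A minimal sketch (NOT the authors' implementation): regional drift
# estimates as instance weights, and member disagreement on those
# estimates as the diversity criterion for selecting ensemble members.
import numpy as np
from itertools import combinations

def make_region_scheme(X, n_regions, rng):
    # Hypothetical region construction: random axis-aligned hyper-rectangles.
    # DiwE builds different region sets via different schemes; this is a stand-in.
    lo, hi = X.min(axis=0), X.max(axis=0)
    centers = rng.uniform(lo, hi, size=(n_regions, X.shape[1]))
    radius = 0.25 * (hi - lo)
    return [(c - radius, c + radius) for c in centers]

def regional_drift_weights(X_old, X_new, regions):
    # For each instance in the new window, average a crude drift score
    # (relative change in regional density between windows) over the
    # regions containing it; the result is used as an instance weight.
    w, hits = np.zeros(len(X_new)), np.zeros(len(X_new))
    for lo, hi in regions:
        old_in = np.all((X_old >= lo) & (X_old <= hi), axis=1)
        new_in = np.all((X_new >= lo) & (X_new <= hi), axis=1)
        p_old, p_new = old_in.mean(), new_in.mean()
        drift = abs(p_new - p_old) / max(p_old + p_new, 1e-12)  # in [0, 1]
        w[new_in] += drift
        hits[new_in] += 1
    return np.divide(w, hits, out=np.zeros_like(w), where=hits > 0)

def select_diverse_members(weight_vectors, k):
    # Greedily keep the k members whose per-instance drift estimates
    # disagree the most (mean absolute difference as the disagreement).
    n = len(weight_vectors)
    dis = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        dis[i, j] = dis[j, i] = np.mean(np.abs(weight_vectors[i] - weight_vectors[j]))
    chosen = [int(np.argmax(dis.sum(axis=1)))]
    while len(chosen) < k:
        rest = [m for m in range(n) if m not in chosen]
        chosen.append(max(rest, key=lambda m: dis[m, chosen].sum()))
    return chosen

# Usage sketch: five region schemes over two simulated sliding windows.
rng = np.random.default_rng(0)
X_old = rng.normal(0.0, 1.0, size=(500, 4))
X_new = rng.normal(0.3, 1.0, size=(500, 4))   # mildly drifted window
schemes = [make_region_scheme(X_old, 20, rng) for _ in range(5)]
weights = [regional_drift_weights(X_old, X_new, s) for s in schemes]
print(select_diverse_members(weights, k=3))   # indices of the kept members
```
The greedy selection mirrors the paper's idea of maximizing disagreement among members; the specific drift score and region shapes here are illustrative placeholders only.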
Related papers
- Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift [5.566951183982973]
In this work, we focus on real drift, where the conditional distribution $P(Y|X)$ changes.
We propose FedCCFA, a federated learning framework with classifier clustering and feature alignment.
Our results demonstrate that FedCCFA significantly outperforms existing methods under various concept drift settings.
arXiv Detail & Related papers (2024-10-24T07:04:52Z)
- Proxy Methods for Domain Adaptation [78.03254010884783]
Proxy variables allow adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z)
- Source-free Domain Adaptation Requires Penalized Diversity [60.04618512479438]
Source-free domain adaptation (SFDA) was introduced to address knowledge transfer between different domains in the absence of source data.
In unsupervised SFDA, the diversity is limited to learning a single hypothesis on the source or learning multiple hypotheses with a shared feature extractor.
We propose a novel unsupervised SFDA algorithm that promotes representational diversity through the use of separate feature extractors.
arXiv Detail & Related papers (2023-04-06T00:20:19Z)
- Federated Variational Inference Methods for Structured Latent Variable Models [1.0312968200748118]
Federated learning methods enable model training across distributed data sources without data leaving their original locations.
We present a general and elegant solution based on structured variational inference, widely used in Bayesian machine learning.
We also provide a communication-efficient variant analogous to the canonical FedAvg algorithm.
arXiv Detail & Related papers (2023-02-07T08:35:04Z)
- Identifiable Latent Causal Content for Domain Adaptation under Latent Covariate Shift [82.14087963690561]
Multi-source domain adaptation (MSDA) addresses the challenge of learning a label prediction function for an unlabeled target domain.
We present an intricate causal generative model by introducing latent noises across domains, along with a latent content variable and a latent style variable.
The proposed approach showcases exceptional performance and efficacy on both simulated and real-world datasets.
arXiv Detail & Related papers (2022-08-30T11:25:15Z)
- A Variational Bayesian Approach to Learning Latent Variables for Acoustic Knowledge Transfer [55.20627066525205]
We propose a variational Bayesian (VB) approach to learning distributions of latent variables in deep neural network (DNN) models.
Our proposed VB approach can obtain good improvements on target devices, and consistently outperforms 13 state-of-the-art knowledge transfer algorithms.
arXiv Detail & Related papers (2021-10-16T15:54:01Z)
- TDACNN: Target-domain-free Domain Adaptation Convolutional Neural Network for Drift Compensation in Gas Sensors [6.451060076703026]
This paper proposes a deep learning approach based on a target-domain-free domain adaptation convolutional neural network (TDACNN).
The main concept is that CNNs extract not only the domain-specific features of samples but also the domain-invariant features underlying both the source and target domains.
Experiments on two drift datasets under different settings demonstrate the superiority of TDACNN over several state-of-the-art methods.
arXiv Detail & Related papers (2021-10-14T16:30:17Z)
- Semantic Change Detection with Asymmetric Siamese Networks [71.28665116793138]
Given two aerial images, semantic change detection aims to locate the land-cover variations and identify their change types with pixel-wise boundaries.
This problem is vital in many earth vision related tasks, such as precise urban planning and natural resource management.
We present an asymmetric siamese network (ASN) to locate and identify semantic changes through feature pairs obtained from modules of widely different structures.
arXiv Detail & Related papers (2020-10-12T13:26:30Z)