Concept Drift Detection: Dealing with Missing Values via Fuzzy Distance Estimations
- URL: http://arxiv.org/abs/2008.03662v1
- Date: Sun, 9 Aug 2020 05:25:46 GMT
- Title: Concept Drift Detection: Dealing with Missing Values via Fuzzy Distance Estimations
- Authors: Anjin Liu, Jie Lu, Guangquan Zhang
- Abstract summary: In data streams, the data distribution of arriving observations at different time points may change - a phenomenon called concept drift.
We show that missing values exert a profound impact on concept drift detection, but using fuzzy set theory to model observations can produce more reliable results than imputation.
- Score: 40.77597229122878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In data streams, the data distribution of arriving observations at different
time points may change - a phenomenon called concept drift. While detecting
concept drift is a relatively mature area of study, solutions to the
uncertainty introduced by observations with missing values have only been
studied in isolation. No one has yet explored whether or how these solutions
might impact drift detection performance. We, however, believe that data
imputation methods may actually increase uncertainty in the data rather than
reducing it. We also conjecture that imputation can introduce bias into the
process of estimating distribution changes during drift detection, which can
make it more difficult to train a learning model. Our idea is to focus on
estimating the distance between observations rather than estimating the missing
values, and to define membership functions that allocate observations to
histogram bins according to the estimation errors. Our solution comprises a
novel masked distance learning (MDL) algorithm to reduce the cumulative errors
caused by iteratively estimating each missing value in an observation and a
fuzzy-weighted frequency (FWF) method for identifying discrepancies in the data
distribution. The concept drift detection algorithm proposed in this paper is a
single, unified algorithm that handles missing values directly, rather than an
imputation algorithm combined with a concept drift detection algorithm.
Experiments on both synthetic and real-world data sets demonstrate the
advantages of this method and show its robustness in detecting drift in data
with missing values. These findings reveal that missing values exert a profound
impact on concept drift detection, but using fuzzy set theory to model
observations can produce more reliable results than imputation.
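The abstract's core idea (masking the distance computation instead of imputing values, then spreading each observation's histogram contribution according to its estimation error) can be illustrated with a minimal sketch. This is not the paper's MDL or FWF algorithm: the masked Euclidean distance, the Gaussian-style membership function, and the chi-square-style drift score below are simplified stand-ins chosen only to make the mechanism concrete.

```python
import numpy as np

def masked_distance(x, y):
    """Distance over the dimensions observed in both x and y, rescaled to the
    full dimensionality, plus a crude proxy for its estimation error.
    A simple stand-in for the paper's learned masked distance (MDL)."""
    shared = ~np.isnan(x) & ~np.isnan(y)
    if not shared.any():
        return np.nan, 1.0                      # nothing observed in common
    dist = np.linalg.norm(x[shared] - y[shared]) * np.sqrt(x.size / shared.sum())
    error = 1.0 - shared.sum() / x.size         # more missing dims -> larger error
    return dist, error

def fuzzy_weighted_frequency(window, centroids, spread=1.0):
    """Fuzzy histogram: each observation spreads one unit of mass over the bins
    (one bin per centroid); the larger its estimation error, the flatter the
    membership, mimicking the FWF idea of weighting by estimation uncertainty."""
    freq = np.zeros(len(centroids))
    for x in window:
        pairs = [masked_distance(x, c) for c in centroids]
        dists = np.array([d for d, _ in pairs], dtype=float)
        errors = np.array([e for _, e in pairs])
        if np.isnan(dists).all():
            freq += 1.0 / len(centroids)        # fully unknown: uniform membership
            continue
        dists[np.isnan(dists)] = np.nanmax(dists)
        width = spread * (1.0 + errors)         # wider membership under uncertainty
        member = np.exp(-(dists / width) ** 2) + 1e-12
        freq += member / member.sum()
    return freq

def drift_score(reference, current, centroids):
    """Chi-square-style discrepancy between two fuzzy-weighted histograms; a
    large score over consecutive windows suggests a distribution change."""
    f_ref = fuzzy_weighted_frequency(reference, centroids)
    f_cur = fuzzy_weighted_frequency(current, centroids)
    f_ref, f_cur = f_ref / f_ref.sum(), f_cur / f_cur.sum()
    return float(np.sum((f_cur - f_ref) ** 2 / (f_ref + 1e-12)))
```

In practice the centroids could come from any partitioning of a reference window (e.g. k-means over the complete observations), and the drift score would be compared against a threshold calibrated by permutation or bootstrap; the paper's actual MDL and FWF components replace the simplistic distance and membership choices above.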
Related papers
- A Neighbor-Searching Discrepancy-based Drift Detection Scheme for Learning Evolving Data [40.00357483768265]
This work presents a novel real concept drift detection method based on Neighbor-Searching Discrepancy.
The proposed method is able to detect real concept drift with high accuracy while ignoring virtual drift.
It can also indicate the direction of the classification boundary change by identifying the invasion or retreat of a certain class.
arXiv Detail & Related papers (2024-05-23T04:03:36Z) - Uncovering the Missing Pattern: Unified Framework Towards Trajectory
Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv Detail & Related papers (2023-03-28T14:27:27Z) - CADM: Confusion Model-based Detection Method for Real-drift in Chunk
Data Stream [3.0885191226198785]
Concept drift detection has attracted considerable attention due to its importance in many real-world applications such as health monitoring and fault diagnosis.
We propose a new approach to detect real-drift in the chunk data stream with limited annotations based on concept confusion.
arXiv Detail & Related papers (2023-03-25T08:59:27Z) - Learning to Bound Counterfactual Inference in Structural Causal Models
from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z) - Detecting Concept Drift in the Presence of Sparsity -- A Case Study of
Automated Change Risk Assessment System [0.8021979227281782]
Missing values, widely referred to as sparsity in the literature, are a common characteristic of many real-world datasets.
We study different patterns of missing values and various statistical and ML-based data imputation methods for different kinds of sparsity.
We then select the best concept drift detector given a dataset with missing values based on the different metrics.
arXiv Detail & Related papers (2022-07-27T04:27:49Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold (see the sketch after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Detecting Concept Drift With Neural Network Model Uncertainty [0.0]
Uncertainty Drift Detection (UDD) is able to detect drifts without access to true labels.
In contrast to input data-based drift detection, our approach considers the effects of the current input data on the properties of the prediction model.
We show that UDD outperforms other state-of-the-art strategies on two synthetic as well as ten real-world data sets for both regression and classification tasks.
arXiv Detail & Related papers (2021-07-05T08:56:36Z) - The Hidden Uncertainty in a Neural Networks Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z) - Concept Drift Detection via Equal Intensity k-means Space Partitioning [40.77597229122878]
A cluster-based histogram, called equal intensity k-means space partitioning (EI-kMeans), is proposed.
Three algorithms are developed to implement concept drift detection, including a greedy centroids algorithm, a cluster amplify-shrink algorithm, and a drift detection algorithm.
Experiments on synthetic and real-world datasets demonstrate the advantages of EI-kMeans and show its efficacy in detecting concept drift.
arXiv Detail & Related papers (2020-04-24T08:00:16Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including any of its content) and is not responsible for any consequences arising from its use.