Anomaly Detection using Ensemble Classification and Evidence Theory
- URL: http://arxiv.org/abs/2212.12092v1
- Date: Fri, 23 Dec 2022 00:50:41 GMT
- Title: Anomaly Detection using Ensemble Classification and Evidence Theory
- Authors: Fernando Arévalo, Tahasanul Ibrahim, Christian Alison M. Piolo,
Andreas Schwung
- Abstract summary: We present a novel approach to novelty detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
Ensemble uncertainty is used to detect anomalies.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-class ensemble classification remains a popular focus of investigation
within the research community. The popularization of cloud services has sped up
their adoption due to the ease of deploying large-scale machine-learning
models. It has also drawn the attention of the industrial sector because of its
ability to identify common problems in production. However, forming an ensemble
classifier poses several challenges: the proper selection and effective
training of the pool of classifiers, the definition of a suitable architecture
for multi-class classification, and uncertainty quantification of
the ensemble classifier. The robustness and effectiveness of the ensemble
classifier lie in the selection of the pool of classifiers, as well as in the
learning process. Hence, the selection and the training procedure of the pool
of classifiers play a crucial role. An (ensemble) classifier learns to detect
the classes that were used during supervised training. However, when presented
with data from unknown conditions, the trained classifier will still attempt to
assign one of the classes it learned during training. To this end, the
uncertainty of the individual classifiers and of the ensemble classifier can be
used to assess their learning capability. We present a novel approach to
novelty detection using ensemble
classification and evidence theory. A pool selection strategy is presented to
build a solid ensemble classifier. We present an architecture for multi-class
ensemble classification and an approach to quantify the uncertainty of the
individual classifiers and the ensemble classifier. We use uncertainty for the
anomaly detection approach. Finally, we use the Tennessee Eastman benchmark to
perform experiments that test the ensemble classifier's prediction and anomaly
detection capabilities.
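The abstract does not give implementation details, but the evidence-theoretic fusion it describes can be sketched with Dempster's rule of combination: each classifier's output is treated as a basic probability assignment (BPA) over the known classes plus a residual mass on the full frame of discernment (Theta), and an input is flagged as anomalous when the fused mass left on Theta stays high. The function names, BPA layout, and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch (not the paper's method): fusing per-classifier BPAs with
# Dempster's rule and flagging anomalies via the residual Theta mass.
# Each BPA is a list [m(c_0), ..., m(c_{k-1}), m(Theta)] summing to 1.

def combine(m1, m2):
    """Dempster's rule for BPAs whose focal elements are the singleton
    classes plus the full frame Theta (stored as the last entry)."""
    k = len(m1) - 1  # number of classes; index k holds the Theta mass
    # Conflict: mass jointly assigned to disjoint singleton pairs.
    conflict = sum(m1[i] * m2[j] for i in range(k) for j in range(k) if i != j)
    norm = 1.0 - conflict
    combined = [
        (m1[i] * m2[i] + m1[i] * m2[k] + m1[k] * m2[i]) / norm
        for i in range(k)
    ]
    combined.append(m1[k] * m2[k] / norm)  # remaining ignorance on Theta
    return combined

def ensemble_decision(bpas, uncertainty_threshold=0.3):
    """Fuse all classifiers' BPAs; report an anomaly when the fused
    ignorance (mass on Theta) exceeds the threshold, else the best class."""
    fused = bpas[0]
    for m in bpas[1:]:
        fused = combine(fused, m)
    theta_mass = fused[-1]
    if theta_mass > uncertainty_threshold:
        return "anomaly", theta_mass
    best = max(range(len(fused) - 1), key=lambda i: fused[i])
    return best, theta_mass

# Three classifiers agreeing on class 0 leave little mass on Theta;
# three highly uncertain classifiers leave most mass on Theta.
print(ensemble_decision([[0.7, 0.1, 0.2], [0.6, 0.2, 0.2], [0.8, 0.1, 0.1]]))
print(ensemble_decision([[0.1, 0.1, 0.8], [0.1, 0.1, 0.8], [0.1, 0.1, 0.8]]))
```

Keeping an explicit Theta mass, rather than forcing each classifier to distribute all probability over the known classes, is what lets the fused output express "none of the trained classes" for data from unknown conditions.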
Related papers
- Feature selection simultaneously preserving both class and cluster
structures [5.5612170847190665]
We propose a neural network-based feature selection method that focuses both on class discrimination and structure preservation in an integrated manner.
Experimental results indicate that the proposed feature/band selection yields a subset of features that serves both classification and clustering well.
arXiv Detail & Related papers (2023-07-08T04:49:51Z)
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
- Exploring Category-correlated Feature for Few-shot Image Classification [27.13708881431794]
We present a simple yet effective feature rectification method by exploring the category correlation between novel and base classes as the prior knowledge.
The proposed approach consistently obtains considerable performance gains on three widely used benchmarks.
arXiv Detail & Related papers (2021-12-14T08:25:24Z)
- CAC: A Clustering Based Framework for Classification [20.372627144885158]
We design a simple, efficient, and generic framework called Classification Aware Clustering (CAC).
Our experiments on synthetic and real benchmark datasets demonstrate the efficacy of CAC over previous methods for combined clustering and classification.
arXiv Detail & Related papers (2021-02-23T18:59:39Z)
- Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ U-sets for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z)
- Active Hybrid Classification [79.02441914023811]
This paper shows how crowd and machines can support each other in tackling classification problems.
We propose an architecture that orchestrates active learning and crowd classification and combines them in a virtuous cycle.
arXiv Detail & Related papers (2021-01-21T21:09:07Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Predicting Classification Accuracy When Adding New Unobserved Classes [8.325327265120283]
We study how a classifier's performance can be used to extrapolate its expected accuracy on a larger, unobserved set of classes.
We formulate a robust neural-network-based algorithm, "CleaneX", which learns to estimate the accuracy of such classifiers on arbitrarily large sets of classes.
arXiv Detail & Related papers (2020-10-28T14:37:25Z)
- Interpretable Sequence Classification via Discrete Optimization [26.899228003677138]
In many applications such as healthcare monitoring or intrusion detection, early classification is crucial to prompt intervention.
In this work, we learn sequence classifiers that favour early classification from an evolving observation trace.
Our classifiers are interpretable, supporting explanation, counterfactual reasoning, and human-in-the-loop modification.
arXiv Detail & Related papers (2020-10-06T15:31:07Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.