Multi-class Classification Based Anomaly Detection of Insider Activities
- URL: http://arxiv.org/abs/2102.07277v1
- Date: Mon, 15 Feb 2021 00:08:39 GMT
- Title: Multi-class Classification Based Anomaly Detection of Insider Activities
- Authors: R G Gayathri, Atul Sajjanhar, Yong Xiang and Xingjun Ma
- Abstract summary: We propose an approach that combines a generative model with supervised learning to perform multi-class classification using deep learning.
The generative adversarial network (GAN) based insider detection model introduces a Conditional Generative Adversarial Network (CGAN) to enrich minority class samples.
Comprehensive experiments performed on the benchmark dataset demonstrate the effectiveness of introducing GAN-derived synthetic data.
- Score: 18.739091829480234
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Insider threats are cyber attacks that originate from within the trusted
entities of an organization. The lack of real-world data and the issue of data
imbalance leave insider threat analysis an understudied research area. To mitigate
the effect of the skewed class distribution and demonstrate the potential of
multinomial classification algorithms for insider threat detection, we propose an
approach that combines a generative model with supervised learning to perform
multi-class classification using deep learning. The generative adversarial network
(GAN) based insider detection model introduces a Conditional Generative Adversarial
Network (CGAN) to enrich minority class samples and thereby provide data for
multi-class anomaly detection. Comprehensive experiments performed on the benchmark
dataset demonstrate the effectiveness of introducing GAN-derived synthetic data and
the capability of multi-class anomaly detection in insider activity analysis.
Moreover, the method is compared with existing methods against different
parameters and performance metrics.
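
The following is a minimal, hypothetical sketch (in PyTorch), not the authors' implementation, of the idea described in the abstract: a Conditional GAN is trained on labelled activity feature vectors, synthetic samples are generated to enrich the minority (malicious) classes, and a multi-class classifier is then trained on the augmented data. The feature dimensions, class counts, and placeholder data below are illustrative assumptions and do not correspond to the benchmark dataset used in the paper.

```python
# Hedged sketch of CGAN-based minority-class enrichment followed by
# multi-class classification. All data and dimensions are placeholders.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES, NOISE_DIM = 32, 4, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES))

    def forward(self, z, labels):
        # Condition the generator on the class label via a learned embedding.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, x, labels):
        return self.net(torch.cat([x, self.label_emb(labels)], dim=1))

def train_cgan(real_x, real_y, epochs=200):
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    n = real_x.size(0)
    for _ in range(epochs):
        # Discriminator step: real samples vs. generated samples.
        fake_x = G(torch.randn(n, NOISE_DIM), real_y).detach()
        d_loss = bce(D(real_x, real_y), torch.ones(n, 1)) + \
                 bce(D(fake_x, real_y), torch.zeros(n, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: try to fool the discriminator on the same labels.
        g_loss = bce(D(G(torch.randn(n, NOISE_DIM), real_y), real_y),
                     torch.ones(n, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G

# Placeholder imbalanced data: class 0 (normal) dominates classes 1-3.
y = torch.cat([torch.zeros(900), torch.ones(40),
               2 * torch.ones(30), 3 * torch.ones(30)]).long()
x = torch.randn(len(y), N_FEATURES) + y.float().unsqueeze(1)

G = train_cgan(x, y)

# Enrich each minority class up to the majority-class count, then train a
# simple multi-class classifier on the augmented (real + synthetic) data.
counts = torch.bincount(y, minlength=N_CLASSES)
aug_x, aug_y = [x], [y]
for c in range(1, N_CLASSES):
    need = int(counts.max() - counts[c])
    labels = torch.full((need,), c, dtype=torch.long)
    aug_x.append(G(torch.randn(need, NOISE_DIM), labels).detach())
    aug_y.append(labels)
aug_x, aug_y = torch.cat(aug_x), torch.cat(aug_y)

clf = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                    nn.Linear(64, N_CLASSES))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
for _ in range(200):
    loss = ce(clf(aug_x), aug_y)
    opt.zero_grad(); loss.backward(); opt.step()
print("train accuracy:", (clf(aug_x).argmax(1) == aug_y).float().mean().item())
```

In practice the CGAN would be fit on the real benchmark features, and the classifier would be evaluated on a held-out test set containing only real (non-synthetic) samples.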
Related papers
- Comprehensive Botnet Detection by Mitigating Adversarial Attacks, Navigating the Subtleties of Perturbation Distances and Fortifying Predictions with Conformal Layers [1.6001193161043425]
Botnets are computer networks controlled by malicious actors that present significant cybersecurity challenges.
This research addresses the sophisticated adversarial manipulations posed by attackers, aiming to undermine machine learning-based botnet detection systems.
We introduce a flow-based detection approach, leveraging machine learning and deep learning algorithms trained on the ISCX and ISOT datasets.
arXiv Detail & Related papers (2024-09-01T08:53:21Z)
- Toward Multi-class Anomaly Detection: Exploring Class-aware Unified Model against Inter-class Interference [67.36605226797887]
We introduce a Multi-class Implicit Neural representation Transformer for unified Anomaly Detection (MINT-AD)
By learning the multi-class distributions, the model generates class-aware query embeddings for the transformer decoder.
MINT-AD can project category and position information into a feature embedding space, further supervised by classification and prior probability loss functions.
arXiv Detail & Related papers (2024-03-21T08:08:31Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Leveraging a Probabilistic PCA Model to Understand the Multivariate Statistical Network Monitoring Framework for Network Security Anomaly Detection [64.1680666036655]
We revisit anomaly detection techniques based on PCA from a probabilistic generative model point of view.
We have evaluated the mathematical model using two different datasets.
arXiv Detail & Related papers (2023-02-02T13:41:18Z)
- Learning to Detect: A Data-driven Approach for Network Intrusion Detection [17.288512506016612]
We perform a comprehensive study on NSL-KDD, a network traffic dataset, by visualizing patterns and employing different learning-based models to detect cyber attacks.
Unlike previous shallow learning and deep learning models that use the single learning model approach for intrusion detection, we adopt a hierarchy strategy.
We demonstrate the advantage of the unsupervised representation learning model in binary intrusion detection tasks.
arXiv Detail & Related papers (2021-08-18T21:19:26Z)
- Zero-sample surface defect detection and classification based on semantic feedback neural network [13.796631421521765]
We propose an Ensemble Co-training algorithm, which adaptively reduces the prediction error in image tag embedding from multiple angles.
Various experiments conducted on the zero-shot dataset and the cylinder liner dataset in the industrial field provide competitive results.
arXiv Detail & Related papers (2021-06-15T08:26:36Z)
- Anomaly Detection of Test-Time Evasion Attacks using Class-conditional Generative Adversarial Networks [21.023722317810805]
We propose an attack detector based on class-conditional Generative Adversarial Networks (GANs).
We model the distribution of clean data conditioned on the predicted class label with an Auxiliary Classifier GAN (AC-GAN).
Experiments on image classification datasets under different TTE attack methods show that our method outperforms state-of-the-art detection methods.
arXiv Detail & Related papers (2021-05-21T02:51:58Z)
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
- LOGAN: Local Group Bias Detection by Clustering [86.38331353310114]
We argue that evaluating bias at the corpus level is not enough for understanding how biases are embedded in a model.
We propose LOGAN, a new bias detection technique based on clustering.
Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in a local region.
arXiv Detail & Related papers (2020-10-06T16:42:51Z)
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.