DeepAID: Interpreting and Improving Deep Learning-based Anomaly
Detection in Security Applications
- URL: http://arxiv.org/abs/2109.11495v1
- Date: Thu, 23 Sep 2021 16:52:05 GMT
- Title: DeepAID: Interpreting and Improving Deep Learning-based Anomaly
Detection in Security Applications
- Authors: Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang,
Jiahai Yang, Xingang Shi, and Xia Yin
- Abstract summary: Unsupervised Deep Learning (DL) techniques have been widely used in various security-related anomaly detection applications.
The lack of interpretability creates key barriers to the adoption of DL models in practice.
We propose DeepAID, a framework aiming to (1) interpret DL-based anomaly detection systems in security domains, and (2) improve the practicality of these systems based on the interpretations.
- Score: 24.098989392716977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Deep Learning (DL) techniques have been widely used in various
security-related anomaly detection applications, owing to their promise of
detecting unforeseen threats and the superior performance of Deep Neural
Networks (DNNs). However, the lack of interpretability creates key barriers to
the adoption of DL models in practice. Unfortunately, existing interpretation
approaches are designed for supervised learning models and/or non-security
domains; they cannot be adapted to unsupervised DL models and fail to satisfy
the special requirements of security domains.
In this paper, we propose DeepAID, a general framework aiming to (1)
interpret DL-based anomaly detection systems in security domains, and (2)
improve the practicality of these systems based on the interpretations. We
first propose a novel interpretation method for unsupervised DNNs by
formulating and solving well-designed optimization problems with special
constraints for security domains. Then, we provide several applications based
on our Interpreter as well as a model-based extension Distiller to improve
security systems by solving domain-specific problems. We apply DeepAID over
three types of security-related anomaly detection systems and extensively
evaluate our Interpreter with representative prior works. Experimental results
show that DeepAID can provide high-quality interpretations for unsupervised DL
models while meeting the special requirements of security domains. We also
provide several use cases to show that DeepAID can help security operators to
understand model decisions, diagnose system mistakes, give feedback to models,
and reduce false positives.
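The core interpretation idea — explaining an anomaly by solving a constrained optimization problem against an unsupervised model — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the autoencoder detector, the loss weights, and the feature dimension are all hypothetical. The sketch searches for a "reference" input that the model considers normal while staying close (sparsely different) to the anomaly; the per-feature difference then indicates which features drove the anomaly decision.

```python
import torch
import torch.nn as nn

# Hypothetical autoencoder standing in for an unsupervised anomaly detector,
# which scores inputs by reconstruction error.
class AutoEncoder(nn.Module):
    def __init__(self, dim=8, hidden=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def interpret_anomaly(model, x_anom, lam=0.1, steps=200, lr=0.05):
    """Search for a 'normal' reference x_ref close to the anomaly x_anom.

    Minimizes reconstruction error (so x_ref looks normal to the model)
    plus an L1 distance term (so the difference from x_anom stays sparse).
    Returns the reference and the per-feature importance |x_anom - x_ref|.
    """
    x_ref = x_anom.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_ref], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon_err = ((model(x_ref) - x_ref) ** 2).mean()  # "normality" term
        sparsity = (x_ref - x_anom).abs().mean()          # stay-close term
        loss = recon_err + lam * sparsity
        loss.backward()
        opt.step()
    x_ref = x_ref.detach()
    return x_ref, (x_anom - x_ref).abs()

torch.manual_seed(0)
model = AutoEncoder()
x_anom = torch.randn(1, 8)           # a flagged anomaly (synthetic)
x_ref, importance = interpret_anomaly(model, x_anom)
top_features = torch.topk(importance.flatten(), k=3).indices
```

In this framing, a security operator would inspect `top_features` to see which input dimensions most separate the anomaly from what the model considers normal, analogous to the feature-level interpretations DeepAID provides.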
Related papers
- Exposing the Ghost in the Transformer: Abnormal Detection for Large Language Models via Hidden State Forensics [5.384257830522198]
Large Language Models (LLMs) in critical applications have introduced severe reliability and security risks.
These vulnerabilities have been weaponized by malicious actors, leading to unauthorized access, widespread misinformation, and compromised system integrity.
We introduce a novel approach to detecting abnormal behaviors in LLMs via hidden state forensics.
arXiv Detail & Related papers (2025-04-01T05:58:14Z)
- In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models [97.82118821263825]
Text-to-image (T2I) models have shown remarkable progress, but their potential to generate harmful content remains a critical concern in the ML community.
We propose ICER, a novel red-teaming framework that generates interpretable and semantically meaningful problematic prompts.
Our work provides crucial insights for developing more robust safety mechanisms in T2I systems.
arXiv Detail & Related papers (2024-11-25T04:17:24Z)
- Visually Analyze SHAP Plots to Diagnose Misclassifications in ML-based Intrusion Detection [0.3199881502576702]
An intrusion detection system (IDS) can mitigate threats by raising alerts.
To detect these threats, various machine learning (ML) and deep learning (DL) models have been proposed.
In this paper, we propose an explainable artificial intelligence (XAI) based visual analysis approach using overlapping SHAP plots.
arXiv Detail & Related papers (2024-11-04T23:08:34Z)
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Advancing Security in AI Systems: A Novel Approach to Detecting Backdoors in Deep Neural Networks [3.489779105594534]
Backdoors can be exploited by malicious actors in deep neural networks (DNNs) and cloud services for data processing.
Our approach leverages advanced tensor decomposition algorithms to meticulously analyze the weights of pre-trained DNNs and distinguish between backdoored and clean models.
This advancement enhances the security of deep learning and AI in networked systems, providing essential cybersecurity against evolving threats in emerging technologies.
arXiv Detail & Related papers (2024-03-13T03:10:11Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Adversarial Machine Learning In Network Intrusion Detection Domain: A Systematic Review [0.0]
It has been found that deep learning models are vulnerable to data instances that can mislead the model to make incorrect classification decisions.
This survey explores research that employs different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z)
- Towards a Privacy-preserving Deep Learning-based Network Intrusion Detection in Data Distribution Services [0.0]
Data Distribution Service (DDS) is an innovative approach towards communication in ICS/IoT infrastructure and robotics.
Traditional intrusion detection systems (IDS) do not detect any anomalies in the publish/subscribe method.
This report presents experimental work on the simulation of such anomalies and the application of Deep Learning for their detection.
arXiv Detail & Related papers (2021-06-12T12:53:38Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- A cognitive based Intrusion detection system [0.0]
Intrusion detection is one of the important mechanisms that provide computer networks security.
This paper proposes a new approach based on a Deep Neural Network and a Support Vector Machine classifier.
The proposed model predicts attacks with better accuracy than similar intrusion detection methods.
arXiv Detail & Related papers (2020-05-19T13:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.