Towards Interpretable Ensemble Learning for Image-based Malware
Detection
- URL: http://arxiv.org/abs/2101.04889v1
- Date: Wed, 13 Jan 2021 05:46:44 GMT
- Title: Towards Interpretable Ensemble Learning for Image-based Malware
Detection
- Authors: Yuzhou Lin, Xiaolin Chang
- Abstract summary: This paper aims to design an Interpretable Ensemble learning approach for image-based Malware Detection (IEMD).
Experimental results indicate that IEMD achieves a detection accuracy of up to 99.87% while providing high-quality explanations of its predictions.
- Score: 4.721069729610892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) models for image-based malware detection have
exhibited their capability to produce high prediction accuracy, but limited model
interpretability poses challenges to their widespread adoption in security- and
safety-critical application domains. This paper aims to design an Interpretable
Ensemble learning approach for image-based Malware Detection (IEMD). We first
propose a Selective Deep Ensemble Learning-based (SDEL) detector and then design
an Ensemble Deep Taylor Decomposition (EDTD) approach, which provides pixel-level
explanations of the SDEL detector's outputs. Furthermore, we develop formulas for
calculating fidelity, robustness and expressiveness on pixel-level heatmaps in
order to assess the quality of the EDTD explanations. Using the EDTD explanations,
we develop a novel Interpretable Dropout approach (IDrop), which establishes IEMD
by training the SDEL detector. Experimental results show that EDTD provides better
explanations than previous explanation methods for image-based malware detection.
In addition, IEMD achieves a detection accuracy of up to 99.87% while providing
high-quality explanations of its predictions. Moreover, IEMD interpretability
increases as detection accuracy increases during the construction of IEMD. This
consistency suggests that IDrop can mitigate the trade-off between model
interpretability and detection accuracy.
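The abstract gives no implementation details, but the two ingredients it names, per-model Deep Taylor Decomposition relevance and an ensemble combination of the resulting heatmaps, can be sketched as below. This is a minimal, hypothetical illustration: the z+ propagation rule, the array shapes and the per-model weighting are assumptions for exposition, not the authors' EDTD implementation.

```python
# Minimal sketch (assumptions, not the authors' EDTD code) of the two ideas the
# abstract names: Deep-Taylor-style pixel relevance for each base detector, and
# a weighted combination of the per-model heatmaps for the ensemble.
import numpy as np

def ztplus_step(a, W, R_upper):
    """One backward step of the z+ rule used in Deep Taylor Decomposition:
    redistribute upper-layer relevance R_upper onto the lower layer in
    proportion to the positive contributions a_i * max(W_ij, 0)."""
    Wp = np.maximum(W, 0.0)          # keep only positive weights
    z = a @ Wp + 1e-12               # total positive contribution per upper neuron
    s = R_upper / z                  # relevance per unit of contribution
    return a * (Wp @ s)              # lower-layer relevance, same shape as `a`

def ensemble_heatmap(heatmaps, weights):
    """Combine per-model pixel heatmaps into one ensemble explanation.
    `weights` are hypothetical per-model contributions (e.g. ensemble weights)."""
    combined = sum(w * h for w, h in zip(weights, heatmaps))
    return combined / (np.abs(combined).max() + 1e-12)   # normalize for display
```

In a full network, a step like `ztplus_step` would be applied layer by layer from a detector's output back to the input pixels, and the resulting heatmap of each ensemble member would then be merged by something like `ensemble_heatmap`.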
Related papers
- AssemAI: Interpretable Image-Based Anomaly Detection for Manufacturing Pipelines [0.0]
Anomaly detection in manufacturing pipelines remains a critical challenge, intensified by the complexity and variability of industrial environments.
This paper introduces AssemAI, an interpretable image-based anomaly detection system tailored for smart manufacturing pipelines.
arXiv Detail & Related papers (2024-08-05T01:50:09Z)
- X-Fake: Juggling Utility Evaluation and Explanation of Simulated SAR Images [49.546627070454456]
The distribution inconsistency between real and simulated data is the main obstacle that influences the utility of simulated SAR images.
We propose X-Fake, the first trustworthy utility evaluation framework with counterfactual explanations for simulated SAR images.
The proposed framework is validated on four simulated SAR image datasets obtained from electromagnetic models and generative artificial intelligence approaches.
arXiv Detail & Related papers (2024-07-28T09:27:53Z)
- Improving Interpretability and Robustness for the Detection of AI-Generated Images [6.116075037154215]
We analyze existing state-of-the-art AIGI detection methods based on frozen CLIP embeddings.
We show how to interpret them, shedding light on how images produced by various AI generators differ from real ones.
arXiv Detail & Related papers (2024-06-21T10:33:09Z)
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Robust CLIP-Based Detector for Exposing Diffusion Model-Generated Images [13.089550724738436]
Diffusion models (DMs) have revolutionized image generation, producing high-quality images with applications spanning various fields.
Their ability to create hyper-realistic images poses significant challenges in distinguishing between real and synthetic content.
This work introduces a robust detection framework that integrates image and text features extracted by a CLIP model with a Multilayer Perceptron (MLP) classifier (a minimal sketch of this kind of pipeline is given after this list).
arXiv Detail & Related papers (2024-04-19T14:30:41Z)
- Out-of-Distribution Detection for Monocular Depth Estimation [4.873593653200759]
Motivated by anomaly detection, we propose to detect OOD images from an encoder-decoder depth estimation model.
We build our experiments on the standard NYU Depth V2 and KITTI benchmarks as in-distribution data.
arXiv Detail & Related papers (2023-08-11T11:25:23Z)
- Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) ranks among the top entries on the KITTI depth-prediction benchmark leaderboard.
arXiv Detail & Related papers (2023-03-31T16:01:03Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Deep Learning-Based Defect Classification and Detection in SEM Images [1.9206693386750882]
In particular, we train RetinaNet models using different ResNet and VGGNet architectures as backbones.
We propose a preference-based ensemble strategy to combine the output predictions from different models in order to achieve better performance on classification and detection of defects.
arXiv Detail & Related papers (2022-06-20T16:34:11Z)
- On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures for generative models, such as the Inception Score (IS) and the Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
- NADS: Neural Architecture Distribution Search for Uncertainty Awareness [79.18710225716791]
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from training data.
Existing OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
arXiv Detail & Related papers (2020-06-11T17:39:07Z)
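As a concrete illustration of the CLIP-plus-MLP pipeline mentioned in the "Robust CLIP-Based Detector" entry above, here is a minimal sketch. The helper `clip_features`, the feature dimension and the MLP sizes are assumptions for illustration, not the paper's implementation; the idea is that only the small MLP head is trained while the CLIP encoder stays frozen.

```python
# Minimal sketch (an assumption, not the paper's code) of a detector that feeds
# frozen CLIP features to a small trainable MLP producing one logit
# (real vs. generated). `clip_features(images)` is a hypothetical helper
# standing in for the frozen CLIP feature extraction.
import torch
import torch.nn as nn

class CLIPFeatureDetector(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),   # single logit: real vs. generated
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features)

# Usage sketch:
#   features = clip_features(images).detach()   # frozen encoder, no gradients
#   logits = CLIPFeatureDetector(feat_dim=features.shape[-1])(features)
```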