Inherently Interpretable and Uncertainty-Aware Models for Online Learning in Cyber-Security Problems
- URL: http://arxiv.org/abs/2411.09393v1
- Date: Thu, 14 Nov 2024 12:11:08 GMT
- Title: Inherently Interpretable and Uncertainty-Aware Models for Online Learning in Cyber-Security Problems
- Authors: Benjamin Kolicic, Alberto Caron, Chris Hicks, Vasilios Mavroudis
- Abstract summary: We propose a novel pipeline for online supervised learning problems in cyber-security.
Our approach aims to balance predictive performance with transparency.
This work contributes to the growing field of interpretable AI.
- Score: 0.22499166814992438
- Abstract: In this paper, we address the critical need for interpretable and uncertainty-aware machine learning models in the context of online learning for high-risk industries, particularly cyber-security. While deep learning and other complex models have demonstrated impressive predictive capabilities, their opacity and lack of uncertainty quantification raise significant questions about their trustworthiness. We propose a novel pipeline for online supervised learning problems in cyber-security that harnesses the inherent interpretability and uncertainty awareness of Additive Gaussian Process (AGP) models. Our approach aims to balance predictive performance with transparency while improving the scalability of AGPs, which is their main drawback, potentially enabling security analysts to better validate threat detection, troubleshoot, reduce false positives, and generally make trustworthy, informed decisions. This work contributes to the growing field of interpretable AI by proposing a class of models that can be significantly beneficial for high-stakes decision problems such as those typical of the cyber-security domain. The source code is available.
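The paper's pipeline and code are not reproduced here; as a minimal sketch of why additive Gaussian processes are both interpretable and uncertainty-aware, the toy NumPy regression below uses a kernel that is a sum of one-dimensional RBF kernels, so the posterior mean decomposes into per-feature effects while the posterior variance quantifies predictive uncertainty. The data, lengthscales, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal additive GP regression sketch (illustrative; not the paper's pipeline).
# Kernel: k(x, x') = sum_d k_d(x_d, x'_d), each k_d a 1-D RBF. The posterior
# mean then decomposes into per-feature effects, which is what makes the model
# inherently interpretable; the posterior variance provides uncertainty.
import numpy as np

def rbf_1d(a, b, lengthscale=1.0, variance=1.0):
    """1-D RBF kernel matrix between vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 2))                 # two toy features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(80)

noise = 0.1 ** 2
K = sum(rbf_1d(X[:, d], X[:, d]) for d in range(2))  # additive kernel
L = np.linalg.cholesky(K + noise * np.eye(len(X)))
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # (K + sigma^2 I)^-1 y

Xs = np.column_stack([np.linspace(-3, 3, 5)] * 2)    # test points
for d in range(2):
    Ks_d = rbf_1d(Xs[:, d], X[:, d])
    print(f"feature {d} effect:", np.round(Ks_d @ alpha, 2))  # per-feature mean

Ks = sum(rbf_1d(Xs[:, d], X[:, d]) for d in range(2))
Kss = sum(rbf_1d(Xs[:, d], Xs[:, d]) for d in range(2))
v = np.linalg.solve(L, Ks.T)
var = np.diag(Kss) - np.sum(v ** 2, axis=0)          # predictive variance
print("predictive std:", np.round(np.sqrt(var), 2))
```

Exact inference as above costs O(n^3) in the number of observations; online settings typically rely on sparse or streaming approximations, which is the scalability drawback the abstract targets.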
Related papers
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Applications of Positive Unlabeled (PU) and Negative Unlabeled (NU) Learning in Cybersecurity [0.0]
This paper explores the relatively underexplored application of Positive Unlabeled (PU) Learning and Negative Unlabeled (NU) Learning in the cybersecurity domain.
The paper identifies key areas of cybersecurity--such as intrusion detection, vulnerability management, malware detection, and threat intelligence--where PU/NU learning can offer significant improvements.
We propose future directions to advance the integration of PU/NU learning in cybersecurity, offering solutions that can better detect, manage, and mitigate emerging cyber threats; a minimal PU-learning sketch follows this entry.
arXiv Detail & Related papers (2024-12-09T04:55:10Z)
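For context on the PU-learning entry above, here is a minimal sketch of one standard PU technique, the Elkan-Noto adjustment: fit a classifier to distinguish labeled positives from unlabeled data, estimate c = P(labeled | positive) on held-out positives, and rescale scores into P(y=1 | x). The toy "intrusion" data and all names are assumptions; the methods surveyed in the paper may differ.

```python
# Illustrative PU-learning sketch (Elkan & Noto style), not the paper's method:
# learn P(s=1 | x) from positives vs. unlabeled, estimate c = P(s=1 | y=1) on
# held-out labeled positives, then rescale to approximate P(y=1 | x).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Toy data: y=1 is "malicious", but only some positives are labeled (s=1).
X = rng.standard_normal((2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.5).astype(int)
labeled = (y == 1) & (rng.random(2000) < 0.3)   # 30% of positives labeled

Xp, Xu = X[labeled], X[~labeled]                # positives vs. unlabeled
s = np.concatenate([np.ones(len(Xp)), np.zeros(len(Xu))])
X_all = np.vstack([Xp, Xu])

Xtr, Xho, s_tr, s_ho = train_test_split(X_all, s, test_size=0.25, random_state=0)
g = LogisticRegression(max_iter=1000).fit(Xtr, s_tr)   # models P(s=1 | x)

# c = E[g(x) | s=1], estimated on held-out labeled positives.
c = g.predict_proba(Xho[s_ho == 1])[:, 1].mean()
p_y1 = np.clip(g.predict_proba(X)[:, 1] / c, 0, 1)     # P(y=1 | x) ~ g(x)/c
print("estimated c:", round(float(c), 3))
print("flagged as malicious:", int((p_y1 > 0.5).sum()), "of", len(X))
```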
- ChatNVD: Advancing Cybersecurity Vulnerability Assessment with Large Language Models [0.46873264197900916]
This paper explores the potential application of Large Language Models (LLMs) to enhance the assessment of software vulnerabilities.
We develop three variants of ChatNVD, utilizing three prominent LLMs: GPT-4o mini by OpenAI, Llama 3 by Meta, and Gemini 1.5 Pro by Google.
To evaluate their efficacy, we conduct a comparative analysis of these models using a comprehensive questionnaire comprising common security vulnerability questions.
arXiv Detail & Related papers (2024-12-06T03:45:49Z)
- Artificial intelligence and cybersecurity in banking sector: opportunities and risks [0.0]
Machine learning (ML) enables systems to adapt and learn from vast datasets.
This study highlights the dual-use nature of AI tools, which can also be exploited by malicious actors.
The paper emphasizes the importance of developing machine learning models with key characteristics such as security, trust, resilience, and robustness.
arXiv Detail & Related papers (2024-11-28T22:09:55Z)
- New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations; a toy Monte Carlo approximation is sketched after this entry.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
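The entry above computes its Adversarial Rate with formal verification; as a loose, hedged stand-in, the sketch below estimates an analogous quantity by random sampling: the fraction of states for which some epsilon-bounded perturbation changes the policy's chosen action. The toy policy, epsilon, and sampling budget are all illustrative assumptions, and random sampling only yields a lower bound on what verification would find.

```python
# Naive Monte Carlo stand-in for an "adversarial rate": the fraction of inputs
# for which some epsilon-bounded perturbation flips the model's decision.
# Purely illustrative; the paper derives this via formal verification.
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 2))   # toy linear "policy": argmax over 2 actions

def action(x):
    return int(np.argmax(x @ W))

def is_vulnerable(x, eps=0.1, trials=200):
    a0 = action(x)
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=x.shape)  # L-infinity ball sample
        if action(x + delta) != a0:
            return True
    return False

states = rng.standard_normal((500, 4))
rate = np.mean([is_vulnerable(s) for s in states])
print(f"estimated adversarial rate (lower bound): {rate:.2f}")
```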
- A robust statistical framework for cyber-vulnerability prioritisation under partial information in threat intelligence [0.0]
This work introduces a robust statistical framework for quantitative and qualitative reasoning under uncertainty about cyber-vulnerabilities.
We identify a novel accuracy measure suited for rank invariance under partial knowledge of the whole set of existing vulnerabilities; an illustrative proxy is sketched after this entry.
We discuss the implications of partial knowledge about cyber-vulnerabilities on threat intelligence and decision-making in operational scenarios.
arXiv Detail & Related papers (2023-02-16T15:05:43Z)
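The entry's accuracy measure is not specified in the summary above; as a rough illustration of checking rank stability under partial knowledge, the sketch below compares a full vulnerability ranking with the ranking induced on a randomly observed subset using Kendall's tau. This proxy, the synthetic severities, and the observation rate are assumptions for illustration, not the framework's actual measure.

```python
# Rough illustration of rank stability under partial knowledge: compare the
# full ranking of vulnerability scores with the ranking restricted to a
# randomly observed subset, via Kendall's tau. An assumed proxy measure only.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
true_severity = rng.gamma(2.0, 2.0, size=100)    # latent severity of 100 CVEs
noisy_scores = true_severity + rng.normal(0, 0.5, size=100)

observed = rng.random(100) < 0.6                 # analyst sees only 60% of CVEs
tau_full, _ = kendalltau(true_severity, noisy_scores)
tau_part, _ = kendalltau(true_severity[observed], noisy_scores[observed])
print(f"rank agreement, full knowledge:    tau = {tau_full:.2f}")
print(f"rank agreement, partial knowledge: tau = {tau_part:.2f}")
```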
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems; a small uncertainty-estimation sketch follows this entry.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
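In the spirit of the entry above, the sketch below shows one common way to estimate and surface predictive uncertainty: ensemble-averaged probabilities reported as predictive entropy, with abstention on uncertain cases. The ensemble, dataset, and abstention threshold are illustrative assumptions, not the review's prescription.

```python
# One common way to estimate and communicate predictive uncertainty
# (illustrative): average an ensemble's predicted probabilities, report
# predictive entropy, and abstain when entropy exceeds a threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:500], y[:500])

proba = clf.predict_proba(X[500:])                  # ensemble-averaged probs
entropy = -np.sum(proba * np.log(np.clip(proba, 1e-12, 1)), axis=1)

threshold = 0.5                                     # illustrative cut-off
abstain = entropy > threshold
print(f"abstained on {abstain.sum()} of {len(abstain)} cases")
print("sample (prob of class 1, entropy):",
      [(round(float(p), 2), round(float(h), 2))
       for p, h in zip(proba[:3, 1], entropy[:3])])
```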
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible; one classic pitfall is sketched after this entry.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
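One pitfall of the kind catalogued in the entry above is data snooping: letting test data influence training. The sketch below shows a classic instance, fitting a scaler on the full dataset before splitting, alongside the corrected version. The example is ours, not taken from the paper.

```python
# Illustrative instance of a classic pitfall (data snooping): preprocessing
# fitted on the full dataset leaks test-set statistics into training.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Wrong: the scaler sees the test set before the split.
X_leaky = StandardScaler().fit_transform(X)
Xtr_bad, Xte_bad, ytr, yte = train_test_split(X_leaky, y, random_state=0)

# Right: split first, fit the scaler on the training portion only.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(Xtr)
Xtr_ok, Xte_ok = scaler.transform(Xtr), scaler.transform(Xte)
```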
- A Safety Framework for Critical Systems Utilising Deep Neural Networks [13.763070043077633]
This paper presents a principled, novel safety argument framework for critical systems that utilise deep neural networks.
The approach allows various forms of prediction, e.g., the future reliability of passing some demands, or confidence in a required reliability level.
It is supported by a Bayesian analysis using operational data and recent verification and validation techniques for deep learning; a minimal Bayesian sketch follows this entry.
arXiv Detail & Related papers (2020-03-07T23:35:05Z)
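To make the entry's "Bayesian analysis using operational data" concrete in the simplest possible form, the sketch below updates a Beta prior on the probability of failure on demand with operational counts and reads off the probability that the next k demands all pass. The prior and counts are invented for illustration; the paper's safety argument framework is considerably more sophisticated.

```python
# Minimal Beta-Binomial sketch of reliability assessment from operational data
# (an illustration of the general idea, not the paper's framework): posterior
# over the probability of failure on demand (pfd), then the closed-form
# probability that the next k demands all pass.
import numpy as np
from scipy import stats
from scipy.special import betaln

a0, b0 = 1.0, 99.0             # illustrative Beta prior (prior mean pfd = 1%)
failures, demands = 0, 5000    # invented data: 5000 failure-free demands

a, b = a0 + failures, b0 + (demands - failures)    # conjugate Beta posterior
posterior = stats.beta(a, b)
print(f"posterior mean pfd: {posterior.mean():.2e}")
print(f"95% credible upper bound on pfd: {posterior.ppf(0.95):.2e}")

# P(next k demands all pass) = E[(1 - pfd)^k] = B(a, b + k) / B(a, b).
k = 1000
p_pass = np.exp(betaln(a, b + k) - betaln(a, b))
print(f"P(next {k} demands all pass): {p_pass:.3f}")
```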