Vulnerability Prioritization: An Offensive Security Approach
- URL: http://arxiv.org/abs/2206.11182v1
- Date: Wed, 22 Jun 2022 15:43:41 GMT
- Title: Vulnerability Prioritization: An Offensive Security Approach
- Authors: Muhammed Fatih Bulut, Abdulhamid Adebayo, Daby Sow, Steve Ocepek
- Abstract summary: We propose a new way of prioritizing vulnerabilities.
Our approach is inspired by how offensive security practitioners perform penetration testing.
We evaluate our approach with a real-world case study for a large client, and assess the accuracy of machine learning in automating the process end to end.
- Score: 1.6911982356562938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Organizations struggle to handle the sheer number of vulnerabilities in their
cloud environments. The de facto methodology for prioritizing vulnerabilities
is the Common Vulnerability Scoring System (CVSS). However, CVSS has inherent
limitations that make it ill-suited for prioritization. In this work, we
propose a new way of prioritizing vulnerabilities, inspired by how offensive
security practitioners perform penetration testing. We evaluate our approach
with a real-world case study for a large client, and assess the accuracy of
machine learning in automating the process end to end.
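As a rough illustration of the contrast the abstract draws, the sketch below ranks the same vulnerabilities first by CVSS base score alone and then by a heuristic that also weighs signals a penetration tester would check first (public exploit code, external reachability, asset criticality). The field names and weights are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch (not the paper's algorithm): plain CVSS ranking vs. a
# prioritization that also weighs offensive-security signals.

def cvss_rank(vulns):
    """Baseline: order vulnerabilities by CVSS base score alone."""
    return sorted(vulns, key=lambda v: v["cvss"], reverse=True)

def offensive_rank(vulns):
    """Heuristic: boost vulnerabilities that are reachable and exploitable."""
    def priority(v):
        score = v["cvss"]
        if v.get("exploit_available"):   # public exploit code exists
            score *= 1.5
        if v.get("internet_facing"):     # reachable from outside
            score *= 1.3
        if v.get("on_critical_asset"):   # sits on a high-value host
            score *= 1.2
        return score
    return sorted(vulns, key=priority, reverse=True)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_available": False, "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "exploit_available": True, "internet_facing": True},
]
print([v["id"] for v in cvss_rank(vulns)])       # ['CVE-A', 'CVE-B']
print([v["id"] for v in offensive_rank(vulns)])  # ['CVE-B', 'CVE-A']
```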
Related papers
- Boosting Cybersecurity Vulnerability Scanning based on LLM-supported Static Application Security Testing [5.644999288757871]
Large Language Models (LLMs) have demonstrated powerful code analysis capabilities, but their static training data and privacy risks limit their effectiveness.
We propose LSAST, a novel approach that integrates LLMs with SAST scanners to enhance vulnerability detection.
We set a new benchmark for static vulnerability analysis, offering a robust, privacy-conscious solution.
arXiv Detail & Related papers (2024-09-24T04:42:43Z)
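A minimal sketch of what an LLM-plus-SAST integration of this kind could look like: scanner findings, with code context, are handed to a language model for triage. Here `run_sast_scanner` and `query_llm` are hypothetical stand-ins, not LSAST's real interfaces.

```python
# Illustrative pipeline only; both helpers below are hypothetical stubs.

def run_sast_scanner(source_path):
    """Stand-in for a real SAST tool; returns findings with code context."""
    return [{"rule": "SQL_INJECTION", "file": "app.py", "line": 42,
             "snippet": "cursor.execute('SELECT * FROM t WHERE id=' + uid)"}]

def query_llm(prompt):
    """Stand-in for a (preferably locally hosted, privacy-preserving) LLM call."""
    return "Likely true positive: unsanitized input reaches a SQL query."

def triage(source_path):
    for finding in run_sast_scanner(source_path):
        prompt = (
            f"A static analyzer flagged rule {finding['rule']} at "
            f"{finding['file']}:{finding['line']}.\n"
            f"Code: {finding['snippet']}\n"
            "Is this exploitable? Answer briefly."
        )
        finding["llm_assessment"] = query_llm(prompt)
        yield finding

for f in triage("src/"):
    print(f["rule"], "->", f["llm_assessment"])
```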
- No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery [53.08822154199948]
Unsupervised Environment Design (UED) methods have gained recent attention as their adaptive curricula promise to enable agents to be robust to in- and out-of-distribution tasks.
This work investigates how existing UED methods select training environments, focusing on task prioritisation metrics.
We develop a method that directly trains on scenarios with high learnability.
arXiv Detail & Related papers (2024-08-27T14:31:54Z)
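The learnability signal below is one common formulation, p(1 - p) over the agent's success rate p; whether this matches the paper's exact definition is an assumption based on the summary. Scenarios the agent sometimes solves score highest, while mastered and impossible ones score near zero.

```python
# Learnability-style curriculum signal (illustrative formulation).

def learnability(success_rate: float) -> float:
    p = success_rate
    return p * (1.0 - p)   # maximized at p = 0.5

# Rank candidate training environments by this signal.
envs = {"easy_maze": 0.95, "hard_maze": 0.05, "mid_maze": 0.55}
curriculum = sorted(envs, key=lambda e: learnability(envs[e]), reverse=True)
print(curriculum)  # ['mid_maze', 'easy_maze', 'hard_maze']
```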
- SecScore: Enhancing the CVSS Threat Metric Group with Empirical Evidences [0.0]
One of the most widely used vulnerability scoring systems (CVSS) does not account for the increasing likelihood that exploit code will emerge.
We present SecScore, an innovative vulnerability severity score that enhances the CVSS Threat metric group.
arXiv Detail & Related papers (2024-05-14T12:25:55Z)
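To make the idea concrete, here is a toy adjustment of a CVSS base score by an exploit-likelihood estimate that grows with time since disclosure. The probability model and weighting below are made-up placeholders, not SecScore's actual formula.

```python
# Toy model only: weight a CVSS base score by exploit likelihood over time.
import math

def exploit_likelihood(days_since_disclosure: float, k: float = 0.01) -> float:
    """Placeholder: probability an exploit has emerged grows with time."""
    return 1.0 - math.exp(-k * days_since_disclosure)

def adjusted_score(cvss_base: float, days: float) -> float:
    return cvss_base * (0.5 + 0.5 * exploit_likelihood(days))

print(round(adjusted_score(9.8, days=10), 2))   # fresh CVE, likelihood still low
print(round(adjusted_score(9.8, days=365), 2))  # older CVE, likelihood high
```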
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- Automated CVE Analysis for Threat Prioritization and Impact Prediction [4.540236408836132]
We introduce our novel predictive model and tool (called CVEDrill) which revolutionizes CVE analysis and threat prioritization.
CVEDrill accurately estimates the Common Vulnerability Scoring System (CVSS) vector for precise threat mitigation and priority ranking.
It seamlessly automates the classification of CVEs into the appropriate Common Weakness Enumeration (CWE) hierarchy classes.
arXiv Detail & Related papers (2023-09-06T14:34:03Z)
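A minimal sketch of the underlying idea, treating one CVSS vector component (Attack Vector) as a text-classification target over CVE descriptions. This is not CVEDrill's model, and a real system would train on the full NVD corpus; the two examples exist only to make the code run.

```python
# Sketch: predict one CVSS vector component from a CVE description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "Remote attacker can execute arbitrary code via crafted HTTP request",
    "Local user can read sensitive files due to improper permission checks",
]
attack_vector = ["NETWORK", "LOCAL"]  # the AV component of the CVSS vector

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(descriptions, attack_vector)

# Likely ['NETWORK'] given the toy training data above.
print(model.predict(["Unauthenticated remote code execution over the network"]))
```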
- Can An Old Fashioned Feature Extraction and A Light-weight Model Improve Vulnerability Type Identification Performance? [6.423483122892239]
We investigate the problem of vulnerability type identification (VTI).
We evaluate the performance of the well-known and advanced pre-trained models for VTI on a large set of vulnerabilities.
We introduce a lightweight independent component to refine the predictions of the baseline approach.
arXiv Detail & Related papers (2023-06-26T14:28:51Z)
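One plausible reading of such a refinement step, sketched below: blend the baseline model's class probabilities with those of a lightweight classifier (e.g. TF-IDF plus logistic regression). The blending rule and weight are assumptions for illustration, not the paper's exact design.

```python
# Sketch: refine a baseline's class probabilities with a lightweight model.
import numpy as np

def refine(baseline_probs: np.ndarray, light_probs: np.ndarray,
           alpha: float = 0.7) -> np.ndarray:
    """Blend baseline predictions with the lightweight component's output."""
    return alpha * baseline_probs + (1 - alpha) * light_probs

baseline = np.array([0.6, 0.3, 0.1])  # e.g. from a pre-trained code model
light = np.array([0.2, 0.7, 0.1])     # e.g. from TF-IDF + logistic regression
print(refine(baseline, light))         # refined distribution over types
```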
- Approximate Shielding of Atari Agents for Safe Exploration [83.55437924143615]
We propose a principled algorithm for safe exploration based on the concept of shielding.
We present preliminary results that show our approximate shielding algorithm effectively reduces the rate of safety violations.
arXiv Detail & Related papers (2023-04-21T16:19:54Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing safety violations in policy optimization tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
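A toy version of a log-barrier gradient step on a one-dimensional problem: the barrier term blows up as a constraint boundary nears, keeping iterates feasible. LBSGD's actual step-size rule is more careful than the fixed learning rate assumed here.

```python
# Toy log-barrier gradient descent (illustrative, not LBSGD itself).
import numpy as np

def barrier_objective(x, f, g, eta=0.1):
    """The objective the step descends: f(x) - eta * log(-g(x)), g(x) <= 0."""
    return f(x) - eta * np.log(-g(x))

def barrier_step(x, grad_f, g, grad_g, eta=0.1, lr=0.05):
    # d/dx [f(x) - eta*log(-g(x))] = f'(x) + eta * g'(x) / (-g(x))
    grad = grad_f(x) + eta * grad_g(x) / (-g(x))
    return x - lr * grad

# Example: minimize f(x) = x^2 subject to g(x) = x - 1 <= 0 (i.e. x <= 1).
x = 0.9
for _ in range(100):
    x = barrier_step(x, grad_f=lambda x: 2 * x,
                     g=lambda x: x - 1.0, grad_g=lambda x: 1.0)
print(round(x, 3))  # ~ -0.048: near the unconstrained optimum, nudged by the barrier
```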
- Attack Techniques and Threat Identification for Vulnerabilities [1.1689657956099035]
Given the sheer volume of vulnerabilities, prioritization and focus become critical so that limited time is spent on the highest-risk vulnerabilities.
In this work, we use machine learning and natural language processing techniques, as well as several publicly available data sets.
We first map the vulnerabilities to a standard set of common weaknesses, and then common weaknesses to the attack techniques.
This approach yields a Mean Reciprocal Rank (MRR) of 0.95, an accuracy comparable with those reported for state-of-the-art systems.
arXiv Detail & Related papers (2022-06-22T15:27:49Z)
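For reference, Mean Reciprocal Rank averages the reciprocal rank of the first correct answer per query; here, the rank of the first correct attack technique for each vulnerability. The sample rankings below are made up.

```python
# Mean Reciprocal Rank: average of 1/rank of the first correct item per query.

def mean_reciprocal_rank(ranked_lists, relevant):
    total = 0.0
    for ranking, correct in zip(ranked_lists, relevant):
        rr = 0.0
        for rank, item in enumerate(ranking, start=1):
            if item in correct:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_lists)

rankings = [["T1190", "T1059"], ["T1059", "T1190"]]  # predicted ATT&CK techniques
truth = [{"T1190"}, {"T1190"}]                        # correct technique per vuln
print(mean_reciprocal_rank(rankings, truth))          # (1/1 + 1/2) / 2 = 0.75
```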
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
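The sketch below captures the general flavor of a self-supervised, input-space perturbation: no labels are used, and the perturbation simply maximizes feature distortion. The tiny feature extractor and step rule are illustrative assumptions, not the paper's training procedure.

```python
# Self-supervised perturbation sketch: maximize feature distortion, no labels.
import torch
import torch.nn as nn

# Toy feature extractor standing in for a real vision backbone.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))

def self_supervised_attack(x, steps=10, eps=0.1, lr=0.02):
    # Random start so the initial distortion (and its gradient) is nonzero.
    delta = ((torch.rand_like(x) * 2 - 1) * eps).requires_grad_(True)
    clean_feats = feature_extractor(x).detach()
    for _ in range(steps):
        distortion = ((feature_extractor(x + delta) - clean_feats) ** 2).sum()
        distortion.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend on feature distortion
            delta.clamp_(-eps, eps)           # keep the perturbation bounded
            delta.grad.zero_()
    return (x + delta).detach()

x = torch.rand(1, 28, 28)
x_adv = self_supervised_attack(x)
print(float((x_adv - x).abs().max()))  # perturbation stays within eps
```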
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.