Perspectives on risk prioritization of data center vulnerabilities using
rank aggregation and multi-objective optimization
- URL: http://arxiv.org/abs/2202.07466v1
- Date: Sat, 12 Feb 2022 11:10:22 GMT
- Authors: Bruno Grisci, Gabriela Kuhn, Felipe Colombelli, Vítor Matter, Leomar
Lima, Karine Heinen, Mauricio Pegoraro, Marcio Borges, Sandro Rigo, Jorge
Barbosa, Rodrigo da Rosa Righi, Cristiano André da Costa, Gabriel de
Oliveira Ramos
- Abstract summary: This review presents a survey of vulnerability ranking techniques and promotes a discussion on how multi-objective optimization could benefit the management of vulnerability risk prioritization.
The main contribution of this work is to point out multi-objective optimization as a rarely explored but promising strategy for prioritizing vulnerabilities, enabling better time management and increased security.
- Score: 4.675433981885177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, data has become an invaluable asset to entities and companies, and
keeping it secure represents a major challenge. Data centers are responsible
for storing data provided by software applications. Nevertheless, the number of
vulnerabilities has been increasing every day. Managing such vulnerabilities is
essential for building a reliable and secure network environment. Releasing
patches to fix security flaws in software is a common practice to handle these
vulnerabilities. However, prioritization becomes crucial for organizations with
an increasing number of vulnerabilities since time and resources to fix them
are usually limited. This review intends to present a survey of vulnerability
ranking techniques and promote a discussion on how multi-objective optimization
could benefit the management of vulnerabilities risk prioritization. The
state-of-the-art approaches for risk prioritization were reviewed, intending to
develop an effective model for ranking vulnerabilities in data centers. The
main contribution of this work is to point out multi-objective optimization as
a rarely explored but promising strategy for prioritizing vulnerabilities,
enabling better time management and increased security.
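The multi-objective strategy the abstract advocates can be illustrated with a minimal Pareto-dominance filter. This is a hedged sketch, not the paper's method: the two objectives (maximize CVSS severity, minimize remediation effort) and the CVE labels are hypothetical examples of the kind of trade-off a data center team might face.

```python
# Sketch (illustrative only): multi-objective vulnerability prioritization
# via Pareto dominance over two hypothetical objectives --
# maximize CVSS severity, minimize estimated remediation effort.
from dataclasses import dataclass


@dataclass
class Vuln:
    cve_id: str
    cvss: float       # severity, 0.0-10.0 (higher is more urgent)
    fix_hours: float  # estimated remediation effort (lower is better)


def dominates(a: Vuln, b: Vuln) -> bool:
    """a dominates b if it is at least as good on both objectives
    and strictly better on at least one."""
    return (a.cvss >= b.cvss and a.fix_hours <= b.fix_hours
            and (a.cvss > b.cvss or a.fix_hours < b.fix_hours))


def pareto_front(vulns):
    """Return the non-dominated vulnerabilities: the best available
    trade-offs between severity and effort, to be patched first."""
    return [v for v in vulns
            if not any(dominates(o, v) for o in vulns if o is not v)]


vulns = [
    Vuln("CVE-A", cvss=9.8, fix_hours=40),
    Vuln("CVE-B", cvss=7.5, fix_hours=2),
    Vuln("CVE-C", cvss=7.0, fix_hours=8),   # dominated by CVE-B
    Vuln("CVE-D", cvss=9.8, fix_hours=16),  # dominates CVE-A
]
front = pareto_front(vulns)
print([v.cve_id for v in front])  # -> ['CVE-B', 'CVE-D']
```

Unlike a single weighted score, the Pareto front surfaces every defensible prioritization trade-off at once; a ranking (e.g., via rank aggregation) can then be applied within the front.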
Related papers
- RedSage: A Cybersecurity Generalist LLM [45.91667919408369]
RedSage is an open-source, locally deployable cybersecurity assistant with domain-aware pretraining and post-training. We use large-scale web filtering and manual collection of high-quality resources, spanning 28.6K documents across frameworks, offensive techniques, and security tools. RedSage is evaluated on established cybersecurity benchmarks (e.g., CTI-Bench, CyberMetric, SECURE) and general LLM benchmarks to assess broader generalization.
arXiv Detail & Related papers (2026-01-29T18:59:57Z) - ORCA -- An Automated Threat Analysis Pipeline for O-RAN Continuous Development [57.61878484176942]
Open-Radio Access Network (O-RAN) integrates numerous software components in a cloud-like deployment, opening the radio access network to previously unconsidered security threats. Current vulnerability assessment practices often rely on manual, labor-intensive, and subjective investigations, leading to inconsistencies in the threat analysis. We propose an automated pipeline that leverages Natural Language Processing (NLP) to minimize human intervention and associated biases.
arXiv Detail & Related papers (2026-01-20T07:31:59Z) - An empirical analysis of zero-day vulnerabilities disclosed by the zero day initiative [0.0]
This study analyzes the Zero Day Initiative (ZDI) vulnerability disclosures reported between January and April 2024 (Cole [2025]), comprising a total of 415 vulnerabilities. The primary objectives of this work are to identify trends in zero-day vulnerability disclosures, examine severity distributions across vendors, and investigate which vulnerability characteristics are most indicative of high severity.
arXiv Detail & Related papers (2025-12-16T23:15:19Z) - An Empirical Study on the Security Vulnerabilities of GPTs [48.12756684275687]
GPTs are one kind of customized AI agents based on OpenAI's large language models. We present an empirical study on the security vulnerabilities of GPTs.
arXiv Detail & Related papers (2025-11-28T13:30:25Z) - From Model to Breach: Towards Actionable LLM-Generated Vulnerabilities Reporting [43.57360781012506]
We show that even the latest open-weight models are vulnerable in the earliest reported vulnerability scenarios. We introduce a new severity metric that reflects the risk posed by an LLM-generated vulnerability. To encourage the mitigation of the most serious and prevalent vulnerabilities, we use PE to define the Model Exposure (ME) score.
arXiv Detail & Related papers (2025-11-06T16:52:27Z) - Revisiting Vulnerability Patch Localization: An Empirical Study and LLM-Based Solution [44.388332647211776]
Open-source software vulnerability patch detection is a critical component for maintaining software security and ensuring software supply chain integrity. Traditional detection methods face significant scalability challenges when processing large volumes of commit histories. We propose a novel two-stage framework that combines version-driven candidate filtering with large language model-based multi-round dialogue voting.
arXiv Detail & Related papers (2025-09-19T09:09:55Z) - Vulnerability Assessment Combining CVSS Temporal Metrics and Bayesian Networks [0.0]
This work presents an innovative approach by incorporating the temporal dimension into vulnerability assessment. The proposed approach dynamically computes the Temporal Score and updates the CVSS Base Score by processing data on exploits and fixes from vulnerability databases.
arXiv Detail & Related papers (2025-06-23T14:53:17Z) - CyberGym: Evaluating AI Agents' Cybersecurity Capabilities with Real-World Vulnerabilities at Scale [46.76144797837242]
Large language model (LLM) agents are becoming increasingly skilled at handling cybersecurity tasks autonomously. Existing benchmarks fall short, often failing to capture real-world scenarios or being limited in scope. We introduce CyberGym, a large-scale and high-quality cybersecurity evaluation framework featuring 1,507 real-world vulnerabilities.
arXiv Detail & Related papers (2025-06-03T07:35:14Z) - Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents in five critical dimensions.
We identify major challenges such as vulnerability to adversarial attacks and cascading failure modes in sequential decision-making.
As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z) - Towards Robust Stability Prediction in Smart Grids: GAN-based Approach under Data Constraints and Adversarial Challenges [53.2306792009435]
We introduce a novel framework to detect instability in smart grids by employing only stable data.
It relies on a Generative Adversarial Network (GAN) where the generator is trained to create instability data that are used along with stable data to train the discriminator.
Our solution, tested on a dataset composed of real-world stable and unstable samples, achieves accuracy of up to 97.5% in predicting grid stability and up to 98.9% in detecting adversarial attacks.
arXiv Detail & Related papers (2025-01-27T20:48:25Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - CRepair: CVAE-based Automatic Vulnerability Repair Technology [1.147605955490786]
Software vulnerabilities pose significant threats to the integrity, security, and reliability of modern software and its application data.
To address the challenges of vulnerability repair, researchers have proposed various solutions, with learning-based automatic vulnerability repair techniques gaining widespread attention.
This paper proposes CRepair, a CVAE-based automatic vulnerability repair technology aimed at fixing security vulnerabilities in system code.
arXiv Detail & Related papers (2024-11-08T12:55:04Z) - Safety vs. Performance: How Multi-Objective Learning Reduces Barriers to Market Entry [86.79268605140251]
We study whether there are insurmountable barriers to entry in emerging markets for large language models.
We show that the required number of data points can be significantly smaller than the incumbent company's dataset size.
Our results demonstrate how multi-objective considerations can fundamentally reduce barriers to entry.
arXiv Detail & Related papers (2024-09-05T17:45:01Z) - Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z) - VulZoo: A Comprehensive Vulnerability Intelligence Dataset [12.229092589037808]
VulZoo is a comprehensive vulnerability intelligence dataset that covers 17 popular vulnerability information sources.
We make VulZoo publicly available and maintain it with incremental updates to facilitate future research.
arXiv Detail & Related papers (2024-06-24T06:39:07Z) - A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities [0.29998889086656577]
The relentless process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals.
We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK.
Our results show an average improvement of 71.5%-91.3% in identifying vulnerabilities likely to be targeted and exploited by cyber threat actors.
arXiv Detail & Related papers (2024-06-09T23:29:12Z) - Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z) - Deep VULMAN: A Deep Reinforcement Learning-Enabled Cyber Vulnerability
Management Framework [4.685954926214926]
Cyber vulnerability management is a critical function of a cybersecurity operations center (CSOC) that helps protect organizations against cyber-attacks on their computer and network systems.
The current approaches are deterministic and one-time decision-making methods, which do not consider future uncertainties when prioritizing and selecting vulnerabilities for mitigation.
We propose a novel framework, Deep VULMAN, consisting of a deep reinforcement learning agent and an integer programming method to fill this gap in the cyber vulnerability management process.
arXiv Detail & Related papers (2022-08-03T22:32:48Z) - Towards an Improved Understanding of Software Vulnerability Assessment
Using Data-Driven Approaches [0.0]
The thesis advances the field of software security by providing knowledge and automation support for software vulnerability assessment.
The key contributions include a systematisation of knowledge, along with a suite of novel data-driven techniques.
arXiv Detail & Related papers (2022-07-24T10:22:28Z) - VELVET: a noVel Ensemble Learning approach to automatically locate
VulnErable sTatements [62.93814803258067]
This paper presents VELVET, a novel ensemble learning approach to locate vulnerable statements in source code.
Our model combines graph-based and sequence-based neural networks to successfully capture the local and global context of a program graph.
VELVET achieves 99.6% and 43.6% top-1 accuracy over synthetic data and real-world data, respectively.
arXiv Detail & Related papers (2021-12-20T22:45:27Z) - Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks,
and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.