A Survey on Data-driven Software Vulnerability Assessment and
Prioritization
- URL: http://arxiv.org/abs/2107.08364v1
- Date: Sun, 18 Jul 2021 04:49:22 GMT
- Authors: Triet H. M. Le, Huaming Chen, M. Ali Babar
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Software Vulnerabilities (SVs) are increasing in complexity and scale, posing
great security risks to many software systems. Given the limited resources in
practice, SV assessment and prioritization help practitioners devise optimal SV
mitigation plans based on various SV characteristics. The surge in SV data
sources and data-driven techniques such as Machine Learning and Deep Learning
has taken SV assessment and prioritization to the next level. Our survey
provides a taxonomy of the past research efforts and highlights the best
practices for data-driven SV assessment and prioritization. We also discuss the
current limitations and propose potential solutions to address such issues.
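To make the notion of prioritization concrete, the following is a minimal sketch of ranking SVs for mitigation based on a couple of common characteristics. The record fields and the ranking rule (known exploits first, then higher CVSS base score) are illustrative assumptions, not a method from the survey itself.

```python
from dataclasses import dataclass

# Hypothetical SV records; the fields shown are illustrative assumptions,
# not the characteristics used by any specific paper in this survey.
@dataclass
class Vulnerability:
    cve_id: str
    cvss_base: float        # CVSS base score, 0.0-10.0
    exploit_available: bool  # whether a public exploit is known

def prioritize(svs):
    """Rank SVs for mitigation: known exploits first, then by CVSS base score."""
    return sorted(svs, key=lambda v: (v.exploit_available, v.cvss_base),
                  reverse=True)

svs = [
    Vulnerability("CVE-2021-0001", 9.8, False),
    Vulnerability("CVE-2021-0002", 7.5, True),
    Vulnerability("CVE-2021-0003", 5.3, False),
]
for v in prioritize(svs):
    print(v.cve_id, v.cvss_base)
```

In this toy ordering, CVE-2021-0002 outranks the higher-scored CVE-2021-0001 because an exploit is already available; real prioritization schemes weigh many more characteristics (exploitability, impact, asset criticality), which is precisely what the surveyed data-driven techniques learn from data.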
Related papers
- REVAL: A Comprehension Evaluation on Reliability and Values of Large Vision-Language Models [59.445672459851274]
REVAL is a comprehensive benchmark designed to evaluate the REliability and VALue of Large Vision-Language Models.
REVAL encompasses over 144K image-text Visual Question Answering (VQA) samples, structured into two primary sections: Reliability and Values.
We evaluate 26 models, including mainstream open-source LVLMs and prominent closed-source models like GPT-4o and Gemini-1.5-Pro.
arXiv Detail & Related papers (2025-03-20T07:54:35Z) - LLM-Safety Evaluations Lack Robustness [58.334290876531036]
We argue that current safety alignment research efforts for large language models are hindered by many intertwined sources of noise.
We propose a set of guidelines for reducing noise and bias in evaluations of future attack and defense papers.
arXiv Detail & Related papers (2025-03-04T12:55:07Z) - EvalSVA: Multi-Agent Evaluators for Next-Gen Software Vulnerability Assessment [17.74561647070259]
We introduce EvalSVA, a team of multi-agent evaluators that autonomously deliberates on and evaluates various aspects of software vulnerability (SV) assessment.
EvalSVA follows a human-like process and generates both reasons and answers for SV assessment.
arXiv Detail & Related papers (2024-12-11T08:00:50Z) - A Comprehensive Study of Shapley Value in Data Analytics [16.11540350411322]
This paper provides the first comprehensive study of Shapley value (SV) used throughout the data analytics (DA) workflow.
We condense four primary challenges of using SV in DA, namely computation efficiency, approximation error, privacy preservation, and interpretability.
We implement SVBench, a modular and open-sourced framework for developing SV applications in different DA tasks.
arXiv Detail & Related papers (2024-12-02T12:54:11Z) - Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset [94.13848736705575]
We introduce Facial Identity Unlearning Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms.
We apply a two-stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels.
Through the evaluation of four baseline VLM unlearning algorithms within FIUBench, we find that all methods remain limited in their unlearning performance.
arXiv Detail & Related papers (2024-11-05T23:26:10Z) - SAFE: Advancing Large Language Models in Leveraging Semantic and Syntactic Relationships for Software Vulnerability Detection [23.7268575752712]
Software vulnerabilities (SVs) have emerged as a prevalent and critical concern for safety-critical security systems.
We propose a novel framework that enhances the capability of large language models to learn and utilize semantic and syntactic relationships from source code data for SVD.
arXiv Detail & Related papers (2024-09-02T00:49:02Z) - Mitigating Data Imbalance for Software Vulnerability Assessment: Does Data Augmentation Help? [0.0]
We show that mitigating data imbalance can significantly improve the predictive performance of models for all the Common Vulnerability Scoring System (CVSS) tasks.
We also discover that simple text augmentation, such as combining random text insertion, deletion, and replacement, can outperform the baseline across the board.
arXiv Detail & Related papers (2024-07-15T13:47:55Z) - A Comprehensive Survey on Underwater Image Enhancement Based on Deep Learning [51.7818820745221]
Underwater image enhancement (UIE) presents a significant challenge within computer vision research.
Despite the development of numerous UIE algorithms, a thorough and systematic review is still absent.
arXiv Detail & Related papers (2024-05-30T04:46:40Z) - What Are We Measuring When We Evaluate Large Vision-Language Models? An Analysis of Latent Factors and Biases [87.65903426052155]
We perform a large-scale transfer learning experiment aimed at discovering latent vision-language skills from data.
We show that generation tasks suffer from a length bias, suggesting benchmarks should balance tasks with varying output lengths.
We present a new dataset, OLIVE, which simulates user instructions in the wild and presents challenges dissimilar to all datasets we tested.
arXiv Detail & Related papers (2024-04-03T02:40:35Z) - Are Latent Vulnerabilities Hidden Gems for Software Vulnerability
Prediction? An Empirical Study [4.830367174383139]
Latent vulnerable functions can increase the number of SVs by 4x on average, and identifying them can correct up to 5k mislabeled functions.
Despite the noise, we show that the state-of-the-art SV prediction model can significantly benefit from such latent SVs.
arXiv Detail & Related papers (2024-01-20T03:36:01Z) - A Survey of Federated Unlearning: A Taxonomy, Challenges and Future
Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z) - A Note on "Towards Efficient Data Valuation Based on the Shapley Value'' [7.4011772612133475]
The Shapley value (SV) has emerged as a promising method for data valuation.
The Group Testing-based SV estimator achieves favorable sample complexity.
arXiv Detail & Related papers (2023-02-22T15:13:45Z) - DeepCVA: Automated Commit-level Vulnerability Assessment with Deep
Multi-task Learning [0.0]
We propose a novel Deep multi-task learning model, DeepCVA, to automate seven Commit-level Vulnerability Assessment tasks simultaneously.
We conduct large-scale experiments on 1,229 vulnerability-contributing commits containing 542 different SVs in 246 real-world software projects.
DeepCVA is the best-performing model with 38% to 59.8% higher Matthews Correlation Coefficient than many supervised and unsupervised baseline models.
arXiv Detail & Related papers (2021-08-18T08:43:36Z) - A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z) - Chance-Constrained Trajectory Optimization for Safe Exploration and
Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.