Towards an Improved Understanding of Software Vulnerability Assessment
Using Data-Driven Approaches
- URL: http://arxiv.org/abs/2207.11708v3
- Date: Tue, 20 Jun 2023 08:56:29 GMT
- Title: Towards an Improved Understanding of Software Vulnerability Assessment
Using Data-Driven Approaches
- Authors: Triet H. M. Le
- Abstract summary: The thesis advances the field of software security by providing knowledge and automation support for software vulnerability assessment.
The key contributions include a systematisation of knowledge, along with a suite of novel data-driven techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The thesis advances the field of software security by providing knowledge and
automation support for software vulnerability assessment using data-driven
approaches. Software vulnerability assessment provides important and
multifaceted information to prevent and mitigate dangerous cyber-attacks in the
wild. The key contributions include a systematisation of knowledge, along with
a suite of novel data-driven techniques and practical recommendations for
researchers and practitioners in the area. The thesis results help improve the
understanding and inform the practice of assessing ever-increasing
vulnerabilities in real-world software systems. This in turn enables more
thorough and timely fixing prioritisation and planning of these critical
security issues.
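As a rough illustration of what a data-driven vulnerability assessment technique can look like (a sketch of the general idea, not the thesis's actual method), the snippet below assigns a severity label to a vulnerability description by nearest-neighbour matching over bag-of-words vectors. All descriptions and labels are invented for the example.

```python
from collections import Counter
import math

# Hypothetical labelled vulnerability descriptions (illustrative only,
# not taken from the thesis or any real dataset).
TRAIN = [
    ("remote attacker can execute arbitrary code via crafted packet", "CRITICAL"),
    ("buffer overflow allows remote code execution", "CRITICAL"),
    ("local user can read sensitive configuration file", "MEDIUM"),
    ("improper input validation leads to denial of service", "MEDIUM"),
]

def bow(text):
    """Bag-of-words vector as a token -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_severity(description):
    """Assign the severity of the most similar training description."""
    vec = bow(description)
    best = max(TRAIN, key=lambda ex: cosine(vec, bow(ex[0])))
    return best[1]

print(predict_severity("crafted packet lets a remote attacker execute code"))
# prints "CRITICAL" (closest to the first training description)
```

Real data-driven assessors replace the toy similarity search with learned models over large vulnerability databases, but the input/output shape (free-text report in, assessment label out) is the same.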
Related papers
- Bringing Order Amidst Chaos: On the Role of Artificial Intelligence in Secure Software Engineering
The ever-evolving technological landscape offers both opportunities and threats, creating a dynamic space where chaos and order compete.
Secure software engineering (SSE) must continuously address vulnerabilities that endanger software systems.
This thesis seeks to bring order to the chaos in SSE by addressing domain-specific differences that impact AI accuracy.
arXiv Detail & Related papers (2025-01-09T11:38:58Z)
- Open Problems in Machine Unlearning for AI Safety
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Resilient Cloud cluster with DevSecOps security model, automates a data analysis, vulnerability search and risk calculation
The article presents the main methods of deploying web applications and ways to increase the level of information security at all stages of product development.
The cloud cluster was deployed using Terraform and the Jenkins pipeline, which checks program code for vulnerabilities.
The algorithm for calculating risk and losses is based on statistical data and the concept of the FAIR information risk assessment methodology.
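The FAIR-style calculation mentioned above treats risk as loss event frequency times loss magnitude. A minimal Monte Carlo sketch of that idea follows; the event frequency, success probability, and loss range are invented assumptions, not figures from the article.

```python
import random

random.seed(7)  # reproducible illustration

def annualised_loss(threat_events_per_year, p_success,
                    loss_low, loss_high, trials=10_000):
    """Monte Carlo estimate of annualised loss under FAIR-style assumptions:
    loss event frequency = threat event frequency * probability of success,
    loss magnitude drawn uniformly from an assumed range (illustrative)."""
    total = 0.0
    for _ in range(trials):
        # How many threat events succeed this simulated year.
        loss_events = sum(random.random() < p_success
                          for _ in range(threat_events_per_year))
        # Add up a sampled loss magnitude for each loss event.
        total += sum(random.uniform(loss_low, loss_high)
                     for _ in range(loss_events))
    return total / trials

# Hypothetical figures: 12 threat events/year, 25% succeed, $10k-$50k per event.
# Expected value is around 12 * 0.25 * $30k = $90k per year.
print(round(annualised_loss(12, 0.25, 10_000, 50_000)))
```

In a full FAIR analysis the input distributions would be calibrated from statistical data rather than assumed uniform, but the frequency-times-magnitude decomposition is the same.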
arXiv Detail & Related papers (2024-12-15T13:11:48Z)
- ChatNVD: Advancing Cybersecurity Vulnerability Assessment with Large Language Models
This paper explores the potential application of Large Language Models (LLMs) to enhance the assessment of software vulnerabilities.
We develop three variants of ChatNVD, utilizing three prominent LLMs: GPT-4o mini by OpenAI, Llama 3 by Meta, and Gemini 1.5 Pro by Google.
To evaluate their efficacy, we conduct a comparative analysis of these models using a comprehensive questionnaire comprising common security vulnerability questions.
arXiv Detail & Related papers (2024-12-06T03:45:49Z)
- Charting a Path to Efficient Onboarding: The Role of Software Visualization
The present study aims to explore the familiarity of managers, leaders, and developers with software visualization tools.
The study combined quantitative and qualitative analyses of data collected from practitioners through questionnaires and semi-structured interviews.
arXiv Detail & Related papers (2024-01-17T21:30:45Z)
- Software Repositories and Machine Learning Research in Cyber Security
The integration of robust cyber security defenses has become essential across all phases of software development.
Attempts have been made to leverage topic modeling and machine learning for the detection of these early-stage vulnerabilities in the software requirements process.
arXiv Detail & Related papers (2023-11-01T17:46:07Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Perspectives on risk prioritization of data center vulnerabilities using rank aggregation and multi-objective optimization
This review surveys vulnerability ranking techniques and discusses how multi-objective optimization could benefit the management of vulnerability risk prioritization.
The main contribution of this work is to point out multi-objective optimization as a promising but rarely explored strategy for prioritizing vulnerabilities, enabling better time management and increased security.
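One common multi-objective framing is to compute the Pareto front of vulnerabilities: those not dominated on every objective by any other, and therefore candidates to fix first. The sketch below uses two objectives (a CVSS-like severity and an exploit-likelihood score) with invented values; the actual objectives and data in the paper may differ.

```python
# Each hypothetical vulnerability scored on two objectives to maximise:
# severity (e.g. a CVSS-like base score) and exploit likelihood.
vulns = {
    "CVE-A": (9.8, 0.9),
    "CVE-B": (7.5, 0.95),
    "CVE-C": (9.8, 0.3),
    "CVE-D": (5.0, 0.2),
    "CVE-E": (6.1, 0.1),
}

def dominates(a, b):
    """a dominates b if a is >= on every objective and > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(items):
    """Vulnerabilities not dominated by any other: prioritise these."""
    return sorted(
        name for name, score in items.items()
        if not any(dominates(other, score) for other in items.values())
    )

print(pareto_front(vulns))
# prints ['CVE-A', 'CVE-B'] -- every other entry is dominated by CVE-A
```

Unlike a single aggregated score, the front preserves trade-offs: CVE-B survives despite its lower severity because nothing beats it on both objectives at once.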
arXiv Detail & Related papers (2022-02-12T11:10:22Z)
- VELVET: a noVel Ensemble Learning approach to automatically locate VulnErable sTatements
This paper presents VELVET, a novel ensemble learning approach to locate vulnerable statements in source code.
Our model combines graph-based and sequence-based neural networks to successfully capture the local and global context of a program graph.
VELVET achieves 99.6% and 43.6% top-1 accuracy over synthetic data and real-world data, respectively.
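A minimal sketch of the ensemble idea, with made-up per-statement scores standing in for the trained graph-based and sequence-based networks; VELVET's actual combination scheme may differ from the simple average used here.

```python
# Stand-ins for trained models: each maps a statement (by line number) to a
# vulnerability score in [0, 1]. In VELVET these would come from a
# graph-based and a sequence-based neural network; the values are invented.
graph_scores = {1: 0.20, 2: 0.85, 3: 0.40, 4: 0.10}
seq_scores   = {1: 0.30, 2: 0.70, 3: 0.90, 4: 0.05}

def ensemble_top1(a, b):
    """Average the two models' scores per statement and return the
    statement with the highest combined score (top-1 localisation)."""
    combined = {line: (a[line] + b[line]) / 2 for line in a}
    return max(combined, key=combined.get)

print(ensemble_top1(graph_scores, seq_scores))
# prints 2 -- highest combined score (0.85 + 0.70) / 2 = 0.775
```

The point of ensembling is visible even in the toy numbers: the sequence model alone would pick statement 3, but averaging in the graph model's local-context view shifts the top-1 prediction to statement 2.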
arXiv Detail & Related papers (2021-12-20T22:45:27Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- Dos and Don'ts of Machine Learning in Computer Security
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.