A Smart and Defensive Human-Machine Approach to Code Analysis
- URL: http://arxiv.org/abs/2108.03294v2
- Date: Tue, 10 Aug 2021 12:16:05 GMT
- Title: A Smart and Defensive Human-Machine Approach to Code Analysis
- Authors: Fitzroy D. Nembhard, Marco M. Carvalho
- Abstract summary: We propose a method that employs virtual assistants to work with programmers to ensure that software is as safe as possible.
The proposed method employs a recommender system that uses various metrics to help programmers select the most appropriate code analysis tool for their project.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Static analysis remains one of the most popular approaches for detecting and
correcting poor or vulnerable program code. It involves the examination of code
listings, test results, or other documentation to identify errors, violations
of development standards, or other problems, with the ultimate goal of fixing
these errors so that systems and software are as secure as possible. There
exists a plethora of static analysis tools, which makes it challenging for
businesses and programmers to select a tool to analyze their program code. It
is imperative to find ways to improve code analysis so that it can be employed
by cyber defenders to mitigate security risks. In this research, we propose a
method that employs virtual assistants to work with programmers to
ensure that software is as safe as possible in order to protect
safety-critical systems from data breaches and other attacks. The proposed
method employs a recommender system that uses various metrics to help
programmers select the most appropriate code analysis tool for their project
and guides them through the analysis process. The system further tracks the
user's behavior regarding the adoption of the recommended practices.
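The abstract does not include an implementation, so the following is only a minimal sketch, under stated assumptions, of what a metrics-based tool recommender of the kind described above could look like. The tool entries, metric names, weights, and scores are illustrative placeholders, not the authors' actual system or measured data.

```python
# Minimal, illustrative sketch of a metrics-based recommender for code
# analysis tools. Tool entries, metrics, and weights are assumptions made
# for illustration; they do not reproduce the paper's actual system.
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    languages: set                                 # languages the tool can analyze
    metrics: dict = field(default_factory=dict)    # e.g. precision, speed, ease_of_use in [0, 1]

TOOLS = [
    Tool("SonarQube", {"java", "python", "javascript"},
         {"precision": 0.8, "speed": 0.6, "ease_of_use": 0.7}),
    Tool("FindBugs", {"java"},
         {"precision": 0.7, "speed": 0.8, "ease_of_use": 0.6}),
    Tool("PMD", {"java"},
         {"precision": 0.6, "speed": 0.9, "ease_of_use": 0.8}),
]

def recommend(language, weights, tools=TOOLS):
    """Rank tools that support `language` by a weighted sum of their metrics."""
    candidates = [t for t in tools if language in t.languages]
    scored = [(sum(weights[m] * t.metrics.get(m, 0.0) for m in weights), t.name)
              for t in candidates]
    return [name for _, name in sorted(scored, reverse=True)]

if __name__ == "__main__":
    # A user who values precision over analysis speed for a Java project.
    print(recommend("java", {"precision": 0.5, "speed": 0.2, "ease_of_use": 0.3}))
```

A fuller implementation along the lines of the abstract would also log whether the user adopts the recommended tool and practices, so that the virtual assistant can track adoption over time.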
Related papers
- Harnessing Large Language Models for Software Vulnerability Detection: A Comprehensive Benchmarking Study [1.03590082373586]
We propose using large language models (LLMs) to assist in finding vulnerabilities in source code.
The aim is to test multiple state-of-the-art LLMs and identify the best prompting strategies.
We find that LLMs can pinpoint many more issues than traditional static analysis tools, outperforming them in terms of recall and F1 score.
arXiv Detail & Related papers (2024-05-24T14:59:19Z) - Efficacy of static analysis tools for software defect detection on open-source projects [0.0]
The study used popular analysis tools such as SonarQube, PMD, Checkstyle, and FindBugs to perform the comparison.
The study results show that SonarQube performs considerably better than all other tools in terms of defect detection.
arXiv Detail & Related papers (2024-05-20T19:05:32Z) - Understanding Hackers' Work: An Empirical Study of Offensive Security
Practitioners [0.0]
Offensive security tests are a common way to proactively discover potential vulnerabilities.
The chronic lack of available white-hat hackers prevents sufficient security test coverage of software.
Research into automation tries to alleviate this problem by improving the efficiency of security testing.
arXiv Detail & Related papers (2023-08-14T10:35:26Z) - Using Machine Learning To Identify Software Weaknesses From Software
Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map CWE categories to the PROMISE_exp dataset. Naive Bayes, support vector machine (SVM), decision tree, neural network, and convolutional neural network (CNN) algorithms were tested; a minimal sketch of such a pipeline appears after this list.
arXiv Detail & Related papers (2023-08-10T13:19:10Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the
Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Constrained Adversarial Learning and its applicability to Automated
Software Testing: a systematic review [0.0]
This systematic review is focused on the current state-of-the-art of constrained data generation methods applied for adversarial learning and software testing.
It aims to guide researchers and developers to enhance testing tools with adversarial learning methods and improve the resilience and robustness of their digital systems.
arXiv Detail & Related papers (2023-03-14T00:27:33Z) - CodeLMSec Benchmark: Systematically Evaluating and Finding Security
Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z) - Developing Hands-on Labs for Source Code Vulnerability Detection with AI [0.0]
We propose a framework including learning modules and hands-on labs to guide future IT professionals towards developing secure programming habits.
In this thesis, our goal is to design learning modules with a set of hands-on labs that introduce students to secure programming practices using source code and log file analysis tools.
arXiv Detail & Related papers (2023-02-01T20:53:58Z) - Constrained Reinforcement Learning for Robotics via Scenario-Based
Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Evaluating the Safety of Deep Reinforcement Learning Models using
Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows us to efficiently evaluate safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
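The entries above are summaries rather than implementations, but the requirements-to-weakness study is concrete enough to sketch. Below is a minimal, assumed pipeline that mirrors the LSA-plus-classifier combination its abstract mentions; the tiny in-line requirements, the CWE labels, and the scikit-learn component choices are placeholders for illustration, not the PROMISE_exp corpus or the study's actual setup.

```python
# Illustrative sketch: LSA-style features over requirement text feeding a
# classifier that predicts a CWE-like weakness category. The in-line data
# and labels are placeholders, not the PROMISE_exp corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

requirements = [
    "The system shall validate all user input before processing.",
    "Passwords shall be stored using a salted hash.",
    "The application shall log all failed login attempts.",
    "Uploaded files shall be scanned before being saved.",
]
labels = ["CWE-20", "CWE-256", "CWE-778", "CWE-434"]  # placeholder categories

pipeline = make_pipeline(
    TfidfVectorizer(stop_words="english"),  # keyword weighting
    TruncatedSVD(n_components=2),           # latent semantic analysis
    LinearSVC(),                            # one of the classifiers the study tests (SVM)
)
pipeline.fit(requirements, labels)
print(pipeline.predict(["User input shall be sanitized before use."]))
```

Swapping in a decision tree or neural network classifier at the last pipeline step would mirror the other comparisons the study describes.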
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.