The Role of Generative AI in Strengthening Secure Software Coding Practices: A Systematic Perspective
- URL: http://arxiv.org/abs/2504.19461v1
- Date: Mon, 28 Apr 2025 04:01:12 GMT
- Title: The Role of Generative AI in Strengthening Secure Software Coding Practices: A Systematic Perspective
- Authors: Hathal S. Alwageed, Rafiq Ahmad Khan
- Abstract summary: The integration of Generative AI (GenAI) into software development holds significant potential for improving secure coding practices. This paper aims to systematically study the impact of GenAI on enhancing secure coding practices.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As software security threats continue to evolve, the demand for innovative approaches to secure coding has grown tremendously. The integration of Generative AI (GenAI) into software development holds significant potential for improving secure coding practices. This paper systematically studies the impact of GenAI on secure coding practices and software security, setting forth its potential benefits, challenges, and implications. To outline the contribution of AI-driven code generation tools, we conduct a structured review of recent literature, industry applications, and empirical studies of how these tools help mitigate security risks, comply with secure coding standards, and make software development more efficient. We hope that our findings will benefit researchers, software engineers, and cybersecurity professionals alike in integrating GenAI into a secure development workflow without losing the advantages GenAI provides. Finally, the state-of-the-art advances and future directions discussed in this study can contribute to the ongoing discourse on AI-assisted secure software engineering.
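As an illustration of the kind of secure coding practice the abstract refers to (not an example drawn from the paper itself), the following sketch shows a classic fix that AI-driven code review tools commonly suggest: replacing string-formatted SQL with a parameterized query to prevent SQL injection (CWE-89). The table schema and function name are hypothetical.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern that secure-coding tools typically flag:
    #   conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    # Parameterized query: the driver binds the input safely,
    # so attacker-controlled strings cannot alter the SQL statement.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Small demonstration with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user(conn, "alice"))           # [(1, 'alice')]
print(find_user(conn, "x' OR '1'='1"))    # [] -- injection attempt matches nothing
```

The injection string is treated as a literal value rather than SQL, so the second query returns no rows instead of leaking the whole table.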
Related papers
- Comparative Analysis of AI-Driven Security Approaches in DevSecOps: Challenges, Solutions, and Future Directions [0.0]
This study conducts a systematic literature review to analyze and compare AI-driven security solutions in DevSecOps.
The findings reveal gaps in empirical validation, scalability, and integration of AI in security automation.
The study proposes future directions for optimizing AI-based security frameworks in DevSecOps.
arXiv Detail & Related papers (2025-04-27T08:18:11Z) - AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement [73.0700818105842]
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety. AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques. We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z) - Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - SOK: Exploring Hallucinations and Security Risks in AI-Assisted Software Development with Insights for LLM Deployment [0.0]
Large Language Models (LLMs) such as GitHub Copilot, ChatGPT, Cursor AI, and Codeium AI have revolutionized the coding landscape. This paper provides a comprehensive analysis of the benefits and risks associated with AI-powered coding tools.
arXiv Detail & Related papers (2025-01-31T06:00:27Z) - "I Don't Use AI for Everything": Exploring Utility, Attitude, and Responsibility of AI-empowered Tools in Software Development [19.851794567529286]
This study investigates the adoption, impact, and security considerations of AI-empowered tools in the software development process.
Our findings reveal widespread adoption of AI tools across various stages of software development.
arXiv Detail & Related papers (2024-09-20T09:17:10Z) - Future of Artificial Intelligence in Agile Software Development [0.0]
AI can assist software development managers, software testers, and other team members by leveraging LLMs, GenAI models, and AI agents.
AI has the potential to increase efficiency and reduce the risks encountered by the project management team.
arXiv Detail & Related papers (2024-08-01T16:49:50Z) - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context. We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs)
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z) - Using AI Assistants in Software Development: A Qualitative Study on Security Practices and Concerns [23.867795468379743]
Recent research has demonstrated that AI-generated code can contain security issues.
How software professionals balance AI assistant usage and security remains unclear.
This paper investigates how software professionals use AI assistants in secure software development.
arXiv Detail & Related papers (2024-05-10T10:13:19Z) - Making Software Development More Diverse and Inclusive: Key Themes, Challenges, and Future Directions [50.545824691484796]
We identify six themes around the challenges and opportunities to improve Software Developer Diversity and Inclusion (SDDI). We identify benefits, harms, and future research directions for the four main themes. We discuss the remaining two themes, Artificial Intelligence & SDDI and AI & Computer Science education, which have a cross-cutting effect on the other themes.
arXiv Detail & Related papers (2024-04-10T16:18:11Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper provides a comprehensive analysis of existing concepts of intelligence from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
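Several of the papers above examine security risks and hallucinations in AI-suggested code. As a minimal, hypothetical sketch of how such suggestions might be screened before acceptance (not a technique taken from any of the listed papers), the following uses Python's `ast` module to flag a small, illustrative deny-list of risky calls in generated code:

```python
import ast

# Illustrative deny-list only; real reviews cover far more patterns.
RISKY_CALLS = {"eval", "exec", "os.system"}

def _dotted(func) -> str:
    """Return a dotted name like 'os.system' for simple call targets."""
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def flag_risky_calls(source: str) -> list[str]:
    """Return the names of deny-listed calls found in AI-suggested code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = _dotted(node.func)
            if name in RISKY_CALLS:
                findings.append(name)
    return findings

snippet = "import os\nos.system('rm -rf /tmp/cache')\n"
print(flag_risky_calls(snippet))  # ['os.system']
```

Static checks like this are only a first filter; they complement, rather than replace, the human review and secure-coding standards the surveyed papers emphasize.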
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.