Using AI Assistants in Software Development: A Qualitative Study on Security Practices and Concerns
- URL: http://arxiv.org/abs/2405.06371v2
- Date: Mon, 14 Oct 2024 21:54:27 GMT
- Title: Using AI Assistants in Software Development: A Qualitative Study on Security Practices and Concerns
- Authors: Jan H. Klemmer, Stefan Albert Horstmann, Nikhil Patnaik, Cordelia Ludden, Cordell Burton Jr., Carson Powers, Fabio Massacci, Akond Rahman, Daniel Votipka, Heather Richter Lipford, Awais Rashid, Alena Naiakshina, Sascha Fahl
- Abstract summary: Recent research has demonstrated that AI-generated code can contain security issues.
How software professionals balance AI assistant usage and security remains unclear.
This paper investigates how software professionals use AI assistants in secure software development.
- Abstract: Following the recent release of AI assistants, such as OpenAI's ChatGPT and GitHub Copilot, the software industry quickly adopted these tools for software development tasks, e.g., generating code or consulting AI for advice. While recent research has demonstrated that AI-generated code can contain security issues, how software professionals balance AI assistant usage and security remains unclear. This paper investigates how software professionals use AI assistants in secure software development, what security implications and considerations arise, and what impact they foresee on secure software development. We conducted 27 semi-structured interviews with software professionals, including software engineers, team leads, and security testers. We also reviewed 190 relevant Reddit posts and comments to gain insights into the current discourse surrounding AI assistants for software development. Our analysis of the interviews and Reddit posts finds that despite many security and quality concerns, participants widely use AI assistants for security-critical tasks, e.g., code generation, threat modeling, and vulnerability detection. Their overall mistrust leads them to check AI suggestions in similar ways to human-written code, although they expect improvements and therefore anticipate heavier use of AI for security tasks in the future. We conclude with recommendations for software professionals to critically check AI suggestions, AI creators to improve suggestion security and capabilities for ethical security tasks, and academic researchers to consider general-purpose AI in software development.
Related papers
- "I Don't Use AI for Everything": Exploring Utility, Attitude, and Responsibility of AI-empowered Tools in Software Development [19.851794567529286]
This study investigates the adoption, impact, and security considerations of AI-empowered tools in the software development process.
Our findings reveal widespread adoption of AI tools across various stages of software development.
arXiv Detail & Related papers (2024-09-20T09:17:10Z) - Future of Artificial Intelligence in Agile Software Development [0.0]
AI can assist software development managers, software testers, and other team members by leveraging LLMs, GenAI models, and AI agents.
AI has the potential to increase efficiency and reduce the risks encountered by the project management team.
arXiv Detail & Related papers (2024-08-01T16:49:50Z) - The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs)
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - AI Product Security: A Primer for Developers [0.685316573653194]
It is imperative to understand the threats to machine learning products and avoid common pitfalls in AI product development.
This article is addressed to developers, designers, managers and researchers of AI software products.
arXiv Detail & Related papers (2023-04-18T05:22:34Z) - Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS)
Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, utilizing this effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Artificial Intelligence in Software Testing : Impact, Problems,
Challenges and Prospect [0.0]
The study aims to recognize and explain some of the biggest challenges software testers face while applying AI to testing.
The paper also proposes some key contributions of AI in the future to the domain of software testing.
arXiv Detail & Related papers (2022-01-14T10:21:51Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Opening the Software Engineering Toolbox for the Assessment of
Trustworthy AI [17.910325223647362]
We argue for the application of software engineering and testing practices for the assessment of trustworthy AI.
We make the connection between the seven key requirements as defined by the European Commission's AI high-level expert group.
arXiv Detail & Related papers (2020-07-14T08:16:15Z)