Adoption of Generative Artificial Intelligence in the German Software Engineering Industry: An Empirical Study
- URL: http://arxiv.org/abs/2601.16700v1
- Date: Fri, 23 Jan 2026 12:42:33 GMT
- Title: Adoption of Generative Artificial Intelligence in the German Software Engineering Industry: An Empirical Study
- Authors: Ludwig Felder, Tobias Eisenreich, Mahsa Fischer, Stefan Wagner, Chunyang Chen
- Abstract summary: Generative artificial intelligence (GenAI) tools have seen rapid adoption among software developers. While adoption rates in the industry are rising, the underlying factors influencing the effective use of these tools have not been thoroughly investigated. This issue is particularly relevant in environments with stringent regulatory requirements, such as Germany. No empirical study has systematically examined the adoption dynamics of GenAI tools within the German context.
- Score: 9.442926409509038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative artificial intelligence (GenAI) tools have seen rapid adoption among software developers. While adoption rates in the industry are rising, the underlying factors influencing the effective use of these tools, including the depth of interaction, organizational constraints, and experience-related considerations, have not been thoroughly investigated. This issue is particularly relevant in environments with stringent regulatory requirements, such as Germany, where practitioners must address the GDPR and the EU AI Act while balancing productivity gains with intellectual property considerations. Despite the significant impact of GenAI on software engineering, to the best of our knowledge, no empirical study has systematically examined the adoption dynamics of GenAI tools within the German context. To address this gap, we present a comprehensive mixed-methods study on GenAI adoption among German software engineers. Specifically, we conducted 18 exploratory interviews with practitioners, followed by a developer survey with 109 participants. We analyze patterns of tool adoption, prompting strategies, and organizational factors that influence effectiveness. Our results indicate that experience level moderates the perceived benefits of GenAI tools, and productivity gains are not evenly distributed among developers. Further, organizational size affects both tool selection and the intensity of tool use. Limited awareness of the project context is identified as the most significant barrier. We summarize a set of actionable implications for developers, organizations, and tool vendors seeking to advance artificial intelligence (AI) assisted software development.
Related papers
- Impacts of Generative AI on Agile Teams' Productivity: A Multi-Case Longitudinal Study [5.9568322124195845]
Generative Artificial Intelligence (GenAI) tools represent a paradigm shift in software engineering. This study aims to provide a longitudinal evaluation of GenAI's impact on agile software teams.
arXiv Detail & Related papers (2026-02-14T13:26:16Z) - Between Policy and Practice: GenAI Adoption in Agile Software Development Teams [3.4768202202649783]
Generative AI (GenAI) tools have begun to reshape various software engineering activities. This study investigates how agile practitioners adopt GenAI tools in real-world organizational contexts.
arXiv Detail & Related papers (2026-01-11T20:04:56Z) - Software Vulnerability Management in the Era of Artificial Intelligence: An Industry Perspective [1.0705399532413618]
Our study aims to determine the extent of the adoption of AI-powered tools for Software Vulnerability Management (SVM). We conducted a survey study involving 60 practitioners from diverse industry sectors across 27 countries. Our findings indicate that AI-powered tools are used throughout the SVM life cycle, with 69% of users reporting satisfaction with their current use.
arXiv Detail & Related papers (2025-12-20T07:58:35Z) - The SPACE of AI: Real-World Lessons on AI's Impact on Developers [0.807084206814932]
We study how developers perceive AI's influence across the dimensions of the SPACE framework: Satisfaction, Performance, Activity, Collaboration, and Efficiency. We find that AI is broadly adopted and widely seen as enhancing productivity, particularly for routine tasks. Developers report increased efficiency and satisfaction, with less evidence of impact on collaboration.
arXiv Detail & Related papers (2025-07-31T21:45:54Z) - Assistance or Disruption? Exploring and Evaluating the Design and Trade-offs of Proactive AI Programming Support [36.082282294551405]
We introduce and evaluate Codellaborator, a design probe agent that initiates programming assistance based on editor activities and task context. We find that proactive agents increase efficiency compared to a prompt-only paradigm, but also incur workflow disruptions.
arXiv Detail & Related papers (2025-02-25T21:37:25Z) - Generative Artificial Intelligence-Supported Pentesting: A Comparison between Claude Opus, GPT-4, and Copilot [42.558423984270135]
GenAI can be applied across numerous fields, with particular relevance in cybersecurity. In this paper, we analyze the potential of leading general-purpose GenAI tools (Claude Opus, GPT-4 from ChatGPT, and Copilot) in augmenting the penetration testing process as defined by the Penetration Testing Execution Standard (PTES).
arXiv Detail & Related papers (2025-01-12T22:48:37Z) - AI-Enhanced Sensemaking: Exploring the Design of a Generative AI-Based Assistant to Support Genetic Professionals [38.54324092761751]
Generative AI has the potential to transform knowledge work, but further research is needed to understand how knowledge workers envision using and interacting with generative AI. Our research focused on designing a generative AI assistant to aid genetic professionals in analyzing whole genome sequences (WGS) and other clinical data for rare disease diagnosis.
arXiv Detail & Related papers (2024-12-19T22:54:49Z) - Dear Diary: A randomized controlled trial of Generative AI coding tools in the workplace [2.5280615594444567]
Generative AI coding tools are relatively new, and their impact on developers extends beyond traditional coding metrics.
This study aims to illuminate developers' preexisting beliefs about generative AI tools, their self-perceptions, and how regular use of these tools may alter these beliefs.
Our findings reveal that the introduction and sustained use of generative AI coding tools significantly increases developers' perceptions of these tools as both useful and enjoyable.
arXiv Detail & Related papers (2024-10-24T00:07:27Z) - Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI [41.96102438774773]
This work presents the findings from a university-level competition, which challenged participants to design prompts for eliciting biased outputs from GenAI tools.
We quantitatively and qualitatively analyze the competition submissions and identify a diverse set of biases in GenAI and strategies employed by participants to induce bias in GenAI.
arXiv Detail & Related papers (2024-10-20T18:44:45Z) - Supporting Human-AI Collaboration in Auditing LLMs with LLMs [33.56822240549913]
Large language models have been shown to be biased and behave irresponsibly.
It is crucial to audit these language models rigorously.
Existing auditing tools leverage either or both humans and AI to find failures.
arXiv Detail & Related papers (2023-04-19T21:59:04Z) - LLM-based Interaction for Content Generation: A Case Study on the Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey to identify the intention to use generative tools by employees of an IT company.
Our results indicate a rather average acceptability of generative tools, although the more useful the tool is perceived to be, the higher the intention seems to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z) - AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)