The Role of Code Proficiency in the Era of Generative AI
- URL: http://arxiv.org/abs/2405.01565v1
- Date: Mon, 8 Apr 2024 06:20:42 GMT
- Title: The Role of Code Proficiency in the Era of Generative AI
- Authors: Gregorio Robles, Christoph Treude, Jesus M. Gonzalez-Barahona, Raula Gaikovina Kula
- Abstract summary: Generative AI models are becoming integral to the developer workspace.
However, challenges emerge due to the 'black box' nature of many of these models.
This position paper advocates for a 'white box' approach to these generative models.
- Score: 10.524937623398003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: At the current pace of technological advancements, Generative AI models, including both Large Language Models and Large Multi-modal Models, are becoming integral to the developer workspace. However, challenges emerge due to the 'black box' nature of many of these models, where the processes behind their outputs are not transparent. This position paper advocates for a 'white box' approach to these generative models, emphasizing the necessity of transparency and understanding in AI-generated code to match the proficiency levels of human developers and better enable software maintenance and evolution. We outline a research agenda aimed at investigating the alignment between AI-generated code and developer skills, highlighting the importance of responsibility, security, legal compliance, creativity, and social value in software development. The proposed research questions explore the potential of white-box methodologies to ensure that software remains an inspectable, adaptable, and trustworthy asset in the face of rapid AI integration, setting a course for research that could shape the role of code proficiency into 2030 and beyond.
Related papers
- Generative AI in Health Economics and Outcomes Research: A Taxonomy of Key Definitions and Emerging Applications, an ISPOR Working Group Report [12.204470166456561]
Generative AI shows significant potential in health economics and outcomes research (HEOR), enhancing efficiency and productivity and offering novel solutions to complex challenges.
Foundation models are promising in automating complex tasks, though challenges remain in scientific reliability, bias, interpretability, and workflow integration.
arXiv Detail & Related papers (2024-10-26T15:42:50Z)
- Generative AI Application for Building Industry [10.154329382433213]
This paper investigates the transformative potential of generative AI technologies, particularly large language models (LLMs) in the building industry.
The research highlights how LLMs can automate labor-intensive processes, significantly improving efficiency, accuracy, and safety in building practices.
arXiv Detail & Related papers (2024-10-01T21:59:08Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond [84.95530356322621]
This survey presents a systematic review of the advancements in code intelligence.
It covers over 50 representative models and their variants, more than 20 categories of tasks, and over 680 related works.
Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence.
arXiv Detail & Related papers (2024-03-21T08:54:56Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z)
- In-IDE Human-AI Experience in the Era of Large Language Models: A Literature Review [2.6703221234079946]
The study of in-IDE Human-AI Experience is critical in understanding how these AI tools are transforming the software development process.
We conducted a literature review to study the current state of in-IDE Human-AI Experience research.
arXiv Detail & Related papers (2024-01-19T14:55:51Z)
- Exploring the intersection of Generative AI and Software Development [0.0]
The synergy between generative AI and Software Engineering emerges as a transformative frontier.
This whitepaper delves into this largely unexplored area, elucidating how generative AI techniques can revolutionize software development.
It serves as a guide for stakeholders, urging discussions and experiments in the application of generative AI in Software Engineering.
arXiv Detail & Related papers (2023-12-21T19:23:23Z)
- A Vision for Operationalising Diversity and Inclusion in AI [5.4897262701261225]
This study seeks to envision the operationalization of the ethical imperatives of diversity and inclusion (D&I) within AI ecosystems.
A significant challenge in AI development is the effective operationalization of D&I principles.
This paper proposes a vision for a framework to develop a tool that uses persona-based simulation with Generative AI (GenAI).
arXiv Detail & Related papers (2023-12-11T02:44:39Z)
- CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)