Aligning Academia with Industry: An Empirical Study of Industrial Needs and Academic Capabilities in AI-Driven Software Engineering
- URL: http://arxiv.org/abs/2512.15148v1
- Date: Wed, 17 Dec 2025 07:29:18 GMT
- Title: Aligning Academia with Industry: An Empirical Study of Industrial Needs and Academic Capabilities in AI-Driven Software Engineering
- Authors: Hang Yu, Yuzhou Lai, Li Zhang, Xiaoli Lian, Fang Liu, Yanrui Dong, Ting Zhang, Zhi Jin, David Lo
- Abstract summary: The rapid advancement of large language models (LLMs) is fundamentally reshaping software engineering (SE). While top-tier SE venues continue to show sustained or emerging focus on areas like automated testing and program repair, the alignment of these academic advances with real industrial needs remains unclear. This study aims to refocus academic attention on these important yet under-explored problems and to guide future SE research toward greater industrial impact.
- Score: 45.09204791294318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid advancement of large language models (LLMs) is fundamentally reshaping software engineering (SE), driving a paradigm shift in both academic research and industrial practice. While top-tier SE venues continue to show sustained or emerging focus on areas like automated testing and program repair, with researchers worldwide reporting continuous performance gains, the alignment of these academic advances with real industrial needs remains unclear. To bridge this gap, we first conduct a systematic analysis of 1,367 papers published in FSE, ASE, and ICSE between 2022 and 2025, identifying key research topics, commonly used benchmarks, industrial relevance, and open-source availability. We then carry out an empirical survey across 17 organizations, collecting 282 responses on six prominent topics, i.e., program analysis, automated testing, code generation/completion, issue resolution, pre-trained code models, and dependency management, through structured questionnaires. By contrasting academic capabilities with industrial feedback, we derive seven critical implications, highlighting under-addressed challenges in software requirements and architecture, the reliability and explainability of intelligent SE approaches, input assumptions in academic research, practical evaluation tensions, and ethical considerations. This study aims to refocus academic attention on these important yet under-explored problems and to guide future SE research toward greater industrial impact.
Related papers
- The Story is Not the Science: Execution-Grounded Evaluation of Mechanistic Interpretability Research [56.80927148740585]
We address the challenges of scalability and rigor by flipping the dynamic and developing AI agents as research evaluators. We use mechanistic interpretability research as a testbed, build standardized research output, and develop MechEvalAgent. Our work demonstrates the potential of AI agents to transform research evaluation and pave the way for rigorous scientific practices.
arXiv Detail & Related papers (2026-02-05T19:00:02Z)
- A Systematic Mapping on Software Fairness: Focus, Trends and Industrial Context [0.0]
This paper presents a systematic literature mapping to explore and categorize current advancements in fairness solutions within software engineering. We focus on three key dimensions: research trends, research focus, and viability in industrial contexts.
arXiv Detail & Related papers (2025-12-29T16:09:08Z)
- Retrieval-Augmented Generation in Industry: An Interview Study on Use Cases, Requirements, Challenges, and Evaluation [0.0]
Retrieval-Augmented Generation (RAG) is a rapidly evolving field within AI. There is a significant lack of research on its practical application in industrial contexts. Our study investigates how companies apply RAG in practice.
arXiv Detail & Related papers (2025-08-11T09:40:54Z)
- AI4Research: A Survey of Artificial Intelligence for Scientific Research [55.5452803680643]
We present a comprehensive survey on AI for Research (AI4Research). We first introduce a systematic taxonomy to classify five mainstream tasks in AI4Research. We identify key research gaps and highlight promising future directions.
arXiv Detail & Related papers (2025-07-02T17:19:20Z)
- AI Education in a Mirror: Challenges Faced by Academic and Industry Experts [15.332866859177747]
This study provides preliminary insights into challenges AI professionals encounter in both academia and industry. We identify key challenges related to data quality and availability, model scalability, practical constraints, user behavior, and explainability. These exploratory findings suggest that AI curricula could better integrate real-world complexities, software engineering principles, and interdisciplinary learning.
arXiv Detail & Related papers (2025-05-02T16:52:49Z)
- SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines [118.8024915014751]
Large language models (LLMs) have demonstrated remarkable proficiency in academic disciplines such as mathematics, physics, and computer science. However, human knowledge encompasses over 200 specialized disciplines, far exceeding the scope of existing benchmarks. We present SuperGPQA, a benchmark that evaluates graduate-level knowledge and reasoning capabilities across 285 disciplines.
arXiv Detail & Related papers (2025-02-20T17:05:58Z)
- Bridging the Gap: A Study of AI-based Vulnerability Management between Industry and Academia [4.4037442949276455]
Recent research advances in Artificial Intelligence (AI) have yielded promising results for automated software vulnerability management.
However, industry remains cautious and selective about integrating AI-based techniques into its security vulnerability management workflow.
We propose a set of future directions to help better understand industry expectations, improve the practical usability of AI-based security vulnerability research, and drive a synergistic relationship between industry and academia.
arXiv Detail & Related papers (2024-05-03T19:00:50Z)
- Insights Towards Better Case Study Reporting in Software Engineering [0.0]
This paper aims to share insights that can enhance the quality and impact of case study reporting.
We emphasize the need for adherence to established guidelines, accurate classification, and detailed context descriptions in case studies.
We aim to encourage researchers to adopt more rigorous and communicative strategies, ensuring that case studies are methodologically sound.
arXiv Detail & Related papers (2024-02-13T12:29:26Z)
- The Technological Emergence of AutoML: A Survey of Performant Software and Applications in the Context of Industry [72.10607978091492]
Automated/Autonomous Machine Learning (AutoML/AutonoML) is a relatively young field.
This review makes two primary contributions to knowledge around this topic.
It provides the most up-to-date and comprehensive survey of existing AutoML tools, both open-source and commercial.
arXiv Detail & Related papers (2022-11-08T10:42:08Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.