Towards Effective Issue Assignment using Online Machine Learning
- URL: http://arxiv.org/abs/2505.02437v1
- Date: Mon, 05 May 2025 08:05:13 GMT
- Title: Towards Effective Issue Assignment using Online Machine Learning
- Authors: Athanasios Michailoudis, Themistoklis Diamantopoulos, Antonios Favvas, Andreas L. Symeonidis
- Abstract summary: We propose an Online Machine Learning methodology that adapts to the evolving characteristics of software projects. Our system processes issues as a data stream, dynamically learning from new data and adjusting in real time to changes in team composition and project requirements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficient issue assignment in software development relates to faster resolution times, resource optimization, and reduced development effort. To this end, numerous systems have been developed to automate issue assignment, including AI and machine learning approaches. Most of them, however, solely focus on a posteriori analyses of textual features (e.g. issue titles and descriptions), disregarding the temporal characteristics of software development. Thus, they fail to adapt as projects and teams evolve, e.g. through changes in team composition or project phase shifts (e.g. from development to maintenance). To incorporate such cases in the issue assignment process, we propose an Online Machine Learning methodology that adapts to the evolving characteristics of software projects. Our system processes issues as a data stream, dynamically learning from new data and adjusting in real time to changes in team composition and project requirements. We incorporate metadata such as issue descriptions, components and labels, and leverage adaptive drift detection mechanisms to identify when model re-evaluation is necessary. Upon assessing our methodology on a set of software projects, we conclude that it can be effective for issue assignment, while meeting the evolving needs of software teams.
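The pipeline the abstract describes (treat incoming issues as a stream, predict an assignee, learn from each confirmed assignment, and monitor for drift) can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the classifier is a simple incremental Naive Bayes over issue tokens, and the drift check compares a recent error-rate window against the long-run error rate, a rough stand-in for established detectors such as DDM or ADWIN that a production system would more likely take from a streaming-ML library like River. All class, method, and parameter names below are hypothetical.

```python
from collections import defaultdict
import math


class OnlineNaiveBayes:
    """Multinomial Naive Bayes trained incrementally, one issue at a time."""

    def __init__(self):
        self.class_counts = defaultdict(int)   # issues seen per assignee
        self.word_counts = defaultdict(lambda: defaultdict(int))  # assignee -> token counts
        self.vocab = set()

    def predict(self, tokens):
        """Return the most likely assignee for a tokenized issue, or None if untrained."""
        if not self.class_counts:
            return None
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for assignee, n in self.class_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[assignee].values()) + len(self.vocab) + 1
            for t in tokens:  # Laplace-smoothed log likelihoods
                lp += math.log((self.word_counts[assignee][t] + 1) / denom)
            if lp > best_lp:
                best, best_lp = assignee, lp
        return best

    def learn(self, tokens, assignee):
        """Update counts with one labeled issue (online, no retraining)."""
        self.class_counts[assignee] += 1
        for t in tokens:
            self.word_counts[assignee][t] += 1
            self.vocab.add(t)


class DriftDetector:
    """Signal drift when the recent error rate rises well above the long-run rate."""

    def __init__(self, window=30, threshold=0.25):
        self.window, self.threshold = window, threshold
        self.errors = []            # most recent 0/1 prediction outcomes
        self.total, self.wrong = 0, 0

    def update(self, error):
        """Feed one 0/1 error; return True if drift is detected."""
        self.total += 1
        self.wrong += error
        self.errors.append(error)
        if len(self.errors) > self.window:
            self.errors.pop(0)
        recent = sum(self.errors) / len(self.errors)
        overall = self.wrong / self.total
        return len(self.errors) == self.window and recent > overall + self.threshold
```

In a prequential loop one would first call `predict(tokens)`, compare the prediction against the eventual assignee, feed the resulting 0/1 error to the detector, and only then call `learn(tokens, assignee)`; a drift signal would then trigger the model re-evaluation step the abstract mentions, for instance after a team member leaves.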
Related papers
- Does Machine Unlearning Truly Remove Model Knowledge? A Framework for Auditing Unlearning in LLMs [58.24692529185971]
We introduce a comprehensive auditing framework for unlearning evaluation comprising three benchmark datasets, six unlearning algorithms, and five prompt-based auditing methods. We evaluate the effectiveness and robustness of different unlearning strategies.
arXiv Detail & Related papers (2025-05-29T09:19:07Z)
- Towards an Interpretable Analysis for Estimating the Resolution Time of Software Issues [1.4039240369201997]
We build an issue monitoring system that extracts the actual effort required to fix issues on a per-project basis. Our approach employs topic modeling to capture issue semantics and leverages metadata for interpretable resolution time analysis.
arXiv Detail & Related papers (2025-05-02T08:38:59Z)
- QualiTagger: Automating software quality detection in issue trackers [4.917423556150366]
This research uses cutting-edge models like Transformers to identify what text is usually associated with different quality properties. We also study the distribution of such qualities in issue trackers from openly accessible software repositories.
arXiv Detail & Related papers (2025-04-15T10:40:40Z)
- Lingma SWE-GPT: An Open Development-Process-Centric Language Model for Automated Software Improvement [62.94719119451089]
The Lingma SWE-GPT series learns from and simulates real-world code submission activities.
Lingma SWE-GPT 72B resolves 30.20% of GitHub issues, marking a significant improvement in automatic issue resolution.
arXiv Detail & Related papers (2024-11-01T14:27:16Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Dealing with Data for RE: Mitigating Challenges while using NLP and Generative AI [2.9189409618561966]
This book chapter explores the evolving landscape of Software Engineering in general, and Requirements Engineering (RE) in particular.
We discuss challenges that arise while integrating Natural Language Processing (NLP) and generative AI into enterprise-critical software systems.
The chapter provides practical insights, solutions, and examples to equip readers with the necessary knowledge and tools.
arXiv Detail & Related papers (2024-02-26T19:19:47Z)
- ChatDev: Communicative Agents for Software Development [84.90400377131962]
ChatDev is a chat-powered software development framework in which specialized agents are guided in what to communicate.
These agents actively contribute to the design, coding, and testing phases through unified language-based communication.
arXiv Detail & Related papers (2023-07-16T02:11:34Z)
- Quantifying Process Quality: The Role of Effective Organizational Learning in Software Evolution [0.0]
Real-world software applications must constantly evolve to remain relevant.
Traditional methods of software quality control involve software quality models and continuous code inspection tools.
However, there is a strong correlation and causation between the quality of the development process and the resulting software product.
arXiv Detail & Related papers (2023-05-29T12:57:14Z)
- Automotive Perception Software Development: An Empirical Investigation into Data, Annotation, and Ecosystem Challenges [10.649193588119985]
Software that contains machine learning algorithms is an integral part of automotive perception.
The development of such software, specifically the training and validation of the machine learning components, requires large annotated datasets.
An industry of data and annotation services has emerged to serve the development of such data-intensive automotive software components.
arXiv Detail & Related papers (2023-03-10T14:29:06Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Machine Learning for Software Engineering: A Systematic Mapping [73.30245214374027]
The software development industry is rapidly adopting machine learning for transitioning modern day software systems towards highly intelligent and self-learning systems.
No comprehensive study exists that explores the current state-of-the-art on the adoption of machine learning across software engineering life cycle stages.
This study introduces a machine learning for software engineering (MLSE) taxonomy classifying the state-of-the-art machine learning techniques according to their applicability to various software engineering life cycle stages.
arXiv Detail & Related papers (2020-05-27T11:56:56Z)
- Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology [53.063411515511056]
We propose a process model for the development of machine learning applications.
The first phase combines business and data understanding as data availability oftentimes affects the feasibility of the project.
The sixth phase covers state-of-the-art approaches for the monitoring and maintenance of machine learning applications.
arXiv Detail & Related papers (2020-03-11T08:25:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.