Exploring the Lifecycle and Maintenance Practices of Pre-Trained Models in Open-Source Software Repositories
- URL: http://arxiv.org/abs/2504.06040v1
- Date: Tue, 08 Apr 2025 13:41:13 GMT
- Title: Exploring the Lifecycle and Maintenance Practices of Pre-Trained Models in Open-Source Software Repositories
- Authors: Matin Koohjani, Diego Elias Costa
- Abstract summary: Pre-trained models (PTMs) are becoming a common component in open-source software (OSS) development. This report presents a plan for an exploratory study to investigate how PTMs are utilized, maintained, and tested in OSS projects.
- Score: 1.3757201415751368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained models (PTMs) are becoming a common component in open-source software (OSS) development, yet their roles, maintenance practices, and lifecycle challenges remain underexplored. This report presents a plan for an exploratory study to investigate how PTMs are utilized, maintained, and tested in OSS projects, focusing on models hosted on platforms like Hugging Face and PyTorch Hub. We plan to explore how PTMs are used in open-source software projects and their related maintenance practices by mining software repositories that use PTMs and analyzing their code-base, historical data, and reported issues. This study aims to provide actionable insights into improving the use and sustainability of PTMs in open-source projects and to take a step toward a foundation for advancing software engineering practices in the context of model dependencies.
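The study plans to mine repositories that use PTMs hosted on platforms such as Hugging Face and PyTorch Hub. As a purely illustrative sketch of that mining step, not the authors' actual methodology, the snippet below scans a cloned repository's Python files for loading calls that commonly indicate PTM usage; the regex patterns, file layout, and function names are assumptions for illustration only.
```python
import re
from pathlib import Path

# Heuristic patterns that often indicate pre-trained model (PTM) usage.
# These are illustrative assumptions, not the study's actual detection rules.
PTM_PATTERNS = {
    "huggingface": re.compile(r"\bfrom_pretrained\(\s*['\"]([^'\"]+)['\"]"),
    "pytorch_hub": re.compile(r"\btorch\.hub\.load\(\s*['\"]([^'\"]+)['\"]"),
}

def find_ptm_usages(repo_dir: str):
    """Scan a cloned repository for source lines that appear to load a PTM."""
    usages = []
    for path in Path(repo_dir).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # skip unreadable files
        for hub, pattern in PTM_PATTERNS.items():
            for match in pattern.finditer(text):
                usages.append({"file": str(path), "hub": hub, "model": match.group(1)})
    return usages

if __name__ == "__main__":
    # "path/to/cloned/repo" is a placeholder for a locally cloned project.
    for usage in find_ptm_usages("path/to/cloned/repo"):
        print(usage)
```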
Related papers
- Towards a Classification of Open-Source ML Models and Datasets for Software Engineering [52.257764273141184]
Open-source Pre-Trained Models (PTMs) and datasets provide extensive resources for various Machine Learning (ML) tasks.
These resources lack a classification tailored to Software Engineering (SE) needs.
We apply an SE-oriented classification to PTMs and datasets on a popular open-source ML repository, Hugging Face (HF), and analyze the evolution of PTMs over time.
arXiv Detail & Related papers (2024-11-14T18:52:05Z)
- An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries [52.23798016734889]
This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries.
The catalogue is based on the scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges.
arXiv Detail & Related papers (2024-09-27T16:20:20Z)
- PeaTMOSS: A Dataset and Initial Analysis of Pre-Trained Models in Open-Source Software [6.243303627949341]
This paper presents the PeaTMOSS dataset, which comprises metadata for 281,638 PTMs and detailed snapshots for all PTMs.
The dataset includes 44,337 mappings from 15,129 downstream GitHub repositories to the 2,530 PTMs they use.
Our analysis provides the first summary statistics for the PTM supply chain, showing the trend of PTM development and common shortcomings of PTM package documentation.
arXiv Detail & Related papers (2024-02-01T15:55:50Z)
- PeaTMOSS: Mining Pre-Trained Models in Open-Source Software [6.243303627949341]
We present the PeaTMOSS dataset: Pre-Trained Models in Open-Source Software.
PeaTMOSS has three parts: (1) a snapshot of 281,638 PTMs, (2) 27,270 open-source software repositories that use PTMs, and (3) a mapping between PTMs and the projects that use them. A sketch of querying such a repository-to-PTM mapping appears after this list.
arXiv Detail & Related papers (2023-10-05T15:58:45Z)
- ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse [59.500060790983994]
This paper introduces ZhiJian, a comprehensive and user-friendly toolbox for model reuse, utilizing the PyTorch backend.
ZhiJian presents a novel paradigm that unifies diverse perspectives on model reuse, encompassing target architecture construction with PTM, tuning target model with PTM, and PTM-based inference.
arXiv Detail & Related papers (2023-08-17T19:12:13Z)
- An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry [2.1346819928536687]
Machine learning engineers have begun to reuse large-scale pre-trained models (PTMs).
We interviewed 12 practitioners from the most popular PTM ecosystem, Hugging Face, to learn the practices and challenges of PTM reuse.
Three challenges for PTM reuse are missing attributes, discrepancies between claimed and actual performance, and model risks.
arXiv Detail & Related papers (2023-03-05T02:28:15Z)
- Pre-Trained Models: Past, Present and Future [126.21572378910746]
Large-scale pre-trained models (PTMs) have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
By storing knowledge in huge numbers of parameters and fine-tuning on specific tasks, PTMs allow the rich knowledge implicitly encoded in those parameters to benefit a variety of downstream tasks.
It is now the consensus of the AI community to adopt PTMs as the backbone for downstream tasks rather than learning models from scratch.
arXiv Detail & Related papers (2021-06-14T02:40:32Z)
- Towards Utility-based Prioritization of Requirements in Open Source Environments [51.65930505153647]
We show how utility-based prioritization approaches can be used to support contributors in conventional and open source Requirements Engineering scenarios.
As an example, we show how dependencies can be taken into account in utility-based prioritization processes.
arXiv Detail & Related papers (2021-02-17T09:05:54Z)
- Empirical Study on the Software Engineering Practices in Open Source ML Package Repositories [6.2894222252929985]
Modern machine learning models require considerable technical expertise and resources to develop, train, and deploy.
Public ML package repositories address the discovery and reuse of such models by practitioners and researchers.
This paper conducts an exploratory study that analyzes the structure and contents of two popular ML package repositories.
arXiv Detail & Related papers (2020-12-02T18:52:56Z)
- Monitoring and explainability of models in production [58.720142291102135]
Monitoring deployed models is crucial for the continued provision of high-quality machine learning enabled services.
We discuss the challenges to successfully implementing solutions in these areas, with recent examples of production-ready solutions built on open-source tools.
arXiv Detail & Related papers (2020-07-13T10:37:05Z)
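PeaTMOSS, listed above, maps PTMs to the downstream GitHub repositories that depend on them. As a minimal sketch of how such a repository-to-PTM mapping could be queried, assuming a hypothetical SQLite layout with tables `ptm`, `repository`, and `usage` (these table and column names are illustrative and not the dataset's published schema):
```python
import sqlite3

# Hypothetical schema: ptm(id, name), repository(id, url),
# usage(ptm_id, repository_id). These names are illustrative only and
# do not reflect the actual PeaTMOSS database layout.
QUERY = """
SELECT ptm.name, COUNT(usage.repository_id) AS dependent_repos
FROM ptm
JOIN usage ON usage.ptm_id = ptm.id
GROUP BY ptm.name
ORDER BY dependent_repos DESC
LIMIT 10;
"""

def most_reused_ptms(db_path: str):
    """Return the ten PTMs with the most downstream repositories."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(QUERY).fetchall()

if __name__ == "__main__":
    # "peatmoss.db" is a placeholder path for a local copy of such a database.
    for name, count in most_reused_ptms("peatmoss.db"):
        print(f"{name}: {count} repositories")
```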