Capturing Dependencies within Machine Learning via a Formal Process Model
- URL: http://arxiv.org/abs/2208.05219v1
- Date: Wed, 10 Aug 2022 08:45:37 GMT
- Title: Capturing Dependencies within Machine Learning via a Formal Process Model
- Authors: Fabian Ritz, Thomy Phan, Andreas Sedlmeier, Philipp Altmann, Jan Wieghardt, Reiner Schmid, Horst Sauer, Cornel Klein, Claudia Linnhoff-Popien and Thomas Gabor
- Abstract summary: The development of Machine Learning (ML) models is more than just a special case of software development (SD).
We define a comprehensive SD process model for ML that encompasses most tasks and artifacts described in the literature in a consistent way.
We provide various interaction points with standard SD processes in which ML is often an encapsulated task.
- Score: 11.91042044893791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of Machine Learning (ML) models is more than just a special
case of software development (SD): ML models acquire properties and fulfill
requirements even without direct human interaction in a seemingly
uncontrollable manner. Nonetheless, the underlying processes can be described
in a formal way. We define a comprehensive SD process model for ML that
encompasses most tasks and artifacts described in the literature in a
consistent way. In addition to the production of the necessary artifacts, we
also focus on generating and validating fitting descriptions in the form of
specifications. We stress the importance of further evolving the ML model
throughout its life-cycle even after initial training and testing. Thus, we
provide various interaction points with standard SD processes in which ML is
often an encapsulated task. Further, our SD process model allows ML to be
formulated as a (meta-)optimization problem. If automated rigorously, it can be
used to realize self-adaptive autonomous systems. Finally, our SD process model
features a description of time that allows reasoning about the progress within
ML development processes. This might lead to further applications of formal
methods within the field of ML.
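To make the (meta-)optimization reading of the abstract concrete, the following is a minimal sketch, assuming an outer search over process configurations (here just hyperparameters) that wraps an ordinary inner training loop. All names (CONFIGS, train_model, evaluate) and the toy data are illustrative and not taken from the paper.

```python
import random

# Hypothetical sketch: reading ML development as a (meta-)optimization problem.
# The inner loop is ordinary model training; the outer loop searches over
# process configurations (here just hyperparameters) and keeps the one whose
# resulting artifact validates best. All names are illustrative.

CONFIGS = [{"lr": lr, "epochs": e} for lr in (0.1, 0.01) for e in (5, 20)]

def train_model(config, train_data):
    """Inner optimization: fit a one-parameter model y = w * x by SGD on squared error."""
    w = 0.0
    for _ in range(config["epochs"]):
        for x, y in train_data:
            grad = 2 * (w * x - y) * x          # d/dw of (w * x - y)^2
            w -= config["lr"] * grad
    return w

def evaluate(w, val_data):
    """Validation loss used by the outer (meta-)optimizer."""
    return sum((w * x - y) ** 2 for x, y in val_data) / len(val_data)

# Toy data drawn from y = 3x plus a little noise.
random.seed(0)
data = [(i / 10, 3 * i / 10 + random.gauss(0, 0.1)) for i in range(1, 21)]
train_data, val_data = data[:15], data[15:]

# Outer (meta-)optimization: choose the process configuration by validation loss.
best = min(CONFIGS, key=lambda c: evaluate(train_model(c, train_data), val_data))
print("selected configuration:", best)
```

In this reading, the inner loop produces the ML artifact, while the outer loop stands in for the surrounding development process that selects, validates, and evolves it.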
Related papers
- Verbalized Machine Learning: Revisiting Machine Learning with Language Models [63.10391314749408]
We introduce the framework of verbalized machine learning (VML).
VML constrains the parameter space to be human-interpretable natural language.
We empirically verify the effectiveness of VML, and hope that VML can serve as a stepping stone to stronger interpretability.
arXiv Detail & Related papers (2024-06-06T17:59:56Z) - A SysML Profile for the Standardized Description of Processes during System Development [40.539768677361735]
The VDI/VDE 3682 standard for Formalised Process De-scription (FPD) provides a simple and easily understandable representation of processes.
This contribution focuses on the development of a Domain-Specific Modeling Language (DSML) that facilitates the integration of VDI/VDE 3682 into the Systems Modeling Language (SysML).
arXiv Detail & Related papers (2024-03-11T13:44:38Z) - Continuous Management of Machine Learning-Based Application Behavior [3.316045828362788]
Non-functional properties of Machine Learning models must be monitored, verified, and maintained.
We propose a multi-model approach that aims to guarantee a stable non-functional behavior of ML-based applications.
We experimentally evaluate our solution in a real-world scenario focusing on the non-functional property of fairness.
arXiv Detail & Related papers (2023-11-21T15:47:06Z) - Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we describe how to fine-tune an LLM that can be privately deployed for content moderation.
arXiv Detail & Related papers (2023-10-05T09:09:44Z) - Towards an MLOps Architecture for XAI in Industrial Applications [2.0457031151514977]
Machine learning (ML) has become a popular tool in the industrial sector as it helps to improve operations, increase efficiency, and reduce costs.
One of the remaining Machine Learning Operations (MLOps) challenges is the need for explanations.
We developed a novel MLOps software architecture to address the challenge of integrating explanations and feedback capabilities into the ML development and deployment processes.
arXiv Detail & Related papers (2023-09-22T09:56:25Z) - Self-directed Machine Learning [86.3709575146414]
In education science, self-directed learning has been shown to be more effective than passive teacher-guided learning.
We introduce the principal concept of Self-directed Machine Learning (SDML) and propose a framework for SDML.
Our proposed SDML process benefits from self task selection, self data selection, self model selection, self optimization strategy selection and self evaluation metric selection.
arXiv Detail & Related papers (2022-01-04T18:32:06Z) - Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for the mechanical design of new ML solutions, and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z) - Learning by Design: Structuring and Documenting the Human Choices in Machine Learning Development [6.903929927172917]
We present a method consisting of eight design questions that outline the deliberation and normative choices going into creating a machine learning model.
Our method affords several benefits, such as supporting critical assessment through methodological transparency.
We believe that our method can help ML practitioners structure and justify their choices and assumptions when developing ML models.
arXiv Detail & Related papers (2021-05-03T08:47:45Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth-order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses (a generic zeroth-order sketch follows after this list).
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z) - Insights into Performance Fitness and Error Metrics for Machine Learning [1.827510863075184]
Machine learning (ML) is the field of training machines to achieve a high level of cognition and perform human-like analysis.
This paper examines a number of the most commonly-used performance fitness and error metrics for regression and classification algorithms.
arXiv Detail & Related papers (2020-05-17T22:59:04Z)
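As a concrete illustration of the zeroth-order idea mentioned in the BAR entry above, here is a minimal sketch of a generic one-sided random gradient estimator; it is not the BAR method itself, and query_loss, the toy objective, and all constants are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical sketch of zeroth-order optimization in a black-box setting:
# the model is only queried for a loss value, never for gradients. The
# quadratic query_loss stand-in and all constants are illustrative only.

rng = np.random.default_rng(0)

def query_loss(theta):
    """Stand-in for querying the black-box model with a trainable input perturbation."""
    target = np.full_like(theta, 0.5)
    return float(np.sum((theta - target) ** 2))

def estimate_gradient(theta, mu=1e-2, q=20):
    """One-sided random gradient estimator built from q extra loss queries."""
    base = query_loss(theta)
    grad = np.zeros_like(theta)
    for _ in range(q):
        u = rng.standard_normal(theta.shape)
        u /= np.linalg.norm(u)                  # random unit direction
        grad += (query_loss(theta + mu * u) - base) / mu * u
    return grad * (theta.size / q)

theta = np.zeros(16)                 # trainable perturbation of the model input
for _ in range(200):                 # plain gradient descent on the estimate
    theta -= 0.05 * estimate_gradient(theta)
print("final loss:", query_loss(theta))
```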
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.