A Reference Model for Common Understanding of Capabilities and Skills in
Manufacturing
- URL: http://arxiv.org/abs/2209.09632v1
- Date: Thu, 15 Sep 2022 20:45:00 GMT
- Title: A Reference Model for Common Understanding of Capabilities and Skills in
Manufacturing
- Authors: Aljosha Köcher, Alexander Belyaev, Jesko Hermann, Jürgen Bock,
Kristof Meixner, Magnus Volkmann, Michael Winter, Patrick Zimmermann, Stephan
Grimm, and Christian Diedrich
- Abstract summary: In manufacturing, many use cases of Industry 4.0 require vendor-neutral and machine-readable information models.
This paper presents a reference model developed jointly by members of various organizations in a working group of the Plattform Industrie 4.0.
- Score: 46.118331027975366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In manufacturing, many use cases of Industry 4.0 require vendor-neutral and
machine-readable information models to describe, implement and execute resource
functions. Such models have been researched under the terms capabilities and
skills. Standardization of such models is required, but currently not
available. This paper presents a reference model developed jointly by members
of various organizations in a working group of the Plattform Industrie 4.0.
This model covers definitions of the most important aspects of capabilities and
skills. It can be seen as a basis for further standardization efforts.
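In this line of work, a capability typically describes a resource function in a vendor-neutral, machine-readable way, while a skill is the executable implementation of that function. The following is a minimal sketch of that distinction; the class and field names are illustrative assumptions, not the reference model's actual terms.

```python
from dataclasses import dataclass, field

# Hedged sketch of the capability/skill distinction: a Capability describes
# a function in a machine-readable way; a Skill realizes it executably.
# All names below are illustrative, not taken from the reference model.

@dataclass
class Capability:
    name: str                                        # e.g. "drilling"
    properties: dict = field(default_factory=dict)   # vendor-neutral parameters

@dataclass
class Skill:
    implements: Capability   # the capability this skill realizes
    endpoint: str            # how the function is invoked on the resource

drilling = Capability("drilling", {"max_diameter_mm": 10})
skill = Skill(implements=drilling, endpoint="opc.tcp://machine-1/drill")
```

The point of separating the two is that several machines can offer skills implementing the same capability, which enables vendor-neutral matchmaking between production requirements and resources.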
Related papers
- Constraint based Modeling according to Reference Design [0.0]
Reference models in the form of best practices are an essential element for securing knowledge as designs for reuse.
We present a generic approach for the formal description of reference models using semantic technologies and their application.
It is possible to use multiple reference models in the context of system-of-systems designs.
arXiv Detail & Related papers (2024-06-17T07:41:27Z)
- EduNLP: Towards a Unified and Modularized Library for Educational Resources [78.8523961816045]
We present a unified, modularized, and extensive library, EduNLP, focusing on educational resource understanding.
In the library, we decouple the whole workflow to four key modules with consistent interfaces including data configuration, processing, model implementation, and model evaluation.
For the current version, we primarily provide 10 typical models from four categories, and 5 common downstream evaluation tasks in the education domain on 8 subjects.
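The decoupling of the workflow into four modules with consistent interfaces can be sketched as follows; the class and method names are hypothetical stand-ins, not EduNLP's actual API.

```python
# Illustrative sketch of a four-module workflow with consistent interfaces
# (data configuration, processing, model implementation, model evaluation).
# Module and method names are hypothetical, not EduNLP's actual API.

class DataConfig:
    def __init__(self, items):
        self.items = items  # raw educational resources, e.g. question texts

class Processor:
    def run(self, config):
        # tokenize each resource into lowercase terms
        return [text.lower().split() for text in config.items]

class Model:
    def predict(self, processed):
        # toy "model": score each resource by its token count
        return [len(tokens) for tokens in processed]

class Evaluator:
    def evaluate(self, predictions, labels):
        # fraction of predictions matching reference labels
        hits = sum(p == y for p, y in zip(predictions, labels))
        return hits / len(labels)

config = DataConfig(["Solve 2x = 4", "What is photosynthesis"])
processed = Processor().run(config)
preds = Model().predict(processed)
score = Evaluator().evaluate(preds, [3, 3])
```

Because each stage only depends on the previous stage's output, any one module can be swapped (a different tokenizer, a different model) without touching the others, which is the point of the consistent-interface design.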
arXiv Detail & Related papers (2024-06-03T12:45:40Z) - Multimodal CLIP Inference for Meta-Few-Shot Image Classification [0.0]
Multimodal foundation models like CLIP learn a joint (image, text) embedding.
This study demonstrates that combining modalities from CLIP's text and image encoders outperforms state-of-the-art meta-few-shot learners on widely adopted benchmarks.
arXiv Detail & Related papers (2024-03-26T17:47:54Z) - Learn From Model Beyond Fine-Tuning: A Survey [78.80920533793595]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface.
The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning and model editing.
This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z) - Generative AI for Business Strategy: Using Foundation Models to Create
Business Strategy Tools [0.7784248206747153]
We propose the use of foundation models for business decision making.
We derive IT artifacts in the form of a sequence of signed business networks.
Such artifacts can inform business stakeholders about the state of the market and their own positioning.
arXiv Detail & Related papers (2023-08-27T19:03:12Z) - Foundation models in brief: A historical, socio-technical focus [2.5991265608180396]
Foundation models can be disruptive for future AI development by scaling up deep learning.
Models achieve state-of-the-art performance on a variety of tasks in domains such as natural language processing and computer vision.
arXiv Detail & Related papers (2022-12-17T22:11:33Z) - Learnware: Small Models Do Big [69.88234743773113]
The prevailing big-model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not yet addressed those issues, while becoming a serious source of carbon emissions.
This article offers an overview of the learnware paradigm, which attempts to spare users from building machine learning models from scratch, in the hope of reusing small models to do things even beyond their original purposes.
arXiv Detail & Related papers (2022-10-07T15:55:52Z) - Concept for a Technical Infrastructure for Management of Predictive
Models in Industrial Applications [0.0]
We describe our technological concept for a model management system.
This concept includes versioned storage of data, support for different machine learning algorithms, fine-tuning of models, subsequent deployment of models, and monitoring of model performance after deployment.
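The elements of such a model management concept can be sketched as a minimal registry; this is an illustrative assumption about what such a system might look like, not code from the paper.

```python
# Minimal sketch of a model management registry covering the concept's
# elements: versioned storage, deployment, and post-deployment monitoring.
# All names and structures here are illustrative assumptions.

class ModelRegistry:
    def __init__(self):
        self.versions = {}   # name -> list of model artifacts (versioned storage)
        self.deployed = {}   # name -> index of the currently deployed version
        self.metrics = {}    # name -> recorded post-deployment scores

    def store(self, name, artifact):
        # append a new immutable version and return its version id
        self.versions.setdefault(name, []).append(artifact)
        return len(self.versions[name]) - 1

    def deploy(self, name, version):
        self.deployed[name] = version

    def monitor(self, name, score):
        # record live performance; a falling average could trigger retraining
        self.metrics.setdefault(name, []).append(score)
        return sum(self.metrics[name]) / len(self.metrics[name])

reg = ModelRegistry()
v0 = reg.store("press-wear", {"algo": "rf", "params": {"trees": 100}})
reg.deploy("press-wear", v0)
avg = reg.monitor("press-wear", 0.91)
```

Keeping every stored version immutable is what makes rollback and fine-tuning from an older checkpoint possible after a deployed model degrades.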
arXiv Detail & Related papers (2021-07-29T08:38:46Z) - Explainable Matrix -- Visualization for Global and Local
Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
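The rows-rules / columns-features / cells-predicates metaphor can be sketched by laying a small rule set out as a matrix; the rules below are made up for illustration, not extracted from a trained Random Forest.

```python
# Sketch of the matrix-like metaphor: rows are rules, columns are features,
# and each cell holds the rule's predicate on that feature (None if the
# rule does not test it). The rules themselves are invented examples.

features = ["temp", "pressure"]
rules = [
    {"temp": "> 80", "pressure": None},        # rule 1: tests temp only
    {"temp": "<= 80", "pressure": "> 1.5"},    # rule 2: tests both features
]

# one row per rule, one column per feature
matrix = [[rule[f] for f in features] for rule in rules]
```

Scanning a column then shows at a glance how often, and with which thresholds, the ensemble uses a given feature, which is the global-interpretability reading the visualization targets.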
arXiv Detail & Related papers (2020-05-08T21:03:48Z) - Model Reuse with Reduced Kernel Mean Embedding Specification [70.044322798187]
We present a two-phase framework for finding helpful models for a current application.
In the upload phase, when a model is uploaded into the pool, we construct a reduced kernel mean embedding (RKME) as a specification for the model.
Then in the deployment phase, the relatedness of the current task and pre-trained models will be measured based on the value of the RKME specification.
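The deployment-phase matching can be sketched with plain (non-reduced) empirical kernel mean embeddings: each model's specification is the mean embedding of a sample of its training data, and the current task is matched to the model whose embedding is closest. The RBF kernel, one-dimensional data, and exhaustive comparison below are simplifying assumptions for illustration, not the paper's actual construction.

```python
import math

# Sketch of matching a task to pre-trained models via kernel mean embeddings.
# Each model's "specification" is a data sample standing in for its mean
# embedding; relatedness is measured by squared MMD under an RBF kernel.
# All data and model names are invented for illustration.

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

def mmd_sq(sample_a, sample_b, gamma=1.0):
    # squared MMD = ||mu_a - mu_b||^2 in the kernel's feature space
    def avg_k(s, t):
        return sum(rbf(x, y, gamma) for x in s for y in t) / (len(s) * len(t))
    return avg_k(sample_a, sample_a) + avg_k(sample_b, sample_b) \
        - 2 * avg_k(sample_a, sample_b)

task_data = [0.1, 0.2, 0.15]
model_specs = {"model_a": [0.12, 0.18], "model_b": [5.0, 5.1]}

# deployment phase: pick the model whose specification is closest to the task
best = min(model_specs, key=lambda m: mmd_sq(task_data, model_specs[m]))
```

The "reduced" part of RKME replaces the full training sample with a much smaller synthetic set whose mean embedding approximates the original, so the specification can be shared without exposing the raw data.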
arXiv Detail & Related papers (2020-01-20T15:15:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.