Unveiling Relations in the Industry 4.0 Standards Landscape based on
Knowledge Graph Embeddings
- URL: http://arxiv.org/abs/2006.04556v1
- Date: Wed, 3 Jun 2020 17:37:08 GMT
- Title: Unveiling Relations in the Industry 4.0 Standards Landscape based on
Knowledge Graph Embeddings
- Authors: Ariam Rivas, Irlán Grangel-González, Diego Collarana, Jens
Lehmann, Maria-Esther Vidal
- Abstract summary: Industry 4.0 (I4.0) standards and standardization frameworks have been proposed with the goal of empowering interoperability in smart factories.
We study the relatedness among standards and frameworks based on community analysis to discover knowledge that helps to cope with interoperability conflicts between standards.
- Score: 10.098126048053384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Industry 4.0 (I4.0) standards and standardization frameworks have been
proposed with the goal of empowering interoperability in smart
factories. These standards enable the description and interaction of the main
components, systems, and processes inside a smart factory. Due to the
growing number of frameworks and standards, there is an increasing need for
approaches that automatically analyze the landscape of I4.0 standards.
Standardization frameworks classify standards according to their functions into
layers and dimensions. However, similar standards can be classified differently
across the frameworks, thus producing interoperability conflicts among them.
Semantic-based approaches that rely on ontologies and knowledge graphs have
been proposed to represent standards, known relations among them, as well as
their classification according to existing frameworks. Albeit informative, the
structured modeling of the I4.0 landscape only provides the foundations for
detecting interoperability issues. Thus, graph-based analytical methods able to
exploit the knowledge encoded by these approaches are required to uncover
alignments among standards. We study the relatedness among standards and
frameworks based on community analysis to discover knowledge that helps to cope
with interoperability conflicts between standards. We use knowledge graph
embeddings to automatically create these communities exploiting the meaning of
the existing relationships. In particular, we focus on the identification of
similar standards, i.e., communities of standards, and analyze their properties
to detect unknown relations. We empirically evaluate our approach on a
knowledge graph of I4.0 standards using the Trans* family of embedding
models for knowledge graph entities. Our results are promising and suggest that
relations among standards can be detected accurately.
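The abstract outlines a two-step pipeline: learn vector embeddings for the entities of an I4.0 standards knowledge graph with a Trans*-style model, then group the embedded standards into communities and inspect those communities for unknown relations. The Python sketch below illustrates that idea at toy scale, assuming a handful of hypothetical triples: the standard names, relation labels, hyperparameters, and the use of plain k-means in place of the paper's community-analysis method are all illustrative assumptions, not the authors' actual data or setup. It implements TransE with an L1 distance and a margin ranking loss.

```python
# Minimal sketch: TransE-style embeddings on a toy I4.0 standards KG,
# followed by k-means clustering as a stand-in for community detection.
# Triples, names, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (head, relation, tail) triples over a toy standards KG.
triples = [
    ("OPC-UA", "classifiedIn", "CommunicationLayer"),
    ("MQTT", "classifiedIn", "CommunicationLayer"),
    ("AutomationML", "classifiedIn", "InformationLayer"),
    ("STEP", "classifiedIn", "InformationLayer"),
    ("OPC-UA", "relatedTo", "AutomationML"),
]

entities = sorted({t[0] for t in triples} | {t[2] for t in triples})
relations = sorted({t[1] for t in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

dim, lr, margin, epochs = 16, 0.05, 1.0, 500
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def dist(h, r, t):
    # TransE plausibility: ||h + r - t||_1 should be small for true triples.
    return np.abs(E[h] + R[r] - E[t]).sum()

for _ in range(epochs):
    for h_s, r_s, t_s in triples:
        h, r, t = e_idx[h_s], r_idx[r_s], e_idx[t_s]
        t_neg = rng.integers(len(entities))  # corrupt the tail at random
        if margin + dist(h, r, t) - dist(h, r, t_neg) > 0:
            # Subgradients of the L1 margin ranking loss.
            g_pos = np.sign(E[h] + R[r] - E[t])
            g_neg = np.sign(E[h] + R[r] - E[t_neg])
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg
    E /= np.linalg.norm(E, axis=1, keepdims=True)  # TransE norm constraint

# Naive k-means over entity embeddings stands in for community analysis.
k = 2
centers = E[rng.choice(len(entities), k, replace=False)].copy()
for _ in range(50):
    labels = np.argmin(
        np.linalg.norm(E[:, None, :] - centers[None, :, :], axis=2), axis=1
    )
    for c in range(k):
        if np.any(labels == c):
            centers[c] = E[labels == c].mean(axis=0)

for c in range(k):
    print(f"community {c}:", [e for e in entities if labels[e_idx[e]] == c])
```

On this toy data one would expect the communication-oriented entries to cluster apart from the information-modeling ones; in the paper's setting, two standards that land in the same community without an explicit edge between them in the knowledge graph become candidates for an unknown relation.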
Related papers
- A Model-oriented Reasoning Framework for Privacy Analysis of Complex Systems [2.001711587270359]
This paper proposes a reasoning framework for privacy properties of systems and their environments.
It can capture knowledge leaks at different logical levels to answer the question: which entity can learn what?
arXiv Detail & Related papers (2024-05-14T06:52:56Z)
- Normative Requirements Operationalization with Large Language Models [3.456725053685842]
Normative non-functional requirements specify constraints that a system must observe in order to avoid violations of social, legal, ethical, empathetic, and cultural norms.
Recent research has tackled this challenge using a domain-specific language to specify normative requirements.
We propose a complementary approach that uses Large Language Models to extract semantic relationships between abstract representations of system capabilities.
arXiv Detail & Related papers (2024-04-18T17:01:34Z)
- Ethical-Lens: Curbing Malicious Usages of Open-Source Text-to-Image Models [51.69735366140249]
We introduce Ethical-Lens, a framework designed to facilitate the value-aligned usage of text-to-image tools.
Ethical-Lens ensures value alignment in text-to-image models across toxicity and bias dimensions.
Our experiments reveal that Ethical-Lens enhances alignment capabilities to levels comparable with or superior to commercial models.
arXiv Detail & Related papers (2024-04-18T11:38:25Z)
- Standardize: Aligning Language Models with Expert-Defined Standards for Content Generation [3.666326242924816]
We introduce Standardize, a retrieval-style in-context learning-based framework to guide large language models to align with expert-defined standards.
Our findings show that models gain a 45% to 100% increase in precise accuracy across the open and commercial LLMs evaluated.
arXiv Detail & Related papers (2024-02-19T23:18:18Z)
- OpenPerf: A Benchmarking Framework for the Sustainable Development of the Open-Source Ecosystem [6.188178422139467]
OpenPerf is a benchmarking framework designed for the sustainable development of the open-source ecosystem.
We implement 3 data science task benchmarks, 2 index-based benchmarks, and 1 standard benchmark.
We have developed a comprehensive toolkit for OpenPerf, which offers robust data management, tool integration, and user interface capabilities.
arXiv Detail & Related papers (2023-11-26T07:01:36Z)
- Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Relational Proxies: Emergent Relationships as Fine-Grained Discriminators [52.17542855760418]
We propose a novel approach that leverages information between the global and local parts of an object for encoding its label.
We design Relational Proxies based on our theoretical findings and evaluate them on seven challenging fine-grained benchmark datasets.
We also experimentally validate our theory and obtain consistent results across multiple benchmarks.
arXiv Detail & Related papers (2022-10-05T11:08:04Z)
- fairlib: A Unified Framework for Assessing and Improving Classification Fairness [66.27822109651757]
fairlib is an open-source framework for assessing and improving classification fairness.
We implement 14 debiasing methods, including pre-processing, at-training-time, and post-processing approaches.
The built-in metrics cover the most commonly used fairness criteria and can be further generalized and customized for fairness evaluation.
arXiv Detail & Related papers (2022-05-04T03:50:23Z)
- Neural Production Systems [90.75211413357577]
Visual environments are structured, consisting of distinct objects or entities.
To partition images into entities, deep-learning researchers have proposed structural inductive biases.
We take inspiration from cognitive science and resurrect a classic approach, which consists of a set of rule templates.
This architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information.
arXiv Detail & Related papers (2021-03-02T18:53:20Z)