Driving Digital Engineering Integration and Interoperability Through
Semantic Integration of Models with Ontologies
- URL: http://arxiv.org/abs/2206.10454v1
- Date: Wed, 8 Jun 2022 14:58:09 GMT
- Title: Driving Digital Engineering Integration and Interoperability Through
Semantic Integration of Models with Ontologies
- Authors: Daniel Dunbar, Thomas Hagedorn, Mark Blackburn, John Dzielski, Steven
Hespelt, Benjamin Kruse, Dinesh Verma, Zhongyuan Yu
- Abstract summary: This paper introduces the Digital Engineering Framework for
Integration and Interoperability (DEFII) for incorporating Semantic Web
Technologies (SWT) into engineering design and analysis tasks.
The framework includes three notional interfaces for interacting with ontology-aligned data.
Use of the framework results in a tool-agnostic authoritative source of truth spanning the entire project, system, or mission.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Engineered solutions are becoming more complex and multi-disciplinary in
nature. This evolution requires new techniques to enhance design and analysis
tasks that incorporate data integration and interoperability across various
engineering tool suites spanning multiple domains at different abstraction
levels. Semantic Web Technologies (SWT) offer data integration and
interoperability benefits as well as other opportunities to enhance reasoning
across knowledge represented in multiple disparate models. This paper
introduces the Digital Engineering Framework for Integration and
Interoperability (DEFII) for incorporating SWT into engineering design and
analysis tasks. The framework includes three notional interfaces for
interacting with ontology-aligned data. It also introduces a novel Model
Interface Specification Diagram (MISD) that provides a tool-agnostic model
representation enabled by SWT that exposes data stored for use by external
users through standards-based interfaces. Use of the framework results in a
tool-agnostic authoritative source of truth spanning the entire project,
system, or mission.
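To make the abstract's central idea concrete, the following is a minimal, hypothetical sketch (not the paper's DEFII implementation) of semantic integration: data exported from two different engineering tools is expressed as subject-predicate-object triples aligned to a shared ontology vocabulary, so a single query can span both source models. All names (`eng:wing_spar`, `massKg`, `constrains`, and the triple values) are illustrative assumptions, not taken from the paper.

```python
# Triples from a hypothetical CAD tool export
cad_triples = [
    ("eng:wing_spar", "rdf:type", "eng:Component"),
    ("eng:wing_spar", "eng:massKg", 42.5),
]
# Triples from a hypothetical requirements-management tool export
req_triples = [
    ("eng:req_001", "rdf:type", "eng:Requirement"),
    ("eng:req_001", "eng:constrains", "eng:wing_spar"),
]

# The merged graph plays the role of a tool-agnostic
# "authoritative source of truth" over the aligned data.
graph = cad_triples + req_triples

def match(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in graph
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Cross-model query: which components do requirements constrain,
# and what is each component's mass? This spans both tool exports.
for req, _, comp in match(graph, p="eng:constrains"):
    for _, _, mass in match(graph, s=comp, p="eng:massKg"):
        print(f"{req} constrains {comp} (mass {mass} kg)")
```

In a real SWT stack the triples would live in an RDF store and the pattern matching would be a SPARQL query over standards-based interfaces; the wildcard `match` function here only stands in for that query mechanism.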
Related papers
- Flex: End-to-End Text-Instructed Visual Navigation with Foundation Models [59.892436892964376]
We investigate the minimal data requirements and architectural adaptations necessary to achieve robust closed-loop performance with vision-based control policies.
Our findings are synthesized in Flex (Fly-lexically), a framework that uses pre-trained Vision Language Models (VLMs) as frozen patch-wise feature extractors.
We demonstrate the effectiveness of this approach on quadrotor fly-to-target tasks, where agents trained via behavior cloning successfully generalize to real-world scenes.
arXiv Detail & Related papers (2024-10-16T19:59:31Z)
- Bridging Design Gaps: A Parametric Data Completion Approach With Graph Guided Diffusion Models [9.900586490845694]
This study introduces a generative imputation model leveraging graph attention networks and tabular diffusion models for completing missing parametric data in engineering designs.
We demonstrate our model significantly outperforms existing classical methods, such as MissForest, hotDeck, PPCA, and TabCSDI in both the accuracy and diversity of imputation options.
The graph model helps accurately capture and impute complex parametric interdependencies from an assembly graph, which is key for design problems.
arXiv Detail & Related papers (2024-06-17T16:03:17Z)
- MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities [72.68829963458408]
We present MergeNet, which learns to bridge the gap of parameter spaces of heterogeneous models.
The core mechanism of MergeNet lies in the parameter adapter, which operates by querying the source model's low-rank parameters.
MergeNet is learned alongside both models, allowing our framework to dynamically transfer and adapt knowledge relevant to the current stage.
arXiv Detail & Related papers (2024-04-20T08:34:39Z)
- Interfacing Foundation Models' Embeddings [131.0352288172788]
We present FIND, a generalized interface for aligning foundation models' embeddings with unified image and dataset-level understanding spanning modality and granularity.
In light of the interleaved embedding space, we introduce FIND-Bench, which introduces new training and evaluation annotations to the COCO dataset for interleaved segmentation and retrieval.
arXiv Detail & Related papers (2023-12-12T18:58:02Z)
- Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
Multimodal entity linking task aims at resolving ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
arXiv Detail & Related papers (2023-07-19T02:11:19Z)
- Interactive Design by Integrating a Large Pre-Trained Language Model and Building Information Modeling [0.0]
This study explores the potential of generative artificial intelligence (AI) models, specifically OpenAI's generative pre-trained transformer (GPT) series.
Our findings demonstrate the effectiveness of state-of-the-art language models in facilitating dynamic collaboration between architects and AI systems.
arXiv Detail & Related papers (2023-06-25T08:18:03Z)
- Using Textual Interface to Align External Knowledge for End-to-End Task-Oriented Dialogue Systems [53.38517204698343]
We propose a novel paradigm that uses a textual interface to align external knowledge and eliminate redundant processes.
We demonstrate our paradigm in practice through MultiWOZ-Remake, including an interactive textual interface built for the MultiWOZ database.
arXiv Detail & Related papers (2023-05-23T05:48:21Z)
- Tool interoperability for model-based systems engineering [0.7182467727359453]
We discuss several tools, each state-of-the-art in its own discipline, offering functionality such as specification, synthesis, and verification.
We present Analytics as a Service, built on the Arrowhead framework, to connect these tools and make them interoperable.
arXiv Detail & Related papers (2023-02-07T14:45:04Z)
- Universal Information Extraction as Unified Semantic Matching [54.19974454019611]
We decouple information extraction into two abilities, structuring and conceptualizing, which are shared by different tasks and schemas.
Based on this paradigm, we propose to universally model various IE tasks with Unified Semantic Matching framework.
In this way, USM can jointly encode schema and input text, uniformly extract substructures in parallel, and controllably decode target structures on demand.
arXiv Detail & Related papers (2023-01-09T11:51:31Z)
- SINGA-Easy: An Easy-to-Use Framework for MultiModal Analysis [18.084628500554462]
We introduce SINGA-Easy, a new deep learning framework that provides distributed hyper-parameter tuning at the training stage, dynamic computational cost control at the inference stage, and intuitive user interactions with multimedia contents facilitated by model explanation.
Our experiments on the training and deployment of multi-modality data analysis applications show that the framework is both usable and adaptable to dynamic inference loads.
arXiv Detail & Related papers (2021-08-03T08:39:54Z)
- Modular approach to data preprocessing in ALOHA and application to a smart industry use case [0.0]
The paper addresses a modular approach, integrated into the ALOHA tool flow, to support the data preprocessing and transformation pipeline.
To demonstrate the effectiveness of the approach, we present some experimental results related to a keyword spotting use case.
arXiv Detail & Related papers (2021-02-02T06:48:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.