Fountain -- an intelligent contextual assistant combining knowledge
representation and language models for manufacturing risk identification
- URL: http://arxiv.org/abs/2308.00364v1
- Date: Tue, 1 Aug 2023 08:12:43 GMT
- Title: Fountain -- an intelligent contextual assistant combining knowledge
representation and language models for manufacturing risk identification
- Authors: Saurabh Kumar, Daniel Fuchs, Klaus Spindler
- Abstract summary: We developed Fountain as a contextual assistant integrated in the deviation management workflow.
We present the nuances of selecting and adapting pretrained language models for an engineering domain.
We demonstrate that the model adaptation is feasible using moderate computational infrastructure already available to most engineering teams in manufacturing organizations.
- Score: 7.599675376503671
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deviations from the approved design or processes during mass production can
lead to unforeseen risks. However, such deviations are sometimes necessary due
to changes in the product design characteristics or adaptations in the
manufacturing process. A major challenge is to identify these risks early in
the workflow so that failures leading to warranty claims can be avoided. We
developed Fountain as a contextual assistant integrated in the deviation
management workflow that helps in identifying the risks based on the
description of the existing design and process criteria and the proposed
deviation. In the manufacturing context, it is important that the assistant
provides recommendations that are explainable and consistent. We achieve this
through a combination of the following two components: 1) language models
fine-tuned for domain-specific semantic similarity and 2) knowledge
representation in the form of a property graph derived from the bill of
materials, Failure Mode and Effects Analysis (FMEA), and prior failures reported
by customers. Here, we present the nuances of selecting and adapting pretrained
language models for an engineering domain, continuously updating the model based
on user interaction with the contextual assistant, and constructing the causal
chain for explainable recommendations based on the knowledge representation.
Additionally, we demonstrate that the model adaptation is feasible using
moderate computational infrastructure already available to most engineering
teams in manufacturing organizations, and that inference can be performed on
standard CPU-only instances for integration with existing applications, making
these methods easily deployable.
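A minimal sketch of how these two components could fit together, assuming a generic sentence-embedding checkpoint and an illustrative graph schema; the model name, example strings, node labels, and relations below are hypothetical stand-ins, not the authors' implementation:

```python
# Hedged sketch (not the authors' code) of Fountain's two components:
# 1) semantic similarity via a (domain-finetuned) sentence-embedding model,
# 2) a causal chain read off a property graph built from BoM/FMEA data.
# Checkpoint name, strings, node labels, and relations are illustrative.
import networkx as nx
from sentence_transformers import SentenceTransformer, util

# Any compact embedding model runs comfortably on CPU-only instances;
# a deployment would substitute its domain-finetuned checkpoint here.
model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")

fmea_failure_modes = [
    "seal degradation under elevated curing temperature",
    "fastener torque loss due to vibration",
    "coating delamination from substrate contamination",
]
deviation = "raise curing temperature by 15 C to shorten the cycle time"

# (1) Rank known FMEA failure modes by cosine similarity to the deviation.
scores = util.cos_sim(model.encode(deviation),
                      model.encode(fmea_failure_modes))[0]
best = max(range(len(fmea_failure_modes)), key=lambda i: float(scores[i]))
matched = fmea_failure_modes[best]

# (2) Property graph: process steps, failure modes, and customer-reported
# failures as nodes; typed edges carry the causal relation.
g = nx.DiGraph()
g.add_edge("curing process", matched, relation="causes")
g.add_edge(matched, "warranty claim: coolant leak", relation="leads_to")

# The explainable recommendation is the causal chain from the affected
# process step through the matched failure mode to a reported failure.
chain = nx.shortest_path(g, "curing process", "warranty claim: coolant leak")
print(f"matched failure mode: {matched} (score={float(scores[best]):.2f})")
print("causal chain:", " -> ".join(chain))
```

In a real deployment the graph would be populated from the bill of materials, FMEA tables, and customer failure reports, and user feedback on the recommendations would supply the training pairs for the continuous model updates described above.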
Related papers
- Retrieval-Augmented Instruction Tuning for Automated Process Engineering Calculations : A Tool-Chaining Problem-Solving Framework with Attributable Reflection [0.0]
We introduce a novel autonomous agent framework leveraging Retrieval-Augmented Instruction-Tuning (RAIT) to enhance open, customizable small code language models (SLMs).
By combining instruction-tuned code SLMs with Retrieval-Augmented Code Generation (RACG) using external tools, the agent generates, debugs, and optimizes code from natural language specifications.
Our approach addresses the current lack of a foundational AI model for specialized process engineering tasks and offers benefits of explainability, knowledge editing, and cost-effectiveness.
arXiv Detail & Related papers (2024-08-28T15:33:47Z)
- Knowledge Graph Modeling-Driven Large Language Model Operating System (LLM OS) for Task Automation in Process Engineering Problem-Solving [0.0]
We present the Process Engineering Operations Assistant (PEOA), an AI-driven framework designed to solve complex problems in the chemical and process industries.
The framework employs a modular architecture orchestrated by a meta-agent, which serves as the central coordinator.
The results demonstrate the framework's effectiveness in automating calculations, accelerating prototyping, and providing AI-augmented decision support for industrial processes.
arXiv Detail & Related papers (2024-08-23T13:52:47Z)
- Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach [55.613461060997004]
Large Language Models (LLMs) have catalyzed transformative advances across a spectrum of natural language processing tasks.
We propose an innovative metacognitive approach, dubbed CLEAR, to equip LLMs with capabilities for self-aware error identification and correction.
arXiv Detail & Related papers (2024-03-08T19:18:53Z)
- A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications [11.568575664316143]
This paper provides a structured overview of recent advancements in prompt engineering, categorized by application area.
We provide a summary detailing the prompting methodology, its applications, the models involved, and the datasets utilized.
This systematic analysis enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering.
arXiv Detail & Related papers (2024-02-05T19:49:13Z)
- Tradeoffs Between Alignment and Helpfulness in Language Models with Representation Engineering [15.471566708181824]
We study the tradeoff between the increase in alignment and decrease in helpfulness of the model.
Under the conditions of our framework, alignment can be guaranteed with representation engineering.
We show that helpfulness is harmed quadratically with the norm of the representation engineering vector.
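Schematically, and in our own notation rather than the paper's exact definitions, the quoted result can be restated as:

```latex
% Hedged restatement (our notation, not the paper's): v is the
% representation-engineering vector injected into the hidden states,
% H(v) the resulting helpfulness. The summary above states that the
% drop in helpfulness grows quadratically in the norm of v:
\[
  \Delta H(v) \;=\; H(0) - H(v) \;=\; O\!\left(\lVert v \rVert^{2}\right)
\]
```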
arXiv Detail & Related papers (2024-01-29T17:38:14Z)
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata, into continuous prompts.
arXiv Detail & Related papers (2023-07-04T02:47:42Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- Natural Language Processing for Systems Engineering: Automatic Generation of Systems Modelling Language Diagrams [0.10312968200748115]
An approach is proposed to assist systems engineers in the automatic generation of systems diagrams from unstructured natural language text.
The intention is to provide users with a more standardised, comprehensive, and automated starting point from which they can subsequently refine and adapt the diagrams according to their needs.
arXiv Detail & Related papers (2022-08-09T19:20:33Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take several aspects of the produced adversarial instances into account.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- KAT: A Knowledge Augmented Transformer for Vision-and-Language [56.716531169609915]
We propose a novel model - Knowledge Augmented Transformer (KAT) - which achieves a strong state-of-the-art result on the open-domain multimodal task of OK-VQA.
Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while still jointly reasoning over both knowledge sources during answer generation.
An additional benefit of explicit knowledge integration is seen in improved interpretability of model predictions in our analysis.
arXiv Detail & Related papers (2021-12-16T04:37:10Z)
- Contextualized Perturbation for Textual Adversarial Attack [56.370304308573274]
Adversarial examples expose the vulnerabilities of natural language processing (NLP) models.
This paper presents CLARE, a ContextuaLized AdversaRial Example generation model that produces fluent and grammatical outputs.
arXiv Detail & Related papers (2020-09-16T06:53:15Z)