Knowledge Sharing in Manufacturing using Large Language Models: User
Evaluation and Model Benchmarking
- URL: http://arxiv.org/abs/2401.05200v2
- Date: Mon, 26 Feb 2024 12:46:37 GMT
- Title: Knowledge Sharing in Manufacturing using Large Language Models: User
Evaluation and Model Benchmarking
- Authors: Samuel Kernan Freire, Chaofan Wang, Mina Foosherian, Stefan Wellsandt,
Santiago Ruiz-Arenas and Evangelos Niforatos
- Abstract summary: Large Language Model (LLM)-based system designed to retrieve information from factory documentation and knowledge shared by expert operators.
System aims to efficiently answer queries from operators and facilitate the sharing of new knowledge.
- Score: 7.976952274443561
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent advances in natural language processing enable more intelligent ways
to support knowledge sharing in factories. In manufacturing, operating
production lines has become increasingly knowledge-intensive, putting strain on
a factory's capacity to train and support new operators. This paper introduces
a Large Language Model (LLM)-based system designed to retrieve information from
the extensive knowledge contained in factory documentation and knowledge shared
by expert operators. The system aims to efficiently answer queries from
operators and facilitate the sharing of new knowledge. We conducted a user
study at a factory to assess its potential impact and adoption, eliciting
several perceived benefits, namely, enabling quicker information retrieval and
more efficient resolution of issues. However, the study also highlighted a
preference for learning from a human expert when such an option is available.
Furthermore, we benchmarked several commercial and open-sourced LLMs for this
system. The current state-of-the-art model, GPT-4, consistently outperformed
its counterparts, with open-source models trailing closely, presenting an
attractive option given their data privacy and customization benefits. In
summary, this work offers preliminary insights and a system design for
factories considering using LLM tools for knowledge management.
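The retrieve-then-answer loop such a system implies can be sketched in a deliberately simplified form. Everything here is an illustrative assumption rather than the authors' implementation: the sample documents, the function names, and the bag-of-words retriever stand in for what would, in practice, be embedding-based retrieval over factory documentation followed by a real LLM call.

```python
# Minimal sketch of a retrieve-then-answer loop over factory knowledge.
# DOCS, retrieve(), and build_prompt() are hypothetical names for illustration;
# the paper's system is not open-sourced here and likely differs in detail.
import math
from collections import Counter

DOCS = [
    "Conveyor belt B3: reset the drive by holding the green button for 5 seconds.",
    "Filling station F1: if the nozzle drips, replace the seal and recalibrate.",
    "Expert tip: a squealing noise on line 2 usually means the bearing needs grease.",
]

def vectorize(text):
    # Bag-of-words term counts; a real system would use dense embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the operator's query and keep the top k.
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Ground the LLM's answer in the retrieved context only.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I reset the drive on belt B3?", DOCS))
```

The prompt built this way would then be sent to whichever LLM the factory selects (the paper benchmarks GPT-4 against open-source alternatives for exactly this role); new knowledge shared by expert operators can be supported by appending it to the document store.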
Related papers
- Large Language Models for Manufacturing [41.12098478080648]
Large Language Models (LLMs) have the potential to transform the manufacturing industry, offering new opportunities to optimize processes, improve efficiency, and drive innovation.
This paper examines the integration of LLMs into the manufacturing domain, highlighting their potential to automate and enhance various aspects of manufacturing.
arXiv Detail & Related papers (2024-10-28T18:13:47Z)
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
Large Language Models (LLMs) pretrained on massive text corpora present a promising avenue for enhancing recommender systems.
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- Enhancing Question Answering for Enterprise Knowledge Bases using Large Language Models [46.51659135636255]
EKRG is a novel Retrieval-Generation framework based on large language models (LLMs).
We introduce an instruction-tuning method using an LLM to generate sufficient document-question pairs for training a knowledge retriever.
We develop a relevance-aware teacher-student learning strategy to further enhance the efficiency of the training process.
arXiv Detail & Related papers (2024-04-10T10:38:17Z)
- ChatSOS: LLM-based knowledge Q&A system for safety engineering [0.0]
This study introduces an LLM-based Q&A system for safety engineering, enhancing the comprehension and response accuracy of the model.
We employ prompt engineering to incorporate external knowledge databases, thus enriching the LLM with up-to-date and reliable information.
Our findings indicate that the integration of external knowledge significantly augments the capabilities of LLM for in-depth problem analysis and autonomous task assignment.
arXiv Detail & Related papers (2023-12-14T03:25:23Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- ExpeL: LLM Agents Are Experiential Learners [60.54312035818746]
We introduce the Experiential Learning (ExpeL) agent to allow learning from agent experiences without requiring parametric updates.
Our agent autonomously gathers experiences and extracts knowledge using natural language from a collection of training tasks.
At inference, the agent recalls its extracted insights and past experiences to make informed decisions.
arXiv Detail & Related papers (2023-08-20T03:03:34Z)
- Knowledge Rumination for Pre-trained Language Models [77.55888291165462]
We propose a new paradigm dubbed Knowledge Rumination to help the pre-trained language model utilize related latent knowledge without retrieving it from the external corpus.
We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3.
arXiv Detail & Related papers (2023-05-15T15:47:09Z)
- LM-CORE: Language Models with Contextually Relevant External Knowledge [13.451001884972033]
We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing volume of knowledge and the associated resource requirements.
We present LM-CORE -- a general framework to achieve this -- that allows decoupling of the language model training from the external knowledge source.
Experimental results show that LM-CORE, having access to external knowledge, significantly and robustly outperforms state-of-the-art knowledge-enhanced language models on knowledge probing tasks.
arXiv Detail & Related papers (2022-08-12T18:59:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.