Trusted Knowledge Extraction for Operations and Maintenance Intelligence
- URL: http://arxiv.org/abs/2507.22935v1
- Date: Thu, 24 Jul 2025 17:36:16 GMT
- Title: Trusted Knowledge Extraction for Operations and Maintenance Intelligence
- Authors: Kathleen Mealey, Jonathan A. Karr Jr., Priscila Saboia Moreira, Paul R. Brenner, Charles F. Vardeman II
- Abstract summary: We focus on the operational and maintenance intelligence use case for trusted applications in the aircraft industry. A baseline dataset is derived from a rich public domain US Federal Aviation Administration dataset focused on equipment failures or maintenance requirements. We assess the zero-shot performance of NLP and LLM tools that can be operated within a controlled, confidential environment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deriving operational intelligence from organizational data repositories is a key challenge due to the dichotomy of data confidentiality vs data integration objectives, as well as the limitations of Natural Language Processing (NLP) tools relative to the specific knowledge structure of domains such as operations and maintenance. In this work, we discuss Knowledge Graph construction and break down the Knowledge Extraction process into its Named Entity Recognition, Coreference Resolution, Named Entity Linking, and Relation Extraction functional components. We then evaluate sixteen NLP tools in concert with or in comparison to the rapidly advancing capabilities of Large Language Models (LLMs). We focus on the operational and maintenance intelligence use case for trusted applications in the aircraft industry. A baseline dataset is derived from a rich public domain US Federal Aviation Administration dataset focused on equipment failures or maintenance requirements. We assess the zero-shot performance of NLP and LLM tools that can be operated within a controlled, confidential environment (no data is sent to third parties). Based on our observation of significant performance limitations, we discuss the challenges related to trusted NLP and LLM tools as well as their Technical Readiness Level for wider use in mission-critical industries such as aviation. We conclude with recommendations to enhance trust and provide our open-source curated dataset to support further baseline testing and evaluation.
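The abstract decomposes Knowledge Extraction into four functional components run entirely within a controlled environment. A minimal, self-contained sketch of that decomposition is shown below; the entity lexicon, alias map, knowledge-base identifiers, and sample maintenance record are illustrative stand-ins invented for this sketch, not the paper's actual dataset or tooling.

```python
# Hypothetical sketch of a local Knowledge Extraction pipeline split into the
# four components named in the abstract: Named Entity Recognition, Coreference
# Resolution, Named Entity Linking, and Relation Extraction. All lookup tables
# are toy examples; no data leaves the process.

ENTITY_LEXICON = {            # NER: surface form -> entity type
    "hydraulic pump": "COMPONENT",
    "left engine": "COMPONENT",
    "fluid leak": "FAILURE_MODE",
}
ALIASES = {"the pump": "hydraulic pump"}   # Coreference: mention -> antecedent
KNOWLEDGE_BASE = {            # NEL: canonical entity -> stable identifier
    "hydraulic pump": "kb:comp/0042",
    "left engine": "kb:comp/0007",
    "fluid leak": "kb:fail/0113",
}

def coreference_resolution(text):
    # Rewrite anaphoric mentions to their canonical antecedents.
    text = text.lower()
    for mention, antecedent in ALIASES.items():
        text = text.replace(mention, antecedent)
    return text

def named_entity_recognition(text):
    # Dictionary-lookup NER over the resolved text.
    return [(m, t) for m, t in ENTITY_LEXICON.items() if m in text]

def named_entity_linking(entities):
    # Attach a knowledge-base identifier to each recognized entity.
    return [(m, t, KNOWLEDGE_BASE.get(m)) for m, t in entities]

def relation_extraction(entities):
    # Toy heuristic: each component co-occurring with a failure mode
    # yields a (component, exhibits, failure) triple for the graph.
    comps = [m for m, t, _ in entities if t == "COMPONENT"]
    fails = [m for m, t, _ in entities if t == "FAILURE_MODE"]
    return [(c, "exhibits", f) for c in comps for f in fails]

def extract(record):
    resolved = coreference_resolution(record)
    linked = named_entity_linking(named_entity_recognition(resolved))
    return linked, relation_extraction(linked)

entities, triples = extract("Fluid leak observed at the pump on the left engine.")
```

In a real deployment each stage would be backed by one of the sixteen evaluated NLP tools or a locally hosted LLM rather than static lookup tables, but the stage boundaries and data flow stay the same.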
Related papers
- Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger [49.81945268343162]
We propose MeCo, an adaptive decision-making strategy for external tool use. MeCo quantifies metacognitive scores by capturing high-level cognitive signals in the representation space. MeCo is fine-tuning-free and incurs minimal cost.
arXiv Detail & Related papers (2025-02-18T15:45:01Z) - Towards Human-Guided, Data-Centric LLM Co-Pilots [53.35493881390917]
CliMB-DC is a human-guided, data-centric framework for machine learning co-pilots. It combines advanced data-centric tools with LLM-driven reasoning to enable robust, context-aware data processing. We show how CliMB-DC can transform uncurated datasets into ML-ready formats.
arXiv Detail & Related papers (2025-01-17T17:51:22Z) - ToolBridge: An Open-Source Dataset to Equip LLMs with External Tool Capabilities [43.232034005763005]
This paper aims to elucidate the detailed process involved in constructing datasets that empower language models to learn how to utilize external tools.
ToolBridge proposes to employ a collection of general open-access datasets as its raw dataset pool.
By supervised fine-tuning on these curated data entries, LLMs can invoke external tools in appropriate contexts to boost their predictive accuracy.
arXiv Detail & Related papers (2024-10-08T20:54:40Z) - Learning to Ask: When LLM Agents Meet Unclear Instruction [55.65312637965779]
Large language models (LLMs) can leverage external tools for addressing a range of tasks unattainable through language skills alone. We evaluate the performance of LLM tool-use under imperfect instructions, analyze the error patterns, and build a challenging tool-use benchmark called Noisy ToolBench. We propose a novel framework, Ask-when-Needed (AwN), which prompts LLMs to ask questions to users whenever they encounter obstacles due to unclear instructions.
arXiv Detail & Related papers (2024-08-31T23:06:12Z) - Assessing the Performance of Chinese Open Source Large Language Models in Information Extraction Tasks [12.400599440431188]
Information Extraction (IE) plays a crucial role in Natural Language Processing (NLP)
Recent experiments focusing on English IE tasks have shed light on the challenges faced by Large Language Models (LLMs) in achieving optimal performance.
arXiv Detail & Related papers (2024-06-04T08:00:40Z) - DeepFMEA -- A Scalable Framework Harmonizing Process Expertise and Data-Driven PHM [0.0]
In most industrial settings, data is often limited in quantity, and its quality can be inconsistent.
To bridge this gap in practice, successfully industrialized PHM tools rely on the introduction of domain expertise as a prior.
DeepFMEA draws inspiration from the Failure Mode and Effects Analysis (FMEA) in its structured approach to the analysis of any technical system.
arXiv Detail & Related papers (2024-05-13T09:41:34Z) - LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z) - ChatSOS: LLM-based knowledge Q&A system for safety engineering [0.0]
This study introduces an LLM-based Q&A system for safety engineering, enhancing the comprehension and response accuracy of the model.
We employ prompt engineering to incorporate external knowledge databases, thus enriching the LLM with up-to-date and reliable information.
Our findings indicate that the integration of external knowledge significantly augments the capabilities of LLM for in-depth problem analysis and autonomous task assignment.
arXiv Detail & Related papers (2023-12-14T03:25:23Z) - Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty [52.72790059506241]
Open Information Extraction (OIE) task aims at extracting structured facts from unstructured text.
Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks.
arXiv Detail & Related papers (2023-09-07T01:35:24Z) - Thrust: Adaptively Propels Large Language Models with External Knowledge [69.50273822565363]
Large-scale pre-trained language models (PTLMs) are shown to encode rich knowledge in their model parameters. The inherent knowledge in PTLMs can be opaque or static, making external knowledge necessary. We propose the instance-level adaptive propulsion of external knowledge (IAPEK), where we only conduct the retrieval when necessary.
arXiv Detail & Related papers (2023-07-19T20:16:46Z) - Extracting Semantics from Maintenance Records [0.2578242050187029]
We develop three approaches to named entity recognition from maintenance records: a syntactic-rules-based approach, a semantics-based approach, and an approach leveraging a pre-trained language model.
Our evaluations on a real-world aviation maintenance records dataset show promising results.
arXiv Detail & Related papers (2021-08-11T21:23:10Z)