Machine Learning Model Integration with Open World Temporal Logic for Process Automation
- URL: http://arxiv.org/abs/2506.17776v2
- Date: Sun, 27 Jul 2025 13:55:29 GMT
- Title: Machine Learning Model Integration with Open World Temporal Logic for Process Automation
- Authors: Dyuman Aditya, Colton Payne, Mario Leiva, Paulo Shakarian
- Abstract summary: This paper introduces a novel approach that integrates the outputs from various machine learning models directly with the PyReason framework. PyReason's foundation in generalized annotated logic allows for the seamless incorporation of real-valued outputs from diverse ML models. This integration finds utility across numerous domains, including manufacturing, healthcare, and business operations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in Machine Learning (ML) have yielded powerful models capable of extracting structured information from diverse and complex data sources. However, a significant challenge lies in translating these perceptual or extractive outputs into actionable, reasoned decisions within complex operational workflows. To address these challenges, this paper introduces a novel approach that integrates the outputs from various machine learning models directly with the PyReason framework, an open-world temporal logic programming reasoning engine. PyReason's foundation in generalized annotated logic allows for the seamless incorporation of real-valued outputs (e.g., probabilities, confidence scores) from diverse ML models, treating them as truth intervals within its logical framework. Crucially, PyReason provides mechanisms, implemented in Python, to continuously poll ML model outputs, convert them into logical facts, and dynamically recompute the minimal model, ensuring real-time adaptive decision-making. Furthermore, its native support for temporal reasoning, knowledge graph integration, and fully explainable inference traces enables sophisticated analysis over time-sensitive process data and existing organizational knowledge. By combining the strengths of perception and extraction from ML models with the logical deduction and transparency of PyReason, we aim to create a powerful system for automating complex processes. This integration finds utility across numerous domains, including manufacturing, healthcare, and business operations.
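The abstract's core mechanism can be illustrated with a minimal sketch: scalar confidence scores from ML models are mapped into truth intervals, packaged as logical facts, and fed to a reasoner whenever new outputs are polled. Note this is a hedged illustration only; the `Fact` class, `to_fact`, and `poll_and_reason` names are stand-ins for illustration, not the actual PyReason API.

```python
# Illustrative sketch (NOT the real PyReason API): turn ML confidence
# scores into truth-interval facts of the kind an annotated-logic
# reasoner consumes, then re-run reasoning on each polling cycle.
from dataclasses import dataclass

@dataclass
class Fact:
    predicate: str   # e.g. "defective"
    subject: str     # e.g. a node in the knowledge graph
    lower: float     # lower bound of the truth interval
    upper: float     # upper bound of the truth interval

def to_fact(predicate: str, subject: str, confidence: float,
            slack: float = 0.05) -> Fact:
    """Widen a point confidence c into the interval
    [max(0, c - slack), min(1, c + slack)], reflecting
    uncertainty about the model's calibration."""
    lo = max(0.0, confidence - slack)
    hi = min(1.0, confidence + slack)
    return Fact(predicate, subject, lo, hi)

def poll_and_reason(model_outputs, reason):
    """One polling cycle: convert raw (predicate, subject, confidence)
    triples from ML models into facts and hand them to a reasoning
    callback that recomputes the (minimal) model."""
    facts = [to_fact(p, s, c) for (p, s, c) in model_outputs]
    return reason(facts)
```

In the paper's setting, the `reason` callback would correspond to invoking PyReason's fixed-point computation over the updated fact base; the interval widening step is one plausible policy for treating confidences as open-world truth bounds rather than crisp Booleans.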
Related papers
- Feature Engineering for Agents: An Adaptive Cognitive Architecture for Interpretable ML Monitoring [2.1205272468688574]
We propose a cognitive architecture for ML monitoring that applies feature engineering principles to agents based on Large Language Models. A Decision Procedure module simulates feature engineering through three key steps: Refactor, Break Down, and Compile. Experiments using multiple LLMs demonstrate the efficacy of our approach, achieving significantly higher accuracy compared to various baselines.
arXiv Detail & Related papers (2025-06-11T13:48:25Z) - MLE-Dojo: Interactive Environments for Empowering LLM Agents in Machine Learning Engineering [57.156093929365255]
MLE-Dojo is a Gym-style framework for systematically training (via reinforcement learning), evaluating, and improving autonomous large language model (LLM) agents. MLE-Dojo covers diverse, open-ended MLE tasks carefully curated to reflect realistic engineering scenarios. Its fully executable environment supports comprehensive agent training via both supervised fine-tuning and reinforcement learning.
arXiv Detail & Related papers (2025-05-12T17:35:43Z) - Evaluating Large Language Models for Real-World Engineering Tasks [75.97299249823972]
This paper introduces a curated database comprising over 100 questions derived from authentic, production-oriented engineering scenarios. Using this dataset, we evaluate four state-of-the-art Large Language Models (LLMs). Our results show that LLMs demonstrate strengths in basic temporal and structural reasoning but struggle significantly with abstract reasoning, formal modeling, and context-sensitive engineering logic.
arXiv Detail & Related papers (2025-05-12T14:05:23Z) - Enhancing Large Language Models through Neuro-Symbolic Integration and Ontological Reasoning [0.0]
Large Language Models (LLMs) demonstrate impressive capabilities in natural language processing but suffer from inaccuracies and logical inconsistencies known as hallucinations. We propose a neuro-symbolic approach integrating symbolic ontological reasoning and machine learning methods to enhance the consistency and reliability of LLM outputs.
arXiv Detail & Related papers (2025-04-10T10:39:24Z) - Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks. However, they still struggle with problems requiring multi-step decision-making and environmental feedback. We propose a framework that can automatically learn a reward model from the environment without human annotations.
arXiv Detail & Related papers (2025-02-17T18:49:25Z) - Transformers Use Causal World Models in Maze-Solving Tasks [49.67445252528868]
We identify World Models in transformers trained on maze-solving tasks. We find that it is easier to activate features than to suppress them. Positional encoding schemes appear to influence how World Models are structured within the model's residual stream.
arXiv Detail & Related papers (2024-12-16T15:21:04Z) - Control Industrial Automation System with Large Language Model Agents [2.2369578015657954]
This paper introduces a framework for integrating large language models with industrial automation systems. At the core of the framework are an agent system designed for industrial tasks, a structured prompting method, and an event-driven information modeling mechanism. Our contribution includes a formal system design, proof-of-concept implementation, and a method for generating task-specific datasets.
arXiv Detail & Related papers (2024-09-26T16:19:37Z) - Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z) - Unveiling LLM Mechanisms Through Neural ODEs and Control Theory [4.084134914321567]
This paper proposes a framework combining Neural Ordinary Differential Equations (Neural ODEs) and robust control theory to enhance the interpretability and control of large language models (LLMs). Experimental results show that the integration of Neural ODEs and control theory significantly improves output consistency and model interpretability, advancing the development of explainable AI technologies.
arXiv Detail & Related papers (2024-06-23T22:56:34Z) - Automata Extraction from Transformers [5.419884861365132]
We propose an automata extraction algorithm specifically designed for Transformer models.
Treating the Transformer model as a black-box system, we track the model through the transformation process of their internal latent representations.
We then use classical pedagogical approaches like L* algorithm to interpret them as deterministic finite-state automata.
arXiv Detail & Related papers (2024-06-08T20:07:24Z) - AXOLOTL: Fairness through Assisted Self-Debiasing of Large Language Model Outputs [20.772266479533776]
AXOLOTL is a novel post-processing framework that operates agnostically across tasks and models.
It identifies biases, proposes resolutions, and guides the model to self-debias its outputs.
This approach minimizes computational costs and preserves model performance.
arXiv Detail & Related papers (2024-03-01T00:02:37Z) - DeforestVis: Behavior Analysis of Machine Learning Models with Surrogate Decision Stumps [46.58231605323107]
We propose DeforestVis, a visual analytics tool that offers summarization of the behaviour of complex ML models.
DeforestVis helps users to explore the complexity versus fidelity trade-off by incrementally generating more stumps.
We show the applicability and usefulness of DeforestVis with two use cases and expert interviews with data analysts and model developers.
arXiv Detail & Related papers (2023-03-31T21:17:15Z) - SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the needed procedures and pipelines for the actual deployment of machine learning capabilities in real production grade systems. In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor framework and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.