OntoPret: An Ontology for the Interpretation of Human Behavior
- URL: http://arxiv.org/abs/2510.23553v1
- Date: Mon, 27 Oct 2025 17:28:51 GMT
- Title: OntoPret: An Ontology for the Interpretation of Human Behavior
- Authors: Alexis Ellis, Stacie Severyn, Fjollë Novakazi, Hadi Banaee, Cogan Shimizu
- Abstract summary: A research gap exists between techno-centric robotic frameworks, which often lack nuanced models of human behavior, and descriptive behavioral ontologies, which are not designed for real-time, collaborative interpretation. This paper addresses this gap by presenting OntoPret, an ontology for the interpretation of human behavior.
- Score: 0.024466725954625887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As human-machine teaming becomes central to paradigms like Industry 5.0, a critical need arises for machines to safely and effectively interpret complex human behaviors. A research gap currently exists between techno-centric robotic frameworks, which often lack nuanced models of human behavior, and descriptive behavioral ontologies, which are not designed for real-time, collaborative interpretation. This paper addresses this gap by presenting OntoPret, an ontology for the interpretation of human behavior. Grounded in cognitive science and a modular engineering methodology, OntoPret provides a formal, machine-processable framework for classifying behaviors, including task deviations and deceptive actions. We demonstrate its adaptability across two distinct use cases, manufacturing and gameplay, and establish the semantic foundations necessary for advanced reasoning about human intentions.
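The abstract describes a formal, machine-processable framework for classifying behaviors, including task deviations and deceptive actions. As a rough illustration only, subsumption-style classification of the kind an ontology reasoner performs can be sketched with plain Python subclassing. The class names below (`Behavior`, `ExpectedAction`, `TaskDeviation`, `DeceptiveAction`) are hypothetical stand-ins, not terms taken from OntoPret itself:

```python
# Hypothetical sketch of ontology-style behavior classes; these names are
# illustrative assumptions, not OntoPret's actual vocabulary.

class Behavior:
    """Root class for interpreted human behaviors."""

class ExpectedAction(Behavior):
    """Behavior consistent with the shared task plan."""

class TaskDeviation(Behavior):
    """Behavior diverging from the expected task sequence."""

class DeceptiveAction(TaskDeviation):
    """A deviation performed with intent to mislead."""

def classify(observed: type) -> list[str]:
    """Return every behavior category the observation falls under,
    walking the subclass hierarchy (a toy analogue of subsumption)."""
    return [cls.__name__ for cls in observed.__mro__
            if issubclass(cls, Behavior)]

# A deceptive action is subsumed by TaskDeviation and by the root Behavior,
# mirroring how an ontology reasoner would classify an individual.
print(classify(DeceptiveAction))  # ['DeceptiveAction', 'TaskDeviation', 'Behavior']
```

A real implementation would encode such a hierarchy in OWL and delegate subsumption to a description-logic reasoner; the sketch above only mirrors the subclass relationships.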
Related papers
- When Should an AI Act? A Human-Centered Model of Scene, Context, and Behavior for Agentic AI Design [0.44743648495907423]
Agentic AI increasingly intervenes proactively by inferring users' situations from contextual data.
We propose a conceptual model that reframes behavior as an outcome integrating interpretive Scene, Context, and Human Behavior Factors.
We derive five agent design principles that guide intervention depth, timing, intensity, and restraint.
arXiv Detail & Related papers (2026-02-26T09:56:37Z) - Explicit World Models for Reliable Human-Robot Collaboration [13.90528067304433]
We take a radically different tack to the issue of reliable embodied AI.
We emphasise the dynamic, ambiguous and subjective nature of human-robot interactions.
arXiv Detail & Related papers (2026-01-05T00:58:19Z) - Dehumanizing Machines: Mitigating Anthropomorphic Behaviors in Text Generation Systems [55.99010491370177]
How to intervene on such system outputs to mitigate anthropomorphic behaviors and their attendant harmful outcomes remains understudied.
We compile an inventory of interventions grounded both in prior literature and a crowdsourcing study where participants edited system outputs to make them less human-like.
We also develop a conceptual framework to help characterize the landscape of possible interventions, articulate distinctions between different types of interventions, and provide a theoretical basis for evaluating the effectiveness of different interventions.
arXiv Detail & Related papers (2025-02-19T18:06:37Z) - Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions [5.742409080817885]
We propose an application of causal discovery methods to model human-robot spatial interactions.
New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm.
arXiv Detail & Related papers (2022-10-29T08:56:48Z) - Robust Planning for Human-Robot Joint Tasks with Explicit Reasoning on Human Mental State [2.8246074016493457]
We consider the human-aware task planning problem where a human-robot team is given a shared task with a known objective to achieve.
Recent approaches tackle it by modeling it as a team of independent, rational agents, where the robot plans for both agents' (shared) tasks.
We describe a novel approach to solve such problems, which models and uses execution-time observability conventions.
arXiv Detail & Related papers (2022-10-17T09:21:00Z) - Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z) - Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
We focus on techniques which learn a model or policy of behavior through exploration and feedback.
Next generation autonomous and adaptive systems will largely include AI agents and humans working together as teams.
arXiv Detail & Related papers (2022-05-13T07:33:49Z) - Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z) - Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Deep Interpretable Models of Theory of Mind For Human-Agent Teaming [0.7734726150561086]
We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
arXiv Detail & Related papers (2021-04-07T06:18:58Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.