Investigating the Combination of Planning-Based and Data-Driven Methods
for Goal Recognition
- URL: http://arxiv.org/abs/2301.05608v1
- Date: Fri, 13 Jan 2023 15:24:02 GMT
- Title: Investigating the Combination of Planning-Based and Data-Driven Methods
for Goal Recognition
- Authors: Nils Wilken, Lea Cohausz, Johannes Schaum, Stefan Lüdtke and Heiner
Stuckenschmidt
- Abstract summary: We investigate the application of two state-of-the-art, planning-based plan recognition approaches in a real-world setting.
We show that such approaches have difficulties when used to recognize the goals of human subjects, because human behaviour is typically not perfectly rational.
We propose an extension to the existing approaches through a classification-based method trained on observed behaviour data.
- Score: 7.620967781722714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important feature of pervasive, intelligent assistance systems is the
ability to dynamically adapt to the current needs of their users. Hence, it is
critical for such systems to be able to recognize those goals and needs based
on observations of the user's actions and state of the environment. In this
work, we investigate the application of two state-of-the-art, planning-based
plan recognition approaches in a real-world setting. So far, these approaches
were only evaluated in artificial settings in combination with agents that act
perfectly rational. We show that such approaches have difficulties when used to
recognize the goals of human subjects, because human behaviour is typically not
perfectly rational. To overcome this issue, we propose an extension to the
existing approaches through a classification-based method trained on observed
behaviour data. We empirically show that the proposed extension not only
outperforms the purely planning-based and purely data-driven goal recognition
methods but also recognizes the correct goal more reliably, especially when
only a small number of observations has been seen. This substantially improves
the usefulness of hybrid goal recognition approaches for intelligent assistance
systems, as recognizing a goal early opens up many more possibilities for
supportive reactions by the system.
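The abstract does not give implementation details, but the core idea of fusing a planning-based goal posterior with a classifier trained on observed behaviour can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a Ramírez-and-Geffner-style cost-difference likelihood on the planning side, and the weighted-product fusion rule and all function names are assumptions.

```python
import math

def planning_scores(cost_with_obs, cost_without_obs, beta=1.0):
    # Cost-difference likelihoods in the style of planning-based plan
    # recognition: a goal is more likely when complying with the observed
    # actions barely increases the cost of its optimal plan.
    return {g: math.exp(-beta * (cost_with_obs[g] - cost_without_obs[g]))
            for g in cost_with_obs}

def combine(planning, classifier, w=0.5):
    # Hypothetical hybrid step: fuse the two estimates with a normalized
    # weighted product, so data-driven evidence can correct a planner that
    # assumes perfectly rational behaviour.
    raw = {g: (planning[g] ** w) * (classifier[g] ** (1 - w))
           for g in planning}
    z = sum(raw.values())
    return {g: v / z for g, v in raw.items()}

# Example: goal "A" is both cheap to comply with and favoured by the
# classifier, so the hybrid posterior should rank it first.
p = planning_scores({"A": 5, "B": 9}, {"A": 5, "B": 6})
posterior = combine(p, {"A": 0.7, "B": 0.3})
```

The weight `w` controls how much the planner is trusted relative to the learned classifier; the paper's actual combination scheme may differ.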
Related papers
- Handling Reward Misspecification in the Presence of Expectation Mismatch [19.03141646688652]
We use the theory of mind, i.e., the human user's beliefs about the AI agent, as a basis to develop a formal explanatory framework.
We propose a new interactive algorithm that uses the specified reward to infer potential user expectations.
arXiv Detail & Related papers (2024-04-12T19:43:37Z) - Evidential Active Recognition: Intelligent and Prudent Open-World
Embodied Perception [21.639429724987902]
Active recognition enables robots to explore novel observations, thereby acquiring more information while circumventing undesired viewing conditions.
Most recognition modules are developed under the closed-world assumption, which makes them ill-equipped to handle unexpected inputs, such as the absence of the target object in the current observation.
We propose treating active recognition as a sequential evidence-gathering process, providing step-by-step uncertainty estimates and reliable prediction under evidence combination theory.
arXiv Detail & Related papers (2023-11-23T03:51:46Z) - Evaluating General-Purpose AI with Psychometrics [43.85432514910491]
We discuss the need for a comprehensive and accurate evaluation of general-purpose AI systems such as large language models.
Current evaluation methodology, mostly based on benchmarks of specific tasks, falls short of adequately assessing these versatile AI systems.
To tackle these challenges, we suggest transitioning from task-oriented evaluation to construct-oriented evaluation.
arXiv Detail & Related papers (2023-10-25T05:38:38Z) - Leveraging Planning Landmarks for Hybrid Online Goal Recognition [7.690707525070737]
We propose a hybrid method for online goal recognition that combines a symbolic planning landmark based approach and a data-driven goal recognition approach.
The proposed method is not only significantly more efficient in terms of computation time than the state-of-the-art but also improves goal recognition performance.
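The summary above does not state the exact landmark formulation, but a common way to realize the symbolic component of such a hybrid is a landmark-completion heuristic: score each candidate goal by the fraction of its planning landmarks already achieved in the observations. The sketch below assumes landmarks are given as sets of facts per goal; all names are illustrative.

```python
def landmark_completion(achieved_facts, goal_landmarks):
    # For each goal, count how many of its landmarks (here: fact sets that
    # any plan for the goal must achieve) are subsets of the facts observed
    # so far, and normalize by the number of landmarks.
    scores = {}
    for goal, landmarks in goal_landmarks.items():
        done = sum(1 for lm in landmarks if lm <= achieved_facts)
        scores[goal] = done / len(landmarks)
    return scores

# Example: having reached the hall and then the kitchen completes all
# landmarks of "kitchen" but only half of those of "office".
goal_landmarks = {
    "kitchen": [frozenset({"at_hall"}), frozenset({"at_kitchen"})],
    "office": [frozenset({"at_hall"}), frozenset({"at_office"})],
}
scores = landmark_completion({"at_hall", "at_kitchen"}, goal_landmarks)
```

Because landmark extraction runs once per goal offline, scoring each new observation is cheap, which is consistent with the summary's efficiency claim.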
arXiv Detail & Related papers (2023-01-25T13:21:30Z) - Discrete Factorial Representations as an Abstraction for Goal
Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally evaluate the expected return on out-of-distribution goals, while still allowing goals to be specified with expressive structure.
arXiv Detail & Related papers (2022-11-01T03:31:43Z) - Modelling Assessment Rubrics through Bayesian Networks: a Pragmatic
Approach [59.77710485234197]
This paper presents an approach to deriving a learner model directly from an assessment rubric.
We illustrate how the approach can be applied to automatize the human assessment of an activity developed for testing computational thinking skills.
arXiv Detail & Related papers (2022-09-07T10:09:12Z) - Generative multitask learning mitigates target-causing confounding [61.21582323566118]
We propose a simple and scalable approach to causal representation learning for multitask learning.
The improvement comes from mitigating unobserved confounders that cause the targets, but not the input.
Our results on the Attributes of People and Taskonomy datasets reflect the conceptual improvement in robustness to prior probability shift.
arXiv Detail & Related papers (2022-02-08T20:42:14Z) - Goal-Aware Cross-Entropy for Multi-Target Reinforcement Learning [15.33496710690063]
We propose goal-aware cross-entropy (GACE) loss, that can be utilized in a self-supervised way.
We then devise goal-discriminative attention networks (GDAN) which utilize the goal-relevant information to focus on the given instruction.
arXiv Detail & Related papers (2021-10-25T14:24:39Z) - Adversarial Intrinsic Motivation for Reinforcement Learning [60.322878138199364]
We investigate whether the Wasserstein-1 distance between a policy's state visitation distribution and a target distribution can be utilized effectively for reinforcement learning tasks.
Our approach, termed Adversarial Intrinsic Motivation (AIM), estimates this Wasserstein-1 distance through its dual objective and uses it to compute a supplemental reward function.
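As a toy illustration of the dual idea (not the paper's learned critic), in a 1-D state space with a point-mass target distribution the Wasserstein-1 distance has a closed form and the optimal Kantorovich potential is known, so a potential-based supplemental reward can be written directly. The names `w1_to_goal` and `aim_style_reward` are hypothetical.

```python
def w1_to_goal(samples, goal):
    # With a point-mass target at `goal`, W1 between the visitation samples
    # and the target reduces to the mean absolute distance. AIM estimates
    # this quantity via a learned dual potential in general state spaces.
    return sum(abs(s - goal) for s in samples) / len(samples)

def aim_style_reward(s, s_next, goal):
    # Supplemental reward from the (here analytic) 1-Lipschitz potential
    # f(x) = -|x - goal|: positive exactly when a transition moves the
    # agent closer to the goal.
    f = lambda x: -abs(x - goal)
    return f(s_next) - f(s)
```

In the actual method the potential is a neural network trained adversarially under a Lipschitz constraint; the closed form above only exists in this 1-D special case.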
arXiv Detail & Related papers (2021-05-27T17:51:34Z) - Understanding the origin of information-seeking exploration in
probabilistic objectives for control [62.997667081978825]
An exploration-exploitation trade-off is central to the description of adaptive behaviour.
One approach to solving this trade-off has been to equip agents with, or propose that they possess, an intrinsic 'exploratory drive'.
We show that this combination of utility maximizing and information-seeking behaviour arises from the minimization of an entirely different class of objectives.
arXiv Detail & Related papers (2021-03-11T18:42:39Z) - AvE: Assistance via Empowerment [77.08882807208461]
We propose a new paradigm for assistance by instead increasing the human's ability to control their environment.
This task-agnostic objective preserves the person's autonomy and ability to achieve any eventual state.
arXiv Detail & Related papers (2020-06-26T04:40:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.