Probabilistic Active Goal Recognition
- URL: http://arxiv.org/abs/2507.21846v1
- Date: Tue, 29 Jul 2025 14:22:29 GMT
- Title: Probabilistic Active Goal Recognition
- Authors: Chenyuan Zhang, Cristian Rojas Cardenas, Hamid Rezatofighi, Mor Vered, Buser Say
- Abstract summary: We adopt a probabilistic framework for Active Goal Recognition. We propose an integrated solution that combines a joint belief update mechanism with a Monte Carlo Tree Search algorithm. We show that our joint belief update significantly outperforms passive goal recognition.
- Score: 12.583886275683302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In multi-agent environments, effective interaction hinges on understanding the beliefs and intentions of other agents. While prior work on goal recognition has largely treated the observer as a passive reasoner, Active Goal Recognition (AGR) focuses on strategically gathering information to reduce uncertainty. We adopt a probabilistic framework for Active Goal Recognition and propose an integrated solution that combines a joint belief update mechanism with a Monte Carlo Tree Search (MCTS) algorithm, allowing the observer to plan efficiently and infer the actor's hidden goal without requiring domain-specific knowledge. Through comprehensive empirical evaluation in a grid-based domain, we show that our joint belief update significantly outperforms passive goal recognition, and that our domain-independent MCTS performs comparably to our strong domain-specific greedy baseline. These results establish our solution as a practical and robust framework for goal inference, advancing the field toward more interactive and adaptive multi-agent systems.
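The joint belief update described in the abstract is, at its core, Bayesian inference over a set of candidate goals. A minimal sketch of that idea in a grid domain is shown below; the soft-rationality likelihood model, the `beta` parameter, and all function names are illustrative assumptions, not the paper's actual formulation.

```python
import math

def update_goal_belief(belief, likelihoods):
    """One Bayesian update step: posterior ∝ likelihood × prior."""
    posterior = {g: belief[g] * likelihoods[g] for g in belief}
    total = sum(posterior.values())
    if total == 0:
        return dict(belief)  # uninformative observation; keep the prior
    return {g: p / total for g, p in posterior.items()}

def step_likelihood(actor_pos, next_pos, goal, beta=2.0):
    """Soft-rationality assumption: steps that reduce Manhattan distance
    to a goal are exponentially more probable under that goal."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    progress = dist(actor_pos, goal) - dist(next_pos, goal)
    return math.exp(beta * progress)

# Example: two candidate goals; the actor steps toward goal A.
belief = {"A": 0.5, "B": 0.5}
goals = {"A": (5, 0), "B": (0, 5)}
actor, nxt = (2, 2), (3, 2)  # one step right, toward A
lik = {g: step_likelihood(actor, nxt, pos) for g, pos in goals.items()}
belief = update_goal_belief(belief, lik)
assert belief["A"] > belief["B"]
```

In the paper's active setting, the observer would additionally plan its own actions (e.g. via MCTS) to select observations that are expected to sharpen this posterior, rather than updating it passively.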
Related papers
- Self-Paced Collaborative and Adversarial Network for Unsupervised Domain Adaptation [74.27130400558013]
This paper proposes a new unsupervised domain adaptation approach called Collaborative and Adversarial Network (CAN). CAN uses a domain-collaborative and domain-adversarial learning strategy for training the neural network. To further enhance the discriminability in the target domain, we propose Self-Paced CAN (SPCAN).
arXiv Detail & Related papers (2025-06-24T02:58:37Z) - Qualitative Analysis of $ω$-Regular Objectives on Robust MDPs [5.129009542652635]
We study the qualitative problem for reachability and parity objectives on Robust Markov Decision Processes. We first present efficient algorithms with oracle access to uncertainty sets that solve these problems. We then report experimental results demonstrating the effectiveness of our oracle-based approach.
arXiv Detail & Related papers (2025-05-07T16:15:40Z) - Diffusion-Reinforcement Learning Hierarchical Motion Planning in Multi-agent Adversarial Games [6.532258098619471]
We propose a hierarchical architecture that integrates a high-level diffusion model to plan global paths responsive to environment data. We show that our approach outperforms baselines by 77.18% and 47.38% on detection and goal-reaching rates.
arXiv Detail & Related papers (2024-03-16T03:53:55Z) - Spatio-Temporal Domain Awareness for Multi-Agent Collaborative Perception [18.358998861454477]
Multi-agent collaborative perception, as a potential application of vehicle-to-everything communication, could significantly improve the perception performance of autonomous vehicles over single-agent perception.
We propose SCOPE, a novel collaborative perception framework that aggregates awareness characteristics across agents in an end-to-end manner.
arXiv Detail & Related papers (2023-07-26T03:00:31Z) - Attention Based Feature Fusion For Multi-Agent Collaborative Perception [4.120288148198388]
We propose an intermediate collaborative perception solution in the form of a graph attention network (GAT).
The proposed approach develops an attention-based aggregation strategy to fuse intermediate representations exchanged among multiple connected agents.
This approach adaptively highlights important regions in the intermediate feature maps at both the channel and spatial levels, resulting in improved object detection precision.
arXiv Detail & Related papers (2023-05-03T12:06:11Z) - Generative multitask learning mitigates target-causing confounding [61.21582323566118]
We propose a simple and scalable approach to causal representation learning for multitask learning.
The improvement comes from mitigating unobserved confounders that cause the targets, but not the input.
Our results on the Attributes of People and Taskonomy datasets reflect the conceptual improvement in robustness to prior probability shift.
arXiv Detail & Related papers (2022-02-08T20:42:14Z) - Goal-Aware Cross-Entropy for Multi-Target Reinforcement Learning [15.33496710690063]
We propose a goal-aware cross-entropy (GACE) loss that can be utilized in a self-supervised way.
We then devise goal-discriminative attention networks (GDAN) which utilize the goal-relevant information to focus on the given instruction.
arXiv Detail & Related papers (2021-10-25T14:24:39Z) - Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles [73.15950858151594]
This paper presents Latent Optimistic Value Exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards.
We combine latent world models with value function estimation to predict infinite-horizon returns and recover associated uncertainty via ensembling.
We apply LOVE to visual robot control tasks in continuous action spaces and demonstrate on average more than 20% improved sample efficiency in comparison to state-of-the-art and other exploration objectives.
arXiv Detail & Related papers (2020-10-27T22:06:57Z) - Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization [95.73898853032865]
We present a new domain generalization framework that learns how to generalize across domains simultaneously.
We demonstrate the effectiveness of our approach on two standard object recognition benchmarks.
arXiv Detail & Related papers (2020-07-18T03:12:24Z) - Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers [138.68213707587822]
We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning.
We show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function.
Our approach is applicable to domains with continuous states and actions and does not require learning an explicit model of the dynamics.
arXiv Detail & Related papers (2020-06-24T17:47:37Z) - Mutual Information-based State-Control for Intrinsically Motivated Reinforcement Learning [102.05692309417047]
In reinforcement learning, an agent learns to reach a set of goals by means of an external reward signal.
In the natural world, intelligent organisms learn from internal drives, bypassing the need for external signals.
We propose to formulate an intrinsic objective as the mutual information between the goal states and the controllable states.
arXiv Detail & Related papers (2020-02-05T19:21:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.