Real-time goal recognition using approximations in Euclidean space
- URL: http://arxiv.org/abs/2307.07876v2
- Date: Fri, 23 Aug 2024 20:22:59 GMT
- Title: Real-time goal recognition using approximations in Euclidean space
- Authors: Douglas Tesch, Leonardo Rosa Amado, Felipe Meneguzzi
- Abstract summary: We develop an efficient method for goal recognition that relies either on a single call to the planner for each possible goal in discrete domains or a simplified motion model that reduces the computational burden in continuous ones.
The resulting approach performs the online component of recognition orders of magnitude faster than the current state of the art, making it the first online method effectively usable for robotics applications that require sub-second recognition.
- Score: 10.003540430416091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While recent work on online goal recognition efficiently infers goals under low observability, comparatively less work focuses on online goal recognition that works in both discrete and continuous domains. Online goal recognition approaches often rely on repeated calls to the planner at each new observation, incurring high computational costs. Recognizing goals online in continuous space quickly and reliably is critical for any trajectory planning problem since the real physical world is fast-moving, e.g. robot applications. We develop an efficient method for goal recognition that relies either on a single call to the planner for each possible goal in discrete domains or a simplified motion model that reduces the computational burden in continuous ones. The resulting approach performs the online component of recognition orders of magnitude faster than the current state of the art, making it the first online method effectively usable for robotics applications that require sub-second recognition.
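To make the core idea concrete, here is a minimal, hypothetical sketch of goal recognition with Euclidean-space approximations (it is not the authors' implementation): straight-line distance stands in for the simplified motion model, and each candidate goal is scored by how little the observed trajectory prefix deviates from an optimal path to that goal.
```python
import numpy as np

def recognize_goal(start, observed, goals, beta=1.0):
    """Rank candidate 2D goals from a partial trajectory.

    Straight-line Euclidean distance stands in for optimal cost, so each
    new observation costs only a few vector operations -- no planner calls
    in the online loop.
    """
    start = np.asarray(start, dtype=float)
    prefix = np.asarray(observed, dtype=float)
    current = prefix[-1]
    # Cost already "paid" along the observed prefix.
    travelled = np.sum(np.linalg.norm(np.diff(np.vstack([start, prefix]), axis=0), axis=1))

    scores = {}
    for name, g in goals.items():
        g = np.asarray(g, dtype=float)
        optimal = np.linalg.norm(g - start)                # cost of the ideal path to g
        via_obs = travelled + np.linalg.norm(g - current)  # observed prefix + best completion
        cost_diff = via_obs - optimal                      # 0 => observations fully consistent with g
        scores[name] = np.exp(-beta * cost_diff)           # soft Boltzmann-style score

    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Example: an agent moving right is most likely heading to goal "A".
print(recognize_goal(
    start=(0.0, 0.0),
    observed=[(1.0, 0.1), (2.0, 0.2)],
    goals={"A": (5.0, 0.0), "B": (0.0, 5.0)},
))
```
Because the online step reduces to a handful of vector operations per goal, this style of recognizer can keep up with sub-second update rates, which is the property the abstract emphasizes.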
Related papers
- KOI: Accelerating Online Imitation Learning via Hybrid Key-state Guidance [51.09834120088799]
We introduce the hybrid Key-state guided Online Imitation (KOI) learning method.
We use visual-language models to extract semantic key states from expert trajectories, indicating the objectives of "what to do".
Within the intervals between semantic key states, optical flow is employed to capture motion key states that represent the mechanisms of "how to do".
arXiv Detail & Related papers (2024-08-06T02:53:55Z)
- Algorithm Design for Online Meta-Learning with Task Boundary Detection [63.284263611646]
We propose a novel algorithm for task-agnostic online meta-learning in non-stationary environments.
We first propose two simple but effective mechanisms for detecting task switches and distribution shift.
We show that a sublinear task-averaged regret can be achieved for our algorithm under mild conditions.
arXiv Detail & Related papers (2023-02-02T04:02:49Z)
- Leveraging Planning Landmarks for Hybrid Online Goal Recognition [7.690707525070737]
We propose a hybrid method for online goal recognition that combines a symbolic planning landmark based approach and a data-driven goal recognition approach.
The proposed method is not only significantly more efficient in terms of computation time than the state of the art but also improves goal recognition performance.
arXiv Detail & Related papers (2023-01-25T13:21:30Z)
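As a rough illustration of such a hybrid scheme (the function name, weighting, and interfaces below are assumptions, not the paper's formulation), a symbolic landmark-completion ratio can be fused with the probability output of a trained classifier:
```python
def hybrid_goal_scores(achieved_facts, goal_landmarks, classifier_probs, alpha=0.5):
    """Fuse a symbolic landmark score with a data-driven classifier score.

    achieved_facts:   set of facts observed so far
    goal_landmarks:   dict goal -> set of planning landmarks for that goal
    classifier_probs: dict goal -> probability from a trained model
    alpha:            interpolation weight between the two signals
    """
    scores = {}
    for goal, landmarks in goal_landmarks.items():
        # Fraction of this goal's landmarks already achieved (symbolic signal).
        landmark_ratio = len(landmarks & achieved_facts) / max(len(landmarks), 1)
        # Convex combination with the learned signal.
        scores[goal] = alpha * landmark_ratio + (1 - alpha) * classifier_probs.get(goal, 0.0)
    return max(scores, key=scores.get), scores
```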
- Investigating the Combination of Planning-Based and Data-Driven Methods for Goal Recognition [7.620967781722714]
We investigate the application of two state-of-the-art, planning-based plan recognition approaches in a real-world setting.
We show that such approaches have difficulties when used to recognize the goals of human subjects, because human behaviour is typically not perfectly rational.
We propose an extension to the existing approaches through a classification-based method trained on observed behaviour data.
arXiv Detail & Related papers (2023-01-13T15:24:02Z)
- Goal Recognition as a Deep Learning Task: the GRNet Approach [0.0]
In automated planning, recognising the goal of an agent from a trace of observations is an important task with many applications.
We study an alternative approach where goal recognition is formulated as a classification task addressed by machine learning.
Our approach, called GRNet, is primarily aimed at making goal recognition more accurate as well as faster by learning how to solve it in a given domain.
arXiv Detail & Related papers (2022-10-05T16:42:48Z)
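A toy sketch of the "goal recognition as sequence classification" framing follows; the architecture and features are illustrative stand-ins and do not reproduce GRNet itself:
```python
import torch
import torch.nn as nn

class GoalClassifier(nn.Module):
    """Toy sequence classifier: maps a trace of observed action IDs to a goal.

    Illustrates the classification framing only; the actual GRNet
    architecture and input encoding differ.
    """
    def __init__(self, num_actions, num_goals, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_actions, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_goals)

    def forward(self, action_ids):        # action_ids: (batch, seq_len)
        x = self.embed(action_ids)
        _, h = self.rnn(x)                # h: (1, batch, hidden_dim)
        return self.head(h.squeeze(0))    # goal logits: (batch, num_goals)

# Usage: logits = GoalClassifier(100, 5)(torch.randint(0, 100, (1, 7)))
```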
- Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space [76.46113138484947]
General-purpose robots require diverse repertoires of behaviors to complete challenging tasks in real-world unstructured environments.
To address this issue, goal-conditioned reinforcement learning aims to acquire policies that can reach goals for a wide range of tasks on command.
We propose Planning to Practice, a method that makes it practical to train goal-conditioned policies for long-horizon tasks.
arXiv Detail & Related papers (2022-05-17T06:58:17Z)
- Goal Recognition as Reinforcement Learning [20.651718821998106]
We develop a framework that combines model-free reinforcement learning and goal recognition.
This framework consists of two main stages: Offline learning of policies or utility functions for each potential goal, and online inference.
The resulting instantiation achieves state-of-the-art performance against goal recognizers on standard evaluation domains and superior performance in noisy environments.
arXiv Detail & Related papers (2022-02-13T16:16:43Z)
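The two-stage structure described in this entry can be sketched as follows, assuming tabular Q-values learned offline for each candidate goal (the paper's actual instantiation may use utility functions or function approximation instead):
```python
import numpy as np

def infer_goal_online(observations, q_tables, beta=2.0):
    """Online stage of a two-phase goal recognizer.

    observations: list of (state, action) pairs seen so far
    q_tables:     dict goal -> Q[state, action] array learned offline
                  (e.g. by tabular Q-learning on each goal's reward)
    Returns a distribution over goals: goals under whose Q-function the
    observed actions look near-optimal receive more mass.
    """
    log_scores = {}
    for goal, q in q_tables.items():
        total = 0.0
        for state, action in observations:
            # Boltzmann likelihood of the observed action under this goal's Q.
            logits = beta * q[state]
            lse = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()
            total += logits[action] - lse
        log_scores[goal] = total
    # Normalise log-scores into a distribution over goals.
    m = max(log_scores.values())
    weights = {g: np.exp(s - m) for g, s in log_scores.items()}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}
```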
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
- C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks [133.40619754674066]
Goal-conditioned reinforcement learning can solve tasks in a wide range of domains, including navigation and manipulation.
We propose to solve distant goal-reaching tasks by using search at training time to automatically generate intermediate states.
The E-step corresponds to planning an optimal sequence of waypoints using graph search, while the M-step learns a goal-conditioned policy to reach those waypoints.
arXiv Detail & Related papers (2021-10-22T22:05:31Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
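For a SABER-style architecture like the one summarized above, a hierarchical control loop might be organized as in the sketch below; the component interfaces (dqn_planner, smpc, rnn_uncertainty, robot) are hypothetical stand-ins, not the paper's API:
```python
def hierarchical_control_loop(robots, global_goal, dqn_planner, smpc, rnn_uncertainty, horizon=20):
    """Illustrative loop: a high-level learned planner feeds targets to a
    low-level stochastic MPC, with a learned model supplying uncertainty.

    All component interfaces here are assumed for the sketch.
    """
    while not all(robot.at(global_goal) for robot in robots):
        for robot in robots:
            # High level: a Deep Q-learning agent picks the next target waypoint.
            target = dqn_planner.select_target(robot.state(), global_goal)
            # Learned model predicts future state uncertainty over the SMPC horizon.
            sigma = rnn_uncertainty.predict(robot.recent_states(), horizon)
            # Low level: stochastic MPC computes dynamically feasible controls
            # with chance constraints for obstacle avoidance.
            controls = smpc.solve(robot.state(), target, sigma, horizon)
            robot.apply(controls[0])  # apply only the first control, then replan
```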
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.