Adaptation of Task Goal States from Prior Knowledge
- URL: http://arxiv.org/abs/2502.03918v1
- Date: Thu, 06 Feb 2025 09:51:04 GMT
- Title: Adaptation of Task Goal States from Prior Knowledge
- Authors: Andrei Costinescu, Darius Burschka
- Abstract summary: We present a framework for defining a task with freedom and variability in its goal state.
A robot could use this to observe the execution of a task and target a different goal from the observed one.
- Score: 1.098383730564372
- Abstract: This paper presents a framework to define a task with freedom and variability in its goal state. A robot could use this to observe the execution of a task and target a different goal from the observed one; a goal that is still compatible with the task description but would be easier for the robot to execute. We define the model of an environment state and an environment variation, and present experiments on how to interactively create the variation from a single task demonstration and how to use this variation to create an execution plan for bringing any environment into the goal state.
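To make the abstract's two core objects concrete, here is a minimal Python sketch of an environment state and a goal-state variation. The class names, the interval-based representation of variability, and the clamp-to-nearest goal selection are assumptions of this sketch, not the paper's actual formalization.

```python
# Minimal sketch of an environment state and a goal-state "variation".
# All names and the interval-based representation are assumptions of this
# sketch; the paper's actual model may differ.
from dataclasses import dataclass


@dataclass
class ObjectState:
    x: float
    y: float
    yaw: float  # orientation in radians


# An environment state maps object names to their poses.
EnvironmentState = dict[str, ObjectState]

# A variation gives, per object and per pose dimension, an allowed
# interval rather than a single exact value.
GoalVariation = dict[str, dict[str, tuple[float, float]]]


def satisfies(state: EnvironmentState, variation: GoalVariation) -> bool:
    """Check whether a state is compatible with the task's goal variation."""
    for name, ranges in variation.items():
        obj = state.get(name)
        if obj is None:
            return False
        for dim, (lo, hi) in ranges.items():
            if not lo <= getattr(obj, dim) <= hi:
                return False
    return True


def easiest_goal(current: EnvironmentState, variation: GoalVariation) -> EnvironmentState:
    """Pick the compatible goal closest to the current state (least motion)."""
    goal = {}
    for name, obj in current.items():
        target = ObjectState(obj.x, obj.y, obj.yaw)
        for dim, (lo, hi) in variation.get(name, {}).items():
            # Clamp each dimension into its allowed interval.
            setattr(target, dim, min(max(getattr(obj, dim), lo), hi))
        goal[name] = target
    return goal
```

Under this toy model, targeting a different but compatible goal simply means choosing any state for which `satisfies` holds; `easiest_goal` picks the one requiring the least motion from the current state.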
Related papers
- Planning with affordances: Integrating learned affordance models and symbolic planning [0.0]
We augment an existing task and motion planning framework with learned affordance models of objects in the world.
Each task can be seen as changing the current state of the world to a given goal state.
A symbolic planning algorithm uses this information and the starting and goal state to create a feasible plan to reach the desired goal state.
arXiv Detail & Related papers (2025-02-04T23:15:38Z)
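The affordance-gated planning described in the entry above can be sketched as a breadth-first search over symbolic states in which a learned affordance model filters the applicable actions. The `affordance_score` stub, the frozenset state encoding, and the threshold are illustrative assumptions.

```python
# Sketch: symbolic planning gated by a learned affordance model.
# The affordance model, threshold, and state encoding are illustrative
# assumptions, not the paper's actual components.
from collections import deque


def affordance_score(state: frozenset, action: tuple) -> float:
    """Stand-in for a learned model: P(action is executable in state)."""
    return 1.0  # a real model would be a trained classifier/regressor


def plan(start: frozenset, goal: frozenset, actions, threshold=0.5):
    """BFS from start to any state containing the goal facts.

    `actions` maps an action to (preconditions, add_facts, del_facts),
    each a frozenset of symbolic facts.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for action, (pre, add, delete) in actions.items():
            # Symbolic applicability AND learned affordance must both hold.
            if not pre <= state or affordance_score(state, action) < threshold:
                continue
            nxt = (state - delete) | add
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None
```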
- Imagination Policy: Using Generative Point Cloud Models for Learning Manipulation Policies [25.760946763103483]
We propose Imagination Policy, a novel multi-task key-frame policy network for solving high-precision pick-and-place tasks.
Instead of learning actions directly, Imagination Policy generates point clouds to imagine desired states which are then translated to actions using rigid action estimation.
arXiv Detail & Related papers (2024-06-17T17:00:41Z)
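The rigid action estimation mentioned in the Imagination Policy summary can, under the assumption of known point correspondences, be done with the standard Kabsch/SVD procedure sketched below; interpreting the recovered transform directly as the action is a simplification made by this sketch.

```python
# Sketch: recover the rigid transform (R, t) that maps a current point
# cloud onto an imagined goal point cloud, assuming point correspondences.
import numpy as np


def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Kabsch algorithm: least-squares R, t with dst_i = R @ src_i + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# The estimated (R, t) can then be read as the relative motion the gripper
# (or object) must undergo to reach the imagined state.
```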
- MANER: Multi-Agent Neural Rearrangement Planning of Objects in Cluttered Environments [8.15681999722805]
This paper proposes a learning-based framework for multi-agent object rearrangement planning.
It addresses the challenges of task sequencing and path planning in complex environments.
arXiv Detail & Related papers (2023-06-10T23:53:28Z)
- Optimal task and motion planning and execution for human-robot multi-agent systems in dynamic environments [54.39292848359306]
We propose a combined task and motion planning approach to optimize sequencing, assignment, and execution of tasks.
The framework relies on decoupling tasks and actions, where an action is one possible geometric realization of a symbolic task.
We demonstrate the effectiveness of the approach in a collaborative manufacturing scenario in which a robotic arm and a human worker assemble a mosaic.
arXiv Detail & Related papers (2023-03-27T01:50:45Z)
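The decoupling of symbolic tasks from geometric realizations described above can be illustrated with a small sketch: each task carries several candidate actions, and assignment greedily picks the cheapest realization whose agent is available. The data shapes and cost model are assumptions, not the paper's optimizer.

```python
# Sketch: a symbolic task with several geometric realizations ("actions");
# execution picks the cheapest realization whose agent is free.
# The cost model and greedy assignment are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Action:                      # one geometric realization of a task
    name: str
    agent: str                     # e.g. "robot" or "human"
    cost: float                    # e.g. estimated execution time


@dataclass
class Task:                        # symbolic task, realization-agnostic
    name: str
    candidates: list[Action] = field(default_factory=list)


def assign(tasks: list[Task], busy: set[str]) -> list[Action]:
    """Greedily pick, per task, the cheapest action whose agent is free."""
    plan = []
    for task in tasks:
        options = [a for a in task.candidates if a.agent not in busy]
        if not options:
            raise RuntimeError(f"no feasible realization for {task.name}")
        plan.append(min(options, key=lambda a: a.cost))
    return plan
```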
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models [68.57918965060787]
Large language models (LLMs) can be used to score potential next actions during task planning.
We present a programmatic LLM prompt structure that enables plan generation to function across situated environments.
arXiv Detail & Related papers (2022-09-22T20:29:49Z)
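A rough sketch of a programmatic prompt in the spirit of ProgPrompt follows: action primitives appear as imports, available objects as a Python list, and few-shot example plans as functions, leaving a stub for the LLM to complete. The primitive names and layout are invented for illustration and may differ from the paper's actual format.

```python
# Sketch: building a program-like prompt for an LLM task planner.
# Primitive names, objects, and layout are illustrative assumptions.
PRIMITIVES = ["grab(obj)", "putin(obj, place)", "open(obj)", "close(obj)"]
OBJECTS = ["apple", "fridge", "table"]


def build_prompt(task: str, examples: list[str]) -> str:
    header = "from actions import " + ", ".join(p.split("(")[0] for p in PRIMITIVES)
    objects = f"objects = {OBJECTS}"
    # Few-shot example plans are included verbatim as Python functions,
    # then the new task is left as an open function for the LLM to complete.
    body = "\n\n".join(examples)
    stub = f"def {task.replace(' ', '_')}():"
    return "\n\n".join([header, objects, body, stub])


example = (
    "def put_apple_in_fridge():\n"
    "    # 1: grab the apple\n"
    "    grab('apple')\n"
    "    open('fridge')\n"
    "    putin('apple', 'fridge')\n"
    "    close('fridge')"
)
print(build_prompt("throw away apple", [example]))
```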
- Zero-shot Task Adaptation using Natural Language [43.807555235240365]
We propose a novel setting where an agent is given both a demonstration of a source task and a natural language description of how the target task differs.
Our approach is able to complete more than 95% of target tasks when using template-based descriptions.
arXiv Detail & Related papers (2021-06-05T21:39:04Z)
- Rearrangement: A Challenge for Embodied AI [229.8891614821016]
We describe a framework for research and evaluation in Embodied AI.
Our proposal is based on a canonical task: Rearrangement.
We present experimental testbeds of rearrangement scenarios in four different simulation environments.
arXiv Detail & Related papers (2020-11-03T19:42:32Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
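One simple way to realize goal-aware prediction, loosely following the summary above, is to train the dynamics model to predict the residual between the future state and the goal rather than the raw next state; the residual target below is an assumption of this sketch, not necessarily the paper's exact objective.

```python
# Sketch: a goal-aware prediction target and loss. Because regions of the
# state that already match the goal yield near-zero targets, model capacity
# concentrates on the goal-relevant parts of the scene. This residual
# formulation is an illustrative assumption.
import numpy as np


def goal_aware_target(next_obs: np.ndarray, goal: np.ndarray) -> np.ndarray:
    """Prediction target: residual between the future state and the goal."""
    return next_obs - goal


def goal_aware_loss(pred_delta: np.ndarray,
                    next_obs: np.ndarray,
                    goal: np.ndarray) -> float:
    """MSE between the model's predicted residual and the true residual."""
    return float(np.mean((pred_delta - goal_aware_target(next_obs, goal)) ** 2))
```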
- Adaptive Procedural Task Generation for Hard-Exploration Problems [78.20918366839399]
We introduce Adaptive Procedural Task Generation (APT-Gen) to facilitate reinforcement learning in hard-exploration problems.
At the heart of our approach is a task generator that learns to create tasks from a parameterized task space via a black-box procedural generation module.
To enable curriculum learning in the absence of a direct indicator of learning progress, we propose to train the task generator by balancing the agent's performance in the generated tasks and the similarity to the target tasks.
arXiv Detail & Related papers (2020-07-01T09:38:51Z)
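The balance APT-Gen strikes between agent performance and target-task similarity can be sketched as a scalar score for candidate tasks; the difficulty band, the similarity estimate, and the weighting below are all illustrative stand-ins.

```python
# Sketch: scoring a generated task for curriculum selection by balancing
# the agent's success on it against its similarity to the target task.
# Both terms and the weighting are illustrative stand-ins.

def curriculum_score(success_rate: float,
                     similarity: float,
                     band: tuple[float, float] = (0.2, 0.8),
                     w: float = 0.5) -> float:
    """Higher is better.

    success_rate: agent's success on the generated task (0..1).
    similarity:   e.g. a discriminator's estimate that the task
                  resembles the target task (0..1).
    """
    lo, hi = band
    # Reward tasks of intermediate difficulty: solvable but not trivial.
    difficulty_term = 1.0 if lo <= success_rate <= hi else 0.0
    return w * difficulty_term + (1.0 - w) * similarity
```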
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
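Curriculum by value disagreement can be sketched in a few lines: score each candidate goal by the standard deviation of an ensemble of value estimates and sample goals in proportion to that disagreement. The ensemble interface here is an assumption.

```python
# Sketch: sample training goals in proportion to the disagreement (std)
# of a value-function ensemble; the ensemble interface is an assumption.
import numpy as np


def sample_goal(goals: np.ndarray, value_ensemble, rng=None) -> np.ndarray:
    """goals: (N, goal_dim). value_ensemble(goals) -> (K, N) value matrix."""
    rng = rng or np.random.default_rng()
    values = value_ensemble(goals)        # K ensemble members x N goals
    disagreement = values.std(axis=0)     # high std = frontier of ability
    probs = disagreement / disagreement.sum()
    return goals[rng.choice(len(goals), p=probs)]


# Usage with a toy ensemble of K=3 stand-in "value functions":
goals = np.random.rand(10, 2)
ensemble = lambda g: np.stack([g.sum(1), g.prod(1), g.max(1)])
print(sample_goal(goals, ensemble))
```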
- Generating Automatic Curricula via Self-Supervised Active Domain Randomization [11.389072560141388]
We extend the self-play framework to jointly learn a goal and environment curriculum.
Our method generates a coupled goal-task curriculum, where agents learn through progressively more difficult tasks and environment variations.
Our results show that a curriculum of co-evolving the environment difficulty together with the difficulty of goals set in each environment provides practical benefits in the goal-directed tasks tested.
arXiv Detail & Related papers (2020-02-18T22:45:29Z)