Learning Dual-arm Object Rearrangement for Cartesian Robots
- URL: http://arxiv.org/abs/2402.13634v1
- Date: Wed, 21 Feb 2024 09:13:08 GMT
- Title: Learning Dual-arm Object Rearrangement for Cartesian Robots
- Authors: Shishun Zhang, Qijin She, Wenhao Li, Chenyang Zhu, Yongjun Wang,
Ruizhen Hu, Kai Xu
- Abstract summary: This work focuses on the dual-arm object rearrangement problem abstracted from a realistic industrial scenario of Cartesian robots.
The goal of this problem is to transfer all the objects from sources to targets with the minimum total completion time.
We develop an effective object-to-arm task assignment strategy for minimizing the cumulative task execution time and maximizing the dual-arm cooperation efficiency.
- Score: 28.329845378085054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work focuses on the dual-arm object rearrangement problem abstracted
from a realistic industrial scenario of Cartesian robots. The goal of this
problem is to transfer all the objects from sources to targets with the minimum
total completion time. To achieve the goal, the core idea is to develop an
effective object-to-arm task assignment strategy for minimizing the cumulative
task execution time and maximizing the dual-arm cooperation efficiency. One of
the difficulties in task assignment is scalability: as the number of objects
increases, the computation time of traditional offline-search-based methods
grows rapidly due to their computational complexity. Encouraged by the
adaptability of reinforcement learning (RL) to long-sequence decision making,
we propose an online RL-based task assignment method whose computation time
grows only linearly with the
number of objects. Further, we design an attention-based network to model the
dependencies between the input states during the whole task execution process
to help find the most reasonable object-to-arm correspondence in each task
assignment round. In the experiments, we adapt several search-based methods to
this specific setting and compare our method against them. The results show
that our approach outperforms the search-based methods in both total execution
time and computational efficiency, and that it generalizes to different
numbers of objects. In addition, we
show the effectiveness of our method deployed on the real robot in the
supplementary video.
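The abstract describes the approach only at a high level. As a rough illustration of the kind of online, attention-based object-to-arm assignment loop outlined above, the following is a minimal sketch: the feature layout, network shape, scoring head, greedy decision rule, and the names (`AssignmentScorer`, `greedy_rollout`) are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch of an online object-to-arm assignment policy.
# The state features, network architecture, and decision rule below are
# illustrative assumptions; the paper's actual model and RL training differ.
import torch
import torch.nn as nn


class AssignmentScorer(nn.Module):
    """Attention-based scorer: given per-object and per-arm features,
    produce a score for every (object, arm) pair."""

    def __init__(self, obj_dim=6, arm_dim=4, hidden=64):
        super().__init__()
        self.obj_enc = nn.Linear(obj_dim, hidden)
        self.arm_enc = nn.Linear(arm_dim, hidden)
        # Self-attention over the remaining objects models dependencies
        # between input states across the task execution process.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, obj_feats, arm_feats):
        # obj_feats: (1, num_objects, obj_dim); arm_feats: (1, num_arms, arm_dim)
        h_obj = self.obj_enc(obj_feats)
        h_obj, _ = self.attn(h_obj, h_obj, h_obj)      # object-object context
        h_arm = self.arm_enc(arm_feats)
        n_obj, n_arm = h_obj.shape[1], h_arm.shape[1]
        pair = torch.cat(
            [h_obj.unsqueeze(2).expand(-1, n_obj, n_arm, -1),
             h_arm.unsqueeze(1).expand(-1, n_obj, n_arm, -1)], dim=-1)
        return self.head(pair).squeeze(-1)             # (1, num_objects, num_arms)


def greedy_rollout(scorer, obj_feats, arm_feats):
    """Greedy online rollout: each round assigns one remaining object to an
    arm according to the learned scores, so the number of assignment
    decisions grows linearly with the number of objects."""
    remaining = list(range(obj_feats.shape[1]))
    schedule = []
    with torch.no_grad():
        while remaining:
            scores = scorer(obj_feats[:, remaining, :], arm_feats)
            flat = torch.argmax(scores.reshape(-1)).item()
            obj_idx, arm_idx = divmod(flat, arm_feats.shape[1])
            schedule.append((remaining.pop(obj_idx), arm_idx))
    return schedule
```

In the paper the assignment policy is trained with RL against total completion time; the sketch only shows an inference-time decision loop, e.g. `greedy_rollout(AssignmentScorer(), torch.randn(1, 10, 6), torch.randn(1, 2, 4))`.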
Related papers
- A Two-stage Reinforcement Learning-based Approach for Multi-entity Task Allocation [27.480892280342417]
Decision makers must allocate entities to tasks reasonably across different scenarios.
Traditional methods assume static attributes and numbers of tasks and entities, often relying on dynamic programming and heuristic algorithms for solutions.
We propose a two-stage task allocation algorithm based on similarity, utilizing reinforcement learning to learn allocation strategies.
arXiv Detail & Related papers (2024-06-29T17:13:44Z) - Fast Inference and Transfer of Compositional Task Structures for
Few-shot Task Generalization [101.72755769194677]
Few-shot task generalization is formulated as a few-shot reinforcement learning problem in which each task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z) - Skill-based Meta-Reinforcement Learning [65.31995608339962]
We devise a method that enables meta-learning on long-horizon, sparse-reward tasks.
Our core idea is to leverage prior experience extracted from offline datasets during meta-learning.
arXiv Detail & Related papers (2022-04-25T17:58:19Z) - Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z) - Automatic Goal Generation using Dynamical Distance Learning [5.797847756967884]
Reinforcement Learning (RL) agents can learn to solve complex sequential decision making tasks by interacting with the environment.
In the field of multi-goal RL, where agents are required to reach multiple goals to solve complex tasks, improving sample efficiency can be especially challenging.
We propose a method for automatic goal generation using a dynamical distance function (DDF) in a self-supervised fashion.
arXiv Detail & Related papers (2021-11-07T16:23:56Z) - Fast Line Search for Multi-Task Learning [0.0]
We propose a novel idea for line search algorithms in multi-task learning.
The idea is to use the latent representation space, instead of the parameter space, for finding the step size.
We compare this idea with classical backtracking and with gradient methods that use a constant learning rate on the MNIST, CIFAR-10, and Cityscapes tasks.
arXiv Detail & Related papers (2021-10-02T21:02:29Z) - Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the effect one task's gradient update would have on another task's loss (see the sketch after this list).
arXiv Detail & Related papers (2021-09-10T02:01:43Z) - Exploring Relational Context for Multi-Task Dense Prediction [76.86090370115]
We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples the pool of all available contexts for each task pair.
arXiv Detail & Related papers (2021-04-28T16:45:56Z) - Neural Architecture Search From Fr\'echet Task Distance [50.9995960884133]
We show how the distance between a target task and each task in a given set of baseline tasks can be used to reduce the neural architecture search space for the target task.
The complexity reduction in search space for task-specific architectures is achieved by building on the optimized architectures for similar tasks instead of doing a full search without using this side information.
arXiv Detail & Related papers (2021-03-23T20:43:31Z) - Distributed Primal-Dual Optimization for Online Multi-Task Learning [22.45069527817333]
We propose an adaptive primal-dual algorithm, which captures task-specific noise in adversarial learning and carries out a projection-free update with runtime efficiency.
Our model is well suited to decentralized, periodically connected tasks, as it allows energy-starved or bandwidth-constrained tasks to postpone the update.
Empirical results confirm that the proposed model is highly effective on various real-world datasets.
arXiv Detail & Related papers (2020-04-02T23:36:07Z)