Compositional Multi-Object Reinforcement Learning with Linear Relation Networks
- URL: http://arxiv.org/abs/2201.13388v1
- Date: Mon, 31 Jan 2022 17:53:30 GMT
- Title: Compositional Multi-Object Reinforcement Learning with Linear Relation Networks
- Authors: Davide Mambelli, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, Francesco Locatello
- Abstract summary: We focus on models that can learn manipulation tasks in fixed multi-object settings and extrapolate this skill zero-shot without any drop in performance when the number of objects changes.
Our approach, which scales linearly in $K$, allows agents to extrapolate and generalize zero-shot to any new number of objects.
- Score: 38.59852895970774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although reinforcement learning has seen remarkable progress in
recent years, solving robust dexterous object-manipulation tasks in
multi-object settings remains a challenge. In this paper, we focus on models
that can learn manipulation tasks in fixed multi-object settings and
extrapolate this skill zero-shot, without any drop in performance, when the
number of objects changes. We consider the generic task of bringing a specific
cube out of a set to a goal position. We find that previous approaches, which
primarily leverage attention and graph neural network-based architectures, do
not generalize their skills when the number of input objects changes, while
scaling as $K^2$ in the number of objects $K$. We propose an alternative
plug-and-play module based on relational inductive biases to overcome these
limitations. Besides exceeding the performance of these baselines in their
training environment, we show that our approach, which scales linearly in $K$,
allows agents to extrapolate and generalize zero-shot to any new number of
objects.
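The abstract contrasts the $K^2$ cost of pairwise attention and graph-network relations with the proposed module's linear cost in the number of objects $K$, but does not spell out the architecture itself. The sketch below only illustrates that scaling argument, under the assumption that the linear module relates each object embedding to a single shared context vector (e.g., agent and goal state) with a shared MLP and sums the results; all class names, dimensions, and layer choices are hypothetical and not the paper's implementation.

```python
# Hypothetical sketch (assumed names/shapes, not the paper's implementation):
# a pairwise relation block whose cost grows as K^2 versus a linear relation
# block that relates each of the K object embeddings only to one shared
# context vector, so its cost grows as K and its output size is fixed.
import torch
import torch.nn as nn


class PairwiseRelationModule(nn.Module):
    """O(K^2): applies a shared MLP to every ordered pair of objects."""

    def __init__(self, obj_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, objs: torch.Tensor) -> torch.Tensor:
        # objs: (batch, K, obj_dim)
        B, K, D = objs.shape
        left = objs.unsqueeze(2).expand(B, K, K, D)
        right = objs.unsqueeze(1).expand(B, K, K, D)
        pairs = torch.cat([left, right], dim=-1)        # (B, K, K, 2D)
        return self.g(pairs).sum(dim=(1, 2))            # (B, hidden_dim)


class LinearRelationModule(nn.Module):
    """O(K): relates each object only to a shared context (e.g. agent/goal)."""

    def __init__(self, obj_dim: int, ctx_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(obj_dim + ctx_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, objs: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # objs: (batch, K, obj_dim); ctx: (batch, ctx_dim)
        K = objs.shape[1]
        ctx_rep = ctx.unsqueeze(1).expand(-1, K, -1)
        rel = self.g(torch.cat([objs, ctx_rep], dim=-1))  # (B, K, hidden_dim)
        # Summing over objects keeps the output size fixed for any K,
        # which is what permits zero-shot use with a new object count.
        return rel.sum(dim=1)                             # (B, hidden_dim)


if __name__ == "__main__":
    objs = torch.randn(4, 6, 16)  # batch of 4 scenes, K=6 objects, 16-dim each
    ctx = torch.randn(4, 8)       # 8-dim agent/goal context
    print(LinearRelationModule(16, 8)(objs, ctx).shape)  # torch.Size([4, 128])
```

Because the pooled output is permutation-invariant and has a fixed dimension for any $K$, a policy head trained on a fixed object count could in principle be evaluated unchanged with a different count; whether this matches the paper's exact module is an assumption here.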
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Deep Learning-Based Object Pose Estimation: A Comprehensive Survey [73.74933379151419]
We discuss the recent advances in deep learning-based object pose estimation.
Our survey also covers multiple input data modalities, degrees-of-freedom of output poses, object properties, and downstream tasks.
arXiv Detail & Related papers (2024-05-13T14:44:22Z)
- TaE: Task-aware Expandable Representation for Long Tail Class Incremental Learning [42.630413950957795]
We introduce a novel Task-aware Expandable (TaE) framework to learn diverse representations from each incremental task.
TaE achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-02-08T16:37:04Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Efficient and Robust Training of Dense Object Nets for Multi-Object Robot Manipulation [8.321536457963655]
We propose a framework for robust and efficient training of Dense Object Nets (DON).
We focus on training with multi-object data instead of singulated objects, combined with a well-chosen augmentation scheme.
We demonstrate the robustness and accuracy of our proposed framework on a real-world robotic grasping task.
arXiv Detail & Related papers (2022-06-24T08:24:42Z)
- Improving Feature Generalizability with Multitask Learning in Class Incremental Learning [12.632121107536843]
Many deep learning applications, like keyword spotting, require the incorporation of new concepts (classes) over time, referred to as Class Incremental Learning (CIL).
The major challenge in CIL is catastrophic forgetting, i.e., preserving as much of the old knowledge as possible while learning new tasks.
We propose multitask learning during base model training to improve the feature generalizability.
Our approach enhances the average incremental learning accuracy by up to 5.5%, which enables more reliable and accurate keyword spotting over time.
arXiv Detail & Related papers (2022-04-26T07:47:54Z)
- Plug and Play, Model-Based Reinforcement Learning [60.813074750879615]
We introduce an object-based representation that allows zero-shot integration of new objects from known object classes.
This is achieved by representing the global transition dynamics as a union of local transition functions.
Experiments show that our representation can achieve sample-efficiency in a variety of set-ups.
arXiv Detail & Related papers (2021-08-20T01:20:15Z)
- Hierarchical Few-Shot Imitation with Skill Transition Models [66.81252581083199]
Few-shot Imitation with Skill Transition Models (FIST) is an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks.
We show that FIST is capable of generalizing to new tasks and substantially outperforms prior baselines in navigation experiments.
arXiv Detail & Related papers (2021-07-19T15:56:01Z)
- Self-supervised Visual Reinforcement Learning with Object-centric Representations [11.786249372283562]
We propose to use object-centric representations as a modular and structured observation space.
We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills.
arXiv Detail & Related papers (2020-11-29T14:55:09Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)