O2O-Afford: Annotation-Free Large-Scale Object-Object Affordance
Learning
- URL: http://arxiv.org/abs/2106.15087v1
- Date: Tue, 29 Jun 2021 04:38:12 GMT
- Title: O2O-Afford: Annotation-Free Large-Scale Object-Object Affordance
Learning
- Authors: Kaichun Mo, Yuzhe Qin, Fanbo Xiang, Hao Su, Leonidas Guibas
- Abstract summary: We propose a unified affordance learning framework to learn object-object interaction for various tasks.
We are able to conduct large-scale object-object affordance learning without the need for human annotations or demonstrations.
Experiments on large-scale synthetic data and real-world data demonstrate the effectiveness of the proposed approach.
- Score: 24.9242853417825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrary to the vast literature in modeling, perceiving, and understanding
agent-object (e.g., human-object, hand-object, robot-object) interaction in
computer vision and robotics, very few past works have studied the task of
object-object interaction, which also plays an important role in robotic
manipulation and planning tasks. There is a rich space of object-object
interaction scenarios in our daily life, such as placing an object on a messy
tabletop, fitting an object inside a drawer, pushing an object using a tool,
etc. In this paper, we propose a unified affordance learning framework to learn
object-object interaction for various tasks. By constructing four object-object
interaction task environments using physical simulation (SAPIEN) and thousands
of ShapeNet models with rich geometric diversity, we are able to conduct
large-scale object-object affordance learning without the need for human
annotations or demonstrations. At the core of our technical contribution, we
propose an object-kernel point convolution network to reason about detailed
interaction between two objects. Experiments on large-scale synthetic data and
real-world data demonstrate the effectiveness of the proposed approach. Please refer
to the project webpage for code, data, video, and more materials:
https://cs.stanford.edu/~kaichun/o2oafford
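
As a rough illustration of the object-kernel point convolution described above, the sketch below (NumPy only) aggregates per-point features of a scene object at kernel points sampled from the acting object; the function name, the Gaussian weighting, and all shapes are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def object_kernel_point_conv(scene_pts, scene_feats, kernel_pts, sigma=0.05):
    """Aggregate scene-object features at kernel points taken from the acting object.

    scene_pts:   (N, 3) points of the scene object (e.g., a cabinet with a drawer)
    scene_feats: (N, C) per-point features of the scene object
    kernel_pts:  (K, 3) points sampled from the acting object, posed near the scene
    Returns:     (K, C) features describing the local scene geometry around each
                 kernel point; a downstream head could map these to affordance scores.
    """
    # Pairwise squared distances between kernel points and scene points: (K, N)
    d2 = ((kernel_pts[:, None, :] - scene_pts[None, :, :]) ** 2).sum(-1)
    # Soft, distance-based weights (Gaussian kernel), normalized per kernel point
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-8
    # Weighted aggregation of scene features at each kernel point
    return w @ scene_feats  # (K, C)

# Toy usage with random point clouds standing in for the two objects.
scene_pts = np.random.rand(2048, 3)
scene_feats = np.random.rand(2048, 32)
kernel_pts = 0.4 + 0.2 * np.random.rand(64, 3)  # acting object posed inside the scene
print(object_kernel_point_conv(scene_pts, scene_feats, kernel_pts).shape)  # (64, 32)
```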
Related papers
- Keypoint Abstraction using Large Models for Object-Relative Imitation Learning [78.92043196054071]
Generalization to novel object configurations and instances across diverse tasks and environments is a critical challenge in robotics.
Keypoint-based representations have proven effective as a succinct way of capturing essential object features.
We propose KALM, a framework that leverages large pre-trained vision-language models to automatically generate task-relevant and cross-instance consistent keypoints.
arXiv Detail & Related papers (2024-10-30T17:37:31Z)
- Entity-Centric Reinforcement Learning for Object Manipulation from Pixels [22.104757862869526]
Reinforcement Learning (RL) offers a general approach to learn object manipulation.
In practice, domains with more than a few objects are difficult for RL agents due to the curse of dimensionality.
We propose a structured approach for visual RL that is suitable for representing multiple objects and their interaction.
arXiv Detail & Related papers (2024-04-01T16:25:08Z)
- AffordanceLLM: Grounding Affordance from Vision Language Models [36.97072698640563]
Affordance grounding refers to the task of finding the area of an object with which one can interact.
Much of the required knowledge is hidden, lying beyond both the image content and the supervised labels of a limited training set.
We attempt to improve the generalization capability of current affordance grounding by taking advantage of the rich world, abstract, and human-object-interaction knowledge captured by vision-language models.
arXiv Detail & Related papers (2024-01-12T03:21:02Z)
- Multi-Object Graph Affordance Network: Goal-Oriented Planning through Learned Compound Object Affordances [1.9336815376402723]
The Multi-Object Graph Affordance Network models complex compound object affordances by learning the outcomes of robot actions that facilitate interactions between an object and a compound.
We show that our system successfully modeled the affordances of compound objects that include concave and convex objects, in both simulated and real-world environments.
arXiv Detail & Related papers (2023-09-19T08:40:46Z)
- ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors [73.26004792375556]
This paper shows that robustness and generalisation to novel scene objects in 3D object-aware character synthesis can be achieved by training a motion model with as few as one reference object.
We leverage an implicit feature representation trained on object-only datasets, which encodes an SE(3)-equivariant descriptor field around the object.
We demonstrate substantial improvements in 3D virtual character motion and interaction quality and robustness to scenarios with unseen objects.
arXiv Detail & Related papers (2023-08-24T17:59:51Z)
- Object-agnostic Affordance Categorization via Unsupervised Learning of Graph Embeddings [6.371828910727037]
Acquiring knowledge about object interactions and affordances can facilitate scene understanding and human-robot collaboration tasks.
We address the problem of affordance categorization for class-agnostic objects with an open set of interactions.
A novel depth-informed qualitative spatial representation is proposed for the construction of Activity Graphs.
arXiv Detail & Related papers (2023-03-30T15:04:04Z)
- Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions [82.90906153293585]
We propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task.
We show that the proposed network, when consuming dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects.
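
As a loose illustration of fusing motion data with per-node dynamic descriptors in a graph network, the NumPy sketch below runs one generic graph-convolution step over a toy human-joints-plus-object graph; the graph layout, feature sizes, and random weights are assumptions for illustration, not HO-GCN's actual architecture.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One generic graph-convolution step: add self-loops, normalize, mix neighbors, ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

# Toy graph: four human joints, each connected to one large-object node.
A = np.zeros((5, 5))
A[:4, 4] = A[4, :4] = 1.0
motion = np.random.rand(5, 6)     # e.g., per-node positions and velocities
dynamic = np.random.rand(5, 4)    # stand-in for per-node dynamic descriptors
X = np.concatenate([motion, dynamic], axis=1)  # fused node features, shape (5, 10)
W = np.random.rand(10, 16)
H = gcn_layer(A, X, W)            # (5, 16) node embeddings for a prediction head
print(H.shape)
```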
arXiv Detail & Related papers (2022-06-25T09:55:39Z)
- Lifelong Ensemble Learning based on Multiple Representations for Few-Shot Object Recognition [6.282068591820947]
We present a lifelong ensemble learning approach based on multiple representations to address the few-shot object recognition problem.
To facilitate lifelong learning, each approach is equipped with a memory unit for storing and retrieving object information instantly.
We have performed extensive experiments to assess the performance of the proposed approach in both offline and open-ended scenarios.
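
A minimal sketch of the ensemble-with-memory idea, assuming each representation keeps an instance memory and predicts by nearest neighbor, with a majority vote across representations; the two hand-crafted "representations" below are placeholders, not the paper's actual ones.

```python
import numpy as np

class MemoryLearner:
    """One representation with an instance memory and 1-nearest-neighbor prediction."""
    def __init__(self, encode):
        self.encode, self.feats, self.labels = encode, [], []

    def store(self, view, label):
        # Instant storage of a new labeled instance (the "memory unit").
        self.feats.append(self.encode(view))
        self.labels.append(label)

    def predict(self, view):
        q = self.encode(view)
        dists = [np.linalg.norm(q - f) for f in self.feats]
        return self.labels[int(np.argmin(dists))]

def ensemble_predict(learners, view):
    """Majority vote across the per-representation predictions."""
    votes = [l.predict(view) for l in learners]
    return max(set(votes), key=votes.count)

# Two toy "representations" of a point-cloud view (N, 3): centroid and extent.
learners = [MemoryLearner(lambda v: v.mean(axis=0)),
            MemoryLearner(lambda v: v.max(axis=0) - v.min(axis=0))]
mug = np.random.rand(100, 3)
box = np.random.rand(100, 3) * np.array([2.0, 1.0, 1.0])
for l in learners:
    l.store(mug, "mug")
    l.store(box, "box")
print(ensemble_predict(learners, box + 0.01))  # expected: "box"
```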
arXiv Detail & Related papers (2022-05-04T10:29:10Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z)
- INVIGORATE: Interactive Visual Grounding and Grasping in Clutter [56.00554240240515]
INVIGORATE is a robot system that interacts with humans through natural language and grasps a specified object in clutter.
We train separate neural networks for object detection, visual grounding, question generation, and OBR detection and grasping.
We build a partially observable Markov decision process (POMDP) that integrates the learned neural network modules.
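
To make the POMDP integration concrete, here is a minimal sketch of the kind of belief update such a system could perform: a belief over candidate target objects is refined by answers to clarifying questions, and the robot grasps once the belief is confident. The candidate scores and answer-noise model are mocked; the real system relies on its learned detection, grounding, question-generation, and grasping networks.

```python
import numpy as np

def update_belief(belief, likelihood):
    """Bayes update: posterior is proportional to prior times observation likelihood."""
    posterior = belief * likelihood
    return posterior / (posterior.sum() + 1e-8)

# Prior belief over four detected candidate objects (e.g., from a grounding network).
belief = np.array([0.40, 0.30, 0.20, 0.10])

# Simulated answer "no" to a clarifying question about candidate 0.
# Likelihood of that answer under each hypothesis (assumed answer-noise model).
likelihood_no = np.array([0.10, 0.95, 0.95, 0.95])
belief = update_belief(belief, likelihood_no)

# Act when the belief is confident enough; otherwise ask another question.
if belief.max() > 0.8:
    print("grasp candidate", int(belief.argmax()))
else:
    print("ask another question; belief =", np.round(belief, 2))
```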
arXiv Detail & Related papers (2021-08-25T07:35:21Z)
- Supervised Training of Dense Object Nets using Optimal Descriptors for Industrial Robotic Applications [57.87136703404356]
Dense Object Nets (DONs) by Florence, Manuelli and Tedrake introduced dense object descriptors as a novel visual object representation for the robotics community.
In this paper we show that given a 3D model of an object, we can generate its descriptor space image, which allows for supervised training of DONs.
We compare the training methods on generating 6D grasps for industrial objects and show that our novel supervised training approach improves the pick-and-place performance in industry-relevant tasks.
arXiv Detail & Related papers (2021-02-16T11:40:12Z)
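
A minimal sketch of the supervised training signal described in the entry above, assuming the descriptor-space image is approximated by normalized object coordinates of the visible surface rendered from the 3D model; the renderer and the network are stubbed, and only the target construction and pixelwise loss are shown.

```python
import numpy as np

def descriptor_target(xyz_image, mask, bbox_min, bbox_max):
    """Turn visible 3D surface coordinates into a 3-channel descriptor image in [0, 1]."""
    target = (xyz_image - bbox_min) / (bbox_max - bbox_min + 1e-8)
    return np.where(mask[..., None], target, 0.0)

def masked_l2_loss(pred, target, mask):
    """Pixelwise descriptor regression loss restricted to object pixels."""
    diff2 = ((pred - target) ** 2).sum(-1)
    return (diff2 * mask).sum() / (mask.sum() + 1e-8)

# Toy example: a 64x64 render with a square object silhouette.
h = w = 64
xyz = np.random.rand(h, w, 3)                      # per-pixel 3D coords from a renderer
mask = np.zeros((h, w)); mask[16:48, 16:48] = 1.0  # object silhouette
target = descriptor_target(xyz, mask.astype(bool), 0.0, 1.0)
pred = np.random.rand(h, w, 3)                     # stand-in for the network's output
print("loss:", masked_l2_loss(pred, target, mask))
```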