Property-Aware Robot Object Manipulation: a Generative Approach
- URL: http://arxiv.org/abs/2106.04385v1
- Date: Tue, 8 Jun 2021 14:15:36 GMT
- Title: Property-Aware Robot Object Manipulation: a Generative Approach
- Authors: Luca Garello, Linda Lastrico, Francesco Rea, Fulvio Mastrogiovanni,
Nicoletta Noceti, and Alessandra Sciutti
- Abstract summary: In this work, we focus on how to generate robot motion adapted to the hidden properties of the manipulated objects.
We explore the possibility of leveraging Generative Adversarial Networks to synthesize new actions coherent with the properties of the object.
Our results show that Generative Adversarial Nets can be a powerful tool for the generation of novel and meaningful transportation actions.
- Score: 57.70237375696411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When transporting an object, we unconsciously adapt our movement to its
properties, for instance by slowing down when the item is fragile. The most
relevant features of an object are immediately revealed to a human observer by
the way it is handled, without any need for verbal description. Enabling
humanoid robots to perform movements that convey similarly intuitive cues to
observers would greatly facilitate collaboration. In this work, we focus on
how to generate robot motion adapted to the hidden properties of the
manipulated objects, such as their weight and fragility. We explore the
possibility of leveraging Generative Adversarial Networks to synthesize new
actions coherent with the properties of the object. The use of a generative
approach allows us to create new and consistent motion patterns without the
need to collect a large number of recorded human-led demonstrations. Moreover,
the informative content of the actions is preserved. Our results show that
Generative Adversarial Networks can be a powerful tool for generating novel
and meaningful transportation actions, which are effectively modulated as a
function of the object's weight and the carefulness required in its handling.
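The abstract describes the approach only at a high level. To make the idea concrete, below is a minimal conditional-GAN sketch in PyTorch: a generator conditioned on a hidden object-property label synthesizes end-effector velocity profiles (the signal modulated in the related GAN paper listed below), while a discriminator judges (profile, label) pairs. This is an illustrative sketch under stated assumptions, not the authors' implementation; the profile length, latent size, label set, and conditioning scheme are all invented for the example.

```python
# Minimal conditional-GAN sketch for property-aware motion generation.
# All sizes, names, and the label set are assumptions for illustration.
import torch
import torch.nn as nn

PROFILE_LEN = 100   # samples per velocity profile (assumed)
NOISE_DIM = 16      # latent noise dimension (assumed)
N_PROPS = 4         # e.g. {light, heavy} x {careful, not careful} (assumed)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_PROPS, 8)  # property label -> embedding
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + 8, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, PROFILE_LEN), nn.Softplus(),  # speeds stay >= 0
        )

    def forward(self, z, prop):
        return self.net(torch.cat([z, self.embed(prop)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_PROPS, 8)
        self.net = nn.Sequential(
            nn.Linear(PROFILE_LEN + 8, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),  # real/fake logit
        )

    def forward(self, profile, prop):
        return self.net(torch.cat([profile, self.embed(prop)], dim=1))

def train_step(G, D, g_opt, d_opt, real, props):
    """One adversarial step on a batch of recorded human velocity
    profiles `real` with property labels `props` (both assumed)."""
    bce = nn.BCEWithLogitsLoss()
    z = torch.randn(real.size(0), NOISE_DIM)
    fake = G(z, props)
    # Discriminator: push real (profile, label) pairs to 1, generated to 0.
    d_loss = bce(D(real, props), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach(), props), torch.zeros(real.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: try to make generated profiles pass as real.
    g_loss = bce(D(fake, props), torch.ones(real.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Usage (assumed data pipeline):
# G, D = Generator(), Discriminator()
# g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
# d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)
```

If trained on recorded human transportation motions, sampling the generator with, say, a "heavy, careful" label should yield profiles with lower peak velocities, mirroring the modulation the abstract reports; the label-to-behavior mapping here is likewise an assumption for illustration.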
Related papers
- Deep Active Visual Attention for Real-time Robot Motion Generation:
Emergence of Tool-body Assimilation and Adaptive Tool-use [9.141661467673817]
This paper proposes a novel robot motion generation model, inspired by a human cognitive structure.
The model incorporates a state-driven active top-down visual attention module, which acquires attentions that can actively change targets based on task states.
The results suggested improved flexibility in the model's visual perception, which sustained stable attention and motion even when the robot was provided with untrained tools or exposed to the experimenter's distractions.
arXiv Detail & Related papers (2022-06-29T10:55:32Z)
- Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions [82.90906153293585]
We propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task.
We show that the proposed network, which consumes dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects.
arXiv Detail & Related papers (2022-06-25T09:55:39Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematic modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- Improving Object Permanence using Agent Actions and Reasoning [8.847502932609737]
Existing approaches learn object permanence from low-level perception.
We argue that object permanence can be improved when the robot uses knowledge about executed actions.
arXiv Detail & Related papers (2021-10-01T07:09:49Z)
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
Promptly recognizing movement carefulness from the partner's actions will allow robots to adapt their own actions on the object and show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Object Properties Inferring from and Transfer for Human Interaction Motions [51.896592493436984]
In this paper, we present a fine-grained action recognition method that learns to infer object properties from human interaction motion alone.
We collect a large number of videos and 3D skeletal motions of the performing actors using an inertial motion capture device.
In particular, we learn to identify the interacting object by estimating its weight, fragility, or delicacy.
arXiv Detail & Related papers (2020-08-20T14:36:34Z)