Robots with Different Embodiments Can Express and Influence Carefulness
in Object Manipulation
- URL: http://arxiv.org/abs/2208.02058v1
- Date: Wed, 3 Aug 2022 13:26:52 GMT
- Title: Robots with Different Embodiments Can Express and Influence Carefulness
in Object Manipulation
- Authors: Linda Lastrico, Luca Garello, Francesco Rea, Nicoletta Noceti, Fulvio
Mastrogiovanni, Alessandra Sciutti, Alessandro Carfì
- Abstract summary: This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness, or its absence, during the transportation of objects.
- Score: 104.5440430194206
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Humans have an extraordinary ability to communicate and read the properties
of objects by simply watching them being carried by someone else. This level of
communicative skills and interpretation, available to humans, is essential for
collaborative robots if they are to interact naturally and effectively. For
example, if a robot is handing over a fragile object, the human receiving it
should be informed of its fragility in advance, through an immediate and
implicit message, i.e., by the direct modulation of the
robot's action. This work investigates the perception of object manipulations
performed with a communicative intent by two robots with different embodiments
(an iCub humanoid robot and a Baxter robot). We designed the robots' movements
to communicate carefulness, or its absence, during the transportation of
objects. We found that not only is this feature correctly perceived by human
observers, but it can also elicit a form of motor adaptation in subsequent
human object manipulations. In addition, we gain insight into which motion
features may induce a person to manipulate an object more or less carefully.
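The abstract does not spell out how the movements were modulated; the companion GAN paper listed below suggests the cue lives in the end-effector's velocity profile. As a minimal illustrative sketch (not the authors' code), assuming a standard minimum-jerk model, a "careful" transport can be produced by stretching the movement duration, which lowers the peak speed an observer sees:

```python
# Illustrative sketch only, not the authors' implementation: one plausible way
# to modulate "carefulness" is to stretch the duration of a minimum-jerk
# transport movement, which lowers the end-effector's peak speed.
import numpy as np

def min_jerk_speed(distance: float, duration: float, n: int = 100) -> np.ndarray:
    """Speed profile of a minimum-jerk point-to-point movement covering
    `distance` meters in `duration` seconds."""
    tau = np.linspace(0.0, 1.0, n)  # normalized time in [0, 1]
    # time derivative of the minimum-jerk position 10*tau^3 - 15*tau^4 + 6*tau^5
    return (distance / duration) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

# Hypothetical parameters: the same 40 cm transport, careless vs. careful.
careless = min_jerk_speed(distance=0.4, duration=1.0)  # peak ~0.75 m/s
careful = min_jerk_speed(distance=0.4, duration=2.5)   # peak ~0.30 m/s
print(f"peak speed: careless {careless.max():.2f} m/s, careful {careful.max():.2f} m/s")
```

The specific distance and duration values are assumptions for illustration; the point is only that duration scaling directly shapes the velocity peak, the kind of kinematic feature observers can read as carefulness.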
Related papers
- Built Different: Tactile Perception to Overcome Cross-Embodiment Capability Differences in Collaborative Manipulation [1.9048510647598207]
Tactile sensing is a powerful means of implicit communication between a human and a robot assistant.
In this paper, we investigate how tactile sensing can transcend cross-embodiment differences across robotic systems.
We show how our method can enable a cooperative task where a robot and human must work together to maneuver objects through space.
arXiv Detail & Related papers (2024-09-23T10:45:41Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, MOO, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Explain yourself! Effects of Explanations in Human-Robot Interaction [10.389325878657697]
Explanations of robot decisions could affect user perceptions, justify the robots' reliability, and increase trust.
The effects on human perceptions of robots that explain their decisions have not been studied thoroughly.
This study demonstrates the need for and potential of explainable human-robot interaction.
arXiv Detail & Related papers (2022-04-09T15:54:27Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematic modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles; a toy sketch of this idea appears after this list.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
The prompt recognition of movement carefulness from observing the partner's action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
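As referenced in the "Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks" entry above, here is a toy sketch of that idea: a vanilla 1-D GAN trained on human end-effector velocity profiles, from which new profiles can be sampled. The network sizes, profile length, and training loop are assumptions for illustration, not the authors' architecture.

```python
# Toy sketch (assumed architecture, not the paper's): a vanilla GAN over
# 1-D end-effector velocity profiles recorded from human demonstrations.
import torch
import torch.nn as nn

PROFILE_LEN = 100  # samples per velocity profile (assumed)
LATENT_DIM = 16

generator = nn.Sequential(       # noise -> synthetic velocity profile
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, PROFILE_LEN), nn.Softplus(),  # speeds are non-negative
)
discriminator = nn.Sequential(   # profile -> P(real human demonstration)
    nn.Linear(PROFILE_LEN, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_profiles: torch.Tensor) -> None:
    """One vanilla-GAN update on a batch of human velocity profiles."""
    batch = real_profiles.size(0)
    fake = generator(torch.randn(batch, LATENT_DIM))

    # Discriminator: push real profiles toward 1, generated ones toward 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real_profiles), torch.ones(batch, 1)) \
           + bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
```

In the paper, generated profiles drive the end-effectors of two different platforms (iCub and Baxter); this sketch stops at profile synthesis and omits any conditioning on carefulness or robot embodiment.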