Deep Active Visual Attention for Real-time Robot Motion Generation:
Emergence of Tool-body Assimilation and Adaptive Tool-use
- URL: http://arxiv.org/abs/2206.14530v1
- Date: Wed, 29 Jun 2022 10:55:32 GMT
- Title: Deep Active Visual Attention for Real-time Robot Motion Generation:
Emergence of Tool-body Assimilation and Adaptive Tool-use
- Authors: Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, and Tetsuya Ogata
- Abstract summary: This paper proposes a novel robot motion generation model, inspired by a human cognitive structure.
The model incorporates a state-driven active top-down visual attention module, which acquires attentions that can actively change targets based on task states.
The results suggest improved flexibility in the model's visual perception, which sustained stable attention and motion even when the robot was provided with untrained tools or exposed to the experimenter's distractions.
- Score: 9.141661467673817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sufficiently perceiving the environment is a critical factor in robot motion
generation. Although the introduction of deep visual processing models has
contributed to extending this ability, existing methods lack the ability to
actively modify what to perceive, a process that humans perform internally
during visual cognition. This paper addresses this issue by proposing a novel robot
motion generation model, inspired by a human cognitive structure. The model
incorporates a state-driven active top-down visual attention module, which
acquires attentions that can actively change targets based on task states. We
term these role-based attentions, since each acquired attention is directed to
targets that share a coherent role throughout the motion. The
model was trained on a robot tool-use task, in which the role-based attentions
perceived the robot grippers and tool as identical end-effectors during
object-picking and object-dragging motions, respectively. This is analogous to a
biological phenomenon called tool-body assimilation, in which one regards a
handled tool as an extension of one's body. The results suggest improved
flexibility in the model's visual perception, which sustained stable attention
and motion even when the model was provided with untrained tools or exposed to
the experimenter's distractions.
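The paper does not include an implementation, but the described architecture maps naturally onto a recurrent task state that drives a spatial attention over image features. Below is a minimal PyTorch sketch of that idea; all module names (TopDownAttention, AttentiveMotionModel), dimensions, and layer choices are hypothetical illustrations, not the authors' code.

```python
import torch
import torch.nn as nn

class TopDownAttention(nn.Module):
    """Hypothetical sketch: a recurrent task state generates a query that is
    matched against CNN feature-map locations, yielding a spatial attention
    map whose target can shift as the task state evolves."""

    def __init__(self, feat_dim=64, state_dim=128):
        super().__init__()
        self.query = nn.Linear(state_dim, feat_dim)  # task state -> attention query

    def forward(self, feats, state):
        # feats: (B, C, H, W) CNN features; state: (B, state_dim) RNN state
        B, C, H, W = feats.shape
        q = self.query(state)                                  # (B, C)
        scores = torch.einsum('bchw,bc->bhw', feats, q) / C ** 0.5
        attn = torch.softmax(scores.view(B, -1), dim=-1).view(B, H, W)
        glimpse = torch.einsum('bchw,bhw->bc', feats, attn)    # attended feature
        return glimpse, attn

class AttentiveMotionModel(nn.Module):
    """Minimal closed loop: attend -> update task state -> emit motor command."""

    def __init__(self, feat_dim=64, state_dim=128, motor_dim=7):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=8, stride=4)
        self.attend = TopDownAttention(feat_dim, state_dim)
        self.rnn = nn.GRUCell(feat_dim, state_dim)
        self.motor = nn.Linear(state_dim, motor_dim)

    def step(self, image, state):
        feats = self.encoder(image)
        glimpse, attn = self.attend(feats, state)
        state = self.rnn(glimpse, state)   # task state drives the next attention
        return self.motor(state), state, attn

model = AttentiveMotionModel()
state = torch.zeros(1, 128)
command, state, attn = model.step(torch.randn(1, 3, 64, 64), state)
```

The key design point mirrored here is the feedback loop: the attended glimpse updates the task state, and the updated state determines where the model attends at the next step, which is what would let the attended target change (e.g., from gripper to tool) as the task progresses.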
Related papers
- Learning Goal-based Movement via Motivational-based Models in Cognitive
Mobile Robots [58.720142291102135]
Humans have needs that motivate their behavior according to intensity and context.
We also form preferences associated with each action's perceived pleasure, which are susceptible to change over time.
This makes decision-making more complex, requiring learning to balance needs and preferences according to the context.
arXiv Detail & Related papers (2023-02-20T04:52:24Z)
- Synthesis and Execution of Communicative Robotic Movements with
Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematic modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile of the robots' end-effectors, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained on human kinematics examples, to generalize over them and generate new and meaningful velocity profiles (a minimal GAN sketch appears after this list).
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- Attention Mechanisms in Computer Vision: A Survey [75.6074182122423]
We provide a comprehensive review of various attention mechanisms in computer vision.
We categorize them by approach: channel attention, spatial attention, temporal attention, and branch attention (a channel-attention sketch appears after this list).
We suggest future directions for attention mechanism research.
arXiv Detail & Related papers (2021-11-15T09:18:40Z)
- Improving Object Permanence using Agent Actions and Reasoning [8.847502932609737]
Existing approaches learn object permanence from low-level perception.
We argue that object permanence can be improved when the robot uses knowledge about executed actions.
arXiv Detail & Related papers (2021-10-01T07:09:49Z)
- From Movement Kinematics to Object Properties: Online Recognition of
Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
Promptly recognizing carefulness from the partner's movements will allow robots to adapt their own actions on the object, showing the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- Maintaining a Reliable World Model using Action-aware Perceptual
Anchoring [4.971403153199917]
Robots need to maintain a model of their surroundings even when objects go out of view and are no longer visible.
This requires anchoring perceptual information onto symbols that represent the objects in the environment.
We present a model for action-aware perceptual anchoring that enables robots to track objects in a persistent manner.
arXiv Detail & Related papers (2021-07-07T06:35:14Z)
- Property-Aware Robot Object Manipulation: a Generative Approach [57.70237375696411]
In this work, we focus on how to generate robot motion adapted to the hidden properties of the manipulated objects.
We explore the possibility of leveraging Generative Adversarial Networks to synthesize new actions coherent with the properties of the object.
Our results show that Generative Adversarial Nets can be a powerful tool for the generation of novel and meaningful transportation actions.
arXiv Detail & Related papers (2021-06-08T14:15:36Z)
- How to select and use tools?: Active Perception of Target Objects Using
Multimodal Deep Learning [9.677391628613025]
We focus on active perception using multimodal sensorimotor data while a robot interacts with objects.
We construct a deep neural network (DNN) model that learns to recognize object characteristics.
We also examine the contributions of image, force, and tactile data and show that learning from a variety of multimodal information results in rich perception for tool use (a minimal fusion sketch appears after this list).
arXiv Detail & Related papers (2021-06-04T12:49:30Z)
- Careful with That! Observation of Human Movements to Estimate Objects
Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
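Referenced from the GAN movement-synthesis entry above: a minimal sketch of adversarial training for end-effector velocity profiles. The profile length, network sizes, and Softplus output are assumptions for illustration, not that paper's architecture.

```python
import torch
import torch.nn as nn

T = 50  # hypothetical number of velocity samples per profile

# Generator: latent noise -> end-effector velocity profile of length T
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                  nn.Linear(64, T), nn.Softplus())  # keeps velocities positive

# Discriminator: profile -> probability it came from human demonstrations
D = nn.Sequential(nn.Linear(T, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_profiles):              # (B, T) human velocity profiles
    B = real_profiles.size(0)
    fake = G(torch.randn(B, 16))
    # Discriminator: push real profiles toward 1, generated ones toward 0
    loss_d = bce(D(real_profiles), torch.ones(B, 1)) + \
             bce(D(fake.detach()), torch.zeros(B, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to make the discriminator label its profiles as human
    loss_g = bce(D(fake), torch.ones(B, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.rand(8, T))  # e.g., a batch of 8 normalized human profiles
```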
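Referenced from the attention-survey entry above: a compact example of one surveyed category, Squeeze-and-Excitation-style channel attention, which reweights feature channels using globally pooled statistics. The reduction ratio and layer sizes are illustrative defaults.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-Excitation-style channel attention: global-average-pool
    each channel, pass through a small bottleneck MLP, rescale channels."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                  # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))   # squeeze -> (B, C) weights in (0, 1)
        return x * w[:, :, None, None]     # excite: reweight each channel

x = torch.randn(2, 32, 8, 8)
print(ChannelAttention(32)(x).shape)       # torch.Size([2, 32, 8, 8])
```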
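Referenced from the multimodal tool-use entry above: a hypothetical sketch of late-fusion multimodal perception. The per-modality encoders and input dimensions (6-axis force-torque, a 16-taxel tactile array) are assumptions, not that paper's model.

```python
import torch
import torch.nn as nn

class MultimodalPerception(nn.Module):
    """Encode each modality separately, concatenate the embeddings, and
    predict object characteristics from the fused representation."""

    def __init__(self, n_classes=5):
        super().__init__()
        self.img = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.force = nn.Sequential(nn.Linear(6, 16), nn.ReLU())     # 6-axis F/T
        self.tactile = nn.Sequential(nn.Linear(16, 16), nn.ReLU())  # taxel array
        self.head = nn.Linear(16 * 3, n_classes)

    def forward(self, image, force, tactile):
        fused = torch.cat([self.img(image), self.force(force),
                           self.tactile(tactile)], dim=-1)
        return self.head(fused)   # logits over object characteristics

net = MultimodalPerception()
logits = net(torch.randn(1, 3, 32, 32), torch.randn(1, 6), torch.randn(1, 16))
```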