Neural Style Transfer with Twin-Delayed DDPG for Shared Control of
Robotic Manipulators
- URL: http://arxiv.org/abs/2402.00722v1
- Date: Thu, 1 Feb 2024 16:14:32 GMT
- Authors: Raul Fernandez-Fernandez, Marco Aggravi, Paolo Robuffo Giordano, Juan
G. Victores and Claudio Pacchierotti
- Abstract summary: We propose a framework for transferring a set of styles to the motion of a robotic manipulator.
An autoencoder architecture extracts and defines the Content and the Style of the target robot motions.
The proposed Neural Policy Style Transfer TD3 (NPST3) alters the robot motion by introducing the trained style.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural Style Transfer (NST) refers to a class of algorithms able to
manipulate an element, most often images, to adopt the appearance or style of
another one. Each element is defined as a combination of Content and Style: the
Content can be conceptually defined as the "what" and the Style as the "how" of
said element. In this context, we propose a custom NST framework for
transferring a set of styles to the motion of a robotic manipulator, e.g., the
same robotic task can be carried out in an angry, happy, calm, or sad way. An
autoencoder architecture extracts and defines the Content and the Style of the
target robot motions. A Twin Delayed Deep Deterministic Policy Gradient (TD3)
network generates the robot control policy using the loss defined by the
autoencoder. The proposed Neural Policy Style Transfer TD3 (NPST3) alters the
robot motion by introducing the trained style. Such an approach can be
implemented either offline, for carrying out autonomous robot motions in
dynamic environments, or online, for adapting at runtime the style of a
teleoperated robot. The considered styles can be learned online from human
demonstrations. We carried out a human-subject evaluation with 73 volunteers,
asking them to recognize the style behind representative robotic motions.
Results show a good recognition rate, proving that it is
possible to convey different styles to a robot using this approach.
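The abstract describes a loss defined by the autoencoder: the generated motion should keep the Content of the task motion while matching the Style of a demonstration, and the TD3 policy is trained against that loss. A minimal sketch of such a combined loss is below; the fixed random projections standing in for the trained encoder halves, the 50-dimensional motion vectors, and the weights `alpha`/`beta` are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical stand-ins for the trained autoencoder halves: in NPST3 these
# would be learned networks; fixed random projections only illustrate shapes.
rng = np.random.default_rng(0)
W_content = rng.standard_normal((8, 50))  # motion (50-dim) -> content code (8-dim)
W_style = rng.standard_normal((4, 50))    # motion (50-dim) -> style code (4-dim)

def content_code(motion):
    return W_content @ motion

def style_code(motion):
    return W_style @ motion

def style_transfer_loss(generated, task_motion, style_demo, alpha=1.0, beta=0.5):
    """Penalize deviation from the task's Content and from the demo's Style.
    A TD3 agent could be trained to minimize this, e.g. reward = -loss."""
    content_loss = np.mean((content_code(generated) - content_code(task_motion)) ** 2)
    style_loss = np.mean((style_code(generated) - style_code(style_demo)) ** 2)
    return alpha * content_loss + beta * style_loss

# Usage: score a candidate motion against a task motion and an "angry" demo.
task = rng.standard_normal(50)
angry_demo = rng.standard_normal(50)
candidate = 0.8 * task + 0.2 * angry_demo
loss = style_transfer_loss(candidate, task, angry_demo)
```

The split into two encoders mirrors the paper's idea that Content and Style are separable representations of the same motion; in the actual framework both codes come from one trained autoencoder.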
Related papers
- Track2Act: Predicting Point Tracks from Internet Videos enables Diverse Zero-shot Robot Manipulation [65.46610405509338]
Track2Act predicts tracks of how points in an image should move in future time-steps based on a goal.
We use these 2D track predictions to infer a sequence of rigid transforms of the object to be manipulated, and obtain robot end-effector poses.
We show that this approach of combining scalably learned track prediction with a residual policy enables zero-shot robot manipulation.
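The step of turning tracked points into a rigid transform of the manipulated object is, for corresponding 3D points, the classic Kabsch/Procrustes problem. A minimal sketch under that assumption follows; this is the standard SVD solution, not Track2Act's actual implementation, which works from predicted 2D tracks.

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate rotation R and translation t with Q ~= R @ P + t via the
    Kabsch algorithm (SVD of the cross-covariance). P, Q: (3, N) arrays
    of corresponding points."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T         # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Usage: recover a known rotation/translation from noiseless correspondences.
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 12))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([[0.1], [-0.2], [0.5]])
R_est, t_est = rigid_transform(P, R_true @ P + t_true)
```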
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
arXiv Detail & Related papers (2023-02-03T21:39:52Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach for the four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Robot Cooking with Stir-fry: Bimanual Non-prehensile Manipulation of Semi-fluid Objects [13.847796949856457]
This letter describes an approach to achieve stir-fry, a well-known Chinese cooking art, on a bimanual robot system.
We define a canonical stir-fry movement, then propose a decoupled framework for learning deformable object manipulation from human demonstration.
By adding visual feedback, our framework can adjust the movements automatically to achieve the desired stir-fry effect.
arXiv Detail & Related papers (2022-05-12T08:58:30Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- Target Reaching Behaviour for Unfreezing the Robot in a Semi-Static and Crowded Environment [2.055949720959582]
We propose a robot behavior for a wheeled humanoid robot that complies with social norms for clearing its path when the robot is frozen due to the presence of humans.
The behavior consists of two modules: 1) a detection module, which makes use of the YOLOv3 algorithm trained to detect human hands and arms, and 2) a gesture module, which makes use of a policy trained in simulation using the Proximal Policy Optimization algorithm.
arXiv Detail & Related papers (2020-12-02T13:43:59Z)
- Counterfactual Explanation and Causal Inference in Service of Robustness in Robot Control [15.104159722499366]
We propose an architecture for training generative models of counterfactual conditionals of the form, 'can we modify event A to cause B instead of C?'
In contrast to conventional control design approaches, where robustness is quantified in terms of the ability to reject noise, we explore the space of counterfactuals that might cause a certain requirement to be violated.
arXiv Detail & Related papers (2020-09-18T14:22:47Z)
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.