Make Bipedal Robots Learn How to Imitate
- URL: http://arxiv.org/abs/2105.07193v1
- Date: Sat, 15 May 2021 10:06:13 GMT
- Title: Make Bipedal Robots Learn How to Imitate
- Authors: Vishal Kumar and Sinnu Susan Thomas
- Abstract summary: We propose a method to train a bipedal robot to perform some basic movements with the help of imitation learning (IL).
A Deep Q Network (DQN) is trained with experience replay so that the robot learns to perform the movements as the instructor does.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bipedal robots do not perform as well as humans because they do not
learn to walk the way we do. In this paper we propose a method to train a
bipedal robot to perform some basic movements with the help of imitation
learning (IL), in which an instructor performs a movement and the robot tries
to mimic it. To the best of our knowledge, this is the first time a robot is
trained to perform movements from a single video of the instructor, and because
training is based on joint angles the robot always keeps its joints within
their physical limits, which in turn speeds up training. The robot's joints are
identified by the OpenPose architecture, and joint-angle data is then extracted
as the angle between three keypoints, which yields a noisy signal. We smooth
the data with a Savitzky-Golay filter while preserving the anatomy of the
simulator data. A Deep Q Network (DQN) is trained with experience replay so
that the robot learns to perform the movements as the instructor does. The
implementation of the paper is made publicly available.
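As a rough illustration of the joint-angle pipeline described above, the sketch below computes the angle at a joint from three 2D keypoints (such as the hip, knee, and ankle keypoints OpenPose would provide) and smooths a noisy per-frame angle series with a Savitzky-Golay filter. The keypoint coordinates and filter parameters here are illustrative assumptions, not the paper's actual values.

```python
import numpy as np
from scipy.signal import savgol_filter

def joint_angle(a, b, c):
    """Angle (radians) at vertex b formed by the segments b->a and b->c."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against floating-point values just outside [-1, 1]
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical keypoints: the knee angle from hip, knee, ankle positions
hip, knee, ankle = (0.0, 1.0), (0.0, 0.5), (0.3, 0.1)
angle = joint_angle(hip, knee, ankle)

# Smooth a noisy per-frame angle trajectory while preserving its shape
noisy = np.sin(np.linspace(0, np.pi, 50)) + np.random.normal(0, 0.05, 50)
smooth = savgol_filter(noisy, window_length=11, polyorder=3)
```

The Savitzky-Golay filter fits a low-order polynomial within each sliding window, which is why it can reduce keypoint jitter without flattening the peaks of the motion, consistent with the abstract's goal of preserving the data's anatomy.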
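The experience-replay component of DQN training mentioned in the abstract can be sketched as a fixed-capacity buffer of transitions that is sampled uniformly for minibatch updates. This is a generic textbook replay buffer, not the paper's implementation; all names and sizes are illustrative.

```python
import random

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.buffer = []
        self.pos = 0  # index of the slot to overwrite once full

    def push(self, state, action, reward, next_state, done):
        transition = (state, action, reward, next_state, done)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition  # overwrite the oldest entry
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        """Uniformly sample a minibatch, returned as column-wise tuples."""
        batch = random.sample(self.buffer, batch_size)
        return tuple(zip(*batch))

# Typical loop: act in the environment, store transitions, then train on
# random minibatches to break the temporal correlation between samples.
buf = ReplayBuffer(capacity=1000)
for t in range(100):
    buf.push(state=t, action=t % 4, reward=1.0, next_state=t + 1, done=False)
states, actions, rewards, next_states, dones = buf.sample(32)
```

Sampling uniformly from past transitions, rather than learning only from the most recent step, is what stabilizes DQN training on sequential data such as imitated movement trajectories.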
Related papers
- Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers
We introduce Vid2Robot, an end-to-end video-conditioned policy that takes human videos demonstrating manipulation tasks as input and produces robot actions.
Our model is trained with a large dataset of prompt video-robot trajectory pairs to learn unified representations of human and robot actions from videos.
We evaluate Vid2Robot on real-world robots and observe over 20% improvement over BC-Z when using human prompt videos.
arXiv Detail & Related papers (2024-03-19T17:47:37Z)
- Robot Learning with Sensorimotor Pre-training
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks
Video-conditioned policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We learn a policy that generates appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- ClipBot: an educational, physically impaired robot that learns to walk via genetic algorithm optimization
We propose ClipBot, a low-cost, do-it-yourself robot whose skeleton is made of two paper clips.
An Arduino nano microcontroller actuates two servo motors that move the paper clips.
Students at the high school level were asked to implement a genetic algorithm to optimize the movements of the robot.
arXiv Detail & Related papers (2022-10-26T13:31:43Z)
- Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube
We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand.
The robot observes the human operator via a single RGB camera and imitates their actions in real-time.
We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration.
arXiv Detail & Related papers (2022-02-21T18:59:59Z)
- Adaptation of Quadruped Robot Locomotion with Meta-Learning
We demonstrate that meta-reinforcement learning can be used to successfully train a robot capable of solving a wide range of locomotion tasks.
The performance of the meta-trained robot is similar to that of a robot that is trained on a single task.
arXiv Detail & Related papers (2021-07-08T10:37:18Z)
- Learning Bipedal Robot Locomotion from Human Movement
We present a reinforcement learning based method for teaching a real world bipedal robot to perform movements directly from motion capture data.
Our method seamlessly transitions from training in a simulation environment to executing on a physical robot.
We demonstrate our method on an internally developed humanoid robot with movements ranging from a dynamic walk cycle to complex balancing and waving.
arXiv Detail & Related papers (2021-05-26T00:49:37Z)
- PPMC RL Training Algorithm: Rough Terrain Intelligent Robots through Reinforcement Learning
This paper introduces a generic training algorithm teaching generalized PPMC in rough environments to any robot.
We show through experiments that the robot learns to generalize to new rough terrain maps, retaining a 100% success rate.
To the best of our knowledge, this is the first paper to introduce a generic training algorithm teaching generalized PPMC in rough environments to any robot.
arXiv Detail & Related papers (2020-03-02T10:14:52Z)
- Morphology-Agnostic Visual Robotic Control
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.