Learning Bipedal Walking for Humanoid Robots in Challenging Environments with Obstacle Avoidance
- URL: http://arxiv.org/abs/2410.08212v1
- Date: Wed, 25 Sep 2024 07:02:04 GMT
- Title: Learning Bipedal Walking for Humanoid Robots in Challenging Environments with Obstacle Avoidance
- Authors: Marwan Hamze, Mitsuharu Morisawa, Eiichi Yoshida
- Abstract summary: Deep reinforcement learning has seen successful implementations on humanoid robots to achieve dynamic walking.
In this paper, we aim to achieve bipedal locomotion in an environment where obstacles are present using policy-based reinforcement learning.
- Score: 0.3481985817302898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning has seen successful implementations on humanoid robots to achieve dynamic walking. However, these implementations have so far succeeded only in simple environments devoid of obstacles. In this paper, we aim to achieve bipedal locomotion in an environment where obstacles are present using policy-based reinforcement learning. By adding simple distance reward terms to a state-of-the-art reward function that can achieve basic bipedal locomotion, the trained policy succeeds in navigating the robot towards the desired destination without colliding with the obstacles along the way.
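The distance reward terms described in the abstract might look like the following minimal sketch. All function names, weights, and the obstacle penalty shape here are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of distance-based reward shaping for obstacle-aware
# locomotion. Weights and the linear penalty shape are assumptions chosen
# for illustration only.
import math

def shaped_reward(base_reward, robot_xy, goal_xy, obstacles_xy,
                  w_goal=1.0, w_obs=0.5, safe_radius=0.5):
    """Augment a base locomotion reward with two simple distance terms:
    a term rewarding proximity to the goal, and a penalty for entering
    a safety radius around any obstacle."""
    # Reward getting closer to the goal (negative Euclidean distance).
    goal_term = -w_goal * math.dist(robot_xy, goal_xy)
    # Penalize proximity to obstacles inside the safety radius.
    obs_term = 0.0
    for obs_xy in obstacles_xy:
        d = math.dist(robot_xy, obs_xy)
        if d < safe_radius:
            obs_term -= w_obs * (safe_radius - d)
    return base_reward + goal_term + obs_term
```

For example, a position closer to the goal with no nearby obstacles yields a higher shaped reward than one that is farther away or inside an obstacle's safety radius.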
Related papers
- Deep Reinforcement Learning-based Obstacle Avoidance for Robot Movement in Warehouse Environments [6.061908707850057]
This paper proposes a deep reinforcement learning-based obstacle avoidance algorithm for mobile robots in warehouse environments.
To address the insufficient learning ability of the value function network in the deep reinforcement learning algorithm, interaction information between pedestrians is extracted through a pedestrian angle grid.
The reinforcement learning reward function is designed based on the spatial behaviour of pedestrians, and the robot is penalized for states where its heading angle changes too sharply.
arXiv Detail & Related papers (2024-09-23T12:42:35Z) - Infer and Adapt: Bipedal Locomotion Reward Learning from Demonstrations via Inverse Reinforcement Learning [5.246548532908499]
This paper brings state-of-the-art Inverse Reinforcement Learning (IRL) techniques to solving bipedal locomotion problems over complex terrains.
We propose algorithms for learning expert reward functions, and we subsequently analyze the learned functions.
We empirically demonstrate that training a bipedal locomotion policy with the inferred reward functions enhances its walking performance on unseen terrains.
arXiv Detail & Related papers (2023-09-28T00:11:06Z) - Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z) - Robust and Versatile Bipedal Jumping Control through Reinforcement Learning [141.56016556936865]
This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world.
We present a reinforcement learning framework for training a robot to accomplish a large variety of jumping tasks, such as jumping to different locations and directions.
We develop a new policy structure that encodes the robot's long-term input/output (I/O) history while also providing direct access to a short-term I/O history.
arXiv Detail & Related papers (2023-02-19T01:06:09Z) - Advanced Skills by Learning Locomotion and Local Navigation End-to-End [10.872193480485596]
In this work, we propose to solve the complete problem by training an end-to-end policy with deep reinforcement learning.
We demonstrate the successful deployment of policies on a real quadrupedal robot.
arXiv Detail & Related papers (2022-09-26T16:35:00Z) - Hierarchical Reinforcement Learning of Locomotion Policies in Response to Approaching Objects: A Preliminary Study [11.919315372249802]
Deep reinforcement learning has enabled complex kinematic systems such as humanoid robots to move from point A to point B.
Inspired by the observation of the innate reactive behavior of animals in nature, we hope to extend this progress in robot locomotion.
We build a simulation environment in MuJoCo where a legged robot must avoid getting hit by a ball moving toward it.
arXiv Detail & Related papers (2022-03-20T18:24:18Z) - Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z) - Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement Learning [49.04274612323564]
Obstacle avoidance is a fundamental and challenging problem for autonomous navigation of mobile robots.
In this paper, we consider the problem of obstacle avoidance in simple 3D environments where the robot has to solely rely on a single monocular camera.
We tackle obstacle avoidance with a data-driven, end-to-end deep learning approach.
arXiv Detail & Related papers (2021-03-08T13:05:46Z) - Deep Reactive Planning in Dynamic Environments [20.319894237644558]
A robot can learn an end-to-end policy which can adapt to changes in the environment during execution.
We present a method that can achieve such behavior by combining traditional kinematic planning, deep learning, and deep reinforcement learning.
We demonstrate the proposed approach for several reaching and pick-and-place tasks in simulation, as well as on a real system of a 6-DoF industrial manipulator.
arXiv Detail & Related papers (2020-10-31T00:46:13Z) - Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z) - Learning Agile Locomotion via Adversarial Training [59.03007947334165]
In this paper, we present a multi-agent learning system, in which a quadruped robot (protagonist) learns to chase another robot (adversary) while the latter learns to escape.
We find that this adversarial training process not only encourages agile behaviors but also effectively alleviates the laborious environment design effort.
In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.
arXiv Detail & Related papers (2020-08-03T01:20:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.