Kick-motion Training with DQN in AI Soccer Environment
- URL: http://arxiv.org/abs/2212.00389v1
- Date: Thu, 1 Dec 2022 09:35:36 GMT
- Title: Kick-motion Training with DQN in AI Soccer Environment
- Authors: Bumgeun Park, Jihui Lee, Taeyoung Kim, Dongsoo Har
- Abstract summary: This paper presents a technique to train a robot to perform kick-motion in AI soccer by using reinforcement learning (RL).
When training RL algorithms, a problem called the curse of dimensionality (COD) can occur if the dimension of the state is high and the amount of training data is small.
In this paper, we attempt to use the relative coordinate system (RCS) as the state for training the kick-motion of the robot agent, instead of the absolute coordinate system (ACS).
- Score: 2.464153570943062
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper presents a technique to train a robot to perform kick-motion in AI
soccer by using reinforcement learning (RL). In RL, an agent interacts with an
environment and learns to choose an action in a state at each step. When
training RL algorithms, a problem called the curse of dimensionality (COD) can
occur if the dimension of the state is high and the amount of training data is
small. The COD often causes degraded performance of RL models. In the situation
of the robot kicking the ball, as the ball approaches, the robot chooses its
action based on the information obtained from the soccer field. To avoid the
COD, the training data, which are the experiences in the case of RL, should be
collected evenly from all areas of the soccer field over (theoretically
infinite) time. In this paper, we attempt to use the relative coordinate
system (RCS) as the state for training the kick-motion of the robot agent,
instead of the absolute coordinate system (ACS). Using the RCS eliminates the
need for the agent to know all the (state) information of the entire soccer
field and reduces the dimension of the state that the agent needs to perform
kick-motion, consequently alleviating the COD. The training
based on the RCS is performed with the widely used Deep Q-network (DQN) and
tested in the AI Soccer environment implemented with Webots simulation
software.
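As a rough illustration of the RCS idea (a hypothetical sketch, not the authors' code; the pose and ball variables are assumptions for the example), the agent's state can be expressed relative to the robot's own pose, e.g. the distance and bearing to the ball, rather than as absolute field coordinates:

```python
import math

def to_relative_state(robot_x, robot_y, robot_theta, ball_x, ball_y):
    """Convert the ball's absolute field coordinates (ACS) into a
    robot-centric state (RCS): distance and bearing to the ball.

    robot_theta is the robot's heading in radians.
    """
    dx = ball_x - robot_x
    dy = ball_y - robot_y
    distance = math.hypot(dx, dy)
    # Bearing of the ball relative to the robot's heading, wrapped to [-pi, pi].
    bearing = math.atan2(dy, dx) - robot_theta
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    return distance, bearing

# A robot at the origin facing +x sees a ball 1 m ahead at bearing 0,
# regardless of where on the field this configuration occurs.
print(to_relative_state(0.0, 0.0, 0.0, 1.0, 0.0))
```

Because the relative state is invariant to where on the field the robot-ball configuration occurs, experiences collected in one region transfer to all others, which is how the reduced, robot-centric state mitigates the COD.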
Related papers
- Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning [17.906144781244336]
We train end-to-end robot soccer policies with fully onboard computation and sensing via egocentric RGB vision.
This paper constitutes a first demonstration of end-to-end training for multi-agent robot soccer.
arXiv Detail & Related papers (2024-05-03T18:41:13Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Hierarchical Reinforcement Learning for Precise Soccer Shooting Skills using a Quadrupedal Robot [76.04391023228081]
We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning.
We propose a hierarchical framework that leverages deep reinforcement learning to train a robust motion control policy.
We deploy the proposed framework on an A1 quadrupedal robot and enable it to accurately shoot the ball to random targets in the real world.
arXiv Detail & Related papers (2022-08-01T22:34:51Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- From Motor Control to Team Play in Simulated Humanoid Football [56.86144022071756]
We train teams of physically simulated humanoid avatars to play football in a realistic virtual environment.
In a sequence of stages, players first learn to control a fully articulated body to perform realistic, human-like movements.
They then acquire mid-level football skills such as dribbling and shooting.
Finally, they develop awareness of others and play as a team, bridging the gap between low-level motor control at a timescale of milliseconds and coordinated team play over much longer timescales.
arXiv Detail & Related papers (2021-05-25T20:17:10Z)
- How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned [111.06812202454364]
We present a number of case studies involving robotic deep RL.
We discuss commonly perceived challenges in deep RL and how they have been addressed in these works.
We also provide an overview of other outstanding challenges, many of which are unique to the real-world robotics setting.
arXiv Detail & Related papers (2021-02-04T22:09:28Z)
- Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World [0.0]
We present a DRL-based algorithm that is capable of performing autonomous robot control using Deep Q-Networks (DQN).
In our approach, the agent is trained in a simulated environment and it is able to navigate both in a simulated and real-world environment.
The trained agent is able to run on limited hardware resources and its performance is comparable to state-of-the-art approaches.
arXiv Detail & Related papers (2020-09-23T15:23:54Z)
- A Framework for Studying Reinforcement Learning and Sim-to-Real in Robot Soccer [1.1785354380793065]
This article introduces an open framework, called VSSS-RL, for studying Reinforcement Learning (RL) and sim-to-real in robot soccer.
We propose a simulated environment in which continuous or discrete control policies can be trained to control the complete behavior of soccer agents.
Our results show that the trained policies learned a broad repertoire of behaviors that are difficult to implement with handcrafted control policies.
arXiv Detail & Related papers (2020-08-18T23:52:32Z)
- Learning to Play Table Tennis From Scratch using Muscular Robots [34.34824536814943]
This work is the first to (a) learn a safety-critical dynamic task in a fail-safe manner using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system, and (c) train robots to play table tennis without real balls.
Videos and datasets are available at muscularTT.embodied.ml.
arXiv Detail & Related papers (2020-06-10T16:43:27Z)
- Meta-Reinforcement Learning for Robotic Industrial Insertion Tasks [70.56451186797436]
We study how to use meta-reinforcement learning to solve the bulk of the problem in simulation.
We demonstrate our approach by training an agent to successfully perform challenging real-world insertion tasks.
arXiv Detail & Related papers (2020-04-29T18:00:22Z)
- Deep Adversarial Reinforcement Learning for Object Disentangling [36.66974848126079]
We present a novel adversarial reinforcement learning (ARL) framework for disentangling waste objects.
The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states.
We show that our method can generalize from training to test scenarios by training an end-to-end system for robot control to solve a challenging object disentangling task.
arXiv Detail & Related papers (2020-03-08T13:20:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.