Pyrus Base: An Open Source Python Framework for the RoboCup 2D Soccer
Simulation
- URL: http://arxiv.org/abs/2307.16875v1
- Date: Sat, 22 Jul 2023 01:30:25 GMT
- Title: Pyrus Base: An Open Source Python Framework for the RoboCup 2D Soccer
Simulation
- Authors: Nader Zare, Aref Sayareh, Omid Amini, Mahtab Sarvmaili, Arad
Firouzkouhi, Stan Matwin, Amilcar Soares
- Abstract summary: Soccer Simulation 2D (SS2D) was one of the leagues initiated in the RoboCup competition.
In every SS2D game, two teams of 11 players and one coach connect to the RoboCup Soccer Simulation Server and compete against each other.
We introduce Pyrus, the first Python base code for SS2D.
- Score: 9.305564694066934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Soccer, also known as football in some parts of the world, involves two teams
of eleven players whose objective is to score more goals than the opposing
team. To simulate this game and attract scientists from all over the world to
conduct research and participate in an annual computer-based soccer world cup,
Soccer Simulation 2D (SS2D) was one of the leagues initiated in the RoboCup
competition. In every SS2D game, two teams of 11 players and one coach connect
to the RoboCup Soccer Simulation Server and compete against each other. Over
the past few years, several C++ base codes have been employed to control
agents' behavior and their communication with the server. Although C++ base
codes have laid the foundation for SS2D, developing them requires an advanced
level of C++ programming, and the language's complexity is a limiting
disadvantage for all users, especially beginners. To overcome the challenges of
C++ base codes and provide a powerful baseline for developing machine learning
concepts, we introduce Pyrus, the first Python base
code for SS2D. Pyrus is developed to encourage researchers to efficiently
develop their ideas and integrate machine learning algorithms into their teams.
The Pyrus base is open-source and publicly available under the MIT License on
GitHub.
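As context for what any SS2D base code must implement, each player connects to the RoboCup Soccer Simulation Server over a plain-text S-expression protocol on UDP (default player port 6000): the client sends an `(init ...)` handshake and the server answers `(init <side> <unum> <playmode>)` from a fresh ephemeral port. The sketch below illustrates that handshake in Python; the helper names are illustrative assumptions, not part of the Pyrus API.

```python
import socket

SERVER_HOST = "localhost"
SERVER_PORT = 6000  # rcssserver's default player port


def build_init(team: str, version: int = 15, goalie: bool = False) -> str:
    """Build the `(init ...)` S-expression a player sends first."""
    msg = f"(init {team} (version {version})"
    if goalie:
        msg += " (goalie)"  # one player per team may register as goalie
    return msg + ")"


def connect(team: str) -> socket.socket:
    """Open a UDP socket and send the init handshake to the server.

    The server replies from a new ephemeral port; subsequent commands
    must be sent to that port for the rest of the match.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(build_init(team).encode(), (SERVER_HOST, SERVER_PORT))
    return sock
```

A base code like Pyrus wraps this low-level exchange (and the parsing of the server's `see`, `hear`, and `sense_body` messages) so that researchers work with high-level player objects instead of raw sockets.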
Related papers
- Game Development as Human-LLM Interaction [55.03293214439741]
This paper introduces the Interaction-driven Game Engine (IGE) powered by Human-LLM interaction.
We construct an IGE for poker games as a case study and evaluate it from two perspectives: interaction quality and code correctness.
arXiv Detail & Related papers (2024-08-18T07:06:57Z)
- RobocupGym: A challenging continuous control benchmark in Robocup [7.926196208425107]
We introduce a RoboCup-based RL environment built on the open-source rcssserver3d soccer server.
In each task, an RL agent controls a simulated robot, and can interact with the ball or other agents.
arXiv Detail & Related papers (2024-07-03T15:26:32Z)
- Cross Language Soccer Framework: An Open Source Framework for the RoboCup 2D Soccer Simulation [0.4660328753262075]
RoboCup Soccer Simulation 2D (SS2D) research is hampered by the complexity of existing C++-based codes like Helios, Cyrus, and Gliders.
This development paper introduces a transformative solution: a gRPC-based, language-agnostic framework that seamlessly integrates with the high-performance Helios base code.
arXiv Detail & Related papers (2024-06-09T03:11:40Z)
- Denoising Opponents Position in Partial Observation Environment [0.4660328753262075]
A Soccer Simulation 2D (SS2D) match involves two teams, each with 11 players and a coach, competing against each other.
We will explain our position prediction idea powered by Long Short-Term Memory models (LSTM) and Deep Neural Networks (DNN).
arXiv Detail & Related papers (2023-10-23T04:16:52Z)
- Observation Denoising in CYRUS Soccer Simulation 2D Team For RoboCup 2023 [7.658318240235567]
This paper presents the latest research of the CYRUS soccer simulation 2D team, the champion of RoboCup 2021.
We will explain our denoising idea powered by long short-term memory networks (LSTM) and deep neural networks (DNN).
arXiv Detail & Related papers (2023-05-27T20:46:33Z)
- DanZero: Mastering GuanDan Game with Reinforcement Learning [121.93690719186412]
Card game AI has long been a hot topic in artificial intelligence research.
In this paper, we develop an AI program for a more complex card game, GuanDan.
We propose DanZero, the first AI program for GuanDan, built with reinforcement learning techniques.
arXiv Detail & Related papers (2022-10-31T06:29:08Z)
- MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge [70.47759528596711]
We introduce MineDojo, a new framework built on the popular Minecraft game.
We propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function.
Our agent is able to solve a variety of open-ended tasks specified in free-form language without any manually designed dense shaping reward.
arXiv Detail & Related papers (2022-06-17T15:53:05Z)
- CYRUS Soccer Simulation 2D Team Description Paper 2022 [8.86121279277966]
This paper introduces the previous and current research of the CYRUS soccer simulation team.
We will present our idea about improving Unmarking Decisioning and Positioning by using Pass Prediction Deep Neural Network.
arXiv Detail & Related papers (2022-05-22T23:16:37Z)
- COSEA: Convolutional Code Search with Layer-wise Attention [90.35777733464354]
We propose a new deep learning architecture, COSEA, which leverages convolutional neural networks with layer-wise attention to capture the code's intrinsic structural logic.
COSEA can achieve significant improvements over state-of-the-art methods on code search tasks.
arXiv Detail & Related papers (2020-10-19T13:53:38Z)
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
arXiv Detail & Related papers (2020-06-24T14:12:56Z) - Suphx: Mastering Mahjong with Deep Reinforcement Learning [114.68233321904623]
We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques.
Suphx has demonstrated stronger performance than most top human players in terms of stable rank.
This is the first time that a computer program outperforms most top human players in Mahjong.
arXiv Detail & Related papers (2020-03-30T16:18:16Z)