A Review of Nine Physics Engines for Reinforcement Learning Research
- URL: http://arxiv.org/abs/2407.08590v2
- Date: Fri, 23 Aug 2024 07:16:52 GMT
- Title: A Review of Nine Physics Engines for Reinforcement Learning Research
- Authors: Michael Kaup, Cornelius Wolff, Hyerim Hwang, Julius Mayer, Elia Bruni
- Abstract summary: The review aims to guide researchers in selecting tools for creating simulated physical environments for reinforcement learning (RL).
It evaluates nine frameworks based on their popularity, feature range, quality, usability, and RL capabilities.
- Score: 4.381263829108405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a review of popular simulation engines and frameworks used in reinforcement learning (RL) research, aiming to guide researchers in selecting tools for creating simulated physical environments and training setups for RL. We evaluate nine frameworks (Brax, Chrono, Gazebo, MuJoCo, ODE, PhysX, PyBullet, Webots, and Unity) based on their popularity, feature range, quality, usability, and RL capabilities. We highlight the challenges in selecting and utilizing physics engines for RL research, including the need for detailed comparisons and an understanding of each framework's capabilities. Key findings indicate MuJoCo as the leading framework due to its performance and flexibility, despite usability challenges. Unity is noted for its ease of use but lacks scalability and simulation fidelity. The study calls for further development to improve simulation engines' usability and performance and stresses the importance of transparency and reproducibility in RL research. This review contributes to the RL community by offering insights into the selection process for simulation engines, facilitating informed decision-making.
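The frameworks surveyed here are typically accessed through a common reset/step environment interface (the Gym/Gymnasium convention) that wraps the underlying physics stepping. As a minimal illustration of that interface, the sketch below wraps a toy one-dimensional point-mass integrator; `PointMassEnv` and `rollout` are our own illustrative names, not part of any reviewed framework.

```python
import random

class PointMassEnv:
    """Toy 1-D point-mass environment exposing the reset/step API
    that Gym-style wrappers around physics engines follow.
    Illustrative only; not taken from any reviewed framework."""

    DT = 0.05        # integration time step in seconds
    GOAL = 1.0       # target position
    MAX_STEPS = 200  # episode length cap

    def reset(self):
        # Start at rest at the origin and return the initial observation.
        self.pos, self.vel, self.t = 0.0, 0.0, 0
        return self._obs()

    def step(self, action):
        # Semi-implicit Euler integration of a unit mass under a bounded force.
        force = max(-1.0, min(1.0, action))
        self.vel += force * self.DT
        self.pos += self.vel * self.DT
        self.t += 1
        reward = -abs(self.GOAL - self.pos)  # dense distance penalty
        done = self.t >= self.MAX_STEPS or abs(self.GOAL - self.pos) < 1e-2
        return self._obs(), reward, done, {}

    def _obs(self):
        return (self.pos, self.vel)


def rollout(env, policy, seed=0):
    """Run one episode with policy(obs, rng) -> action; return total reward."""
    rng = random.Random(seed)
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs, rng))
        total += reward
    return total
```

For example, `rollout(PointMassEnv(), lambda obs, rng: rng.uniform(-1, 1))` runs one episode under a random policy; a real training setup would swap the toy integrator for one of the reviewed engines behind the same interface.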
Related papers
- Multi-Agent Reinforcement Learning for Autonomous Driving: A Survey [14.73689900685646]
Reinforcement Learning (RL) is a potent tool for sequential decision-making and has achieved performance surpassing human capabilities.
As an extension of RL to the multi-agent domain, multi-agent RL (MARL) must learn not only the control policy but also how to account for interactions with all other agents in the environment.
Simulators are crucial for obtaining realistic data, which is fundamental to RL.
arXiv Detail & Related papers (2024-08-19T03:31:20Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reducing the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- Scilab-RL: A software framework for efficient reinforcement learning and cognitive modeling research [0.0]
Scilab-RL is a software framework for efficient research in cognitive modeling and reinforcement learning for robotic agents.
It focuses on goal-conditioned reinforcement learning using Stable Baselines 3 and the OpenAI gym interface.
We describe how these features enable researchers to conduct experiments with minimal time effort, thus maximizing research output.
arXiv Detail & Related papers (2024-01-25T19:49:02Z)
- Reinforcement-learning robotic sailboats: simulator and preliminary results [0.37918614538294315]
This work focuses on the main challenges and problems in developing a virtual oceanic environment reproducing real experiments using Unmanned Surface Vehicles (USV) digital twins.
We introduce the key features for building virtual worlds, with a focus on using Reinforcement Learning (RL) agents for autonomous navigation and control.
arXiv Detail & Related papers (2024-01-16T09:04:05Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- Reinforcement Learning with Human Feedback for Realistic Traffic Simulation [53.85002640149283]
A key element of effective simulation is the incorporation of realistic traffic models that align with human knowledge.
This study identifies two main challenges: capturing the nuances of human preferences on realism and the unification of diverse traffic simulation models.
arXiv Detail & Related papers (2023-09-01T19:29:53Z)
- SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via Differentiable Physics-Based Simulation and Rendering [49.78647219715034]
We propose a sensing-aware model-based reinforcement learning system called SAM-RL.
With the sensing-aware learning pipeline, SAM-RL allows a robot to select an informative viewpoint to monitor the task process.
We apply our framework to real-world experiments on three manipulation tasks: robotic assembly, tool manipulation, and deformable object manipulation.
arXiv Detail & Related papers (2022-10-27T05:30:43Z)
- PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z)
- Learning to Fly -- a Gym Environment with PyBullet Physics for Reinforcement Learning of Multi-agent Quadcopter Control [0.0]
We propose an open-source environment for multiple quadcopters based on the Bullet physics engine.
Its multi-agent and vision-based reinforcement learning interfaces, as well as its support for realistic collisions and aerodynamic effects, make it, to the best of our knowledge, a first of its kind.
arXiv Detail & Related papers (2021-03-03T02:47:59Z)
- Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration [130.89746032163106]
We propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data.
We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration.
We present an energy-model-guided fuzzer for software testing that achieves performance comparable to well-engineered fuzzing engines like libfuzzer.
arXiv Detail & Related papers (2020-11-10T19:31:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.