Learning global control of underactuated systems with Model-Based Reinforcement Learning
- URL: http://arxiv.org/abs/2504.06721v1
- Date: Wed, 09 Apr 2025 09:20:37 GMT
- Title: Learning global control of underactuated systems with Model-Based Reinforcement Learning
- Authors: Niccolò Turcato, Marco Calì, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, Diego Romeres
- Abstract summary: This paper describes our proposed solution for the third edition of the "AI Olympics with RealAIGym" competition, held at ICRA 2025. We employ Monte-Carlo Probabilistic Inference for Learning Control (MC-PILCO) for its exceptional data efficiency across various low-dimensional robotic tasks. MC-PILCO has won the first two editions of this competition, demonstrating its robustness in both simulated and real-world environments.
- Score: 7.4278142555507065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This short paper describes our proposed solution for the third edition of the "AI Olympics with RealAIGym" competition, held at ICRA 2025. We employed Monte-Carlo Probabilistic Inference for Learning Control (MC-PILCO), an MBRL algorithm recognized for its exceptional data efficiency across various low-dimensional robotic tasks, including cart-pole, ball & plate, and Furuta pendulum systems. MC-PILCO optimizes a system dynamics model using interaction data, enabling policy refinement through simulation rather than direct optimization on system data. This approach has proven highly effective in physical systems, offering greater data efficiency than Model-Free (MF) alternatives. Notably, MC-PILCO has previously won the first two editions of this competition, demonstrating its robustness in both simulated and real-world environments. Besides briefly reviewing the algorithm, we discuss the most critical aspects of the MC-PILCO implementation in the tasks at hand: learning a global policy for the pendubot and acrobot systems.
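Since the abstract only sketches the method, the following minimal Python sketch illustrates the particle-based policy update at the core of MC-PILCO. All names are illustrative: a fixed linear-Gaussian one-step model stands in for the GP posterior that MC-PILCO actually fits from interaction data, and the policy and cost are toy choices.

```python
# Minimal sketch of MC-PILCO's particle-based policy update (illustrative
# names; a fixed linear-Gaussian model stands in for the paper's GP dynamics).
import torch

torch.manual_seed(0)
state_dim, act_dim, n_particles, horizon = 2, 1, 50, 30

# Stand-in one-step dynamics: mean prediction plus Gaussian noise.
# In MC-PILCO this would be a GP posterior fitted on interaction data.
A = torch.tensor([[1.0, 0.05], [0.0, 1.0]])
B = torch.tensor([[0.0], [0.05]])
noise_std = 0.01

policy = torch.nn.Sequential(  # differentiable policy, e.g. a small MLP
    torch.nn.Linear(state_dim, 32), torch.nn.Tanh(), torch.nn.Linear(32, act_dim)
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(100):
    # Sample a batch of particles from the initial-state distribution.
    x = 0.1 * torch.randn(n_particles, state_dim)
    cost = 0.0
    for t in range(horizon):
        u = policy(x)
        mean = x @ A.T + u @ B.T
        # Reparameterization trick: sampling stays differentiable, so the
        # Monte Carlo cost estimate admits a pathwise gradient.
        x = mean + noise_std * torch.randn_like(mean)
        cost = cost + (x ** 2).sum(dim=1).mean()  # quadratic state cost
    opt.zero_grad()
    cost.backward()  # gradient of the MC estimate w.r.t. policy parameters
    opt.step()
```

The key point is the reparameterized sampling: because each particle transition is a differentiable function of the policy parameters, the Monte Carlo estimate of the expected cost can be minimized by ordinary gradient descent.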
Related papers
- Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics [50.191655141020505]
We introduce a novel framework for learning world models.
By providing a scalable and robust framework, we pave the way for adaptive and efficient robotic systems in real-world applications.
arXiv Detail & Related papers (2025-01-17T10:39:09Z)
- Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AMS mappings.
arXiv Detail & Related papers (2024-06-22T11:17:50Z)
- Dropout MPC: An Ensemble Neural MPC Approach for Systems with Learned Dynamics [0.0]
We propose a novel sampling-based ensemble neural MPC algorithm that employs the Monte-Carlo dropout technique on the learned system model.
The method aims in general at uncertain systems with complex dynamics, where models derived from first principles are hard to infer.
arXiv Detail & Related papers (2024-06-04T17:15:25Z)
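The Dropout MPC entry above centers on Monte-Carlo dropout; the toy sketch below shows that mechanism under assumed dynamics and hypothetical names (this is not the authors' code). Keeping dropout active at planning time turns a single learned model into an implicit ensemble, so each candidate action sequence can be scored by its average cost across stochastic forward passes.

```python
# Rough sketch of the Monte-Carlo dropout idea behind Dropout MPC.
import torch

state_dim, act_dim = 3, 1

model = torch.nn.Sequential(          # learned one-step dynamics model
    torch.nn.Linear(state_dim + act_dim, 64),
    torch.nn.ReLU(),
    torch.nn.Dropout(p=0.1),          # dropout kept ON at planning time
    torch.nn.Linear(64, state_dim),
)
model.train()  # train mode so Dropout keeps sampling masks at inference

def rollout_cost(x0, actions, n_dropout_samples=10):
    """Average cost of one action sequence over stochastic dropout passes."""
    costs = []
    for _ in range(n_dropout_samples):
        x, c = x0.clone(), 0.0
        for u in actions:
            x = model(torch.cat([x, u]))      # each pass uses a fresh mask
            c = c + float((x ** 2).sum())
        costs.append(c)
    return sum(costs) / len(costs)

# Sampling-based MPC: draw random action sequences, keep the best.
x0 = torch.zeros(state_dim)
candidates = [torch.randn(10, act_dim) * 0.5 for _ in range(64)]
with torch.no_grad():
    best = min(candidates, key=lambda seq: rollout_cost(x0, seq))
u_applied = best[0]  # apply the first action, then replan (receding horizon)
```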
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- SALMON: Self-Alignment with Instructable Reward Models [80.83323636730341]
This paper presents a novel approach, namely SALMON, to align base language models with minimal human supervision.
We develop an AI assistant named Dromedary-2 with only 6 exemplars for in-context learning and 31 human-defined principles.
arXiv Detail & Related papers (2023-10-09T17:56:53Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [65.57123249246358]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control [46.81433026280051]
We present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems.
Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions.
arXiv Detail & Related papers (2022-10-23T00:45:05Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot quadruped and a radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
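The entry above describes gradient-based trajectory optimization through a differentiable learned model. The sketch below shows the generic pattern under toy assumptions (an untrained stand-in network and a quadratic goal cost); the paper's actual models are trained on Spot and RC-car data.

```python
# Sketch of gradient-based trajectory optimization through a learned,
# differentiable dynamics model (illustrative; the network is assumed
# to have already been fitted to data, as in the paper).
import torch

state_dim, act_dim, horizon = 4, 2, 20

dynamics = torch.nn.Sequential(       # differentiable learned model
    torch.nn.Linear(state_dim + act_dim, 128),
    torch.nn.Tanh(),
    torch.nn.Linear(128, state_dim),
)

x0 = torch.zeros(state_dim)
goal = torch.ones(state_dim)
actions = torch.zeros(horizon, act_dim, requires_grad=True)
opt = torch.optim.Adam([actions], lr=0.05)

for it in range(200):
    x, cost = x0, torch.tensor(0.0)
    for t in range(horizon):
        x = dynamics(torch.cat([x, actions[t]]))  # unroll the model
        cost = cost + ((x - goal) ** 2).sum() + 1e-3 * (actions[t] ** 2).sum()
    opt.zero_grad()
    cost.backward()   # gradients flow through the entire rollout
    opt.step()
```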
- KNODE-MPC: A Knowledge-based Data-driven Predictive Control Framework for Aerial Robots [5.897728689802829]
We make use of a deep learning tool, knowledge-based neural ordinary differential equations (KNODE), to augment a model obtained from first principles.
The resulting hybrid model encompasses both a nominal first-principle model and a neural network learnt from simulated or real-world experimental data.
To improve closed-loop performance, the hybrid model is integrated into a novel MPC framework, known as KNODE-MPC.
arXiv Detail & Related papers (2021-09-10T12:09:18Z)
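As a rough illustration of the hybrid structure described in the KNODE-MPC entry above, the sketch below combines a hypothetical first-principles model with a learned residual network; a forward-Euler step stands in for the neural-ODE integration used in the paper.

```python
# Minimal sketch of the KNODE-style hybrid model: a nominal first-principles
# ODE augmented by a learned residual (hypothetical dynamics; the paper
# targets quadrotors and uses proper neural-ODE integration).
import torch

def f_nominal(x, u):
    """First-principles part: a toy double integrator with force input u."""
    pos, vel = x[..., 0:1], x[..., 1:2]
    return torch.cat([vel, u], dim=-1)   # d(pos)/dt = vel, d(vel)/dt = u

residual = torch.nn.Sequential(   # learned correction for unmodeled effects
    torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2)
)

def f_hybrid(x, u):
    return f_nominal(x, u) + residual(torch.cat([x, u], dim=-1))

def step(x, u, dt=0.02):
    return x + dt * f_hybrid(x, u)   # forward-Euler stand-in for an ODE solver

# The hybrid model then replaces the nominal one inside the MPC rollout.
x = torch.zeros(1, 2)
u = torch.full((1, 1), 0.5)
x_next = step(x, u)
```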
- Model-Based Policy Search Using Monte Carlo Gradient Estimation with Real Systems Application [12.854118767247453]
We present a Model-Based Reinforcement Learning (MBRL) algorithm named Monte Carlo Probabilistic Inference for Learning COntrol (MC-PILCO).
The algorithm relies on Gaussian Processes (GPs) to model the system dynamics and on a Monte Carlo approach to estimate the policy gradient.
Numerical comparisons in a simulated cart-pole environment show that MC-PILCO exhibits better data efficiency and control performance than state-of-the-art GP-based MBRL algorithms.
arXiv Detail & Related papers (2021-01-28T17:01:15Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- Reinforcement Learning Control of Robotic Knee with Human in the Loop by Flexible Policy Iteration [17.365135977882215]
This study fills important gaps by introducing innovative features into the policy iteration algorithm.
We show system-level performance guarantees, including convergence of the approximate value function, (sub)optimality of the solution, and stability of the system.
arXiv Detail & Related papers (2020-06-16T09:09:48Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
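The information-theoretic MPC this entry builds on is commonly instantiated as MPPI; the toy sketch below shows its entropy-regularized, softmax-weighted update on an assumed scalar system. It illustrates only the MPC side of the connection, not the paper's Q-learning algorithm.

```python
# Toy sketch of an information-theoretic (MPPI-style) control update:
# sampled control perturbations are re-weighted by the softmax of their
# negative costs (assumed scalar dynamics; illustrative only).
import numpy as np

rng = np.random.default_rng(0)
horizon, n_samples, lam = 15, 256, 1.0   # lam = temperature

def cost(traj_controls, x0=0.0):
    """Toy scalar system x' = x + 0.1*u with a quadratic running cost."""
    x, c = x0, 0.0
    for u in traj_controls:
        x = x + 0.1 * u
        c += x ** 2 + 0.01 * u ** 2
    return c

u_nominal = np.zeros(horizon)
noise = rng.normal(scale=0.5, size=(n_samples, horizon))
costs = np.array([cost(u_nominal + eps) for eps in noise])

# Entropy-regularized (softmax) weights over the sampled perturbations.
w = np.exp(-(costs - costs.min()) / lam)
w /= w.sum()
u_nominal = u_nominal + w @ noise   # information-theoretic update
```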