Bayesian Optimization for Automatic Tuning of Torque-Level Nonlinear Model Predictive Control
- URL: http://arxiv.org/abs/2512.03772v1
- Date: Wed, 03 Dec 2025 13:19:42 GMT
- Title: Bayesian Optimization for Automatic Tuning of Torque-Level Nonlinear Model Predictive Control
- Authors: Gabriele Fadini, Deepak Ingole, Tong Duy Son, Alisa Rupenyan
- Abstract summary: This paper presents an auto-tuning framework for torque-based Nonlinear Model Predictive Control (nMPC). The MPC serves as a real-time controller for optimal joint torque commands.
- Score: 2.907225673486874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an auto-tuning framework for torque-based Nonlinear Model Predictive Control (nMPC), where the MPC serves as a real-time controller for optimal joint torque commands. The MPC parameters, including cost function weights and low-level controller gains, are optimized using high-dimensional Bayesian Optimization (BO) techniques, specifically Sparse Axis-Aligned Subspace BO (SAASBO) with a digital twin (DT), to achieve precise real-time end-effector trajectory tracking on a UR10e robot arm. The simulation model allows efficient exploration of the high-dimensional parameter space, and it ensures safe transfer to hardware. Our simulation results demonstrate significant improvements in tracking performance (+41.9%) and a reduction in solve times (-2.5%) compared to manually tuned parameters. Moreover, experimental validation on the real robot follows the same trend (with a +25.8% improvement), emphasizing the importance of digital twin-enabled automated parameter optimization for robotic operations.
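The loop described in the abstract, proposing candidate controller parameters, scoring them on a simulation, and fitting a surrogate to pick the next candidate, can be sketched with a plain Gaussian-process BO over hypothetical MPC cost weights. This is a minimal illustration, not the paper's SAASBO method: the quadratic `tracking_cost` stands in for a digital-twin rollout, and the three weights, their bounds, and the kernel length scale are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

def tracking_cost(w):
    # Stand-in for the digital-twin rollout: maps candidate MPC weights to a
    # tracking-error score (lower is better). Hypothetical optimum at `target`.
    target = np.array([2.0, 0.5, 1.5])
    return float(np.sum((w - target) ** 2))

def rbf_kernel(A, B, length=1.0):
    # Squared-exponential kernel between row vectors of A (n,d) and B (m,d).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression: posterior mean and std at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, 0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for minimisation: expected amount by which a candidate beats `best`.
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
bounds = np.array([[0.0, 5.0]] * 3)                     # three assumed cost weights
X = rng.uniform(bounds[:, 0], bounds[:, 1], (5, 3))     # initial random design
y = np.array([tracking_cost(x) for x in X])

for _ in range(25):                                     # BO loop
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], (512, 3))
    mu, sigma = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, tracking_cost(x_next))

print("best weights:", X[np.argmin(y)], "cost:", y.min())
```

In the paper's setting, each `tracking_cost` evaluation would instead run the nMPC on the simulated robot and return a tracking-error metric, and SAASBO's sparse axis-aligned priors would replace the plain RBF surrogate to cope with the high-dimensional parameter space.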
Related papers
- CRoSS: A Continual Robotic Simulation Suite for Scalable Reinforcement Learning with High Task Diversity and Realistic Physics Simulation [46.950823663585425]
Continual reinforcement learning requires agents to learn from a sequence of tasks without forgetting previously acquired policies. We introduce a novel benchmark suite for CRL based on realistically simulated robots in the Gazebo simulator.
arXiv Detail & Related papers (2026-02-04T18:54:26Z) - Data-Driven Dynamic Parameter Learning of manipulator robots [0.8679862302950613]
We propose a Transformer-based approach for dynamic parameter estimation. The dataset consists of 8,192 robots with varied inertial and frictional properties. Our model effectively captures both temporal and spatial dependencies.
arXiv Detail & Related papers (2025-12-09T16:15:58Z) - Fantastic Pretraining Optimizers and Where to Find Them [59.56075036649332]
AdamW has long been the dominant optimizer in language model pretraining. The speedup of matrix-based optimizers is inversely proportional to model scale.
arXiv Detail & Related papers (2025-09-02T07:43:22Z) - Automated Optimization of Laser Fields for Quantum State Manipulation [0.0]
A gradient-based optimization approach combined with automatic differentiation is employed to ensure high accuracy and scalability. The framework serves as a universal and experimentally applicable tool for automated control pulse design in quantum systems.
arXiv Detail & Related papers (2025-06-10T06:17:37Z) - Towards hyperparameter-free optimization with differential privacy [9.193537596304669]
Differential privacy (DP) is a privacy-preserving paradigm that protects the training data when training deep learning models. In this work, we adapt the automatic learning rate schedule to DP optimization for any model and achieve state-of-the-art DP performance on various language and vision tasks.
arXiv Detail & Related papers (2025-03-02T02:59:52Z) - Autotuning Bipedal Locomotion MPC with GRFM-Net for Efficient Sim-to-Real Transfer [10.52309107195141]
We address the challenges of parameter selection in bipedal locomotion control using DiffTune.
A major difficulty lies in balancing model fidelity with differentiability.
We validate the parameters learned by DiffTune with GRFM-Net in hardware experiments.
arXiv Detail & Related papers (2024-09-24T03:58:18Z) - Learning IMM Filter Parameters from Measurements using Gradient Descent [45.335821132209766]
The intrinsic parameters of targets under track can be completely unobservable until the system is deployed.
With state-of-the-art sensor systems growing more and more complex, the number of parameters naturally increases.
In this paper, the parameters of an interacting multiple model (IMM) filter are optimized solely using measurements.
arXiv Detail & Related papers (2023-07-13T08:35:40Z) - Tuning Legged Locomotion Controllers via Safe Bayesian Optimization [47.87675010450171]
This paper presents a data-driven strategy to streamline the deployment of model-based controllers in legged robotic hardware platforms.
We leverage a model-free safe learning algorithm to automate the tuning of control gains, addressing the mismatch between the simplified model used in the control formulation and the real system.
arXiv Detail & Related papers (2023-06-12T13:10:14Z) - Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning [91.5113227694443]
We propose a novel Sensitivity-aware visual Parameter-efficient fine-Tuning (SPT) scheme.
SPT allocates trainable parameters to task-specific important positions.
Experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing PEFT methods.
arXiv Detail & Related papers (2023-03-15T12:34:24Z) - Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning [72.83293818245978]
We design and learn a neural network (NN)-based auto-tuner for hyper-parameter tuning in sparse Bayesian learning.
We show that considerable improvement in convergence rate and recovery performance can be achieved.
arXiv Detail & Related papers (2022-11-09T12:34:59Z) - AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient
Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z) - Bayesian Optimization Meets Hybrid Zero Dynamics: Safe Parameter
Learning for Bipedal Locomotion Control [17.37169551675587]
We propose a multi-domain control parameter learning framework for locomotion control of bipedal robots.
We leverage BO to learn the control parameters used in the HZD-based controller.
Next, the learning process is applied on the physical robot to learn for corrections to the control parameters learned in simulation.
arXiv Detail & Related papers (2022-03-04T20:48:17Z) - Amortized Auto-Tuning: Cost-Efficient Transfer Optimization for
Hyperparameter Recommendation [83.85021205445662]
We propose amortized auto-tuning (AT2) to speed up tuning of machine learning models.
We conduct a thorough analysis of the multi-task multi-fidelity Bayesian optimization framework, which leads to the best instantiation of AT2.
arXiv Detail & Related papers (2021-06-17T00:01:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.