TARC: Time-Adaptive Robotic Control
- URL: http://arxiv.org/abs/2510.23176v1
- Date: Mon, 27 Oct 2025 10:10:19 GMT
- Title: TARC: Time-Adaptive Robotic Control
- Authors: Arnav Sukhija, Lenart Treven, Jin Cheng, Florian Dörfler, Stelian Coros, Andreas Krause
- Abstract summary: Fixed-frequency control in robotics imposes a trade-off between the efficiency of low-frequency control and the robustness of high-frequency control. We address this with a reinforcement learning approach in which policies jointly select control actions and their application durations. We validate our method with zero-shot sim-to-real experiments on two distinct hardware platforms.
- Score: 48.61871569444481
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fixed-frequency control in robotics imposes a trade-off between the efficiency of low-frequency control and the robustness of high-frequency control, a limitation not seen in adaptable biological systems. We address this with a reinforcement learning approach in which policies jointly select control actions and their application durations, enabling robots to autonomously modulate their control frequency in response to situational demands. We validate our method with zero-shot sim-to-real experiments on two distinct hardware platforms: a high-speed RC car and a quadrupedal robot. Our method matches or outperforms fixed-frequency baselines in terms of rewards while significantly reducing the control frequency and exhibiting adaptive frequency control under real-world conditions.
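The core idea described in the abstract, a policy that outputs both a control action and how long to hold it, can be sketched as follows. This is an illustrative sketch under assumed interfaces, not the paper's implementation: the function names, the toy double-integrator dynamics, and the hand-crafted policy are all hypothetical.

```python
import numpy as np

def time_adaptive_rollout(policy, dynamics, x0, horizon_s=1.0, dt_sim=0.001):
    """Roll out a policy that returns (action, hold_duration) pairs.

    The action is held (zero-order hold) for the chosen duration, so the
    effective control frequency varies with the state.
    """
    x, t, log = x0, 0.0, []
    while t < horizon_s:
        u, hold = policy(x)  # action and its application duration
        hold = float(np.clip(hold, dt_sim, horizon_s - t))
        steps = max(1, int(round(hold / dt_sim)))
        for _ in range(steps):  # integrate at the fine simulation step
            x = dynamics(x, u, dt_sim)
        t += steps * dt_sim
        log.append((t, u, hold))
    return x, log

# Toy double-integrator dynamics (hypothetical, for illustration only).
def dynamics(x, u, dt):
    pos, vel = x
    return np.array([pos + vel * dt, vel + u * dt])

# Hand-crafted stand-in for a learned policy: PD-like action, with shorter
# holds (higher control frequency) when the system is moving fast.
def policy(x):
    u = -2.0 * x[0] - 1.5 * x[1]
    hold = 0.01 if abs(x[1]) > 0.5 else 0.05
    return u, hold
```

Compared with a fixed 1 kHz controller, the rollout above makes far fewer policy decisions over the same horizon, which is the efficiency side of the trade-off the paper targets.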
Related papers
- RT-HCP: Dealing with Inference Delays and Sample Efficiency to Learn Directly on Robotic Platforms [16.18687520299694]
Learning a controller directly on the robot requires extreme sample efficiency. We propose RT-HCP, an algorithm that offers an excellent trade-off between performance, sample efficiency and inference time. We validate the superiority of RT-HCP with experiments where we learn a controller directly on a simple but high-frequency pendulum platform.
arXiv Detail & Related papers (2025-09-08T14:09:33Z) - Feedback-MPPI: Fast Sampling-Based MPC via Rollout Differentiation -- Adios low-level controllers [0.9674641730446749]
Model Predictive Path Integral control is a powerful sampling-based approach suitable for complex robotic tasks. This paper introduces robust feedback gains derived from the sensitivities used in gradient-based MPC. We demonstrate the effectiveness of F-MPPI in simulations and through real-world experiments on two robotic platforms.
arXiv Detail & Related papers (2025-06-17T07:47:33Z) - Real Time Control of Tandem-Wing Experimental Platform Using Concerto Reinforcement Learning [0.0]
This paper introduces the CRL2RT algorithm, an advanced reinforcement learning method aimed at improving the real-time control performance of the Direct-Drive Tandem-Wing Experimental Platform (DDTWEP). Results show that CRL2RT achieves a control frequency surpassing 2500 Hz on standard CPUs.
arXiv Detail & Related papers (2025-02-08T03:46:40Z) - Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that an adaptive control resolution, in combination with value decomposition, produces simple critic-only algorithms that yield surprisingly strong performance on continuous control tasks.
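The coarse-to-fine idea above can be illustrated with a minimal sketch (hypothetical code, not the Growing Q-Networks implementation): discretizing a continuous action range into 2^level + 1 evenly spaced actions yields nested grids, so every coarse action survives at each finer resolution.

```python
import numpy as np

def action_grid(low, high, level):
    """Discretize [low, high] into 2**level + 1 evenly spaced actions.

    Grids at successive levels are nested: each coarse action reappears
    in every finer grid, so a value function learned at coarse resolution
    remains meaningful as the action space grows.
    """
    return np.linspace(low, high, 2 ** level + 1)

coarse = action_grid(-1.0, 1.0, 1)  # 3 actions: bang-bang plus zero
fine = action_grid(-1.0, 1.0, 3)    # 9 actions: finer control resolution
```

Training would begin with the coarse grid and switch to finer grids over time; the nesting property is what makes that hand-off cheap.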
arXiv Detail & Related papers (2024-04-05T17:58:37Z) - Deployable Reinforcement Learning with Variable Control Rate [14.838483990647697]
We propose a variant of Reinforcement Learning (RL) with variable control rate.
In this approach, the policy decides the action the agent should take as well as the duration of the time step associated with that action.
We show the efficacy of SEAC through a proof-of-concept simulation driving an agent with Newtonian kinematics.
arXiv Detail & Related papers (2024-01-17T15:40:11Z) - Tuning Legged Locomotion Controllers via Safe Bayesian Optimization [47.87675010450171]
This paper presents a data-driven strategy to streamline the deployment of model-based controllers in legged robotic hardware platforms.
We leverage a model-free safe learning algorithm to automate the tuning of control gains, addressing the mismatch between the simplified model used in the control formulation and the real system.
arXiv Detail & Related papers (2023-06-12T13:10:14Z) - Adaptive PD Control using Deep Reinforcement Learning for Local-Remote Teleoperation with Stochastic Time Delays [5.977871949434069]
Local-remote systems allow robots to execute complex tasks in hazardous environments.
Time delays can compromise system performance and stability.
We introduce an adaptive control method employing reinforcement learning to tackle the time-delayed control problem.
arXiv Detail & Related papers (2023-05-26T14:34:45Z) - Policy Search for Model Predictive Control with Application to Agile Drone Flight [56.24908013905407]
We propose a policy-search-for-model-predictive-control framework for MPC.
Specifically, we formulate the MPC as a parameterized controller, where the hard-to-optimize decision variables are represented as high-level policies.
Experiments show that our controller achieves robust and real-time control performance in both simulation and the real world.
arXiv Detail & Related papers (2021-12-07T17:39:24Z) - Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller then utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z) - Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum resulting in an optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.