Volumetric Ergodic Control
- URL: http://arxiv.org/abs/2511.11533v1
- Date: Fri, 14 Nov 2025 18:10:40 GMT
- Title: Volumetric Ergodic Control
- Authors: Jueun Kwon, Max M. Sun, Todd Murphey
- Abstract summary: We introduce a new ergodic control formulation that optimizes spatial coverage using a volumetric state representation. Our method preserves the coverage guarantees of ergodic control, adds minimal computational overhead for real-time control, and supports arbitrary sample-based volumetric models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ergodic control synthesizes optimal coverage behaviors over spatial distributions for nonlinear systems. However, existing formulations model the robot as a non-volumetric point, but in practice a robot interacts with the environment through its body and sensors with physical volume. In this work, we introduce a new ergodic control formulation that optimizes spatial coverage using a volumetric state representation. Our method preserves the asymptotic coverage guarantees of ergodic control, adds minimal computational overhead for real-time control, and supports arbitrary sample-based volumetric models. We evaluate our method across search and manipulation tasks -- with multiple robot dynamics and end-effector geometries or sensor models -- and show that it improves coverage efficiency by more than a factor of two while maintaining a 100% task completion rate across all experiments, outperforming the standard ergodic control method. Finally, we demonstrate the effectiveness of our method on a robot arm performing mechanical erasing tasks.
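The abstract does not give the formulation itself, but the standard spectral ergodic metric it builds on is well known: a trajectory's time-averaged Fourier coefficients are compared against those of the target spatial distribution. Below is a minimal NumPy sketch of that metric, with the volumetric idea approximated by letting each pose contribute a cloud of sampled body points instead of a single point. All function names, the cosine basis on the unit square, and the `body_coeffs` reading of "volumetric state representation" are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fourier_coeffs(points, weights, n_modes, L=1.0):
    """Cosine-basis Fourier coefficients of a weighted empirical
    distribution over [0, L]^2 (standard choice in ergodic control)."""
    ks = np.array([(i, j) for i in range(n_modes) for j in range(n_modes)])
    # Normalization h_k making the basis orthonormal:
    # h_k^2 = prod_dims (L if k_i == 0 else L/2)
    hk = np.where(ks == 0, 1.0, 0.5)
    hk = np.sqrt(L * L * hk[:, 0] * hk[:, 1])
    # f_k(x) = cos(k1*pi*x1/L) * cos(k2*pi*x2/L) / h_k, shape (K, N)
    fk = (np.cos(np.pi * ks[:, 0, None] * points[:, 0] / L)
          * np.cos(np.pi * ks[:, 1, None] * points[:, 1] / L)) / hk[:, None]
    return fk @ weights  # c_k = sum_n w_n f_k(x_n)

def ergodic_metric(traj_coeffs, target_coeffs, n_modes):
    """Sobolev-weighted squared distance between coefficient vectors:
    sum_k Lambda_k (c_k - phi_k)^2 with Lambda_k = (1 + |k|^2)^-(3/2)."""
    ks = np.array([(i, j) for i in range(n_modes) for j in range(n_modes)])
    lam = (1.0 + np.sum(ks ** 2, axis=1)) ** -1.5
    return float(np.sum(lam * (traj_coeffs - target_coeffs) ** 2))

def body_coeffs(poses, body_offsets, n_modes, L=1.0):
    """Hypothetical volumetric variant: each trajectory pose contributes
    a set of body-sample points (pose + offset), so the coverage statistic
    reflects the robot's physical extent rather than a point."""
    pts = (poses[:, None, :] + body_offsets[None, :, :]).reshape(-1, 2)
    w = np.full(len(pts), 1.0 / len(pts))
    return fourier_coeffs(pts, w, n_modes, L)
```

A trajectory that matches the target distribution drives the metric toward zero, which is the sense in which ergodic control gives asymptotic coverage guarantees; the volumetric variant only changes how the trajectory's empirical distribution is formed.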
Related papers
- ULTRA: Unified Multimodal Control for Autonomous Humanoid Whole-Body Loco-Manipulation [55.467742403416175]
We introduce a physics-driven neural algorithm that translates large-scale motion capture to humanoid embodiments. We learn a unified multimodal controller that supports both dense references and sparse task specifications. Results show that ULTRA generalizes to autonomous, goal-conditioned whole-body loco-manipulation from egocentric perception.
arXiv Detail & Related papers (2026-03-03T18:59:29Z) - Efficient Surgical Robotic Instrument Pose Reconstruction in Real World Conditions Using Unified Feature Detection [21.460727996614704]
MIS robots have long kinematic chains and partial visibility of their degrees of freedom in the camera. We propose a novel framework that unifies the detection of geometric primitives through a shared encoding. This architecture detects both keypoints and edges in a single inference and is trained on large-scale synthetic data with projective labeling.
arXiv Detail & Related papers (2025-10-03T22:03:28Z) - Hysteresis-Aware Neural Network Modeling and Whole-Body Reinforcement Learning Control of Soft Robots [14.02771001060961]
We present a soft robotic system designed for surgical applications. We propose a whole-body neural network model that accurately captures and predicts the soft robot's whole-body motion. The proposed method showed strong performance in phantom-based surgical experiments.
arXiv Detail & Related papers (2025-04-18T09:34:56Z) - Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics [50.191655141020505]
This work advances model-based reinforcement learning by addressing the challenges of long-horizon prediction, error accumulation, and sim-to-real transfer. By providing a scalable and robust framework, the introduced methods pave the way for adaptive and efficient robotic systems in real-world applications.
arXiv Detail & Related papers (2025-01-17T10:39:09Z) - Distributed Robust Learning based Formation Control of Mobile Robots based on Bioinspired Neural Dynamics [14.149584412213269]
We first introduce a distributed estimator using a variable structure and cascaded design technique, eliminating the need for derivative information and improving real-time performance.
Then, a kinematic tracking control method is developed utilizing a bioinspired neural dynamic-based approach aimed at providing smooth control inputs and effectively resolving the speed jump issue.
To address the challenges for robots operating with completely unknown dynamics and disturbances, a learning-based robust dynamic controller is developed.
arXiv Detail & Related papers (2024-03-23T04:36:12Z) - ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation [58.615616224739654]
Conventional robotic manipulation methods usually learn a semantic representation of the observation for prediction.
We propose a dynamic Gaussian Splatting method named ManiGaussian for multi-task robotic manipulation.
Our framework can outperform the state-of-the-art methods by 13.1% in average success rate.
arXiv Detail & Related papers (2024-03-13T08:06:41Z) - Tuning Legged Locomotion Controllers via Safe Bayesian Optimization [47.87675010450171]
This paper presents a data-driven strategy to streamline the deployment of model-based controllers in legged robotic hardware platforms.
We leverage a model-free safe learning algorithm to automate the tuning of control gains, addressing the mismatch between the simplified model used in the control formulation and the real system.
arXiv Detail & Related papers (2023-06-12T13:10:14Z) - Safe Machine-Learning-supported Model Predictive Force and Motion Control in Robotics [0.0]
Many robotic tasks, such as human-robot interactions or the handling of fragile objects, require tight control and limitation of appearing forces and moments alongside motion control to achieve safe yet high-performance operation.
We propose a learning-supported model predictive force and motion control scheme that provides safety guarantees while adapting to changing situations.
arXiv Detail & Related papers (2023-03-08T13:30:02Z) - Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z) - OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
arXiv Detail & Related papers (2021-10-02T01:21:38Z) - Attainment Regions in Feature-Parameter Space for High-Level Debugging in Autonomous Robots [8.147652597876862]
A performance function gives us insights into the behaviour of the robot.
In high-dimensional systems, where the action-state space is large, fine-tuning a controller is non-trivial.
We propose a performance function whose domain is defined by external features and parameters of the controller.
arXiv Detail & Related papers (2021-08-06T14:45:57Z) - Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach, while reducing the executed movement by about half.
arXiv Detail & Related papers (2021-01-26T16:01:02Z) - Learning Compliance Adaptation in Contact-Rich Manipulation [81.40695846555955]
We propose a novel approach for learning predictive models of force profiles required for contact-rich tasks.
The approach combines an anomaly detection based on Bidirectional Gated Recurrent Units (Bi-GRU) and an adaptive force/impedance controller.
arXiv Detail & Related papers (2020-05-01T05:23:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.