Interaction-Aware Whole-Body Control for Compliant Object Transport
- URL: http://arxiv.org/abs/2603.03751v1
- Date: Wed, 04 Mar 2026 05:50:40 GMT
- Title: Interaction-Aware Whole-Body Control for Compliant Object Transport
- Authors: Hao Zhang, Yves Tseng, Ding Zhao, H. Eric Tseng
- Abstract summary: This paper proposes an interaction-oriented whole-body control (IO-WBC) that functions as an artificial cerebellum. IO-WBC translates upstream (skill-level) commands into stable, physically consistent whole-body behavior under contact.
- Score: 33.233203393813376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative object transport in unstructured environments remains challenging for assistive humanoids because strong, time-varying interaction forces can make tracking-centric whole-body control unreliable, especially in close-contact support tasks. This paper proposes a bio-inspired, interaction-oriented whole-body control (IO-WBC) that functions as an artificial cerebellum - an adaptive motor agent that translates upstream (skill-level) commands into stable, physically consistent whole-body behavior under contact. This work structurally separates upper-body interaction execution from lower-body support control, enabling the robot to maintain balance while shaping force exchange in a tightly coupled robot-object system. A trajectory-optimized reference generator (RG) provides a kinematic prior, while a reinforcement learning (RL) policy governs body responses under heavy-load interactions and disturbances. The policy is trained in simulation with randomized payload mass/inertia and external perturbations, and deployed via asymmetric teacher-student distillation so that the student relies only on proprioceptive histories at runtime. Extensive experiments demonstrate that IO-WBC maintains stable whole-body behavior and physical interaction even when precise velocity tracking becomes infeasible, enabling compliant object transport across a wide range of scenarios.
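The training recipe described in the abstract (randomized payload mass/inertia in simulation, plus an asymmetric teacher-student setup in which the deployed student observes only proprioceptive histories) can be sketched as follows. All ranges, dimensions, and function names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_payload():
    """Domain randomization of the payload: mass and a diagonal
    inertia are resampled per episode. Ranges are assumptions."""
    mass = rng.uniform(0.5, 10.0)              # kg
    inertia = rng.uniform(0.01, 0.5, size=3)   # kg*m^2, diagonal terms
    return mass, inertia

def student_obs(proprio_history):
    """The student policy sees only a stacked proprioceptive history
    (no privileged payload parameters), per the asymmetric
    teacher-student distillation described in the abstract."""
    return np.concatenate(proprio_history[-5:])  # last 5 frames

mass, inertia = sample_payload()
history = [rng.standard_normal(12) for _ in range(8)]  # 12-D proprio frames
obs = student_obs(history)
print(obs.shape)  # (60,)
```

At runtime the teacher (which saw the true payload parameters during training) is discarded, and only the proprioception-conditioned student runs on the robot.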
Related papers
- ULTRA: Unified Multimodal Control for Autonomous Humanoid Whole-Body Loco-Manipulation [55.467742403416175]
We introduce a physics-driven neural algorithm that translates large-scale motion capture to humanoid embodiments. We learn a unified multimodal controller that supports both dense references and sparse task specifications. Results show that ULTRA generalizes to autonomous, goal-conditioned whole-body loco-manipulation from egocentric perception.
arXiv Detail & Related papers (2026-03-03T18:59:29Z) - InterPrior: Scaling Generative Control for Physics-Based Human-Object Interactions [58.329946838699044]
Humans rarely plan whole-body interactions with objects at the level of explicit whole-body movements. Scaling such priors is key to enabling humanoids to compose and generalize loco-manipulation skills. We introduce InterPrior, a framework that learns a unified generative controller through large-scale imitation pretraining and post-training by reinforcement learning.
arXiv Detail & Related papers (2026-02-05T18:59:27Z) - CHIP: Adaptive Compliance for Humanoid Control through Hindsight Perturbation [70.5382178207975]
Adaptive Compliance for Humanoid Control through Hindsight Perturbation (CHIP) is a plug-and-play module that enables controllable end-effector stiffness. CHIP is easy to implement and requires neither data augmentation nor additional reward tuning. We show that a generalist motion-tracking controller trained with CHIP can perform a diverse set of forceful manipulation tasks.
arXiv Detail & Related papers (2025-12-16T18:56:04Z) - GentleHumanoid: Learning Upper-body Compliance for Contact-rich Human and Object Interaction [14.278503723930998]
GentleHumanoid is a framework that integrates impedance control into a whole-body motion-tracking policy to achieve upper-body compliance. We evaluate our approach in both simulation and on the Unitree G1 humanoid across tasks requiring different levels of compliance.
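For reference, the standard Cartesian impedance law that such compliance frameworks build on can be sketched as below; the gains and set-points here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def impedance_force(x, xd, x_des, xd_des, K, D):
    """Cartesian impedance law: F = K (x_des - x) + D (xd_des - xd).
    Lower stiffness K yields a softer, more compliant end effector."""
    return K @ (x_des - x) + D @ (xd_des - xd)

# Illustrative gains: soft translational stiffness and damping
K = np.diag([50.0, 50.0, 50.0])   # N/m
D = np.diag([10.0, 10.0, 10.0])   # N*s/m

x, xd = np.zeros(3), np.zeros(3)              # current pose, velocity
x_des, xd_des = np.array([0.0, 0.0, 0.1]), np.zeros(3)
F = impedance_force(x, xd, x_des, xd_des, K, D)
print(F)  # [0. 0. 5.]
```

A 0.1 m tracking error under a 50 N/m stiffness produces only 5 N of restoring force, which is what makes contact-rich interaction gentle rather than rigidly position-tracked.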
arXiv Detail & Related papers (2025-11-06T18:59:33Z) - HHI-Assist: A Dataset and Benchmark of Human-Human Interaction in Physical Assistance Scenario [63.77482302352545]
HHI-Assist is a dataset comprising motion capture clips of human-human interactions in assistive tasks. Our work has the potential to significantly enhance robotic assistance policies.
arXiv Detail & Related papers (2025-09-12T09:38:17Z) - Uncertainty Aware-Predictive Control Barrier Functions: Safer Human Robot Interaction through Probabilistic Motion Forecasting [13.020006323600251]
Uncertainty-Aware Predictive Control Barrier Functions (UA-PCBFs) fuse probabilistic human hand motion forecasting with the formal safety guarantees of Control Barrier Functions. UA-PCBFs empower collaborative robots with a deeper understanding of future human states. Relative to state-of-the-art HRI architectures, UA-PCBFs show better performance in task-critical metrics.
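As a rough illustration of the CBF mechanism the summary refers to, a one-dimensional safety filter on the robot-to-hand distance admits a closed form. Everything below (the single-integrator dynamics, the gain, the names) is a simplified assumption, not the UA-PCBF formulation itself:

```python
def cbf_filter(u_nom, h, alpha=1.0):
    """Minimal 1-D control barrier function filter for a single
    integrator d_dot = u, where h = d - d_min is the distance margin
    to the forecast human hand position. Enforcing h_dot >= -alpha*h
    keeps h >= 0 for all time; in 1-D the safety QP reduces to the
    closed-form clamp below. Gains and dynamics are illustrative."""
    return max(u_nom, -alpha * h)

# Nominal command drives toward the human (negative u shrinks distance)
print(cbf_filter(u_nom=-2.0, h=0.5))  # -0.5: clamped to stay safe
print(cbf_filter(u_nom=-0.2, h=0.5))  # -0.2: nominal command already safe
```

The filter only intervenes when the nominal controller would violate the barrier condition, which is why CBF layers compose cleanly with learned or forecasting-based planners.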
arXiv Detail & Related papers (2025-08-28T14:11:26Z) - Towards Immersive Human-X Interaction: A Real-Time Framework for Physically Plausible Motion Synthesis [51.95817740348585]
Human-X is a novel framework designed to enable immersive and physically plausible human interactions across diverse entities. Our method jointly predicts actions and reactions in real time using an auto-regressive reaction diffusion planner. Our framework is validated in real-world applications, including a virtual reality interface for human-robot interaction.
arXiv Detail & Related papers (2025-08-04T06:35:48Z) - Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary strategies are learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z) - Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z) - Decentralized Motion Planning for Multi-Robot Navigation using Deep Reinforcement Learning [0.41998444721319217]
This work presents a decentralized motion planning framework for addressing the task of multi-robot navigation using deep reinforcement learning.
The notion of decentralized motion planning with common and shared policy learning was adopted, which allowed robust training and testing of this approach.
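The shared-policy idea, one set of weights executed independently by every robot on its own local observation, can be sketched minimally as follows; the linear-tanh policy and all dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def shared_policy(obs, w):
    """Parameter sharing: every robot applies the same weights w to
    its own local observation, so training one policy scales to any
    number of agents. A linear-tanh policy is purely illustrative."""
    return np.tanh(w @ obs)

w = rng.standard_normal((2, 6))                          # shared weights -> 2-D velocity command
local_obs = [rng.standard_normal(6) for _ in range(4)]   # 4 robots, local sensing only
actions = [shared_policy(o, w) for o in local_obs]
print(len(actions), actions[0].shape)  # 4 (2,)
```

Because each agent acts only on local observations, the same trained weights deploy without a central coordinator, which is the decentralization the summary describes.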
arXiv Detail & Related papers (2020-11-11T07:35:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.