Prof. Robot: Differentiable Robot Rendering Without Static and Self-Collisions
- URL: http://arxiv.org/abs/2503.11269v2
- Date: Mon, 17 Mar 2025 09:17:10 GMT
- Title: Prof. Robot: Differentiable Robot Rendering Without Static and Self-Collisions
- Authors: Quanyuan Ruan, Jiabao Lei, Wenhao Yuan, Yanglin Zhang, Dekun Lu, Guiliang Liu, Kui Jia
- Abstract summary: We introduce a novel improvement on previous efforts by incorporating physical awareness of collisions through the learning of a neural robotic collision classifier. This enables the optimization of actions that avoid collisions with static, non-interactable environments as well as the robot itself.
- Score: 40.15055686410145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentiable rendering has gained significant attention in the field of robotics, with differentiable robot rendering emerging as an effective paradigm for learning robotic actions from image-space supervision. However, the lack of physical world perception in this approach may lead to potential collisions during action optimization. In this work, we introduce a novel improvement on previous efforts by incorporating physical awareness of collisions through the learning of a neural robotic collision classifier. This enables the optimization of actions that avoid collisions with static, non-interactable environments as well as the robot itself. To facilitate effective gradient optimization with the classifier, we identify the underlying issue and propose leveraging Eikonal regularization to ensure consistent gradients for optimization. Our solution can be seamlessly integrated into existing differentiable robot rendering frameworks, utilizing gradients for optimization and providing a foundation for future applications of differentiable rendering in robotics with improved reliability of interactions with the physical world. Both qualitative and quantitative experiments demonstrate the necessity and effectiveness of our method compared to previous solutions.
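The abstract's key technical idea is pairing a learned collision classifier with Eikonal regularization, which keeps the classifier's gradient magnitude near 1 so that gradient-based action optimization behaves consistently. The sketch below illustrates that Eikonal term on a toy stand-in classifier; the linear model, its parameters, and the finite-difference gradient are all illustrative assumptions, not the paper's actual architecture.

```python
import math

def collision_logit(q, w=(0.6, 0.8), b=-0.5):
    """Toy linear stand-in for the learned collision classifier:
    a positive logit means configuration q is predicted to collide.
    (w and b are made-up parameters for illustration only.)"""
    return sum(wi * qi for wi, qi in zip(w, q)) + b

def eikonal_penalty(q, f=collision_logit, eps=1e-4):
    """Finite-difference version of the Eikonal term: penalize the
    squared deviation of ||grad_q f|| from 1. Keeping the gradient
    near unit norm is what gives the optimizer consistent gradients."""
    grad = []
    for i in range(len(q)):
        qp = list(q); qp[i] += eps
        qm = list(q); qm[i] -= eps
        grad.append((f(qp) - f(qm)) / (2 * eps))
    return (math.sqrt(sum(g * g for g in grad)) - 1.0) ** 2
```

For the unit-norm toy weights above, the penalty is near zero everywhere; a real implementation would add this term (weighted) to the classifier's training loss and differentiate it with autograd rather than finite differences.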
Related papers
- Action Flow Matching for Continual Robot Learning [57.698553219660376]
Continual learning in robotics seeks systems that can constantly adapt to changing environments and tasks.
We introduce a generative framework leveraging flow matching for online robot dynamics model alignment.
We find that by transforming the actions themselves rather than exploring with a misaligned model, the robot collects informative data more efficiently.
arXiv Detail & Related papers (2025-04-25T16:26:15Z) - Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics [50.191655141020505]
We introduce a novel framework for learning world models. By providing a scalable and robust framework, we pave the way for adaptive and efficient robotic systems in real-world applications.
arXiv Detail & Related papers (2025-01-17T10:39:09Z) - Bayesian optimization for robust robotic grasping using a sensorized compliant hand [6.693397171872655]
We analyze different grasp metrics to provide realistic grasp optimization in a real system including tactile sensors.
An experimental evaluation in the robotic system shows the usefulness of the method for performing unknown object grasping.
arXiv Detail & Related papers (2024-10-23T19:33:14Z) - Robot Navigation with Entity-Based Collision Avoidance using Deep Reinforcement Learning [0.0]
We present a novel methodology that enhances the robot's interaction with different types of agents and obstacles.
This approach uses information about the entity types, improving collision avoidance and ensuring safer navigation.
We introduce a new reward function that penalizes the robot for collisions with different entities such as adults, bicyclists, children, and static obstacles.
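The entity-aware reward described above can be sketched as a progress term plus an entity-specific collision penalty. The penalty values and function shape below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical per-entity collision penalties: more vulnerable entities
# (e.g. children) incur larger penalties than static obstacles.
ENTITY_PENALTY = {"child": -2.0, "bicyclist": -1.5, "adult": -1.0, "static": -0.5}

def step_reward(progress, collided_with=None):
    """Per-step reward: progress toward the goal, plus an
    entity-specific penalty when a collision occurs this step."""
    penalty = ENTITY_PENALTY.get(collided_with, 0.0) if collided_with else 0.0
    return progress + penalty
```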
arXiv Detail & Related papers (2024-08-26T11:16:03Z) - DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556]
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we could efficiently and effectively generate robot data with minimal human effort or training time.
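DiffGen's objective, as summarized above, is a distance between a language-instruction embedding and a simulated-observation embedding, minimized through the differentiable simulator and renderer. A minimal sketch of one plausible choice of distance (cosine distance; the actual embedding models and metric are not specified here) follows:

```python
import math

def embedding_distance(lang_emb, obs_emb):
    """Cosine distance between a (hypothetical) language-instruction
    embedding and a simulated-observation embedding. A DiffGen-style
    pipeline would backpropagate this loss through rendering and
    physics to refine the demonstrated actions."""
    dot = sum(a * b for a, b in zip(lang_emb, obs_emb))
    na = math.sqrt(sum(a * a for a in lang_emb))
    nb = math.sqrt(sum(b * b for b in obs_emb))
    return 1.0 - dot / (na * nb)
```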
arXiv Detail & Related papers (2024-05-12T15:38:17Z) - DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z) - SERA: Safe and Efficient Reactive Obstacle Avoidance for Collaborative Robotic Planning in Unstructured Environments [1.5229257192293197]
We propose a novel methodology for reactive whole-body obstacle avoidance.
Our approach allows a robotic arm to proactively avoid obstacles of arbitrary 3D shapes without direct contact.
Our methodology provides a robust and effective solution for safe human-robot collaboration in non-stationary environments.
arXiv Detail & Related papers (2022-03-24T21:11:43Z) - Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and the inherent correlations across modalities for gesture recognition.
Results show that our approach recovers performance with substantial gains, up to 12.91% in accuracy and 20.16% in F1-score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z) - Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z) - Simultaneous View and Feature Selection for Collaborative Multi-Robot Perception [9.266151962328548]
Collaborative multi-robot perception provides multiple views of an environment.
These multiple observations must be intelligently fused for accurate recognition.
We propose a novel approach to collaborative multi-robot perception that simultaneously integrates view selection, feature selection, and object recognition.
arXiv Detail & Related papers (2020-12-17T00:01:05Z) - Neural fidelity warping for efficient robot morphology design [40.85746315602933]
We present a continuous multi-fidelity Bayesian Optimization framework that efficiently utilizes computational resources via low-fidelity evaluations.
Our method can utilize the low-fidelity evaluations to efficiently search for the optimal robot morphology, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-12-08T03:41:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.