Robot Vitals and Robot Health: Towards Systematically Quantifying
Runtime Performance Degradation in Robots Under Adverse Conditions
- URL: http://arxiv.org/abs/2207.01684v1
- Date: Mon, 4 Jul 2022 19:26:13 GMT
- Title: Robot Vitals and Robot Health: Towards Systematically Quantifying
Runtime Performance Degradation in Robots Under Adverse Conditions
- Authors: Aniketh Ramesh, Rustam Stolkin, Manolis Chiou
- Abstract summary: "Robot vitals" are indicators that estimate the extent of performance degradation faced by a robot.
"Robot health" is a metric that combines robot vitals into a single scalar value estimate of performance degradation.
- Score: 2.0625936401496237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the problem of automatically detecting and quantifying
performance degradation in remote mobile robots during task execution. A robot
may encounter a variety of uncertainties and adversities during task execution,
which can impair its ability to carry out tasks effectively and cause its
performance to degrade. Such situations can be mitigated or averted by timely
detection and intervention (e.g., by a remote human supervisor taking over
control in teleoperation mode). Inspired by patient triaging systems in
hospitals, we introduce the framework of "robot vitals" for estimating overall
"robot health". A robot's vitals are a set of indicators that estimate the
extent of performance degradation faced by a robot at a given point in time.
Robot health is a metric that combines robot vitals into a single scalar value
estimate of performance degradation. Experiments, both in simulation and on a
real mobile robot, demonstrate that the proposed robot vitals and robot health
can be used effectively to estimate robot performance degradation during
runtime.
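To make the vitals-to-health idea concrete, below is a minimal sketch in Python. The specific vital signals, the normalization to [0, 1], the equal weighting, and the takeover threshold are all illustrative assumptions for this sketch, not the paper's actual formulation.
```python
# Illustrative sketch of combining "robot vitals" into a scalar "robot health".
# The vitals, weights, and aggregation below are hypothetical assumptions,
# NOT the paper's exact definitions.
from dataclasses import dataclass

@dataclass
class RobotVitals:
    # Each vital is assumed normalized to [0, 1], where 1 = maximal degradation.
    localization_error: float
    velocity_mismatch: float   # commanded vs. achieved velocity
    obstacle_proximity: float
    imu_instability: float

def robot_health(vitals: RobotVitals, weights=None) -> float:
    """Combine vitals into a single scalar health estimate in [0, 1],
    where 1.0 = fully healthy and 0.0 = severely degraded."""
    values = [
        vitals.localization_error,
        vitals.velocity_mismatch,
        vitals.obstacle_proximity,
        vitals.imu_instability,
    ]
    if weights is None:
        weights = [1.0 / len(values)] * len(values)  # equal weighting (assumption)
    degradation = sum(w * v for w, v in zip(weights, values))
    return 1.0 - degradation

# Example: a remote supervisor could take over in teleoperation mode
# when estimated health drops below a deployment-specific threshold.
vitals = RobotVitals(0.2, 0.6, 0.1, 0.3)
if robot_health(vitals) < 0.7:
    print("Health low: request human takeover")
```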
Related papers
- Multi-Task Interactive Robot Fleet Learning with Visual World Models [25.001148860168477]
Sirius-Fleet is a multi-task interactive robot fleet learning framework.
It monitors robot performance during deployment and involves humans to correct the robot's actions when necessary.
As robot autonomy improves, anomaly predictors automatically adapt their prediction criteria.
arXiv Detail & Related papers (2024-10-30T04:49:39Z) - Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z) - Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real
and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - General-purpose foundation models for increased autonomy in
robot-assisted surgery [4.155479231940454]
This perspective article aims to provide a path toward increasing robot autonomy in robot-assisted surgery.
We argue that surgical robots are uniquely positioned to benefit from general-purpose models and provide three guiding actions toward increased autonomy in robot-assisted surgery.
arXiv Detail & Related papers (2024-01-01T06:15:16Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot
Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not offer a favorable robustness-accuracy trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace
for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z) - Fault-Aware Robust Control via Adversarial Reinforcement Learning [35.16413579212691]
We propose an adversarial reinforcement learning framework, which significantly increases robot robustness over joint damage cases.
We validate our algorithm on a three-fingered robot hand and a quadruped robot.
Our algorithm can be trained entirely in simulation and deployed directly on a real robot without any fine-tuning.
arXiv Detail & Related papers (2020-11-17T16:01:06Z) - Supportive Actions for Manipulation in Human-Robot Coworker Teams [15.978389978586414]
We term actions that support interaction by reducing future interference with others as supportive robot actions.
We compare two robot modes in a shared table pick-and-place task: (1) Task-oriented: the robot only takes actions to further its own task objective and (2) Supportive: the robot sometimes prefers supportive actions to task-oriented ones.
Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task.
arXiv Detail & Related papers (2020-05-02T09:37:10Z) - An Attention Transfer Model for Human-Assisted Failure Avoidance in
Robot Manipulations [2.745883395089022]
A novel human-to-robot attention transfer (H2R-AT) method was developed to identify robot manipulation errors.
H2R-AT was developed by fusing an attention mapping mechanism into a novel stacked neural network model.
The method's effectiveness was validated by a high accuracy of 73.68% in transferring attention and a high accuracy of 66.86% in avoiding grasping failures.
arXiv Detail & Related papers (2020-02-11T07:58:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.