Learn and Transfer Knowledge of Preferred Assistance Strategies in
Semi-autonomous Telemanipulation
- URL: http://arxiv.org/abs/2003.03516v2
- Date: Sat, 19 Dec 2020 20:21:40 GMT
- Title: Learn and Transfer Knowledge of Preferred Assistance Strategies in
Semi-autonomous Telemanipulation
- Authors: Lingfeng Tao, Michael Bowman, Xu Zhou, Jiucai Zhang, Xiaoli Zhang
- Abstract summary: We develop a novel preference-aware assistance knowledge learning approach.
An assistance preference model learns what assistance is preferred by a human.
We also develop knowledge transfer methods to transfer the preference knowledge across different robot hand structures.
- Score: 16.28164706104047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enabling robots to provide effective assistance while still accommodating the
operator's commands for telemanipulation of an object is very challenging,
because the robot's assistive actions are not always intuitive for human operators,
and human behaviors and preferences are sometimes ambiguous for the robot to
interpret. Although various assistance approaches are being developed to
improve control quality from different optimization perspectives, it remains
difficult to determine an approach that satisfies both the fine motion
constraints of the telemanipulation task and the preferences of the operator.
To address these problems, we developed a novel preference-aware assistance
knowledge learning approach. An assistance preference model learns what
assistance is preferred by a human, and a stagewise model-updating method
ensures learning stability while dealing with the ambiguity of human
preference data. This preference-aware assistance knowledge enables a
teleoperated robot hand to provide more active yet preferred assistance toward
manipulation success. We also developed knowledge transfer methods that transfer
the preference knowledge across different robot hand structures, avoiding
extensive robot-specific training. Experiments were conducted in which a
3-finger hand and a 2-finger hand were each teleoperated to use, move, and hand
over a cup. The results demonstrate that the methods enabled the robots to
effectively learn the preference knowledge and allowed knowledge transfer
between robots with less training effort.
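The abstract describes the preference model and its stagewise updates only at a high level. As a rough illustration only, the Python sketch below shows one plausible shape for such a component, assuming a logistic preference score over hand-crafted assistance features and updates committed only at stage boundaries so that noisy or ambiguous preference labels are averaged before they affect the model. All class, function, and parameter names are hypothetical and are not taken from the paper; the knowledge-transfer part of the method is not covered.

```python
# Minimal sketch (not the paper's implementation): a logistic preference model
# over candidate assistance actions, with a stagewise update that only commits
# new parameters once a full stage of interaction data has been collected.
import numpy as np

class AssistancePreferenceModel:
    def __init__(self, n_features, lr=0.1, stage_size=20):
        self.w = np.zeros(n_features)   # preference weights over assistance features
        self.lr = lr
        self.stage_size = stage_size    # interactions per stage before committing an update
        self._stage_buffer = []         # (features, preferred?) pairs for the current stage

    def score(self, assist_features):
        """Preference score of one candidate assistance action (higher = more preferred)."""
        return 1.0 / (1.0 + np.exp(-assist_features @ self.w))

    def choose(self, candidates):
        """Pick the candidate assistance action the operator is predicted to prefer."""
        return max(candidates, key=self.score)

    def record(self, assist_features, preferred):
        """Buffer one observed preference label; update only at stage boundaries."""
        self._stage_buffer.append((np.asarray(assist_features), float(preferred)))
        if len(self._stage_buffer) >= self.stage_size:
            self._commit_stage()

    def _commit_stage(self):
        # One batch gradient step over the whole stage: averaging over many
        # ambiguous labels before touching the weights keeps the update stable.
        X = np.stack([x for x, _ in self._stage_buffer])
        y = np.array([p for _, p in self._stage_buffer])
        preds = 1.0 / (1.0 + np.exp(-X @ self.w))
        self.w += self.lr * X.T @ (y - preds) / len(y)
        self._stage_buffer.clear()

# Toy usage: two candidate assistance actions described by 3 hand-crafted features.
model = AssistancePreferenceModel(n_features=3)
rng = np.random.default_rng(0)
for _ in range(100):
    candidates = [rng.normal(size=3), rng.normal(size=3)]
    chosen = model.choose(candidates)
    # Stand-in for the operator's (possibly ambiguous) feedback on the chosen assistance.
    model.record(chosen, preferred=rng.random() < 0.7)
```

Batching a whole stage of labels before each weight update stands in here for the paper's stability mechanism; the actual model structure and update rule may differ.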
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Dynamic Hand Gesture-Featured Human Motor Adaptation in Tool Delivery using Voice Recognition [5.13619372598999]
This paper introduces an innovative human-robot collaborative framework.
It seamlessly integrates hand gesture and dynamic movement recognition, voice recognition, and a switchable control adaptation strategy.
Experimental results demonstrate superior performance in hand gesture recognition.
arXiv Detail & Related papers (2023-09-20T14:51:09Z)
- "No, to the Right" -- Online Language Corrections for Robotic Manipulation via Shared Autonomy [70.45420918526926]
We present LILAC, a framework for incorporating and adapting to natural language corrections online during execution.
Instead of discrete turn-taking between a human and robot, LILAC splits agency between the human and robot.
We show that our corrections-aware approach obtains higher task completion rates, and is subjectively preferred by users.
arXiv Detail & Related papers (2023-01-06T15:03:27Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective; a minimal illustrative sketch of this reward form is given after this list.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not pose a fair trade-off but rather a net loss in overall performance.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing whether recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
- The State of Lifelong Learning in Service Robots: Current Bottlenecks in Object Perception and Manipulation [3.7858180627124463]
The state of the art continues to improve toward a proper coupling between object perception and manipulation.
In most cases, robots are able to recognize various objects and quickly plan a collision-free trajectory to grasp a target object.
In such open-ended environments, however, no matter how extensive the training data used for batch learning, a robot will always face new objects.
Apart from robot self-learning, non-expert users could interactively guide the process of experience acquisition.
arXiv Detail & Related papers (2020-03-18T11:00:55Z)
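For the related entry "Learning Reward Functions for Robotic Manipulation by Observing Humans" above, the reward form it describes (negative distance to a goal in a time-contrastively learned embedding space) can be sketched as follows. This is an assumed, minimal PyTorch rendering, not the authors' code; the encoder architecture, loss choice, and all names are placeholders.

```python
# Minimal sketch (assumed form): reward = negative distance to a goal image in an
# embedding space, where the embedding is trained with a time-contrastive
# (triplet) objective on video frames.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """Hypothetical small CNN mapping an RGB frame to a unit-norm embedding vector."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def time_contrastive_loss(encoder, anchor, positive, negative, margin=0.2):
    """Triplet loss: frames close in time should embed closer than temporally distant ones."""
    a, p, n = encoder(anchor), encoder(positive), encoder(negative)
    return F.triplet_margin_loss(a, p, n, margin=margin)

def embedding_reward(encoder, obs, goal):
    """Reward for a policy: negative embedding distance between observation and goal image."""
    with torch.no_grad():
        return -torch.norm(encoder(obs) - encoder(goal), dim=-1)

# Toy usage with random tensors standing in for video frames.
enc = FrameEncoder()
frames = torch.rand(4, 3, 64, 64)   # anchor, positive, negative, goal
loss = time_contrastive_loss(enc, frames[:1], frames[1:2], frames[2:3])
reward = embedding_reward(enc, frames[:1], frames[3:])
```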