Non-invasive Cognitive-level Human Interfacing for the Robotic
Restoration of Reaching & Grasping
- URL: http://arxiv.org/abs/2102.12980v1
- Date: Thu, 25 Feb 2021 16:32:04 GMT
- Title: Non-invasive Cognitive-level Human Interfacing for the Robotic
Restoration of Reaching & Grasping
- Authors: Ali Shafti and A. Aldo Faisal
- Abstract summary: We present a robotic system for human augmentation, capable of actuating the user's arm and fingers for them.
We combine wearable eye tracking, the visual context of the environment and the structural grammar of human actions to create a cognitive-level assistive robotic setup.
- Score: 5.985098076571228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assistive and Wearable Robotics have the potential to support humans with
different types of motor impairments in becoming independent and successfully
carrying out their activities of daily living. The success of these robot systems,
however, relies on the ability to meaningfully decode human action intentions
and carry them out appropriately. Neural interfaces have been explored for use
in such systems with some success; however, they tend to be invasive and to
require training periods on the order of months. We present a robotic system
for human augmentation, capable of actuating the user's arm and fingers for
them, effectively restoring the capability of reaching, grasping and
manipulating objects, controlled solely through the user's eye movements. We
combine wearable eye tracking, the visual context of the environment and the
structural grammar of human actions to create a cognitive-level assistive
robotic setup that enables users to carry out activities of daily living
while preserving interpretability and the user's agency. The interface is
worn, calibrated and ready to use within 5 minutes. Users learn to control and
make successful use of the system with an additional 5 minutes of interaction.
The system was tested with 5 healthy participants, showing an average
first-attempt success rate of 96.6% across 6 tasks.
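As a concrete illustration of the cognitive-level pipeline the abstract describes, the sketch below shows how a gaze fixation could select a target object from the visual context, and how a structural grammar of actions could expand a high-level intention into robot motor primitives. This is a minimal, hypothetical Python sketch: the class and function names, the grammar contents, and the fixation-radius heuristic are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a gaze-driven, grammar-based intent pipeline.
# None of these names come from the paper; they illustrate the concept only.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    x: float  # image-plane centre, pixels
    y: float

# A toy "structural grammar of human actions": each cognitive-level
# intention expands into an ordered sequence of robot motor primitives.
ACTION_GRAMMAR = {
    "drink": ["reach", "grasp", "lift", "bring_to_mouth"],
    "move":  ["reach", "grasp", "lift", "place"],
}

def fixated_object(gaze_xy, objects, radius_px=60.0):
    """Return the detected object closest to the gaze point, if within radius."""
    gx, gy = gaze_xy
    best = min(objects, key=lambda o: (o.x - gx) ** 2 + (o.y - gy) ** 2)
    if (best.x - gx) ** 2 + (best.y - gy) ** 2 <= radius_px ** 2:
        return best
    return None  # user is not fixating on any known object

def plan(intention, target):
    """Expand an intention into (primitive, target) steps via the grammar."""
    return [(primitive, target.name) for primitive in ACTION_GRAMMAR[intention]]

# Usage: a fixation near the cup plus the "drink" intention yields a full plan.
scene = [DetectedObject("cup", 310, 240), DetectedObject("book", 120, 400)]
target = fixated_object((300, 250), scene)
if target is not None:
    print(plan("drink", target))
    # [('reach', 'cup'), ('grasp', 'cup'), ('lift', 'cup'), ('bring_to_mouth', 'cup')]
```

The point of the grammar in this sketch is that the user only ever communicates a target and an intention; the expansion into low-level primitives is fixed structure, which keeps the interface fast to learn and the robot's behaviour interpretable.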
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Dynamic Hand Gesture-Featured Human Motor Adaptation in Tool Delivery using Voice Recognition [5.13619372598999]
This paper introduces an innovative human-robot collaborative framework.
It seamlessly integrates hand gesture and dynamic movement recognition, voice recognition, and a switchable control adaptation strategy.
Experimental results demonstrate superior performance in hand gesture recognition.
arXiv Detail & Related papers (2023-09-20T14:51:09Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We report experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control [17.270360447188196]
We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
arXiv Detail & Related papers (2022-12-02T20:35:55Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.