Spatiotemporal modeling of grip forces captures proficiency in manual robot control
- URL: http://arxiv.org/abs/2303.01995v1
- Date: Fri, 3 Mar 2023 15:08:00 GMT
- Title: Spatiotemporal modeling of grip forces captures proficiency in manual robot control
- Authors: Rongrong Liu, John M. Wandeto, Florent Nageotte, Philippe Zanne, Michel de Mathelin, Birgitta Dresp-Langley
- Abstract summary: This paper builds on our previous work by exploiting Artificial Intelligence to predict individual grip force variability in manual robot control.
Statistical analyses bring to the fore skill-specific temporal variations in thousands of grip forces of a complete novice and a highly proficient expert.
- Score: 5.504040521972806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper builds on our previous work by exploiting Artificial Intelligence
to predict individual grip force variability in manual robot control. Grip
forces were recorded from various loci in the dominant and non-dominant hands
of individuals by means of wearable wireless sensor technology. Statistical
analyses bring to the fore skill-specific temporal variations in thousands of
grip forces of a complete novice and a highly proficient expert in manual robot
control. A brain-inspired neural network model that uses the output metric of a
Self-Organizing Map with unsupervised winner-take-all learning was run on the
sensor output from both hands of each user. The neural network metric expresses
the difference between an input representation and its model representation at
any given moment in time t and reliably captures the differences between novice
and expert performance in terms of grip force variability. Functionally
motivated spatiotemporal analysis of individual average grip forces, computed
for time windows of constant size in the output of a restricted number of
task-relevant sensors in the dominant (preferred) hand, reveals finger-specific
synergies reflecting robotic task skill. The analyses lead the way towards
real-time grip force monitoring that permits tracking task-skill evolution in
trainees, or identifying individual proficiency levels in human-robot interaction
in environmental contexts of high sensory uncertainty. Parsimonious Artificial
Intelligence (AI) assistance will contribute to the outcome of new types of
surgery, in particular single-port approaches such as NOTES (Natural Orifice
Transluminal Endoscopic Surgery) and SILS (Single Incision Laparoscopic
Surgery).
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - Exploring of Discrete and Continuous Input Control for AI-enhanced
Assistive Robotic Arms [5.371337604556312]
Collaborative robots require users to manage multiple Degrees-of-Freedom (DoFs) for tasks like grasping and manipulating objects.
This study explores three different input devices by integrating them into an established XR framework for assistive robotics.
arXiv Detail & Related papers (2024-01-13T16:57:40Z) - Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z) - Agile gesture recognition for capacitive sensing devices: adapting
on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z) - Artificial Intelligence Enables Real-Time and Intuitive Control of
Prostheses via Nerve Interface [25.870454492249863]
The next-generation prosthetic hand that moves and feels like a real hand requires a robust neural interconnection between the human mind and machines.
Here we present a neuroprosthetic system to demonstrate that principle by employing an artificial intelligence (AI) agent to translate the amputee's movement intent through a peripheral nerve interface.
arXiv Detail & Related papers (2022-03-16T14:33:38Z) - Vision-Based Manipulators Need to Also See from Their Hands [58.398637422321976]
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations.
We find that a hand-centric (eye-in-hand) perspective affords reduced observability, but it consistently improves training efficiency and out-of-distribution generalization.
arXiv Detail & Related papers (2022-03-15T18:46:18Z) - Surgical task expertise detected by a self-organizing neural network map [0.0]
Grip force variability in a true expert and a complete novice executing a robot assisted surgical simulator task reveal statistically significant differences as a function of task expertise.
We show that the skill specific differences in local grip forces are predicted by the output metric of a Self Organizing neural network Map.
arXiv Detail & Related papers (2021-06-03T10:48:10Z) - HANDS: A Multimodal Dataset for Modeling Towards Human Grasp Intent
Inference in Prosthetic Hands [3.7886097009023376]
Advanced prosthetic hands of the future are anticipated to benefit from improved shared control between a robotic hand and its human user.
Multimodal sensor data may include various environment sensors, including vision, as well as human physiology and behavior sensors.
A fusion methodology for environmental state and human intent estimation can combine these sources of evidence in order to help prosthetic hand motion planning and control.
arXiv Detail & Related papers (2021-03-08T15:51:03Z) - Where is my hand? Deep hand segmentation for visual self-recognition in
humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z) - Human Haptic Gesture Interpretation for Robotic Systems [3.888848425698769]
Physical human-robot interactions (pHRI) are less efficient and communicative than human-human interactions.
A key reason is a lack of informative sense of touch in robotic systems.
This work presents four proposed touch gesture classes that cover the majority of the gesture characteristics identified in the literature.
arXiv Detail & Related papers (2020-12-03T14:33:57Z) - Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)