Towards Predicting Fine Finger Motions from Ultrasound Images via
Kinematic Representation
- URL: http://arxiv.org/abs/2202.05204v1
- Date: Thu, 10 Feb 2022 18:05:09 GMT
- Title: Towards Predicting Fine Finger Motions from Ultrasound Images via
Kinematic Representation
- Authors: Dean Zadok, Oren Salzman, Alon Wolf and Alex M. Bronstein
- Abstract summary: We study the inference problem of identifying the activation of specific fingers from a sequence of US images.
We consider this task as an important step towards higher adoption rates of robotic prostheses among arm amputees.
- Score: 12.49914980193329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A central challenge in building robotic prostheses is the creation of a
sensor-based system able to read physiological signals from the residual limb and
instruct a robotic hand to perform various tasks. Existing systems typically
perform discrete gestures such as pointing or grasping by employing
electromyography (EMG) or ultrasound (US) technologies to analyze the state of
the muscles. In this work, we study the inference problem of identifying the
activation of specific fingers from a sequence of US images when performing
dexterous tasks such as keyboard typing or playing the piano. While prior work
has estimated finger gestures by detecting prominent, discrete gestures, we are
interested in classification in the context of fine motions that evolve over
time. We consider this task an important step towards higher
adoption rates of robotic prostheses among arm amputees, as it has the
potential to dramatically increase functionality in performing daily tasks. Our
key observation, motivating this work, is that modeling the hand as a robotic
manipulator allows us to encode an intermediate representation, namely the
hand's configuration, to which US images are mapped. Given a sequence of such
learned
configurations, coupled with a neural-network architecture that exploits
temporal coherence, we are able to infer fine finger motions. We evaluated our
method by collecting data from a group of subjects and demonstrating how our
framework can be used to replay the music they played or reproduce the text
they typed. To the best of our
knowledge, this is the first study demonstrating these downstream tasks within
an end-to-end system.
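As a concrete illustration of this two-stage pipeline, below is a minimal sketch in which a per-frame encoder maps each US image to a hand-configuration vector and a recurrent model over the configuration sequence infers finger activations. The layer sizes, the 15-joint hand model, and the GRU head are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of the two-stage idea from the abstract; all architecture
# details here are assumptions, not the authors' published network.
import torch
import torch.nn as nn

class USToConfiguration(nn.Module):
    """Stage 1: map a single US frame to a hand-configuration vector
    (e.g., joint angles of a kinematic hand model)."""
    def __init__(self, num_joints: int = 15):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_joints),
        )

    def forward(self, frame):           # frame: (B, 1, H, W)
        return self.encoder(frame)      # (B, num_joints)

class FingerActivationClassifier(nn.Module):
    """Stage 2: exploit temporal coherence over a sequence of learned
    configurations to infer which finger is being activated."""
    def __init__(self, num_joints: int = 15, num_fingers: int = 5):
        super().__init__()
        self.temporal = nn.GRU(num_joints, 64, batch_first=True)
        self.head = nn.Linear(64, num_fingers)

    def forward(self, configs):         # configs: (B, T, num_joints)
        out, _ = self.temporal(configs)
        return self.head(out[:, -1])    # per-finger logits, (B, num_fingers)

# Usage: encode each frame of a US sequence, then classify the sequence.
frames = torch.randn(2, 10, 1, 64, 64)  # (B, T, 1, H, W) dummy US clips
enc, clf = USToConfiguration(), FingerActivationClassifier()
configs = torch.stack([enc(frames[:, t]) for t in range(frames.shape[1])], dim=1)
logits = clf(configs)                   # (2, 5)
```

One appeal of the intermediate kinematic representation is that the temporal model operates on low-dimensional configuration vectors rather than raw images.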
Related papers
- Intelligent Robotic Sonographer: Mutual Information-based Disentangled
Reward Learning from Few Demonstrations [42.731081399649916]
This work proposes an intelligent robotic sonographer to autonomously "explore" target anatomies and navigate a US probe to a relevant 2D plane by learning from the expert.
The underlying high-level physiological knowledge from experts is inferred by a neural reward function.
The proposed advanced framework can robustly work on a variety of seen and unseen phantoms as well as in-vivo human carotid data.
arXiv Detail & Related papers (2023-07-07T16:30:50Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
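As a rough sketch of the pre-training idea summarized above (a Transformer over sensorimotor token sequences trained by masked prediction), consider the following; the token dimension, masking rate, and tiny encoder are assumptions for illustration, not RPT's actual implementation.

```python
# Generic masked-prediction pre-training over sensorimotor tokens; sizes and
# masking scheme are illustrative assumptions, not RPT's implementation.
import torch
import torch.nn as nn

class SensorimotorMaskedModel(nn.Module):
    def __init__(self, token_dim: int = 32, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(d_model, token_dim)

    def forward(self, tokens, mask):    # tokens: (B, T, D), mask: (B, T) bool
        x = self.embed(tokens)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.decode(self.encoder(x))

# Pre-training step: reconstruct the masked sensorimotor tokens.
tokens = torch.randn(4, 20, 32)         # e.g., interleaved sensor/action tokens
mask = torch.rand(4, 20) < 0.5          # mask half of the sequence
model = SensorimotorMaskedModel()
loss = nn.functional.mse_loss(model(tokens, mask)[mask], tokens[mask])
```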
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Agile gesture recognition for capacitive sensing devices: adapting
on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
arXiv Detail & Related papers (2022-12-07T18:55:53Z) - Simultaneous Estimation of Hand Configurations and Finger Joint Angles
using Forearm Ultrasound [8.753262480814493]
Forearm ultrasound images provide a musculoskeletal visualization that can be used to understand hand motion.
We propose a CNN-based deep learning pipeline for predicting the MCP joint angles.
A low-latency pipeline is proposed for estimating both MCP joint angles and hand configuration, aimed at real-time control of human-machine interfaces.
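A minimal sketch of what such a low-latency pipeline could look like is given below; the small CNN, the five MCP angles, and the exponential smoothing are illustrative assumptions, not the paper's published network.

```python
# Sketch of per-frame joint-angle regression from forearm US images with
# lightweight smoothing for streaming use; all details are assumptions.
import torch
import torch.nn as nn

regressor = nn.Sequential(                 # one US frame -> 5 MCP joint angles
    nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 5),
)

# Streaming inference: smooth predictions frame-by-frame to reduce jitter
# while keeping per-frame latency low.
smoothed, alpha = torch.zeros(5), 0.7
with torch.no_grad():
    for _ in range(10):                    # placeholder loop over captured frames
        frame = torch.randn(1, 1, 64, 64)  # stand-in for a real US frame
        angles = regressor(frame).squeeze(0)
        smoothed = alpha * angles + (1 - alpha) * smoothed
```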
arXiv Detail & Related papers (2022-11-29T02:06:19Z) - Efficient Gesture Recognition for the Assistance of Visually Impaired
People using Multi-Head Neural Networks [5.883916678819684]
This paper proposes an interactive system for mobile devices controlled by hand gestures aimed at helping people with visual impairments.
This system allows the user to interact with the device by making simple static and dynamic hand gestures.
Each gesture triggers a different action in the system, such as object recognition, scene description or image scaling.
arXiv Detail & Related papers (2022-05-14T06:01:47Z) - HANDS: A Multimodal Dataset for Modeling Towards Human Grasp Intent
Inference in Prosthetic Hands [3.7886097009023376]
Advanced prosthetic hands of the future are anticipated to benefit from improved shared control between a robotic hand and its human user.
Multimodal sensor data may come from various environment sensors, including vision, as well as from human physiology and behavior sensors.
A fusion methodology for environmental state and human intent estimation can combine these sources of evidence in order to help prosthetic hand motion planning and control.
arXiv Detail & Related papers (2021-03-08T15:51:03Z) - Relational Graph Learning on Visual and Kinematics Embeddings for
Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online multi-modal graph network (MRG-Net) that dynamically integrates visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
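As a generic illustration of fusing visual and kinematics embeddings via message passing between two modality nodes (a simplified stand-in, not MRG-Net's actual graph design):

```python
# Two-node message-passing fusion of visual and kinematics embeddings;
# a generic sketch, not MRG-Net's published architecture.
import torch
import torch.nn as nn

class TwoNodeFusion(nn.Module):
    def __init__(self, d: int = 64, num_gestures: int = 10):
        super().__init__()
        self.msg = nn.Linear(d, d)        # message sent between modality nodes
        self.upd = nn.Linear(2 * d, d)    # node update from own state + message
        self.head = nn.Linear(2 * d, num_gestures)

    def forward(self, vis, kin):          # vis, kin: (B, d) modality embeddings
        vis2 = torch.relu(self.upd(torch.cat([vis, self.msg(kin)], dim=-1)))
        kin2 = torch.relu(self.upd(torch.cat([kin, self.msg(vis)], dim=-1)))
        return self.head(torch.cat([vis2, kin2], dim=-1))

vis, kin = torch.randn(2, 64), torch.randn(2, 64)
logits = TwoNodeFusion()(vis, kin)        # (2, 10) gesture logits
```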
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.