Intelligent upper-limb exoskeleton integrated with soft wearable
bioelectronics and deep-learning for human intention-driven strength
augmentation based on sensory feedback
- URL: http://arxiv.org/abs/2309.04655v2
- Date: Fri, 26 Jan 2024 05:40:50 GMT
- Title: Intelligent upper-limb exoskeleton integrated with soft wearable
bioelectronics and deep-learning for human intention-driven strength
augmentation based on sensory feedback
- Authors: Jinwoo Lee, Kangkyu Kwon, Ira Soltis, Jared Matthews, Yoonjae Lee,
Hojoong Kim, Lissette Romero, Nathan Zavanelli, Youngjin Kwon, Shinjae Kwon,
Jimin Lee, Yewon Na, Sung Hoon Lee, Ki Jun Yu, Minoru Shinohara, Frank L.
Hammond, Woon-Hong Yeo
- Abstract summary: The age- and stroke-associated decline in musculoskeletal strength degrades the ability to perform daily human tasks using the upper extremities.
Here, we introduce an upper-limb exoskeleton system that uses cloud-based deep learning to predict human intention for strength augmentation.
The intent-driven exoskeleton can augment human strength by 5.15 times on average compared to the unassisted exoskeleton.
- Score: 5.447052101190182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The age- and stroke-associated decline in musculoskeletal strength degrades
the ability to perform daily human tasks using the upper extremities. Although
a few exoskeletons exist, they require manual operation because they lack
sensor feedback and cannot predict the user's intended movements. Here, we
introduce an intelligent upper-limb exoskeleton system that uses cloud-based
deep learning to predict human intention for strength augmentation. The
embedded soft wearable sensors provide sensory feedback by collecting real-time
muscle signals, which are simultaneously computed to determine the user's
intended movement. The cloud-based deep-learning model predicts four upper-limb
joint motions with an average accuracy of 96.2% within a 200-250 millisecond
response time, so the exoskeleton is operated by human intention alone. In
addition, an array of soft pneumatic actuators assists the intended movements,
providing up to 897 newtons of force and 78.7 millimeters of displacement.
Collectively, the intent-driven exoskeleton augments human strength by 5.15
times on average compared with the unassisted exoskeleton. This report
demonstrates an exoskeleton robot that augments upper-limb joint movements
according to human intention, based on cloud-based machine learning and sensory
feedback.
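As a rough sketch of the pipeline the abstract describes (windowed muscle signals fed to a cloud-hosted deep classifier that outputs one of four joint motions), the Python snippet below uses a small 1-D convolutional network. The channel count, window length, and layer shapes are illustrative assumptions; the abstract does not specify the authors' architecture.

```python
# Sketch of an EMG-driven intention classifier: windowed muscle signals go
# through a small 1-D CNN that outputs one of four upper-limb joint motions.
import torch
import torch.nn as nn

N_CHANNELS = 4   # assumed number of soft wearable muscle sensors
WINDOW = 200     # assumed samples per window (e.g., 200 ms at 1 kHz)
N_CLASSES = 4    # four upper-limb joint motions, as in the abstract

class IntentionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Linear(64, N_CLASSES)

    def forward(self, x):              # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)
        return self.head(z)            # logits over the four joint motions

model = IntentionClassifier()
emg_window = torch.randn(1, N_CHANNELS, WINDOW)  # stand-in for live sensor data
motion = model(emg_window).argmax(dim=-1)        # predicted intended motion
```

In deployment, each incoming sensor window would be streamed to the cloud-hosted model, and the predicted class would select which pneumatic actuators engage.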
Related papers
- Digitizing Touch with an Artificial Multimodal Fingertip [51.7029315337739]
Humans and robots both benefit from using touch to perceive and interact with the surrounding environment.
Here, we describe several conceptual and technological innovations to improve the digitization of touch.
These advances are embodied in an artificial finger-shaped sensor with advanced sensing capabilities.
arXiv Detail & Related papers (2024-11-04T18:38:50Z)
- Continual Imitation Learning for Prosthetic Limbs [0.7922558880545526]
Motorized bionic limbs offer promise, but their utility depends on mimicking the evolving synergy of human movement in various settings.
We present a novel model for bionic prosthesis applications that leverages camera-based motion capture and wearable sensor data.
We propose a model that can multitask, adapt continually, anticipate movements, and refine locomotion.
arXiv Detail & Related papers (2024-05-02T09:22:54Z)
- Self Model for Embodied Intelligence: Modeling Full-Body Human Musculoskeletal System and Locomotion Control with Hierarchical Low-Dimensional Representation [22.925312305575183]
We build a musculoskeletal model (MS-Human-700) with 90 body segments, 206 joints, and 700 muscle-tendon units.
We develop a new algorithm using low-dimensional representation and hierarchical deep reinforcement learning to achieve state-of-the-art full-body control.
arXiv Detail & Related papers (2023-12-09T05:42:32Z)
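A minimal sketch of the hierarchical low-dimensional control idea in MS-Human-700 above: a policy acts in a small latent space that a decoder expands into activations for all 700 muscle-tendon units. The latent size and the linear decoder are assumptions for illustration, not the paper's design.

```python
# Sketch of low-dimensional muscle control: a compact latent action is decoded
# into activations for 700 muscle-tendon units (count from the summary above).
import torch
import torch.nn as nn

N_MUSCLES = 700   # muscle-tendon units, per the summary
LATENT = 20       # assumed size of the low-dimensional action space

decoder = nn.Linear(LATENT, N_MUSCLES)   # latent action -> muscle activations

def act(latent_action: torch.Tensor) -> torch.Tensor:
    # Squash to [0, 1], the conventional range for muscle activation levels.
    return torch.sigmoid(decoder(latent_action))

activations = act(torch.randn(LATENT))   # 700 activations from 20 numbers
```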
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
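A hedged sketch of the RPT idea above, masked prediction over sensorimotor token sequences with a Transformer encoder; the token dimension, sequence length, masking ratio, and layer count are all assumptions for illustration.

```python
# Sketch of sensorimotor pre-training in the spirit of RPT: mask a subset of
# sensorimotor tokens and train a Transformer to reconstruct them.
import torch
import torch.nn as nn

D_MODEL, SEQ_LEN = 128, 32   # assumed token dimension and sequence length

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)

tokens = torch.randn(1, SEQ_LEN, D_MODEL)  # camera/proprioception/action tokens
mask = torch.rand(1, SEQ_LEN) < 0.5        # mask a random subset of tokens
masked = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
predicted = encoder(masked)                # reconstruct the full sequence
loss = (predicted - tokens)[mask].pow(2).mean()  # error on masked tokens only
```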
- An Overview of Artificial Intelligence-based Soft Upper Limb Exoskeleton for Rehabilitation: A Descriptive Review [0.0]
The upper-limb robotic exoskeleton is an electromechanical device used in rehabilitation to help patients recover from motor dysfunction.
It can provide repetitive, comprehensive, focused, positive, and precise training to restore joint and muscle capability.
arXiv Detail & Related papers (2023-01-11T07:13:25Z)
- Ultra-sensitive Flexible Sponge-Sensor Array for Muscle Activities Detection and Human Limb Motion Recognition [8.26625796934816]
Human limb motion tracking and recognition play an important role in medical rehabilitation training, lower-limb assistance, prosthetics design for amputees, etc.
This work demonstrates a portable wearable muscle activity detection device with a lower limb motion recognition application.
arXiv Detail & Related papers (2022-04-30T06:44:26Z)
- GIMO: Gaze-Informed Human Motion Prediction in Context [75.52839760700833]
We propose a large-scale human motion dataset that delivers high-quality body pose sequences, scene scans, and ego-centric views with eye gaze.
Our data collection is not tied to specific scenes, which further boosts the motion dynamics observed from our subjects.
To realize the full potential of gaze, we propose a novel network architecture that enables bidirectional communication between the gaze and motion branches.
arXiv Detail & Related papers (2022-04-20T13:17:39Z)
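The bidirectional communication between the gaze and motion branches described above can be sketched as a pair of cross-attention links, one per direction; the feature dimensions and sequence lengths below are assumptions, not GIMO's actual architecture.

```python
# Sketch of a bidirectional gaze-motion link: each branch attends to the
# other's features. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

D = 64  # assumed feature dimension
gaze_to_motion = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
motion_to_gaze = nn.MultiheadAttention(D, num_heads=4, batch_first=True)

gaze = torch.randn(1, 10, D)     # eye-gaze feature sequence
motion = torch.randn(1, 30, D)   # body-pose feature sequence

# Motion branch queries gaze features, and vice versa (the bidirectional link).
motion_ctx, _ = gaze_to_motion(query=motion, key=gaze, value=gaze)
gaze_ctx, _ = motion_to_gaze(query=gaze, key=motion, value=motion)
```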
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
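As a hedged sketch of the approach above, a small GAN can map noise to one-dimensional end-effector velocity profiles while a discriminator scores them against human examples; every size here is an illustrative assumption.

```python
# Sketch of a GAN over velocity profiles: the generator maps noise to a 1-D
# velocity curve; the discriminator scores real vs. generated curves.
import torch
import torch.nn as nn

PROFILE_LEN, NOISE = 100, 16   # assumed curve length and noise dimension

generator = nn.Sequential(
    nn.Linear(NOISE, 64), nn.ReLU(),
    nn.Linear(64, PROFILE_LEN), nn.Softplus(),   # keep velocities non-negative
)
discriminator = nn.Sequential(
    nn.Linear(PROFILE_LEN, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),                            # real/fake logit
)

fake_profile = generator(torch.randn(1, NOISE))  # a new, human-like profile
score = discriminator(fake_profile)              # discriminator's judgment
```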
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
The prompt recognition of movement carefulness from observing the partner's action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Reinforcement Learning Control of a Biomechanical Model of the Upper Extremity [0.0]
We learn a control policy using a motor babbling approach as implemented in reinforcement learning.
We use a state-of-the-art biomechanical model, which includes seven actuated degrees of freedom.
To deal with the curse of dimensionality, we use a simplified second-order muscle model, acting at each degree of freedom instead of individual muscles.
arXiv Detail & Related papers (2020-11-13T19:49:29Z)
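The simplified second-order muscle model mentioned above can be sketched as two cascaded first-order stages from control signal to activation; the time constants are illustrative assumptions, not the paper's values.

```python
# Sketch of a second-order muscle model: a control signal drives excitation,
# which drives activation, giving second-order dynamics per degree of freedom.
import numpy as np

DT = 0.002      # integration step (s)
T_EXC = 0.04    # assumed excitation time constant (s)
T_ACT = 0.03    # assumed activation time constant (s)

def step(state: np.ndarray, u: float) -> np.ndarray:
    """Advance (excitation e, activation a) one step under control u in [0, 1]."""
    e, a = state
    e += DT * (u - e) / T_EXC   # first stage: control -> excitation
    a += DT * (e - a) / T_ACT   # second stage: excitation -> activation
    return np.array([e, a])

state = np.zeros(2)
for _ in range(500):            # simulate 1 s of a constant step command
    state = step(state, u=1.0)
print(state)                    # activation approaches the commanded level
```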
This list is automatically generated from the titles and abstracts of the papers on this site.