Joint Action is a Framework for Understanding Partnerships Between
Humans and Upper Limb Prostheses
- URL: http://arxiv.org/abs/2212.14124v1
- Date: Wed, 28 Dec 2022 23:27:32 GMT
- Title: Joint Action is a Framework for Understanding Partnerships Between
Humans and Upper Limb Prostheses
- Authors: Michael R. Dawson, Adam S. R. Parker, Heather E. Williams, Ahmed W.
Shehata, Jacqueline S. Hebert, Craig S. Chapman, Patrick M. Pilarski
- Abstract summary: We compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action.
The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other.
- Score: 0.6649973446180738
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in upper limb prostheses have led to significant improvements
in the number of movements provided by the robotic limb. However, the method
for controlling multiple degrees of freedom via user-generated signals remains
challenging. To address this issue, various machine learning controllers have
been developed to better predict movement intent. As these controllers become
more intelligent and take on more autonomy in the system, the traditional
approach of representing the human-machine interface as a human controlling a
tool becomes limiting. One possible approach to improve the understanding of
these interfaces is to model them as collaborative, multi-agent systems through
the lens of joint action. The field of joint action has been commonly applied
to two human partners who are trying to work jointly together to achieve a
task, such as singing or moving a table together, by effecting coordinated
change in their shared environment. In this work, we compare different
prosthesis controllers (proportional electromyography with sequential
switching, pattern recognition, and adaptive switching) in terms of how they
present the hallmarks of joint action. The results of the comparison lead to a
new perspective for understanding how existing myoelectric systems relate to
each other, along with recommendations for how to improve these systems by
increasing the collaborative communication between each partner.
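To make the first of the compared control schemes concrete, below is a minimal, hypothetical Python sketch of proportional electromyography control with sequential switching. The joint list, co-contraction threshold, and gain are illustrative assumptions for exposition only, not the authors' implementation.

```python
# A minimal, hypothetical sketch of "proportional electromyography with
# sequential switching" -- one of the three controller types compared in
# the paper. Joint names, the co-contraction threshold, and the gain are
# illustrative assumptions, not the authors' implementation.

JOINTS = ["hand_open_close", "wrist_rotation", "elbow_flexion"]

class SequentialSwitchingController:
    """Proportionally drive one joint at a time; a co-contraction of
    both muscles switches control to the next joint in a fixed list."""

    def __init__(self, gain=1.0, cocontraction_threshold=0.8):
        self.gain = gain
        self.threshold = cocontraction_threshold
        self.active = 0  # index of the currently controlled joint

    def update(self, emg_flexor, emg_extensor):
        """Map two rectified, normalized EMG envelopes (0..1) to a
        (joint_name, velocity_command) pair."""
        # Strong simultaneous activation of both muscles is interpreted
        # as a switching signal rather than a movement command.
        if emg_flexor > self.threshold and emg_extensor > self.threshold:
            self.active = (self.active + 1) % len(JOINTS)
            return JOINTS[self.active], 0.0
        # Otherwise the difference between the two envelopes drives the
        # active joint proportionally, in either direction.
        return JOINTS[self.active], self.gain * (emg_flexor - emg_extensor)

ctrl = SequentialSwitchingController()
print(ctrl.update(0.9, 0.9))  # co-contraction: ('wrist_rotation', 0.0)
print(ctrl.update(0.6, 0.1))  # proportional:   ('wrist_rotation', 0.5)
```

Pattern recognition and adaptive switching replace this fixed switching order with learned predictions of the user's intended movement, which is where the machine partner begins to take on the autonomy that motivates the joint-action framing above.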
Related papers
- Two-Person Interaction Augmentation with Skeleton Priors [16.65884142618145]
We propose a new deep learning method for two-body skeletal interaction motion augmentation.
Our system can learn effectively from a relatively small amount of data and generalize to drastically different skeleton sizes.
arXiv Detail & Related papers (2024-04-08T13:11:57Z)
- Learning Mutual Excitation for Hand-to-Hand and Human-to-Human Interaction Recognition [22.538114033191313]
We propose a mutual excitation graph convolutional network (me-GCN), built by stacking mutual excitation graph convolution (me-GC) layers.
The me-GC layers learn mutual information in each layer and each stage of the graph convolution operations.
Our proposed me-GC outperforms state-of-the-art GCN-based and Transformer-based methods.
arXiv Detail & Related papers (2024-02-04T10:00:00Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, that encourages the synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the distance between joint pairs for human interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
- Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In human-object interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale, consistent plan for the whole activity and (2) the small-scale child interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z)
- InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions [49.097973114627344]
We present InterGen, an effective diffusion-based approach that incorporates human-to-human interactions into the motion diffusion process.
We first contribute a multimodal dataset, named InterHuman. It consists of about 107M frames for diverse two-person interactions, with accurate skeletal motions and 23,337 natural language descriptions.
We propose a novel representation for motion input in our interaction diffusion model, which explicitly formulates the global relations between the two performers in the world frame.
arXiv Detail & Related papers (2023-04-12T08:12:29Z)
- Multi-robot Social-aware Cooperative Planning in Pedestrian Environments Using Multi-agent Reinforcement Learning [2.7716102039510564]
We propose a novel multi-robot social-aware efficient cooperative planner based on off-policy multi-agent reinforcement learning (MARL).
We adopt a temporal-spatial graph (TSG)-based social encoder to better extract the importance of the social relations between each robot and the pedestrians in its field of view (FOV).
arXiv Detail & Related papers (2022-11-29T03:38:47Z)
- Rethinking Trajectory Prediction via "Team Game" [118.59480535826094]
We present a novel formulation for multi-agent trajectory prediction, which explicitly introduces the concept of interactive group consensus.
In two multi-agent settings, i.e. team sports and pedestrians, the proposed framework consistently achieves superior performance compared to existing methods.
arXiv Detail & Related papers (2022-10-17T07:16:44Z)
- COUCH: Towards Controllable Human-Chair Interactions [44.66450508317131]
We study the problem of synthesizing scene interactions conditioned on different contact positions on the object.
We propose a novel synthesis framework, COUCH, that plans the motion ahead by predicting contact-aware control signals for the hands.
Our method shows significant quantitative and qualitative improvements over existing methods for human-object interactions.
arXiv Detail & Related papers (2022-05-01T19:14:22Z)
- Contact-Aware Retargeting of Skinned Motion [49.71236739408685]
This paper introduces a motion estimation method that preserves self-contacts and prevents interpenetration.
The method identifies self-contacts and ground contacts in the input motion, and optimizes the motion applied to the output skeleton.
In experiments, our results quantitatively outperform previous methods and we conduct a user study where our retargeted motions are rated as higher-quality than those produced by recent works.
arXiv Detail & Related papers (2021-09-15T17:05:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.