An adaptive closed-loop ECoG decoder for long-term and stable bimanual
control of an exoskeleton by a tetraplegic
- URL: http://arxiv.org/abs/2201.10449v1
- Date: Tue, 25 Jan 2022 16:51:29 GMT
- Title: An adaptive closed-loop ECoG decoder for long-term and stable bimanual
control of an exoskeleton by a tetraplegic
- Authors: Alexandre Moly, Thomas Costecalde, Felix Martel, Christelle Larzabal,
Serpil Karakas, Alexandre Verney, Guillaume Charvet, Stephan Chabardes, Alim
Louis Benabid, Tetiana Aksenova
- Abstract summary: High performance control of diverse effectors for complex tasks must be robust over time and of high decoding performance without continuous recalibration of the decoders.
We developed an adaptive online tensor-based decoder: the Recursive Exponentially Weighted Markov-Switching multi-Linear Model (REW-MSLM).
We demonstrated over a period of 6 months the stability of the 8-dimensional alternative bimanual control of the exoskeleton and its virtual avatar using REW-MSLM without recalibration of the decoder.
- Score: 91.6474995587871
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Brain-computer interfaces (BCIs) still face many challenges to step out of
laboratories to be used in real-life applications. A key one persists in the
high performance control of diverse effectors for complex tasks, using chronic
and safe recorders. This control must be robust over time and of high decoding
performance without continuous recalibration of the decoders. In this article,
asynchronous control of an exoskeleton by a tetraplegic patient using a
chronically implanted epidural electrocorticography (EpiCoG) implant is
demonstrated. For this purpose, an adaptive online tensor-based decoder, the
Recursive Exponentially Weighted Markov-Switching multi-Linear Model (REW-MSLM),
was developed. We demonstrated over a period of 6 months the stability of the
8-dimensional alternative bimanual control of the exoskeleton and its virtual
avatar using REW-MSLM without recalibration of the decoder.
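The core idea behind a Markov-switching multilinear decoder can be sketched as a mixture of linear experts gated by a hidden Markov model: the gate tracks a posterior over discrete limb-states, and each expert is adapted by recursive exponentially weighted least squares. The following is a minimal, illustrative sketch only (class names, the vector-input simplification, and the 2-state default are assumptions, not the authors' implementation):

```python
import numpy as np

class REWExpert:
    """One linear expert, updated by recursive exponentially
    weighted least squares (forgetting factor lam < 1)."""
    def __init__(self, n_in, n_out, lam=0.999):
        self.W = np.zeros((n_out, n_in))
        self.P = np.eye(n_in) * 1e3   # inverse input covariance estimate
        self.lam = lam

    def predict(self, x):
        return self.W @ x

    def update(self, x, y, weight=1.0):
        # Weighted RLS step: the gate posterior scales the update.
        Px = self.P @ x
        k = weight * Px / (self.lam + weight * x @ Px)  # gain vector
        self.W += np.outer(y - self.W @ x, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam

class MarkovSwitchingDecoder:
    """Soft-gated mixture of experts; the gate posterior evolves
    under a sticky Markov transition prior (HMM forward step)."""
    def __init__(self, n_in, n_out, n_states=2, lam=0.999):
        self.experts = [REWExpert(n_in, n_out, lam) for _ in range(n_states)]
        self.T = np.full((n_states, n_states), 0.05)
        np.fill_diagonal(self.T, 0.95)             # sticky states
        self.alpha = np.full(n_states, 1.0 / n_states)

    def step(self, x, state_lik):
        # HMM forward recursion given per-state likelihoods.
        self.alpha = state_lik * (self.T.T @ self.alpha)
        self.alpha /= self.alpha.sum()
        # Posterior-weighted blend of the expert outputs.
        return sum(a * e.predict(x) for a, e in zip(self.alpha, self.experts))

    def adapt(self, x, y):
        # Each expert is updated in proportion to its responsibility.
        for a, e in zip(self.alpha, self.experts):
            e.update(x, y, weight=a)
```

The exponential forgetting lets the decoder track slow drifts in the neural signal without explicit recalibration sessions, which is the property the abstract emphasizes.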
Related papers
- Online Adaptation for Myographic Control of Natural Dexterous Hand and Finger Movements [0.6741087029030101]
This work redefines the state-of-the-art in myographic decoding in terms of the reliability, responsiveness, and movement complexity available from prosthesis control systems.
arXiv Detail & Related papers (2024-12-23T21:20:32Z)
- Long-Term Upper-Limb Prosthesis Myocontrol via High-Density sEMG and Incremental Learning [1.5383266953224775]
We introduce a novel myoelectric prosthetic system integrating a high density-sEMG (HD-sEMG) setup and incremental learning methods.
First, we present a newly designed, compact HD-sEMG interface equipped with 64 dry electrodes positioned over the forearm.
Then, we introduce an efficient incremental learning system enabling model adaptation on a stream of data.
arXiv Detail & Related papers (2024-12-20T15:37:10Z)
- MambaAD: Exploring State Space Models for Multi-class Unsupervised Anomaly Detection [53.03687787922032]
Mamba-based models with superior long-range modeling and linear efficiency have garnered substantial attention.
MambaAD consists of a pre-trained encoder and a Mamba decoder featuring Locality-Enhanced State Space (LSS) modules at multiple scales.
The proposed LSS module, integrating parallel cascaded Hybrid State Space (HSS) blocks and multi-kernel convolution operations, effectively captures both long-range and local information.
arXiv Detail & Related papers (2024-04-09T18:28:55Z)
- Temporally-Consistent Koopman Autoencoders for Forecasting Dynamical Systems [38.36312939874359]
We introduce the Temporally-Consistent Koopman Autoencoder (tcKAE).
tcKAE generates accurate long-term predictions even with limited and noisy training data.
We demonstrate tcKAE's superior performance over state-of-the-art KAE models across a variety of test cases.
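A Koopman autoencoder forecasts by advancing an encoded latent state with a fixed linear operator and decoding each step; the temporal-consistency idea is to penalize disagreement between forecasts of the same future frame launched from different start times. A toy sketch of that consistency check (function names and the exact loss form are assumptions for illustration, not the tcKAE code):

```python
import numpy as np

def koopman_forecast(x0, encode, K, decode, steps):
    """Advance the latent code linearly with K and decode each step."""
    z = encode(x0)
    preds = []
    for _ in range(steps):
        z = K @ z
        preds.append(decode(z))
    return preds

def temporal_consistency_loss(series, encode, K, decode, horizon):
    """Penalize disagreement between forecasts of the same frame
    launched from different start times (tcKAE-style idea)."""
    loss = 0.0
    for t in range(len(series) - horizon):
        a = koopman_forecast(series[t], encode, K, decode, horizon)
        for s in range(1, horizon):
            b = koopman_forecast(series[t + s], encode, K, decode, horizon - s)
            # Both a[-1] and b[-1] predict frame t + horizon.
            loss += np.sum((a[-1] - b[-1]) ** 2)
    return loss
```

When the encoder, operator, and decoder are exactly consistent with the dynamics, this loss vanishes; on limited or noisy data it acts as an extra regularizer on the learned operator.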
arXiv Detail & Related papers (2024-03-19T00:48:25Z)
- Vivim: a Video Vision Mamba for Medical Video Segmentation [52.11785024350253]
This paper presents a Video Vision Mamba-based framework, dubbed as Vivim, for medical video segmentation tasks.
Our Vivim can effectively compress the long-term representation into sequences at varying scales.
Experiments on thyroid segmentation, breast lesion segmentation in ultrasound videos, and polyp segmentation in colonoscopy videos demonstrate the effectiveness and efficiency of our Vivim.
arXiv Detail & Related papers (2024-01-25T13:27:03Z)
- The online learning architecture with edge computing for high-level control for assisting patients [3.1084001733555584]
The prevalence of mobility impairments due to conditions such as spinal cord injuries, strokes, and degenerative diseases is on the rise globally.
Lower-limb exoskeletons have been increasingly recognized as a viable solution for enhancing mobility and rehabilitation for individuals with such impairments.
Existing exoskeleton control systems often suffer from limitations such as latency, lack of adaptability, and computational inefficiency.
This paper introduces a novel online adversarial learning architecture integrated with edge computing for high-level lower-limb exoskeleton control.
arXiv Detail & Related papers (2023-09-10T20:30:03Z)
- Diagnostic Spatio-temporal Transformer with Faithful Encoding [54.02712048973161]
This paper addresses the task of anomaly diagnosis when the underlying data generation process has a complex spatio-temporal (ST) dependency.
We formalize the problem as supervised dependency discovery, where the ST dependency is learned as a side product of time-series classification.
We show that the temporal positional encoding used in existing ST transformer works has a serious limitation in capturing higher frequencies (short time scales).
We also propose a new ST dependency discovery framework, which can provide readily consumable diagnostic information in both spatial and temporal directions.
arXiv Detail & Related papers (2023-05-26T05:31:23Z)
- Perpetual Humanoid Control for Real-time Simulated Avatars [77.05287269685911]
We present a physics-based humanoid controller that achieves high-fidelity motion imitation and fault-tolerant behavior.
Our controller scales up to learning ten thousand motion clips without using any external stabilizing forces.
We demonstrate the effectiveness of our controller by using it to imitate noisy poses from video-based pose estimators and language-based motion generators in a live and real-time multi-person avatar use case.
arXiv Detail & Related papers (2023-05-10T20:51:37Z)
- Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase [72.01862340497314]
We propose a task-agnostic deep learning method, namely the Multi-scale Control Signal-aware Transformer (MCS-T).
MCS-T is able to successfully generate motions comparable to those generated by the methods using auxiliary information.
arXiv Detail & Related papers (2023-03-03T02:56:44Z)
- ST-MTL: Spatio-Temporal Multitask Learning Model to Predict Scanpath While Tracking Instruments in Robotic Surgery [14.47768738295518]
Learning of the task-oriented attention while tracking instrument holds vast potential in image-guided robotic surgery.
We propose an end-to-end Spatio-Temporal Multi-Task Learning (ST-MTL) model with a shared encoder and spatio-temporal decoders for real-time surgical instrument segmentation and task-oriented saliency detection.
We tackle the problem with a novel asynchronous spatio-temporal optimization technique, calculating independent gradients for each decoder.
Compared to the state-of-the-art segmentation and saliency methods, our model outperforms them on the evaluation metrics and produces outstanding performance on the challenge dataset.
arXiv Detail & Related papers (2021-12-10T15:20:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.