Hybrid Paradigm-based Brain-Computer Interface for Robotic Arm Control
- URL: http://arxiv.org/abs/2212.08122v1
- Date: Wed, 14 Dec 2022 08:13:10 GMT
- Title: Hybrid Paradigm-based Brain-Computer Interface for Robotic Arm Control
- Authors: Byeong-Hoo Lee, Jeong-Hyun Cho, and Byung-Hee Kwon
- Abstract summary: A brain-computer interface (BCI) uses brain signals to communicate with external devices without physical control.
We propose a knowledge distillation-based framework to manipulate a robotic arm through hybrid paradigm-induced EEG signals for practical use.
- Score: 0.9176056742068814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A brain-computer interface (BCI) uses brain signals to communicate with
external devices without physical control. In particular, BCI is one of the
interfaces for controlling a robotic arm. In this study, we propose a
knowledge distillation-based framework to manipulate a robotic arm through hybrid
paradigm-induced EEG signals for practical use. The teacher model is designed
to decode input data hierarchically and transfer knowledge to the student model. To
this end, soft labels and distillation loss functions are applied to the
student model's training. According to the experimental results, the student model
achieved the best performance among the singular architecture-based methods. This
confirms that, by using hierarchical models and knowledge distillation, the
performance of a simple architecture can be improved. Since it is uncertain
exactly what knowledge is transferred, future studies should clarify this point.
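The abstract's training recipe (soft labels from a teacher, plus a distillation loss for the student) can be illustrated with a minimal sketch. This is not the authors' implementation; the temperature `T`, weight `alpha`, and the specific KL-plus-cross-entropy form follow the standard Hinton-style distillation objective and are assumptions here:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the teacher's distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of a soft-label KL term and a hard-label cross-entropy term."""
    p_teacher = softmax(teacher_logits, T)  # soft labels from the teacher
    p_student = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 so gradients match the hard term
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)),
                axis=-1)
    # Ordinary cross-entropy of the student against the ground-truth labels
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label term remains; the soft-label term grows as the two distributions diverge.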
Related papers
- Body Transformer: Leveraging Robot Embodiment for Policy Learning [51.531793239586165]
Body Transformer (BoT) is an architecture that leverages the robot embodiment by providing an inductive bias that guides the learning process.
We represent the robot body as a graph of sensors and actuators, and rely on masked attention to pool information throughout the architecture.
The resulting architecture outperforms the vanilla transformer, as well as the classical multilayer perceptron, in terms of task completion, scaling properties, and computational efficiency.
arXiv Detail & Related papers (2024-08-12T17:31:28Z)
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction.
The experimental results demonstrate that MPI exhibits remarkable improvement by 10% to 64% compared with previous state-of-the-art in real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Scheduled Knowledge Acquisition on Lightweight Vector Symbolic Architectures for Brain-Computer Interfaces [18.75591257735207]
Classical feature engineering is computationally efficient but has low accuracy, whereas recent deep neural networks (DNNs) improve accuracy but are computationally expensive and incur high latency.
As a promising alternative, the low-dimensional computing (LDC) classifier based on vector symbolic architecture (VSA), achieves small model size yet higher accuracy than classical feature engineering methods.
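The VSA-style classification this summary describes can be sketched in a few lines: project features into high-dimensional bipolar hypervectors, bundle them into one prototype per class, and classify by similarity. This is a generic hyperdimensional-computing illustration, not the paper's LDC method; the dimensionality `D` and random-projection encoder are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2000  # hypervector dimensionality (assumed for illustration)

def encode(x, proj):
    """Project a feature vector into a bipolar (+1/-1) hypervector."""
    return np.sign(proj @ x)

def train(X, y, proj):
    """Bundle (element-wise majority of) each class's hypervectors into a prototype."""
    protos = {}
    for c in np.unique(y):
        protos[c] = np.sign(np.sum([encode(x, proj) for x in X[y == c]], axis=0))
    return protos

def predict(x, protos, proj):
    h = encode(x, proj)
    # Nearest prototype; the dot product acts as cosine similarity for bipolar vectors
    return max(protos, key=lambda c: h @ protos[c])
```

The model size is just one D-dimensional prototype per class, which is what makes this family of classifiers attractive for low-latency BCI settings.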
arXiv Detail & Related papers (2024-03-18T01:06:29Z)
- A comparison of controller architectures and learning mechanisms for arbitrary robot morphologies [2.884244918665901]
What combination of a robot controller and a learning method should be used, if the morphology of the learning robot is not known in advance?
We perform an experimental comparison of three controller-and-learner combinations.
We compare their efficacy, efficiency, and robustness.
arXiv Detail & Related papers (2023-09-25T07:11:43Z)
- Directed Acyclic Graph Factorization Machines for CTR Prediction via Knowledge Distillation [65.62538699160085]
We propose a Directed Acyclic Graph Factorization Machine (KD-DAGFM) to learn the high-order feature interactions from existing complex interaction models for CTR prediction via Knowledge Distillation.
KD-DAGFM achieves the best performance with less than 21.5% FLOPs of the state-of-the-art method on both online and offline experiments.
arXiv Detail & Related papers (2022-11-21T03:09:42Z)
- FingerFlex: Inferring Finger Trajectories from ECoG signals [68.8204255655161]
The FingerFlex model is a convolutional encoder-decoder architecture adapted for finger-movement regression on electrocorticographic (ECoG) brain data.
State-of-the-art performance was achieved on a publicly available BCI competition IV dataset 4 with a correlation coefficient between true and predicted trajectories up to 0.74.
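The reported metric, a correlation coefficient between true and predicted trajectories, is presumably the Pearson correlation; a minimal sketch of computing it for one finger's 1-D trajectory (an assumption about the exact evaluation protocol):

```python
import numpy as np

def pearson_r(y_true, y_pred):
    """Pearson correlation between a true and a predicted 1-D trajectory."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    yt = y_true - y_true.mean()  # center both signals
    yp = y_pred - y_pred.mean()
    return float((yt @ yp) / (np.linalg.norm(yt) * np.linalg.norm(yp)))
```

A value of 0.74, as reported, means the predicted trajectory tracks the true one closely but not perfectly; the metric is invariant to offset and scale, so it measures shape agreement rather than absolute position error.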
arXiv Detail & Related papers (2022-10-23T16:26:01Z)
- An Adaptive Contrastive Learning Model for Spike Sorting [12.043679000694258]
In neuroscience research, it is important to separate out the activity of individual neurons.
With the development of large-scale silicon technology, artificially interpreting and labeling spikes is becoming increasingly impractical.
We propose a novel modeling framework that learns representations from spikes through contrastive learning.
arXiv Detail & Related papers (2022-05-24T09:18:46Z)
- Learning-Based UE Classification in Millimeter-Wave Cellular Systems With Mobility [67.81523988596841]
Millimeter-wave cellular communication requires beamforming procedures that enable alignment of the transmitter and receiver beams as the user equipment (UE) moves.
For efficient beam tracking it is advantageous to classify users according to their traffic and mobility patterns.
Research to date has demonstrated efficient approaches to machine-learning-based UE classification.
arXiv Detail & Related papers (2021-09-13T12:00:45Z)
- DRL: Deep Reinforcement Learning for Intelligent Robot Control -- Concept, Literature, and Future [0.0]
The combination of machine learning, computer vision, and robotic systems motivates this work to propose a vision-based learning framework for intelligent robot control as the ultimate goal (a vision-based learning robot).
This work specifically introduces deep reinforcement learning as the learning framework, a general-purpose framework for AI (AGI), meaning it is application-independent and platform-independent.
arXiv Detail & Related papers (2021-04-20T15:26:10Z)
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
- Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data [16.457778420360537]
We propose an unsupervised framework to reduce the reliance on human supervision.
The proposed framework utilizes two stacked convolutional autoencoders to learn latent representations from wearable electrocardiogram (ECG) and electrodermal activity (EDA) signals.
Our method outperforms current state-of-the-art results that have performed arousal detection on the same datasets.
arXiv Detail & Related papers (2020-08-24T22:01:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.