Sequential Best-Arm Identification with Application to Brain-Computer
Interface
- URL: http://arxiv.org/abs/2305.11908v1
- Date: Wed, 17 May 2023 18:49:44 GMT
- Title: Sequential Best-Arm Identification with Application to Brain-Computer
Interface
- Authors: Xin Zhou, Botao Hao, Jian Kang, Tor Lattimore, Lexin Li
- Abstract summary: A brain-computer interface (BCI) is a technology that enables direct communication between the brain and an external device or computer system.
An electroencephalogram (EEG) and event-related potential (ERP)-based speller system is a type of BCI that allows users to spell words without using a physical keyboard.
We propose a sequential top-two Thompson sampling (STTS) algorithm under the fixed-confidence setting and the fixed-budget setting.
- Score: 34.87975833920409
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A brain-computer interface (BCI) is a technology that enables direct
communication between the brain and an external device or computer system. It
allows individuals to interact with the device using only their thoughts, and
holds immense potential for a wide range of applications in medicine,
rehabilitation, and human augmentation. An electroencephalogram (EEG) and
event-related potential (ERP)-based speller system is a type of BCI that allows
users to spell words without using a physical keyboard, but instead by
recording and interpreting brain signals under different stimulus presentation
paradigms. Conventional non-adaptive paradigms treat each word selection
independently, leading to a lengthy learning process. To improve the sampling
efficiency, we cast the problem as a sequence of best-arm identification tasks
in multi-armed bandits. Leveraging pre-trained large language models (LLMs), we
utilize the prior knowledge learned from previous tasks to inform and
facilitate subsequent tasks. To do so in a coherent way, we propose a
sequential top-two Thompson sampling (STTS) algorithm under the
fixed-confidence setting and the fixed-budget setting. We study the theoretical
properties of the proposed algorithm and demonstrate its substantial empirical
improvement through both synthetic data analysis and a P300 BCI speller
simulator example.
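The STTS algorithm builds on the top-two Thompson sampling idea: in each round, sample arm means from the posterior, identify a leader, and with some probability play a resampled challenger instead, so that the leading candidates are compared directly. A minimal generic sketch for Bernoulli arms is below; it is not the authors' STTS (which additionally carries LLM-informed priors across a sequence of spelling tasks), and the Beta(1, 1) priors, horizon, and `beta` tuning parameter are illustrative assumptions.

```python
import random

def top_two_thompson_sampling(true_means, horizon=2000, beta=0.5, seed=0):
    """Generic top-two Thompson sampling for Bernoulli best-arm identification.

    Maintains a Beta(1, 1) posterior per arm. Each round samples posterior
    means, picks a leader, and with probability 1 - beta plays a resampled
    challenger instead. Returns the recommended arm at the horizon.
    """
    rng = random.Random(seed)
    k = len(true_means)
    successes = [0] * k
    failures = [0] * k

    def posterior_sample():
        return [rng.betavariate(1 + successes[i], 1 + failures[i]) for i in range(k)]

    for _ in range(horizon):
        theta = posterior_sample()
        leader = max(range(k), key=lambda i: theta[i])
        arm = leader
        if rng.random() > beta:
            # Resample until a different arm (the challenger) comes out on top;
            # cap the resampling to keep the loop finite.
            for _ in range(100):
                theta = posterior_sample()
                challenger = max(range(k), key=lambda i: theta[i])
                if challenger != leader:
                    arm = challenger
                    break
        # Pull the chosen arm and update its posterior counts.
        if rng.random() < true_means[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1

    # Recommend the arm with the highest posterior mean.
    return max(range(k),
               key=lambda i: (1 + successes[i]) / (2 + successes[i] + failures[i]))

best = top_two_thompson_sampling([0.3, 0.5, 0.7], horizon=3000)
print(best)
```

In a speller, each "arm" would correspond to a candidate character, and an informative prior (e.g. from an LLM's next-character probabilities) would replace the uniform Beta(1, 1) priors here.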
Related papers
- BrainECHO: Semantic Brain Signal Decoding through Vector-Quantized Spectrogram Reconstruction for Whisper-Enhanced Text Generation [29.78480739360263]
We propose a new multi-stage strategy for semantic brain signal decoding via vEctor-quantized speCtrogram reconstruction.
BrainECHO successively conducts: 1) autoencoding of the audio spectrogram; 2) brain-audio latent space alignment; and 3) semantic text generation via Whisper fine-tuning.
BrainECHO outperforms state-of-the-art methods under the same data split settings on two widely accepted resources.
arXiv Detail & Related papers (2024-10-19T04:29:03Z) - MindSpeech: Continuous Imagined Speech Decoding using High-Density fNIRS and Prompt Tuning for Advanced Human-AI Interaction [0.0]
This paper reports a novel method for human-AI interaction by developing a direct brain-AI interface.
We discuss a novel AI model, called MindSpeech, which enables open-vocabulary, continuous decoding for imagined speech.
We demonstrate significant improvements in key metrics, such as BLEU-1 and BERT P scores, for three out of four participants.
arXiv Detail & Related papers (2024-07-25T16:39:21Z) - Language Generation from Brain Recordings [68.97414452707103]
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder.
The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli.
Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z) - Toward a realistic model of speech processing in the brain with
self-supervised learning [67.7130239674153]
Self-supervised algorithms trained on the raw waveform constitute a promising candidate.
We show that Wav2Vec 2.0 learns brain-like representations with as little as 600 hours of unlabelled speech.
arXiv Detail & Related papers (2022-06-03T17:01:46Z) - An Adaptive Contrastive Learning Model for Spike Sorting [12.043679000694258]
In neuroscience research, it is important to separate out the activity of individual neurons.
With the development of large-scale silicon technology, artificially interpreting and labeling spikes is becoming increasingly impractical.
We propose a novel modeling framework that learns representations from spikes through contrastive learning.
arXiv Detail & Related papers (2022-05-24T09:18:46Z) - 2021 BEETL Competition: Advancing Transfer Learning for Subject
Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z) - Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot
Sentiment Classification [78.120927891455]
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks.
In this paper, we extend the problem to open-vocabulary Electroencephalography (EEG)-To-Text Sequence-To-Sequence decoding and zero-shot sentence sentiment classification on natural reading tasks.
Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines.
arXiv Detail & Related papers (2021-12-05T21:57:22Z) - Speech Command Recognition in Computationally Constrained Environments
with a Quadratic Self-organized Operational Layer [92.37382674655942]
We propose a network layer to enhance the speech command recognition capability of a lightweight network.
The employed method borrows the ideas of Taylor expansion and quadratic forms to construct a better representation of features in both input and hidden layers.
This richer representation results in recognition accuracy improvement as shown by extensive experiments on Google speech commands (GSC) and synthetic speech commands (SSC) datasets.
arXiv Detail & Related papers (2020-11-23T14:40:18Z) - Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of
Progress Made Since 2016 [35.68916211292525]
A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals.
EEG is sensitive to noise and artifacts, and suffers from between-subject and within-subject non-stationarity.
It is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects.
arXiv Detail & Related papers (2020-04-13T16:44:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.