Online Optimization of Stimulation Speed in an Auditory Brain-Computer
Interface under Time Constraints
- URL: http://arxiv.org/abs/2109.06011v1
- Date: Thu, 26 Aug 2021 08:18:03 GMT
- Title: Online Optimization of Stimulation Speed in an Auditory Brain-Computer
Interface under Time Constraints
- Authors: Jan Sosulski, David Hübner, Aaron Klein, Michael Tangermann
- Abstract summary: We propose an approach to exploit the benefits of individualized experimental protocols and evaluated it in an auditory BCI.
- Score: 5.695163312473305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The decoding of brain signals recorded via, e.g., an electroencephalogram,
using machine learning is key to brain-computer interfaces (BCIs). Stimulation
parameters or other experimental settings of the BCI protocol typically are
chosen according to the literature. The decoding performance directly depends
on the choice of parameters, as they influence the elicited brain signals, and
the optimal parameters are subject-dependent. Thus, a fast and automated selection
procedure for experimental parameters could greatly improve the usability of
BCIs.
We evaluate a standalone random search and a combined Bayesian optimization
with random search in a closed-loop auditory event-related potential protocol.
We aimed at finding the individually best stimulation speed -- also known as
stimulus onset asynchrony (SOA) -- that maximizes the classification
performance of a regularized linear discriminant analysis. To make the Bayesian
optimization feasible under noise and the time pressure posed by an online BCI
experiment, we first used offline simulations to initialize and constrain the
internal optimization model. Then we evaluated our approach online with 13
healthy subjects.
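The decoder named above, a regularized linear discriminant analysis, can be sketched as follows. This is a minimal illustration on synthetic ERP-like features, not the study's actual pipeline; the dimensions, effect size, and use of scikit-learn's shrinkage LDA are all illustrative assumptions.

```python
# Sketch of a regularized (shrinkage) LDA decoder on ERP-like feature vectors.
# The synthetic data below is illustrative, not the study's actual features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Simulate 200 epochs x 60 features (e.g., channels x time bins, flattened);
# target epochs carry a small additive ERP-like offset.
n_epochs, n_features = 200, 60
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 2, size=n_epochs)
X[y == 1] += 0.4

# Ledoit-Wolf shrinkage regularizes the covariance estimate, which matters
# when the number of epochs is small relative to the feature count.
rlda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
auc = cross_val_score(rlda, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```

Shrinkage toward a scaled identity covariance is a common choice for ERP classification, where epochs are expensive to collect and plain LDA overfits the covariance.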
We showed that for 8 out of 13 subjects, the proposed approach using
Bayesian optimization succeeded in selecting the individually optimal SOA out of
multiple evaluated SOA values. Our data suggests, however, that subjects were
influenced to very different degrees by the SOA parameter. This makes the
automatic parameter selection infeasible for subjects where the influence is
limited.
Our work proposes an approach to exploit the benefits of individualized
experimental protocols and evaluated it in an auditory BCI. When applied to
other experimental parameters, our approach could enhance the usability of BCIs
for different target groups -- specifically when an individual's disease
progression prevents the use of standard parameters.
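The closed-loop idea described in the abstract can be sketched as follows: among a small set of candidate SOA values, pick the one maximizing a noisy performance estimate, using a random-search warm start followed by Bayesian optimization with expected improvement. All names, SOA values, and the simulated performance curve are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: random-search warm start + Bayesian optimization (GP with
# expected improvement) over a discrete grid of stimulus onset asynchronies.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Candidate SOA values in milliseconds (illustrative grid).
soa_grid = np.array([100.0, 150.0, 200.0, 250.0, 300.0])

def run_block(soa):
    """Stand-in for one online BCI block: returns a noisy AUC estimate.
    The simulated subject-dependent optimum is placed at 200 ms."""
    true_auc = 0.85 - 2e-5 * (soa - 200.0) ** 2
    return true_auc + rng.normal(scale=0.005)

def expected_improvement(mu, sigma, best):
    # Standard EI acquisition for maximization.
    z = (mu - best) / np.maximum(sigma, 1e-12)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Random-search warm start: evaluate a few randomly chosen SOAs first.
tried = list(rng.choice(len(soa_grid), size=2, replace=False))
scores = [run_block(soa_grid[i]) for i in tried]

# Bayesian optimization loop: fit a GP on (SOA, score) pairs, then evaluate
# the candidate with the highest expected improvement.
for _ in range(3):
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=100.0) + WhiteKernel(noise_level=1e-4),
        normalize_y=True,
    ).fit(soa_grid[tried].reshape(-1, 1), scores)
    mu, sigma = gp.predict(soa_grid.reshape(-1, 1), return_std=True)
    nxt = int(np.argmax(expected_improvement(mu, sigma, max(scores))))
    tried.append(nxt)
    scores.append(run_block(soa_grid[nxt]))

best_soa = soa_grid[tried[int(np.argmax(scores))]]
print(f"selected SOA: {best_soa:.0f} ms")
```

In the actual study, offline simulations were additionally used to initialize and constrain the optimization model so it remains feasible under online time pressure; that step is omitted here.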
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z) - Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - Online Sensitivity Optimization in Differentially Private Learning [8.12606646175019]
We present a novel approach to dynamically optimize the clipping threshold.
We treat this threshold as an additional learnable parameter, establishing a clean relationship between the threshold and the cost function.
Our method is thoroughly assessed against alternative fixed and adaptive strategies across diverse datasets, tasks, model dimensions, and privacy levels.
arXiv Detail & Related papers (2023-10-02T00:30:49Z) - Self-Correcting Bayesian Optimization through Bayesian Active Learning [46.235017111395344]
We present two acquisition functions that explicitly prioritize hyperparameter learning.
We then introduce Self-Correcting Bayesian Optimization (SCoreBO), which extends SAL to perform Bayesian optimization and active learning simultaneously.
arXiv Detail & Related papers (2023-04-21T14:50:53Z) - OptBA: Optimizing Hyperparameters with the Bees Algorithm for Improved Medical Text Classification [0.0]
We propose OptBA to fine-tune the hyperparameters of deep learning models by leveraging the Bees Algorithm.
Experimental results demonstrate a noteworthy enhancement in accuracy of approximately 1.4%.
arXiv Detail & Related papers (2023-03-14T16:04:13Z) - The Role of Adaptive Optimizers for Honest Private Hyperparameter
Selection [12.38071940409141]
We show that standard composition tools outperform more advanced techniques in many settings.
We draw upon limiting behaviour of Adam in the DP setting to design a new and more efficient tool.
arXiv Detail & Related papers (2021-11-09T01:56:56Z) - Resource Planning for Hospitals Under Special Consideration of the
COVID-19 Pandemic: Optimization and Sensitivity Analysis [87.31348761201716]
Crises like the COVID-19 pandemic pose a serious challenge to health-care institutions.
BaBSim.Hospital is a tool for capacity planning based on discrete event simulation.
We aim to investigate and optimize these parameters to improve BaBSim.Hospital.
arXiv Detail & Related papers (2021-05-16T12:38:35Z) - An Asymptotically Optimal Multi-Armed Bandit Algorithm and
Hyperparameter Optimization [48.5614138038673]
We propose an efficient and robust bandit-based algorithm called Sub-Sampling (SS) for hyperparameter search evaluation.
We also develop a novel hyperparameter optimization algorithm called BOSS.
Empirical studies validate our theoretical arguments of SS and demonstrate the superior performance of BOSS on a number of applications.
arXiv Detail & Related papers (2020-07-11T03:15:21Z) - Optimization of Genomic Classifiers for Clinical Deployment: Evaluation
of Bayesian Optimization to Select Predictive Models of Acute Infection and
In-Hospital Mortality [0.0]
Characterization of a patient's immune response by quantifying expression levels of specific genes from blood represents a potentially more timely and precise means of accomplishing both tasks.
Machine learning methods provide a platform to leverage this 'host response' for development of deployment-ready classification models.
We compare HO approaches for the development of diagnostic classifiers of acute infection and in-hospital mortality from gene expression of 29 diagnostic markers.
arXiv Detail & Related papers (2020-03-27T10:22:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.