Enhancing the Security & Privacy of Wearable Brain-Computer Interfaces
- URL: http://arxiv.org/abs/2201.07711v1
- Date: Wed, 19 Jan 2022 16:51:18 GMT
- Title: Enhancing the Security & Privacy of Wearable Brain-Computer Interfaces
- Authors: Zahra Tarkhani, Lorena Qendro, Malachy O'Connor Brown, Oscar Hill,
Cecilia Mascolo, Anil Madhavapeddy
- Abstract summary: Brain computing interfaces (BCI) are used in a plethora of safety/privacy-critical applications.
They are susceptible to a multiplicity of attacks across the hardware, software, and networking stacks used.
We introduce Argus, the first information flow control system for wearable BCI applications that mitigates these attacks.
- Score: 7.972396153788745
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Brain computing interfaces (BCI) are used in a plethora of
safety/privacy-critical applications, ranging from healthcare to smart
communication and control. Wearable BCI setups typically involve a head-mounted
sensor connected to a mobile device, combined with ML-based data processing.
Consequently, they are susceptible to a multiplicity of attacks across the
hardware, software, and networking stacks used that can leak users' brainwave
data or at worst relinquish control of BCI-assisted devices to remote
attackers. In this paper, we: (i) analyse the whole-system security and privacy
threats to existing wearable BCI products from an operating system and
adversarial machine learning perspective; and (ii) introduce Argus, the first
information flow control system for wearable BCI applications that mitigates
these attacks. Argus' domain-specific design leads to a lightweight
implementation on Linux ARM platforms suitable for existing BCI use-cases. Our
proof of concept attacks on real-world BCI devices (Muse, NeuroSky, and
OpenBCI) led us to discover more than 300 vulnerabilities across the stacks of
six major attack vectors. Our evaluation shows Argus is highly effective in
tracking sensitive dataflows and restricting these attacks with an acceptable
memory and performance overhead (<15%).
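The abstract's core mechanism, information flow control, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of label-based flow tracking in Python; it is not Argus's actual API, and every name in it (`taint`, `combine`, `release`, the label strings) is invented for illustration.

```python
# Minimal information-flow-control sketch (hypothetical; not Argus's API).
# Sensitive data carries labels; sinks enforce a clearance check before release.
from dataclasses import dataclass

@dataclass(frozen=True)
class Labeled:
    value: object
    labels: frozenset = frozenset()

def taint(value, *labels):
    return Labeled(value, frozenset(labels))

def combine(a, b):
    # Labels propagate: the result is as sensitive as its most sensitive input.
    return Labeled((a.value, b.value), a.labels | b.labels)

def release(x, sink_clearance):
    # A sink (e.g., a Bluetooth link) only receives data whose labels
    # are all covered by the sink's clearance.
    if not x.labels <= frozenset(sink_clearance):
        blocked = ", ".join(sorted(x.labels - frozenset(sink_clearance)))
        raise PermissionError("flow blocked: " + blocked)
    return x.value

eeg = taint([0.12, 0.34, -0.08], "brainwave")   # sensitive sensor reading
device_id = taint("headset-01")                  # unlabeled metadata
packet = combine(eeg, device_id)

print(release(packet, {"brainwave"}))   # cleared sink: data flows
try:
    release(packet, set())              # uncleared sink: flow is blocked
except PermissionError as e:
    print("blocked:", e)
```

In this model a dataflow to an untrusted sink fails closed, which is the property the abstract attributes to Argus's tracking and restriction of sensitive flows.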
Related papers
- A Systematization of Security Vulnerabilities in Computer Use Agents [1.3560089220432787]
We conduct a systematic threat analysis and testing of real-world CUAs under adversarial conditions.
We identify seven classes of risks unique to the CUA paradigm, and analyze three concrete exploit scenarios in depth.
These case studies reveal deeper architectural flaws across current CUA implementations.
arXiv Detail & Related papers (2025-07-07T19:50:21Z)
- VPI-Bench: Visual Prompt Injection Attacks for Computer-Use Agents [74.6761188527948]
Computer-Use Agents (CUAs) with full system access pose significant security and privacy risks.
We investigate Visual Prompt Injection (VPI) attacks, where malicious instructions are visually embedded within rendered user interfaces.
Our empirical study shows that current CUAs and BUAs can be deceived at rates of up to 51% and 100%, respectively, on certain platforms.
arXiv Detail & Related papers (2025-06-03T05:21:50Z)
- CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations.
It detects and prevents classical attacks in the CAN bus, while detecting advanced attacks that have been less investigated in the literature.
We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z)
- The Cost of Performance: Breaking ThreadX with Kernel Object Masquerading Attacks [16.54210795506388]
We show that popular real-time operating systems (RTOSs) lack essential security protections.
We identify a performance optimization practice in ThreadX that introduces security vulnerabilities, allowing for the circumvention of parameter sanitization processes.
We introduce an automated approach involving under-constrained symbolic execution to identify the Kernel Object Masquerading (KOM) Attack.
arXiv Detail & Related papers (2025-04-28T05:01:35Z)
- Adversarial Filtering Based Evasion and Backdoor Attacks to EEG-Based Brain-Computer Interfaces [16.426546510800335]
A brain-computer interface (BCI) enables direct communication between the brain and an external device.
Recent studies have shown that machine learning models in BCIs are vulnerable to adversarial attacks.
This paper proposes adversarial filtering based evasion and backdoor attacks to EEG-based BCIs.
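The attack idea, perturbing an EEG trace by passing it through a filter rather than by adding noise, can be illustrated with a toy FIR filter. The coefficients and signal below are arbitrary placeholders, not values from the paper:

```python
# Toy "adversarial filtering" sketch: a near-identity FIR filter perturbs a
# test-time EEG trace subtly (coefficients here are invented, not learned).

def fir_filter(signal, coeffs):
    # Plain convolution: each output sample is a weighted sum of recent inputs.
    out = []
    for t in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if t - k >= 0:
                acc += c * signal[t - k]
        out.append(acc)
    return out

trace = [1.0, 0.0, -1.0, 0.0, 1.0]     # stand-in for one EEG channel
adv = fir_filter(trace, coeffs=[0.9, 0.2])   # near-identity filter
# The filtered trace stays close to the original, making the perturbation
# hard to spot, while still shifting what a classifier sees.
max_dev = max(abs(a - b) for a, b in zip(trace, adv))
print(max_dev)
```

A real attack would optimize the coefficients against the victim model; this sketch only shows the filtering mechanism itself.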
arXiv Detail & Related papers (2024-12-10T06:42:46Z)
- MeMoir: A Software-Driven Covert Channel based on Memory Usage [7.424928818440549]
MeMoir is a novel software-driven covert channel that, for the first time, utilizes memory usage as the medium for the channel.
We implement a machine learning-based detector that can predict whether an attack is present in the system with an accuracy of more than 95%.
We introduce a noise-based countermeasure that effectively mitigates the attack while inducing a low power overhead in the system.
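As an idealized, single-process illustration of the idea, a sender can modulate memory usage to encode bits that a receiver recovers by observing consumption (here via Python's `tracemalloc`); MeMoir itself operates across processes on real systems, and all names and thresholds below are invented:

```python
# Toy single-process simulation of a memory-usage covert channel.
import tracemalloc

def send_bit(bit, ballast):
    # Sender modulates memory usage: '1' allocates a large buffer, '0' almost nothing.
    ballast.append(bytearray(1_000_000 if bit else 1))

def observe_bit(threshold=500_000):
    # Receiver infers the bit from currently traced memory usage.
    current, _peak = tracemalloc.get_traced_memory()
    return 1 if current > threshold else 0

tracemalloc.start()
message = [1, 0, 1, 1, 0]
received = []
for bit in message:
    ballast = []
    send_bit(bit, ballast)
    received.append(observe_bit())
    ballast.clear()   # release the buffer before the next symbol
print(received)       # recovers the sent bits in this idealized setting
```

A countermeasure like the paper's noise injection would blur the gap between the '0' and '1' memory levels, which is exactly what this threshold-based receiver depends on.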
arXiv Detail & Related papers (2024-09-20T08:10:36Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Mitigating and Analysis of Memory Usage Attack in IoE System [1.515687944002438]
Internet of Everything (IoE) is a newly emerging trend, especially in homes.
Memory corruption vulnerabilities constitute a significant class of vulnerabilities in software security.
This paper aims to analyze and explain the resource usage attack and create a low-cost simulation environment.
arXiv Detail & Related papers (2024-04-30T11:48:13Z)
- Information Leakage through Physical Layer Supply Voltage Coupling Vulnerability [2.6490401904186758]
We introduce a novel side-channel vulnerability that leaks data-dependent power variations through physical layer supply voltage coupling (PSVC).
Unlike traditional power side-channel attacks, the proposed vulnerability allows an adversary to mount an attack and extract information without modifying the device.
arXiv Detail & Related papers (2024-03-12T23:39:54Z)
- SISSA: Real-time Monitoring of Hardware Functional Safety and Cybersecurity with In-vehicle SOME/IP Ethernet Traffic [49.549771439609046]
We propose SISSA, a SOME/IP communication traffic-based approach for modeling and analyzing in-vehicle functional safety and cyber security.
Specifically, SISSA models hardware failures with the Weibull distribution and addresses five potential attacks on SOME/IP communication.
Extensive experimental results show the effectiveness and efficiency of SISSA.
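The Weibull failure model mentioned above can be sketched with the standard library's `random.weibullvariate`; the shape and scale values below are invented for illustration, not SISSA's fitted parameters:

```python
# Sketch of modeling hardware failure times with a Weibull distribution,
# as SISSA's abstract describes (shape/scale values here are invented).
import random

random.seed(0)
shape, scale = 1.5, 1000.0   # shape > 1: failure rate increases with age
failure_times = [random.weibullvariate(scale, shape) for _ in range(1000)]
mean_ttf = sum(failure_times) / len(failure_times)
# The sample mean should sit near scale * gamma(1 + 1/shape), roughly 903 here.
print(round(mean_ttf))
```

Sampling failure times like this lets a monitor distinguish expected hardware wear-out from anomalies caused by attacks on the traffic itself.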
arXiv Detail & Related papers (2024-02-21T03:31:40Z)
- Is Your Kettle Smarter Than a Hacker? A Scalable Tool for Assessing Replay Attack Vulnerabilities on Consumer IoT Devices [1.5612101323427952]
ENISA and NIST security guidelines emphasize the importance of enabling default local communication for safety and reliability.
We propose a tool, named REPLIOT, able to test whether a replay attack is successful or not, without prior knowledge of the target devices.
We find that 75% of the remaining devices are vulnerable to replay attacks with REPLIOT having a detection accuracy of 0.98-1.
arXiv Detail & Related papers (2024-01-22T18:24:41Z)
- HasTEE+: Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things and Cyber-Physical Systems based on Machine Learning [5.265938973293016]
Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are increasingly being deployed across multiple functionalities.
These devices are inherently not secure across their comprehensive software, hardware, and network stacks.
We present an innovative technique for detecting unknown system vulnerabilities, managing these vulnerabilities, and improving incident response.
arXiv Detail & Related papers (2021-01-07T22:01:30Z)
- EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks [68.01125081367428]
Recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks.
This article proposes to use narrow period pulse for poisoning attack of EEG-based BCIs, which is implementable in practice and has never been considered before.
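A narrow periodic pulse trigger of the kind described can be sketched as follows; the period, amplitude, and width are invented parameters for illustration, not the paper's:

```python
# Sketch of a narrow-period-pulse backdoor trigger on an EEG trace
# (illustrative of the attack idea; all parameters are made up).

def add_pulse_trigger(signal, period=50, amplitude=5.0, width=1):
    """Superimpose a narrow pulse every `period` samples."""
    out = list(signal)
    for start in range(0, len(out), period):
        for i in range(start, min(start + width, len(out))):
            out[i] += amplitude
    return out

clean = [0.0] * 200            # stand-in for a 200-sample EEG channel
poisoned = add_pulse_trigger(clean)
# The trigger perturbs only a few samples per period, so poisoned training
# data looks almost identical to the clean recording.
changed = sum(1 for a, b in zip(clean, poisoned) if a != b)
print(changed)  # → 4
```

Because the pulse touches so few samples, it is implementable in hardware at recording time, which is what makes this poisoning attack practical.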
arXiv Detail & Related papers (2020-10-30T20:49:42Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
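The Full DOS manipulation can be illustrated on a toy DOS header: the bytes between the `MZ` magic and the `e_lfanew` field at offset 0x3C are ignored by the modern Windows loader, leaving slack an attacker can overwrite (a simplified sketch of the idea, not the authors' tool):

```python
# Toy illustration of the "Full DOS" idea: in a PE file, the DOS-header bytes
# between the 'MZ' magic and the e_lfanew field (offset 0x3C) are ignored by
# the modern Windows loader, so they can carry an adversarial payload without
# breaking the executable.
header = bytearray(64)
header[0:2] = b"MZ"                               # magic: must stay intact
header[0x3C:0x40] = (0x40).to_bytes(4, "little")  # e_lfanew: must stay intact
payload = b"ADVERSARIAL-BYTES"
header[2:2 + len(payload)] = payload              # overwrite the slack region
print(bytes(header[0:2]), bytes(header[2:19]))
```

The Extend and Shift variants instead grow the header or push the first section's content, creating more room for the payload while preserving the fields the loader actually reads.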
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.