Neural cyberattacks applied to the vision under realistic visual stimuli
- URL: http://arxiv.org/abs/2503.08284v1
- Date: Tue, 11 Mar 2025 10:58:58 GMT
- Title: Neural cyberattacks applied to the vision under realistic visual stimuli
- Authors: Victoria Magdalena López Madejska, Sergio López Bernal, Gregorio Martínez Pérez, Alberto Huertas Celdrán
- Abstract summary: Brain-Computer Interfaces (BCIs) are systems traditionally used in medicine and designed to interact with the brain to record or stimulate neurons. Previous work validated neural cyberattacks able to disrupt spontaneous neural activity by performing neural overstimulation or inhibition. This work analyzed the impact of two existing neural attacks, Neuronal Flooding (FLO) and Neuronal Jamming (JAM), on a complex neuronal topology of mice.
- Score: 3.0748861313823
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Brain-Computer Interfaces (BCIs) are systems traditionally used in medicine and designed to interact with the brain to record or stimulate neurons. Despite their benefits, the literature has demonstrated that invasive BCIs focused on neurostimulation present vulnerabilities allowing attackers to gain control. In this context, neural cyberattacks emerged as threats able to disrupt spontaneous neural activity by performing neural overstimulation or inhibition. Previous work validated these attacks in small-scale simulations with a reduced number of neurons, lacking real-world complexity. Thus, this work tackles this limitation by analyzing the impact of two existing neural attacks, Neuronal Flooding (FLO) and Neuronal Jamming (JAM), on a complex neuronal topology of the primary visual cortex of mice consisting of approximately 230,000 neurons, tested on three realistic visual stimuli: flash effect, movie, and drifting gratings. Each attack was evaluated over three relevant events per stimulus, also testing the impact of attacking 25% and 50% of the neurons. The results, based on the number-of-spikes and shift-percentage metrics, showed that the attacks caused the greatest impact on the movie, while dark and fixed events are the most robust. Although both attacks can significantly affect neural activity, JAM was generally more damaging, producing longer temporal delays, and showed a larger prevalence. Finally, JAM did not need to alter many neurons to significantly affect neural activity, while the impact of FLO increased with the number of neurons attacked.
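The abstract describes the two attacks at a high level: FLO overstimulates a subset of neurons at a single instant, while JAM inhibits a subset over a time window. The following is a minimal sketch of both effects on a toy leaky integrate-and-fire population; all parameters (population size, leak constant, attack timing, drive current) are hypothetical and not taken from the paper, whose simulations instead use a biologically detailed topology of roughly 230,000 neurons.

```python
import random

def simulate(n=200, steps=500, attack=None, frac=0.25, seed=0):
    """Count total spikes in a toy LIF population, optionally under attack."""
    rng = random.Random(seed)
    tau, v_th, i_mean = 20.0, 1.0, 0.06   # leak constant, threshold, mean drive
    v = [0.0] * n                          # membrane potentials, start at rest
    attacked = set(rng.sample(range(n), int(frac * n)))
    spikes = 0
    for t in range(steps):
        for j in range(n):
            # JAM: clamp attacked neurons to rest over a window (inhibition).
            if attack == "JAM" and j in attacked and 200 <= t < 300:
                v[j] = 0.0
                continue
            v[j] += -v[j] / tau + i_mean + rng.gauss(0.0, 0.05)
            # FLO: overstimulate attacked neurons at a single instant.
            if attack == "FLO" and j in attacked and t == 200:
                v[j] = v_th
            if v[j] >= v_th:
                spikes += 1
                v[j] = 0.0                 # reset after spiking
    return spikes

print(simulate(), simulate(attack="FLO"), simulate(attack="JAM"))
```

Even in this toy setting, jamming removes a block of spikes from the attacked fraction, mirroring the abstract's observation that JAM affects activity without needing many attacked neurons.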
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Interpreting the Second-Order Effects of Neurons in CLIP [73.54377859089801]
We interpret the function of individual neurons in CLIP by automatically describing them using text. We present the "second-order lens", analyzing the effect flowing from a neuron through the later attention heads, directly to the output. Our results indicate that an automated interpretation of neurons can be used for model deception and for introducing new model capabilities.
arXiv Detail & Related papers (2024-06-06T17:59:52Z) - Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z) - Neuroscience inspired scientific machine learning (Part-1): Variable spiking neuron for regression [2.1756081703276]
We introduce a novel spiking neuron, termed the Variable Spiking Neuron (VSN).
It reduces redundant firing using lessons from the biologically inspired Leaky Integrate-and-Fire Spiking Neuron (LIF-SN).
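The LIF-SN model referenced here integrates input toward a threshold, spikes, and resets, so a constant drive produces a steady (and often redundant) spike train. The sketch below illustrates that baseline behavior with hypothetical constants; it is not the VSN formulation from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron with assumed constants:
# a constant drive charges the membrane against a leak; crossing the
# threshold emits a spike and resets the potential.
def lif_spike_count(current, steps=1000, tau=20.0, v_th=1.0):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += -v / tau + current   # leak toward rest plus constant drive
        if v >= v_th:
            spikes += 1
            v = 0.0               # reset after each spike
    return spikes

# A stronger constant input yields a proportionally denser spike train.
print(lif_spike_count(0.06), lif_spike_count(0.12))
```

This steady firing under constant input is exactly the redundancy that variable-spiking designs aim to suppress.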
arXiv Detail & Related papers (2023-11-15T08:59:06Z) - Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks [28.081328051535618]
Adversarial attacks on a convolutional neural network (CNN) could fool a high-performance CNN into making incorrect predictions.
Our work introduces a visual analytics approach to understanding adversarial attacks.
A visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks.
arXiv Detail & Related papers (2023-03-06T01:01:56Z) - Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible to that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks [5.7494562086770955]
Spiking Neural Networks (SNNs) are quickly gaining traction as a viable alternative to Deep Neural Networks (DNNs).
SNNs contain security-sensitive assets (e.g., neuron threshold voltage) and vulnerabilities that adversaries can exploit.
We investigate global fault injection attacks by employing external power supplies and laser-induced local power glitches.
We find that in the worst-case scenario, classification accuracy is reduced by 85.65%.
arXiv Detail & Related papers (2022-04-10T20:48:46Z) - Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons [0.6899744489931016]
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer.
We correlate these neurons with the distribution of adversarial attacks on the network.
arXiv Detail & Related papers (2022-01-31T14:34:07Z) - Fight Perturbations with Perturbations: Defending Adversarial Attacks via Neuron Influence [14.817015950058915]
We propose Neuron-level Inverse Perturbation (NIP), a novel defense against general adversarial attacks.
It calculates neuron influence from benign examples and then modifies input examples by generating inverse perturbations.
arXiv Detail & Related papers (2021-12-24T13:37:42Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
Presence of sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.