Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions
- URL: http://arxiv.org/abs/2505.17094v1
- Date: Wed, 21 May 2025 03:21:51 GMT
- Title: Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions
- Authors: Hemanth Ravipati
- Abstract summary: This paper proposes Neuromorphic Mimicry Attacks (NMAs). NMAs exploit the probabilistic and non-deterministic nature of neuromorphic chips to execute covert intrusions. By mimicking legitimate neural activity through techniques such as synaptic weight tampering and sensory input poisoning, NMAs evade traditional intrusion detection systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic computing, inspired by the human brain's neural architecture, is revolutionizing artificial intelligence and edge computing with its low-power, adaptive, and event-driven designs. However, these unique characteristics introduce novel cybersecurity risks. This paper proposes Neuromorphic Mimicry Attacks (NMAs), a groundbreaking class of threats that exploit the probabilistic and non-deterministic nature of neuromorphic chips to execute covert intrusions. By mimicking legitimate neural activity through techniques such as synaptic weight tampering and sensory input poisoning, NMAs evade traditional intrusion detection systems, posing risks to applications such as autonomous vehicles, smart medical implants, and IoT networks. This research develops a theoretical framework for NMAs, evaluates their impact using a simulated neuromorphic chip dataset, and proposes countermeasures, including neural-specific anomaly detection and secure synaptic learning protocols. The findings underscore the critical need for tailored cybersecurity measures to protect brain-inspired computing, offering a pioneering exploration of this emerging threat landscape.
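The abstract's core loop, tampering covertly with synaptic weights and then trying to catch the tampering with neural-specific anomaly detection, can be illustrated with a toy simulation. The sketch below is an assumption-laden illustration, not the paper's actual framework or dataset: it models the chip as a leaky integrate-and-fire (LIF) population, "synaptic weight tampering" as small Gaussian nudges to the weights, and the detector as a 3-sigma test on the mean population firing rate. Every name, constant, and threshold here is invented for the example.

```python
import random
import statistics

random.seed(0)

N = 64        # neurons in the toy population
T = 200       # simulation steps
THRESH = 1.0  # membrane firing threshold
LEAK = 0.9    # membrane leak factor per step

def simulate(weights, inp_rate=0.3):
    """Run a toy LIF population; return the population firing rate per step."""
    v = [0.0] * N
    rates = []
    for _ in range(T):
        fired = 0
        for i in range(N):
            # Bernoulli sensory input, scaled by this neuron's synaptic weight.
            v[i] = v[i] * LEAK + weights[i] * (1.0 if random.random() < inp_rate else 0.0)
            if v[i] >= THRESH:
                fired += 1
                v[i] = 0.0  # reset after a spike
        rates.append(fired / N)
    return rates

baseline_w = [random.gauss(0.5, 0.05) for _ in range(N)]

# "Synaptic weight tampering": nudge each weight by an amount small enough
# that the population's firing statistics stay inside their natural spread.
tampered_w = [w + random.gauss(0.0, 0.01) for w in baseline_w]

# A crude "neural-specific anomaly detector": flag any trace whose mean
# population rate deviates from the baseline mean by more than 3 sigma.
base = simulate(baseline_w)
mu, sigma = statistics.mean(base), statistics.stdev(base)

def is_anomalous(trace):
    return abs(statistics.mean(trace) - mu) > 3 * sigma

covert = simulate(tampered_w)                       # subtle, NMA-style tampering
blatant = simulate([w * 3.0 for w in baseline_w])   # crude weight manipulation
```

With these toy parameters, the subtle perturbation should stay inside the detector's 3-sigma envelope while the crude weight scaling falls outside it, mirroring the evasion asymmetry the abstract attributes to NMAs.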
Related papers
- Robust Spiking Neural Networks Against Adversarial Attacks [49.08210314590693]
Spiking Neural Networks (SNNs) represent a promising paradigm for energy-efficient neuromorphic computing. In this study, we theoretically demonstrate that threshold-neighboring spiking neurons are the key factors limiting the robustness of directly trained SNNs. We find that these neurons set the upper limits for the maximum potential strength of adversarial attacks and are prone to state-flipping under minor disturbances.
arXiv Detail & Related papers (2026-02-24T05:06:12Z)
- Emerging Threats and Countermeasures in Neuromorphic Systems: A Survey [21.739165659812073]
Neuromorphic computing mimics brain-inspired mechanisms through spiking neurons and energy-efficient processing. These advancements raise critical security and privacy concerns. This survey systematically analyzes the security landscape of neuromorphic systems.
arXiv Detail & Related papers (2026-01-23T09:43:26Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Constraint-based Adversarial Example Synthesis [1.2548803788632799]
This study focuses on enhancing Concolic Testing, a specialized technique for testing Python programs implementing neural networks.
The extended tool, PyCT, now accommodates a broader range of neural network operations, including floating-point and activation function computations.
arXiv Detail & Related papers (2024-06-03T11:35:26Z)
- A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation [16.696524554516294]
We develop a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator.
Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN.
Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods.
arXiv Detail & Related papers (2024-04-08T20:42:10Z)
- BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks [3.4673556247932225]
Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data.
Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties.
We develop novel inversion attack strategies that are comprehensively designed to target SNNs.
arXiv Detail & Related papers (2024-02-01T03:16:40Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Toward stochastic neural computing [11.955322183964201]
We propose a theory of neural computing in which streams of noisy inputs are transformed and processed through populations of spiking neurons.
We demonstrate the application of our method to Intel's Loihi neuromorphic hardware.
arXiv Detail & Related papers (2023-05-23T12:05:35Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons [0.6899744489931016]
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer.
We correlate these neurons with the distribution of adversarial attacks on the network.
arXiv Detail & Related papers (2022-01-31T14:34:07Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.