Eclipse Attack Detection on a Blockchain Network as a Non-Parametric Change Detection Problem
- URL: http://arxiv.org/abs/2404.00538v2
- Date: Thu, 30 May 2024 12:09:38 GMT
- Title: Eclipse Attack Detection on a Blockchain Network as a Non-Parametric Change Detection Problem
- Authors: Anurag Gupta, Vikram Krishnamurthy, Brian M. Sadler
- Abstract summary: This paper introduces a novel non-parametric change detection algorithm to identify eclipse attacks on a blockchain network.
Our detector can be implemented as a smart contract on the blockchain, offering a tamper-proof and reliable solution.
- Score: 21.556680840805768
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper introduces a novel non-parametric change detection algorithm to identify eclipse attacks on a blockchain network; the non-parametric algorithm relies only on the empirical mean and variance of the dataset, making it highly adaptable. An eclipse attack occurs when malicious actors isolate blockchain users, disrupting their ability to reach consensus with the broader network, thereby distorting their local copy of the ledger. To detect an eclipse attack, we monitor changes in the Fréchet mean and variance of the evolving blockchain communication network connecting blockchain users. First, we leverage the Johnson-Lindenstrauss lemma to project large-dimensional networks into a lower-dimensional space, preserving essential statistical properties. Subsequently, we employ a non-parametric change detection procedure, leading to a test statistic that converges weakly to a Brownian bridge process in the absence of an eclipse attack. This enables us to quantify the false alarm rate of the detector. Our detector can be implemented as a smart contract on the blockchain, offering a tamper-proof and reliable solution. Finally, we use numerical examples to compare the proposed eclipse attack detector with a detector based on the random forest model.
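The pipeline in the abstract (a Johnson-Lindenstrauss random projection of network snapshots, followed by a change statistic built only from the empirical mean and variance) can be sketched roughly as follows. This is an illustrative simulation, not the paper's construction: the Erdős-Rényi network model, the dimensions, and the CUSUM-style partial-sum statistic are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def jl_project(adjacency, proj):
    """Project a flattened adjacency matrix to k dimensions with a
    Gaussian random matrix (Johnson-Lindenstrauss); norms, and hence
    mean/variance statistics, are preserved up to small distortion."""
    return proj @ adjacency.ravel()

def bridge_statistic(samples):
    """CUSUM-style statistic using only the empirical mean and variance.
    Under no change, the normalized partial-sum process converges weakly
    to a Brownian bridge, which calibrates the false-alarm rate."""
    z = np.asarray(samples, dtype=float)
    n = len(z)
    mu, sigma = z.mean(), z.std(ddof=1)
    partial = np.cumsum(z - mu)
    return np.abs(partial).max() / (sigma * np.sqrt(n))

n_nodes, k = 30, 16
proj = rng.normal(size=(k, n_nodes * n_nodes)) / np.sqrt(k)

# Simulated stream of communication-network snapshots; the edge density
# jumps at t = 100, mimicking the topology distortion an eclipse attack
# would induce on an isolated user's view of the network.
stream = []
for t in range(200):
    p = 0.10 if t < 100 else 0.25
    a = (rng.random((n_nodes, n_nodes)) < p).astype(float)
    # Squared norm of the projection tracks the edge count (JL preserves norms).
    stream.append(np.sum(jl_project(a, proj) ** 2))

# The statistic on the attack-free prefix stays near the Brownian-bridge
# scale, while the full stream with the change produces a large excursion.
print(bridge_statistic(stream[:100]), bridge_statistic(stream))
```

In the attack-free regime the statistic can be compared against quantiles of the supremum of a Brownian bridge to set a target false-alarm rate; a change in the network's distribution drives the partial sums into a sustained drift and the statistic well above that threshold.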
Related papers
- Safeguarding Blockchain Ecosystem: Understanding and Detecting Attack Transactions on Cross-chain Bridges [3.07869141026886]
Attacks on cross-chain bridges have resulted in losses of nearly 4.3 billion dollars since 2021.
This paper collects the largest number of cross-chain bridge attack incidents to date, including 49 attacks that occurred between June 2021 and September 2024.
We propose the BridgeGuard tool to detect attacks against cross-chain business logic.
arXiv Detail & Related papers (2024-10-18T14:25:05Z)
- BlockScan: Detecting Anomalies in Blockchain Transactions [16.73896087813861]
BlockScan is a customized Transformer for anomaly detection in blockchain transactions.
This work sets a new benchmark for applying Transformer-based approaches in blockchain data analysis.
arXiv Detail & Related papers (2024-10-05T05:11:34Z)
- Blockchain Amplification Attack [13.13413794919346]
We show that an attacker can amplify network traffic at modified nodes by a factor of 3,600, and cause economic damages of approximately 13,800 times the amount needed to carry out the attack.
Despite these risks, aggressive latency reduction may still be profitable enough for various providers to justify the existence of modified nodes.
arXiv Detail & Related papers (2024-08-02T18:06:33Z)
- Securing Proof of Stake Blockchains: Leveraging Multi-Agent Reinforcement Learning for Detecting and Mitigating Malicious Nodes [0.2982610402087727]
MRL-PoS+ is a novel consensus algorithm to enhance the security of PoS blockchains.
We show that MRL-PoS+ significantly improves the attack resilience of PoS blockchains.
arXiv Detail & Related papers (2024-07-30T17:18:03Z)
- BoBa: Boosting Backdoor Detection through Data Distribution Inference in Federated Learning [26.714674251814586]
Federated learning is susceptible to poisoning attacks due to its decentralized nature.
We propose a novel distribution-aware anomaly detection mechanism, BoBa, to address this problem.
arXiv Detail & Related papers (2024-07-12T19:38:42Z)
- Can We Trust the Unlabeled Target Data? Towards Backdoor Attack and Defense on Model Adaptation [120.42853706967188]
We explore the potential backdoor attacks on model adaptation launched by well-designed poisoning target data.
We propose a plug-and-play method named MixAdapt, combining it with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Collaborative Learning Framework to Detect Attacks in Transactions and Smart Contracts [26.70294159598272]
This paper presents a novel collaborative learning framework designed to detect attacks in blockchain transactions and smart contracts.
Our framework exhibits the capability to classify various types of blockchain attacks, including intricate attacks at the machine code level.
Our framework achieves a detection accuracy of approximately 94% in extensive simulations and 91% in real-time experiments, with a throughput of over 2,150 transactions per second.
arXiv Detail & Related papers (2023-08-30T07:17:20Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations has been widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest techniques in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- A generalized efficiency mismatch attack to bypass detection-scrambling countermeasure [0.0]
We show that the proposed countermeasure can be bypassed if the attack is generalized by including more attack variables.
Our result and methodology could be used to security-certify a free-space quantum communication receiver against all types of detector-efficiency-mismatch type attacks.
arXiv Detail & Related papers (2021-01-07T05:02:24Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we treat non-robust features as a common property of adversarial examples and deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)