BugMagnifier: TON Transaction Simulator for Revealing Smart Contract Vulnerabilities
- URL: http://arxiv.org/abs/2509.24444v1
- Date: Mon, 29 Sep 2025 08:26:52 GMT
- Authors: Yury Yanovich, Victoria Kovalevskaya, Maksim Egorov, Elizaveta Smirnova, Matvey Mishuris, Yash Madhwal, Kirill Ziborov, Vladimir Gorgadze, Subodh Sharma
- Abstract summary: We present BugMagnifier, a transaction simulation framework that systematically reveals vulnerabilities in TON smart contracts. Our tool combines precise message queue manipulation with differential state analysis and probabilistic permutation testing to detect asynchronous execution flaws.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Open Network (TON) blockchain employs an asynchronous execution model that introduces unique security challenges for smart contracts, particularly race conditions arising from unpredictable message processing order. While previous work established vulnerability patterns through static analysis of audit reports, dynamic detection of temporal dependencies through systematic testing remains an open problem. We present BugMagnifier, a transaction simulation framework that systematically reveals vulnerabilities in TON smart contracts through controlled message orchestration. Built atop TON Sandbox and integrated with the TON Virtual Machine (TVM), our tool combines precise message queue manipulation with differential state analysis and probabilistic permutation testing to detect asynchronous execution flaws. Experimental evaluation demonstrates BugMagnifier's effectiveness through extensive parametric studies on purpose-built vulnerable contracts, revealing message ratio-dependent detection complexity that aligns with theoretical predictions. This quantitative model enables predictive vulnerability assessment while shifting discovery from manual expert analysis to automated evidence generation. By providing reproducible test scenarios for temporal vulnerabilities, BugMagnifier addresses a critical gap in the TON security tooling, offering practical support for safer smart contract development in asynchronous blockchain environments.
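The core idea the abstract describes, permuting the order of queued messages and comparing the resulting contract states, can be sketched in a few lines. The toy contract, message names, and helper functions below are illustrative assumptions for exposition only, not BugMagnifier's actual API or the TVM execution model:

```python
import itertools

# Toy asynchronous contract: the final balance depends on the order in
# which "deposit" and "conditional_withdraw" messages are processed,
# mimicking a TON-style race condition.
def apply_message(state, msg):
    balance = state["balance"]
    if msg == "deposit":
        balance += 100
    elif msg == "conditional_withdraw":
        # The withdrawal only succeeds if the deposit has already landed.
        if balance >= 100:
            balance -= 100
    return {"balance": balance}

def final_state(initial, ordering):
    state = dict(initial)
    for msg in ordering:
        state = apply_message(state, msg)
    return state

def detect_order_dependence(initial, messages):
    # Differential state analysis: execute every permutation of the
    # message queue and flag the contract if the final states diverge.
    finals = {tuple(sorted(final_state(initial, p).items()))
              for p in itertools.permutations(messages)}
    return len(finals) > 1

initial = {"balance": 0}
messages = ["deposit", "conditional_withdraw"]
print(detect_order_dependence(initial, messages))  # True: order-dependent
```

Here the two orderings end with balances 0 and 100 respectively, so the detector flags the contract. For realistic queues, exhaustive enumeration grows factorially, which is presumably why the paper resorts to *probabilistic* permutation testing: sampling random orderings rather than enumerating all of them.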
Related papers
- CREDIT: Certified Ownership Verification of Deep Neural Networks Against Model Extraction Attacks [54.04030169323115]
We introduce CREDIT, a certified ownership verification against Model Extraction Attacks (MEAs). We quantify the similarity between DNN models, propose a practical verification threshold, and provide rigorous theoretical guarantees for ownership verification based on this threshold. We extensively evaluate our approach on several mainstream datasets across different domains and tasks, achieving state-of-the-art performance.
arXiv Detail & Related papers (2026-02-23T23:36:25Z) - Scaling Agentic Verifier for Competitive Coding [66.11758166379092]
Large language models (LLMs) have demonstrated strong coding capabilities but still struggle to solve competitive programming problems correctly in a single attempt. Execution-based re-ranking offers a promising test-time scaling strategy, yet existing methods are constrained by either difficult test case generation or inefficient random input sampling. We propose Agentic Verifier, an execution-based agent that actively reasons about program behaviors and searches for highly discriminative test inputs.
arXiv Detail & Related papers (2026-02-04T06:30:40Z) - ARTIS: Agentic Risk-Aware Test-Time Scaling via Iterative Simulation [72.78362530982109]
ARTIS, Agentic Risk-Aware Test-Time Scaling via Iterative Simulation, is a framework that decouples exploration from commitment. We show that naive LLM-based simulators struggle to capture rare but high-impact failure modes. We introduce a risk-aware tool simulator that emphasizes fidelity on failure-inducing actions.
arXiv Detail & Related papers (2026-02-02T06:33:22Z) - Examining the Effectiveness of Transformer-Based Smart Contract Vulnerability Scan [0.0]
We evaluate deep learning-based approaches for vulnerability scanning of smart contracts. We propose VASCOT, a Vulnerability Analyzer for Smart COntracts using Transformers. VASCOT's performance is compared against a state-of-the-art LSTM-based vulnerability detection model.
arXiv Detail & Related papers (2026-01-12T09:00:42Z) - ETrace:Event-Driven Vulnerability Detection in Smart Contracts via LLM-Based Trace Analysis [14.24781559851732]
We present ETrace, a novel event-driven vulnerability detection framework for smart contracts. By extracting fine-grained event sequences from transaction logs, the framework leverages Large Language Models (LLMs) as adaptive semantic interpreters. ETrace implements pattern-matching to establish causal links between transaction behavior patterns and known attack behaviors.
arXiv Detail & Related papers (2025-06-18T18:18:19Z) - Interpretable Anomaly Detection in Encrypted Traffic Using SHAP with Machine Learning Models [0.0]
This study aims to develop an interpretable machine learning-based framework for anomaly detection in encrypted network traffic. Models are trained and evaluated on three benchmark encrypted traffic datasets. SHAP visualizations successfully revealed the most influential traffic features contributing to anomaly predictions.
arXiv Detail & Related papers (2025-05-22T05:50:39Z) - EthCluster: An Unsupervised Static Analysis Method for Ethereum Smart Contract [1.1923665587866032]
We train a model using unsupervised learning to identify vulnerabilities in the Solidity source code of smart contracts. To address the challenges associated with real-world smart contracts, our training data is derived from actual vulnerability samples.
arXiv Detail & Related papers (2025-04-14T08:36:21Z) - ContractTrace: Retracing Smart Contract Versions for Security Analyses [4.126275271359132]
We introduce ContractTrace, an automated infrastructure that accurately identifies and links versions of smart contracts into coherent lineages. This capability is essential for understanding vulnerability propagation patterns and evaluating the effectiveness of security patches in blockchain environments.
arXiv Detail & Related papers (2024-12-30T11:10:22Z) - FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks [62.897993591443594]
FullCert is the first end-to-end certifier with sound, deterministic bounds.
We experimentally demonstrate FullCert's feasibility on two datasets.
arXiv Detail & Related papers (2024-06-17T13:23:52Z) - The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
arXiv Detail & Related papers (2023-09-05T11:53:17Z) - G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z) - CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z) - Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.