SCRUTINEER: Detecting Logic-Level Usage Violations of Reusable Components in Smart Contracts
- URL: http://arxiv.org/abs/2511.11411v1
- Date: Fri, 14 Nov 2025 15:41:56 GMT
- Title: SCRUTINEER: Detecting Logic-Level Usage Violations of Reusable Components in Smart Contracts
- Authors: Xingshuang Lin, Binbin Zhao, Jinwen Wang, Qinge Xie, Xibin Zhao, Shouling Ji,
- Abstract summary: SCRUTINEER is an automated system for detecting logic-level usage violations of SCRs. SCRUTINEER achieves a precision of 80.77%, a recall of 82.35%, and an F1-score of 81.55%.
- Score: 41.56019272656647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smart Contract Reusable Components (SCRs) play a vital role in accelerating the development of business-specific contracts by promoting modularity and code reuse. However, the risks associated with SCR usage violations have become a growing concern. One particular type of SCR usage violation, known as a logic-level usage violation, is especially harmful. This violation occurs when the SCR adheres to its specified usage rules but fails to align with the specific business logic of the current context, leading to significant vulnerabilities. Detecting such violations necessitates a deep semantic understanding of the contract's business logic, including the ability to extract implicit usage patterns and analyze fine-grained logical behaviors. To address these challenges, we propose SCRUTINEER, the first automated and practical system for detecting logic-level usage violations of SCRs. First, we design a composite feature extraction approach that produces three complementary feature representations, supporting subsequent analysis. We then introduce a Large Language Model-powered knowledge construction framework, which leverages comprehension-oriented prompts and domain-specific tools to extract logic-level usage and build the SCR knowledge base. Next, we develop a Retrieval-Augmented Generation-driven inspector, which combines a rapid retrieval strategy with both comprehensive and targeted analysis to identify potentially insecure logic-level usages. Finally, we implement a logic-level usage violation analysis engine that integrates a similarity-based checker and a snapshot-based inference conflict checker to enable accurate and robust detection. We evaluate SCRUTINEER from multiple perspectives on 3 ground-truth datasets. The results show that SCRUTINEER achieves a precision of 80.77%, a recall of 82.35%, and an F1-score of 81.55% in detecting logic-level usage violations of SCRs.
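The abstract outlines a retrieval-then-check pipeline: build an SCR knowledge base, retrieve the entries most relevant to an observed usage, then run a similarity-based checker. The following is a minimal, self-contained sketch of that idea only; the component names, knowledge-base entries, bag-of-words retriever, and threshold are illustrative assumptions, not the paper's actual features, prompts, or checkers.

```python
# Hypothetical sketch of a retrieval-then-check pipeline in the spirit of
# SCRUTINEER. A bag-of-words cosine retriever stands in for the paper's
# RAG-driven inspector, and a similarity threshold stands in for its
# similarity-based checker. All data below is invented for illustration.
from collections import Counter
from math import sqrt

def bow(text: str) -> Counter:
    """Bag-of-words vector over lowercase, whitespace-separated tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy knowledge base of known *insecure* logic-level usage patterns.
INSECURE_USAGE_KB = {
    "reentrancy-unguarded-transfer":
        "fund-transferring external function missing nonReentrant guard",
    "unrestricted-admin-setter":
        "administrative setter callable without onlyOwner",
}

def retrieve(usage: str) -> tuple[str, str]:
    """Return the knowledge-base entry most similar to the observed usage."""
    q = bow(usage)
    return max(INSECURE_USAGE_KB.items(), key=lambda kv: cosine(q, bow(kv[1])))

def check(usage: str, threshold: float = 0.5) -> dict:
    """Flag a potential logic-level violation when the observed usage
    closely matches a known insecure pattern (similarity >= threshold)."""
    name, pattern = retrieve(usage)
    score = cosine(bow(usage), bow(pattern))
    return {"pattern": name, "score": round(score, 2), "violation": score >= threshold}

observed = "external withdraw function transferring funds missing nonReentrant guard"
report = check(observed)
# → {'pattern': 'reentrancy-unguarded-transfer', 'score': 0.72, 'violation': True}
```

A real system would replace the bag-of-words retriever with semantic embeddings over the paper's composite feature representations, and the threshold test with the paper's snapshot-based inference conflict checker.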
Related papers
- Interpretable Logical Anomaly Classification via Constraint Decomposition and Instruction Fine-Tuning [0.17722218114340835]
We introduce Logical Anomaly Classification (LAC), a task that unifies anomaly detection and fine-grained violation classification in a single inference step. To tackle LAC, we propose LogiCls, a vision-language framework that decomposes complex logical constraints into a sequence of verifiable subqueries.
arXiv Detail & Related papers (2026-02-03T13:48:09Z) - LogicScan: An LLM-driven Framework for Detecting Business Logic Vulnerabilities in Smart Contracts [18.126385773266396]
We propose LogicScan, an automated contrastive auditing framework for detecting business logic vulnerabilities in smart contracts. The key insight behind LogicScan is that mature, widely deployed on-chain protocols implicitly encode well-tested and consensus-driven business invariants. We evaluate LogicScan on three real-world datasets, including DeFiHacks, Web3Bugs, and a set of top-200 audited contracts.
arXiv Detail & Related papers (2026-02-03T08:56:53Z) - Self-Compression of Chain-of-Thought via Multi-Agent Reinforcement Learning [34.10133693878611]
We propose a multi-agent RL framework that selectively penalizes redundant chunks, while preserving essential reasoning logic. Our framework, Self-Compression via MARL (SCMA), instantiates redundancy detection and evaluation through two specialized agents. Empirical evaluations across model scales demonstrate that SCMA reduces response length by 11.1% to 39.0% while boosting accuracy by 4.33% to 10.02%.
arXiv Detail & Related papers (2026-01-29T16:13:10Z) - CoT-Seg: Rethinking Segmentation with Chain-of-Thought Reasoning and Self-Correction [50.67483317563736]
This paper aims to explore a system that can think step-by-step, look up information if needed, generate results, self-evaluate its own results, and refine the results. We introduce CoT-Seg, a training-free framework that rethinks reasoning segmentation by combining chain-of-thought reasoning with self-correction.
arXiv Detail & Related papers (2026-01-24T11:41:54Z) - VIRO: Robust and Efficient Neuro-Symbolic Reasoning with Verification for Referring Expression Comprehension [51.76841625486355]
Referring Expression Comprehension (REC) aims to localize the image region corresponding to a natural-language query. Recent neuro-symbolic REC approaches leverage large language models (LLMs) and vision-language models (VLMs) to perform compositional reasoning. We introduce VIRO, a neuro-symbolic framework that embeds lightweight operator-level verifiers within reasoning steps.
arXiv Detail & Related papers (2026-01-19T07:21:19Z) - RISER: Orchestrating Latent Reasoning Skills for Adaptive Activation Steering [62.63376387138257]
We propose a plug-and-play intervention framework that adaptively steers large language model (LLM) reasoning in activation space. RISER constructs a library of reusable reasoning vectors and employs a lightweight Router to dynamically compose them for each input. The Router is optimized via reinforcement learning under task-level rewards, activating latent cognitive primitives in an emergent and compositional manner.
arXiv Detail & Related papers (2026-01-14T08:04:33Z) - CyberRAG: An Agentic RAG cyber attack classification and reporting tool [0.3914676152740142]
CyberRAG is a modular agent-based RAG framework that delivers real-time classification, explanation, and structured reporting for cyber-attacks. Unlike traditional RAG, CyberRAG adopts an agentic design that enables dynamic control flow and adaptive reasoning.
arXiv Detail & Related papers (2025-07-03T08:32:19Z) - Resisting Contextual Interference in RAG via Parametric-Knowledge Reinforcement [9.66890519317288]
Retrieval-augmented generation (RAG) improves performance on knowledge-intensive tasks but can be derailed by wrong, irrelevant, or conflicting retrieved text. We propose Knowledgeable-R1, a reinforcement-learning framework that explicitly trains large language models to use parametric knowledge.
arXiv Detail & Related papers (2025-06-05T15:34:15Z) - Simplifying Root Cause Analysis in Kubernetes with StateGraph and LLM [13.293736787442414]
We introduce SynergyRCA, an innovative tool for root cause analysis. SynergyRCA constructs a StateGraph to capture spatial and temporal relationships. It can identify root causes in an average time of about two minutes and achieves an impressive precision of approximately 0.90.
arXiv Detail & Related papers (2025-06-03T06:09:13Z) - Retrieval is Not Enough: Enhancing RAG Reasoning through Test-Time Critique and Optimization [58.390885294401066]
Retrieval-augmented generation (RAG) has become a widely adopted paradigm for enabling knowledge-grounded large language models (LLMs). RAG pipelines often fail to ensure that model reasoning remains consistent with the evidence retrieved, leading to factual inconsistencies or unsupported conclusions. We propose AlignRAG, a novel iterative framework grounded in Critique-Driven Alignment (CDA). We introduce AlignRAG-auto, an autonomous variant that dynamically terminates refinement, removing the need to pre-specify the number of critique iterations.
arXiv Detail & Related papers (2025-04-21T04:56:47Z) - Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models via Reasoning [58.57194301645823]
Large language models (LLMs) are increasingly integrated into real-world personalized applications. The valuable and often proprietary nature of the knowledge bases used in RAG introduces the risk of unauthorized usage by adversaries. Existing methods that can be generalized as watermarking techniques to protect these knowledge bases typically involve poisoning or backdoor attacks. We propose a method for harmless copyright protection of knowledge bases.
arXiv Detail & Related papers (2025-02-10T09:15:56Z) - From Objects to Events: Unlocking Complex Visual Understanding in Object Detectors via LLM-guided Symbolic Reasoning [71.41062111470414]
Current object detectors excel at entity localization and classification, yet exhibit inherent limitations in event recognition capabilities. We present a novel framework that expands the capability of standard object detectors beyond mere object recognition to complex event understanding. Our key innovation lies in bridging the semantic gap between object detection and event understanding without requiring expensive task-specific training.
arXiv Detail & Related papers (2025-02-09T10:30:54Z) - Soley: Identification and Automated Detection of Logic Vulnerabilities in Ethereum Smart Contracts Using Large Language Models [1.081463830315253]
We empirically investigate logic vulnerabilities in real-world smart contracts extracted from code changes on GitHub.
We introduce Soley, an automated method for detecting logic vulnerabilities in smart contracts.
We examine mitigation strategies employed by smart contract developers to address these vulnerabilities in real-world scenarios.
arXiv Detail & Related papers (2024-06-24T00:15:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.