Definition and Detection of Defects in NFT Smart Contracts
- URL: http://arxiv.org/abs/2305.15829v2
- Date: Fri, 4 Aug 2023 07:05:24 GMT
- Title: Definition and Detection of Defects in NFT Smart Contracts
- Authors: Shuo Yang, Jiachi Chen, Zibin Zheng
- Abstract summary: Defects in NFT smart contracts could be exploited by attackers to harm the security and reliability of the NFT ecosystem.
In this paper, we introduce 5 defects in NFT smart contracts and propose a tool named NFTGuard to detect these defects.
We find that 1,331 contracts contain at least one of the 5 defects, and the overall precision achieved by our tool is 92.6%.
- Score: 34.359991158202796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the birth of non-fungible tokens (NFTs) has attracted great
attention. NFTs are capable of representing users' ownership on the blockchain
and have experienced tremendous market sales due to their popularity.
Unfortunately, the high value of NFTs also makes them a target for attackers.
Defects in NFT smart contracts can be exploited by attackers to harm the
security and reliability of the NFT ecosystem. Despite the significance of this
issue, there is little systematic work on analyzing NFT smart contracts, which
raises concerns about the security of users' NFTs. To address this gap, in this
paper, we introduce 5 defects in NFT smart contracts.
Each defect is defined and illustrated with a code example highlighting its
features and consequences, paired with possible solutions to fix it.
Furthermore, we propose a tool named NFTGuard to detect our defined defects
based on a symbolic execution framework. Specifically, NFTGuard extracts the
information of the state variables from the contract abstract syntax tree
(AST), which is critical for identifying variable-loading and storing
operations during symbolic execution. In addition, NFTGuard recovers
source-code-level features from the bytecode to effectively locate defects and
report them based on predefined detection patterns. We run NFTGuard on 16,527
real-world smart contracts and perform an evaluation based on the manually
labeled results. We find that 1,331 contracts contain at least one of the 5
defects, and the overall precision achieved by our tool is 92.6%.
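To make the AST-based step concrete, below is a minimal sketch of how state-variable information could be pulled out of a contract's compiler-generated AST. It assumes the compact AST JSON emitted by solc (e.g. `solc --ast-compact-json MyNFT.sol`); the file name is illustrative, and this is not NFTGuard's actual implementation.

```python
import json

def collect_state_variables(node):
    """Recursively walk a solc compact-JSON AST and collect state-variable declarations."""
    found = []
    if isinstance(node, dict):
        # State variables appear as VariableDeclaration nodes with stateVariable == true.
        if node.get("nodeType") == "VariableDeclaration" and node.get("stateVariable"):
            found.append({
                "name": node.get("name", ""),
                "type": node.get("typeDescriptions", {}).get("typeString", "?"),
                "visibility": node.get("visibility", ""),
            })
        for child in node.values():
            found.extend(collect_state_variables(child))
    elif isinstance(node, list):
        for item in node:
            found.extend(collect_state_variables(item))
    return found

if __name__ == "__main__":
    # Hypothetical input produced beforehand, e.g.:
    #   solc --ast-compact-json MyNFT.sol > MyNFT.ast.json
    with open("MyNFT.ast.json") as f:
        ast = json.load(f)
    for var in collect_state_variables(ast):
        print(f"{var['visibility']:>8}  {var['type']:<40}  {var['name']}")
```

Knowing which declarations are state variables is what allows a symbolic-execution engine to map storage load and store operations in the bytecode back to named contract state, as the abstract describes.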
Related papers
- AI-Based Vulnerability Analysis of NFT Smart Contracts [6.378351117969227]
This study proposes an AI-driven approach to detect vulnerabilities in NFT smart contracts.
We collected 16,527 public smart contract codes, classifying them into five vulnerability categories: Risky Mutable Proxy, ERC-721 Reentrancy, Unlimited Minting, Missing Requirements, and Public Burn.
A random forest model was implemented to improve robustness through random data/feature sampling and multitree integration (a minimal sketch of this classification step appears after this list).
arXiv Detail & Related papers (2025-04-18T08:55:31Z) - Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs [60.881609323604685]
Large Language Models (LLMs) accessed via black-box APIs introduce a trust challenge.
Users pay for services based on advertised model capabilities, but providers may covertly substitute the specified model with a cheaper, lower-quality alternative to reduce operational costs.
This lack of transparency undermines fairness, erodes trust, and complicates reliable benchmarking.
arXiv Detail & Related papers (2025-04-07T03:57:41Z) - WakeMint: Detecting Sleepminting Vulnerabilities in NFT Smart Contracts [33.83946216568598]
Sleepminting allows attackers to illegally transfer others' tokens.
There is a lack of understanding of sleepminting from the contract code perspective, which is crucial for identifying such issues.
We propose WakeMint, which is built on a symbolic execution framework and is designed to be compatible with both high and low versions of Solidity.
arXiv Detail & Related papers (2025-02-26T10:39:46Z) - Exact Certification of (Graph) Neural Networks Against Label Poisoning [50.87615167799367]
We introduce an exact certification method for label flipping in Graph Neural Networks (GNNs).
We apply our method to certify a broad range of GNN architectures in node classification tasks.
Our work presents the first exact certificate against a poisoning attack derived for neural networks.
arXiv Detail & Related papers (2024-11-30T17:05:12Z) - Guardians of the Ledger: Protecting Decentralized Exchanges from State Derailment Defects [4.891180928768215]
We conduct the first systematic study of state derailment defects in DEX projects.
We propose a novel deep learning-based framework StateGuard for detecting state derailment defects in DEX smart contracts.
arXiv Detail & Related papers (2024-11-28T05:55:25Z) - Effective Targeted Testing of Smart Contracts [0.0]
Since smart contracts are immutable, their bugs cannot be fixed, which may lead to significant monetary losses.
Our framework, Griffin, tackles this deficiency by employing a targeted symbolic execution technique for generating test data.
This paper discusses how smart contracts differ from legacy software in targeted symbolic execution and how these differences can affect the tool structure.
arXiv Detail & Related papers (2024-07-05T04:38:11Z) - StateGuard: Detecting State Derailment Defects in Decentralized Exchange Smart Contract [4.891180928768215]
We conduct the first systematic study on state derailment defects of DEXs.
These defects could lead to incorrect, incomplete, or unauthorized changes to the system state during contract execution.
We propose StateGuard, a deep learning-based framework to detect state derailment defects in DEX smart contracts.
arXiv Detail & Related papers (2024-05-15T08:40:29Z) - SoK: On the Security of Non-Fungible Tokens [9.574922406565372]
Non-fungible tokens (NFTs) drive the prosperity of the Web3 ecosystem.
There is a lack of understanding of the kinds of NFT security issues.
This paper is the first SoK of NFT security, shedding light on their root causes, real-world attacks, and potential ways to address them.
arXiv Detail & Related papers (2023-12-13T09:16:24Z) - Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z) - G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks
through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z) - NFTVis: Visual Analysis of NFT Performance [12.491701063977825]
A non-fungible token (NFT) is a data unit stored on the blockchain.
Current rarity models have flaws and are sometimes not convincing.
It is difficult to take all relevant factors into account and analyze NFT performance efficiently.
arXiv Detail & Related papers (2023-06-05T09:02:48Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - ESCORT: Ethereum Smart COntRacTs Vulnerability Detection using Deep
Neural Network and Transfer Learning [80.85273827468063]
Existing machine learning-based vulnerability detection methods are limited in that they only detect whether a smart contract is vulnerable.
We propose ESCORT, the first Deep Neural Network (DNN)-based vulnerability detection framework for smart contracts.
We show that ESCORT achieves an average F1-score of 95% on six vulnerability types and the detection time is 0.02 seconds per contract.
arXiv Detail & Related papers (2021-03-23T15:04:44Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with
Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
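As referenced in the AI-Based Vulnerability Analysis entry above, the random-forest classification step can be sketched as below. This is a hypothetical, minimal example: the placeholder features and data stand in for whatever contract representation that work actually uses, and only the ensemble setup (bootstrap sampling of data plus random feature sampling across many trees) is being illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# The five defect/vulnerability categories named in the entry above.
LABELS = ["Risky Mutable Proxy", "ERC-721 Reentrancy", "Unlimited Minting",
          "Missing Requirements", "Public Burn"]

# Placeholder data: one feature vector per contract (e.g. opcode or AST-pattern
# frequencies) and an integer label indexing into LABELS. Real features would
# come from an extraction pipeline not shown here.
rng = np.random.default_rng(0)
X = rng.random((500, 32))
y = rng.integers(0, len(LABELS), size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# bootstrap=True resamples contracts per tree (random data sampling);
# max_features="sqrt" subsamples features at each split (random feature sampling);
# aggregating 200 trees is the "multitree integration".
clf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                             bootstrap=True, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            labels=list(range(len(LABELS))), target_names=LABELS))
```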
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.