Identifying Security Risks in NFT Platforms
- URL: http://arxiv.org/abs/2204.01487v2
- Date: Tue, 5 Apr 2022 23:22:25 GMT
- Title: Identifying Security Risks in NFT Platforms
- Authors: Yash Gupta, Jayanth Kumar and Dr. Andrew Reifers
- Abstract summary: We explore these risks to understand their nature and scope, and to determine whether they can be mitigated.
We arrive at a set of solutions that combine processes to be adopted with technological changes or improvements to be incorporated into the ecosystem.
- Score: 1.224664973838839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper examines the effects of inherent risks in the emerging technology of non-fungible tokens (NFTs) and proposes an actionable set of solutions for stakeholders in this ecosystem and for observers. Web3 and NFTs form a fast-growing 300-billion-dollar economy in which several clear, highly publicized harms have recently come to light. We set out to explore these risks, to understand their nature and scope, and to determine whether they can be mitigated. In the course of the investigation, we recap the evolution of the web from the client-server model to the rise of the Web 2.0 tech giants in the early 2000s, and contrast this with how the Web3 movement is trying to re-establish the independent style of the early web. Our research identifies a primary set of risks and harms relevant to the ecosystem, classifies them into a simple taxonomy, and pairs each risk with mitigating solutions. The resulting solution set combines processes to be adopted with technological changes or improvements to be incorporated into the ecosystem. By linking mitigations to individual risks, we are confident our recommendations will improve the security maturity of the growing Web3 ecosystem. We do not endorse or recommend any particular product or service in our solution set, nor are we compensated or influenced in any way by the companies behind these products. The evaluations of products in our research should simply be viewed as suggested improvements.
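The paper's recommendations hinge on linking each identified risk to concrete mitigations. As a minimal sketch of how such a risk-to-mitigation taxonomy could be represented in code; the risk names, categories, and mitigations below are illustrative placeholders, not the paper's actual taxonomy:

```python
# Illustrative risk-to-mitigation mapping for an NFT platform.
# The risk names, categories, and mitigations are placeholder
# assumptions, not the taxonomy defined in the paper.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    category: str                      # e.g., technical, process, social
    mitigations: list[str] = field(default_factory=list)

taxonomy = [
    Risk("smart contract vulnerability", "technical",
         ["independent contract audits", "vetted contract templates"]),
    Risk("phishing of wallet credentials", "social",
         ["hardware wallet signing", "user security education"]),
    Risk("wash trading on marketplaces", "process"),  # no mitigation recorded yet
]

# Flag any risk that still lacks a documented mitigation.
for risk in taxonomy:
    status = ", ".join(risk.mitigations) if risk.mitigations else "UNMITIGATED"
    print(f"[{risk.category}] {risk.name}: {status}")
```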
Related papers
- Multi-Agent Risks from Advanced AI [90.74347101431474]
Multi-agent systems of advanced AI pose novel and under-explored risks.
We identify three key failure modes based on agents' incentives, as well as seven key risk factors.
We highlight several important instances of each risk, as well as promising directions to help mitigate them.
arXiv Detail & Related papers (2025-02-19T23:03:21Z)
- The Rising Threat to Emerging AI-Powered Search Engines [20.796363884152466]
We conduct the first safety risk quantification of seven production AI-powered search engines (AIPSEs).
Our findings reveal that AIPSEs frequently generate harmful content that contains malicious URLs.
We develop an agent-based defense with a GPT-4o-based content refinement tool and an XGBoost-based URL detector (a sketch of the detector side follows this entry).
arXiv Detail & Related papers (2025-02-07T14:15:46Z)
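The concrete component above that is easiest to illustrate is the XGBoost-based URL detector. A minimal sketch under stated assumptions: the lexical feature set and the toy training data are ours, not the paper's.

```python
# Sketch of an XGBoost-based malicious-URL detector. Features and
# training data are illustrative assumptions, not the paper's setup.
from urllib.parse import urlparse

import numpy as np
from xgboost import XGBClassifier

def url_features(url: str) -> list[float]:
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                               # overall URL length
        len(host),                              # hostname length
        host.count("."),                        # subdomain depth
        url.count("-") + url.count("_"),        # separator density
        float(any(c.isdigit() for c in host)),  # digits in hostname
        float(parsed.scheme == "https"),        # TLS hint
    ]

# Toy labeled examples (1 = malicious, 0 = benign).
urls = [
    "https://example.com/docs/getting-started",
    "https://news.example.org/articles/2024",
    "http://login-secure-example.xx-update.info/verify",
    "http://free-nft-m1nt.example-wallet.top/claim?airdrop=1",
]
labels = [0, 0, 1, 1]

model = XGBClassifier(n_estimators=50, max_depth=3)
model.fit(np.array([url_features(u) for u in urls]), np.array(labels))

suspect = "http://wallet-verify.example-giveaway.top/connect"
print(model.predict_proba(np.array([url_features(suspect)])))
```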
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Interpretable Cyber Threat Detection for Enterprise Industrial Networks: A Computational Design Science Approach [1.935143126104097]
We use the information systems (IS) computational design science paradigm to develop a two-stage cyber threat detection system for enterprise-level IS.
The first stage generates synthetic industrial network data using a modified generative adversarial network.
The second stage develops a novel bidirectional gated recurrent unit and a modified attention mechanism for effective threat detection (a sketch of this architecture follows this entry).
arXiv Detail & Related papers (2024-09-04T19:54:28Z)
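As a minimal sketch of the second stage's general shape, here is a bidirectional GRU with an additive attention readout; the layer sizes and the attention form are assumptions on our part, not the paper's modified architecture.

```python
# Sketch of a bidirectional GRU with attention for sequence-based
# threat detection. Dimensions and attention form are assumptions.
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)    # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, features)
        out, _ = self.gru(x)                    # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn(out), dim=1)
        context = (weights * out).sum(dim=1)    # attention-weighted summary
        return self.head(context)               # class logits

# Toy batch: 8 network flows, 20 time steps, 12 features per step.
model = BiGRUAttention(n_features=12)
print(model(torch.randn(8, 20, 12)).shape)      # torch.Size([8, 2])
```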
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) with external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases (the exposed retrieval-to-prompt path is sketched after this entry).
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
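The opening exploited here is the path from an open knowledge base into the prompt. A minimal sketch of that pattern, with a toy keyword retriever and prompt template of our own devising (not the paper's attack code), showing where a planted document reaches the model unfiltered:

```python
# Sketch of the RAG retrieval-to-prompt path. The retriever and the
# prompt template are illustrative assumptions, not a specific system.

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    # Toy keyword-overlap retriever standing in for a vector store.
    words = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: -len(words & set(doc.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(docs)  # untrusted content enters the prompt here
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Non-toxic glue keeps cheese from sliding off pizza.",  # planted by an adversary
    "Pizza dough needs flour, water, yeast, and salt.",
]
print(build_prompt("how do I keep cheese on pizza?", retrieve("keep cheese on pizza", kb)))
```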
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools such as the risk rating methodology used for traditional systems (a sketch of that style of rating follows this entry).
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
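A minimal sketch of a traditional likelihood-times-impact risk rating of the kind such a process adapts; the factor names and severity bands below are illustrative assumptions.

```python
# Sketch of an OWASP-style risk rating: average 0-9 likelihood and
# impact factors, then band the product. Factor names and bands are
# illustrative assumptions, not the paper's methodology.
from statistics import mean

def risk_rating(likelihood_factors: dict[str, int],
                impact_factors: dict[str, int]) -> str:
    likelihood = mean(likelihood_factors.values())   # each factor scored 0-9
    impact = mean(impact_factors.values())
    severity = likelihood * impact                   # 0-81 overall
    if severity < 9:
        return f"LOW ({severity:.1f})"
    if severity < 36:
        return f"MEDIUM ({severity:.1f})"
    return f"HIGH ({severity:.1f})"

print(risk_rating(
    likelihood_factors={"skill_required": 3, "exposure": 8, "ease_of_exploit": 7},
    impact_factors={"data_loss": 6, "reputation": 7, "compliance": 5},
))
```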
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Adverse Media Mining for KYC and ESG Compliance [2.381399746981591]
Adverse media or negative news screening is crucial for identifying non-financial risks.
We present an automated system to conduct both real-time and batch search of adverse media for users' queries.
Our scalable, machine-learning driven approach to high-precision, adverse news filtering is based on four perspectives.
arXiv Detail & Related papers (2021-10-22T01:04:16Z)
- 'They're all about pushing the products and shiny things rather than fundamental security' Mapping Socio-technical Challenges in Securing the Smart Home [1.52292571922932]
Insecure connected devices pose serious threats not only to smart home owners but also to the underlying network infrastructure.
There has been increasing academic and regulatory interest in addressing cybersecurity risks from both the standpoint of Internet of Things (IoT) vendors and that of end-users.
We interviewed 13 experts in the field of IoT and identified three main categories of barriers to making IoT products usably secure.
arXiv Detail & Related papers (2021-05-25T08:38:36Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to help researchers avoid or mitigate these pitfalls where possible (one such pitfall is sketched after this entry).
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
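One pitfall family this line of work covers is data snooping during evaluation. A minimal sketch on synthetic data (illustrative only) contrasting a random split, which leaks future samples into training, with a temporal split:

```python
# Sketch of the temporal-split "do" versus the random-split "don't".
# The data is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
timestamps = np.sort(rng.integers(0, 1_000, size=500))  # observation times

# Don't: a random split mixes future samples into the training set.
shuffled = rng.permutation(500)
train_bad, test_bad = shuffled[:400], shuffled[400:]
leaked = int(np.sum(timestamps[test_bad] < timestamps[train_bad].max()))
print(f"random split: {leaked} test samples predate the newest training sample")

# Do: train strictly on the past, test strictly on the future.
cutoff = int(np.searchsorted(timestamps, 800))
train_ok, test_ok = np.arange(cutoff), np.arange(cutoff, 500)
print(f"temporal split: {len(train_ok)} train / {len(test_ok)} test, no leakage")
```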