Identifying Security Risks in NFT Platforms
- URL: http://arxiv.org/abs/2204.01487v2
- Date: Tue, 5 Apr 2022 23:22:25 GMT
- Title: Identifying Security Risks in NFT Platforms
- Authors: Yash Gupta, Jayanth Kumar and Dr. Andrew Reifers
- Abstract summary: We explore the risks to understand their nature and scope, and to determine whether they can be mitigated.
We arrive at a set of solutions that combine processes to be adopted with technological changes or improvements to be incorporated into the ecosystem.
- Score: 1.224664973838839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper examines the effects of inherent risks in the emerging technology
of non-fungible tokens and proposes an actionable set of solutions for
stakeholders in this ecosystem and observers. Web3 and NFTs are a fast-growing
$300 billion economy with clear, highly publicized harms that have recently come
to light. We set out to explore these risks, to understand their nature and
scope, and to determine whether they can be mitigated. In the course of our
investigation, we recap how the web evolved from a client-server model to the
rise of the Web 2.0 tech giants in the early 2000s. We
contrast how the Web3 movement is trying to re-establish the independent style
of the early web. In our research we identify a primary set of risks and harms
relevant to the ecosystem and classify them into a simple taxonomy, pairing
each risk with a mitigating solution. We arrive at a set of solutions that
combine processes to be adopted with technological changes or improvements to
be incorporated into the ecosystem. By linking mitigations to individual risks,
we are confident our
recommendations will improve the security maturity of the growing Web3
ecosystem. We do not endorse or specifically recommend any particular product
or service in our solution set, nor are we compensated or influenced in any way
by the companies whose products we list. The product evaluations in our
research should be viewed simply as suggested improvements.
Related papers
- Interpretable Cyber Threat Detection for Enterprise Industrial Networks: A Computational Design Science Approach [1.935143126104097]
We use the IS computational design science paradigm to develop a two-stage cyber threat detection system for enterprise-level IS.
The first stage generates synthetic industrial network data using a modified generative adversarial network.
The second stage develops a novel bidirectional gated recurrent unit and a modified attention mechanism for effective threat detection.
arXiv Detail & Related papers (2024-09-04T19:54:28Z)
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by drawing on external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Incentive-Aware Recommender Systems in Two-Sided Markets [49.692453629365204]
We propose a novel recommender system that aligns with agents' incentives while achieving myopically optimal performance.
Our framework models this incentive-aware system as a multi-agent bandit problem in two-sided markets.
Both algorithms satisfy an ex-post fairness criterion, which protects agents from over-exploitation.
arXiv Detail & Related papers (2022-11-23T22:20:12Z)
- Adverse Media Mining for KYC and ESG Compliance [2.381399746981591]
Adverse media (negative news) screening is crucial for the identification of non-financial risks.
We present an automated system to conduct both real-time and batch search of adverse media for users' queries.
Our scalable, machine-learning-driven approach to high-precision adverse news filtering is based on four perspectives.
arXiv Detail & Related papers (2021-10-22T01:04:16Z)
- 'They're all about pushing the products and shiny things rather than fundamental security' Mapping Socio-technical Challenges in Securing the Smart Home [1.52292571922932]
Insecure connected devices pose serious threats not only to smart home owners but also to the underlying infrastructure network.
There has been increasing academic and regulatory interest in addressing cybersecurity risks from both the standpoint of Internet of Things (IoT) vendors and that of end-users.
We interviewed 13 experts in the field of IoT and identified three main categories of barriers to making IoT products usably secure.
arXiv Detail & Related papers (2021-05-25T08:38:36Z)
- A Research Ecosystem for Secure Computing [4.212354651854757]
Security of computers, systems, and applications has been an active area of research in computer science for decades.
Challenges range from security and trust of the information ecosystem to adversarial artificial intelligence and machine learning.
New incentives and education are at the core of this change.
arXiv Detail & Related papers (2021-01-04T22:42:28Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.