BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large Language Models
- URL: http://arxiv.org/abs/2310.06278v1
- Date: Tue, 10 Oct 2023 03:18:26 GMT
- Title: BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large Language Models
- Authors: Haoxiang Luo, Jian Luo, Athanasios V. Vasilakos
- Abstract summary: Large language models (LLMs) serve people in the form of AI-generated content (AIGC).
It is difficult to guarantee the authenticity and reliability of AIGC learning data.
There are also hidden dangers of privacy disclosure in distributed AI training.
- Score: 6.867309936992639
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In recent years, artificial intelligence (AI) and machine learning (ML)
have been reshaping society's production methods and productivity, and changing the
paradigm of scientific research. Among them, the AI language model represented
by ChatGPT has made great progress. Such large language models (LLMs) serve
people in the form of AI-generated content (AIGC) and are widely used in
consulting, healthcare, and education. However, it is difficult to guarantee
the authenticity and reliability of AIGC learning data. In addition, there are
also hidden dangers of privacy disclosure in distributed AI training. Moreover,
the content generated by LLMs is difficult to identify and trace, and
cross-platform mutual recognition of such content remains difficult. In the coming
era of AI powered by LLMs, these information security issues will be enormously
amplified and will affect everyone's life. Therefore, we consider empowering LLMs using blockchain
technology with superior security features to propose a vision for trusted AI.
This paper mainly introduces the motivation and technical route of blockchain
for LLM (BC4LLM), including reliable learning corpus, secure training process,
and identifiable generated content. Meanwhile, this paper also reviews the
potential applications and future challenges, especially in the frontier
communication networks field, including network resource allocation, dynamic
spectrum sharing, and semantic communication. By combining the above work with the
prospects of blockchain and LLMs, this paper is expected to help realize trusted AI
at an early stage and to provide guidance for the academic community.
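The three pillars named above (reliable learning corpus, secure training process, identifiable generated content) all rest on content provenance. As an illustrative sketch only, not the authors' design and with all names our own, an append-only hash chain can commit each corpus item or generated text to a ledger so that later tampering is detectable:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    """Append-only hash chain: each block commits to the previous one,
    so altering any earlier record invalidates every later hash."""

    def __init__(self):
        self.blocks = []

    def register(self, content: str, source: str) -> dict:
        """Commit a corpus item or generated text to the chain."""
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {
            "content_digest": hashlib.sha256(content.encode()).hexdigest(),
            "source": source,
            "prev": prev,
        }
        block["hash"] = record_hash(block)
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash and link; False if anything was tampered with."""
        prev = "0" * 64
        for b in self.blocks:
            body = {k: v for k, v in b.items() if k != "hash"}
            if b["prev"] != prev or record_hash(body) != b["hash"]:
                return False
            prev = b["hash"]
        return True
```

A real BC4LLM deployment would store such records on an actual blockchain rather than in a Python list; the sketch only shows why tampering with any registered item breaks verification.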
Related papers
- OML: Open, Monetizable, and Loyal AI [39.63122342758896]
We propose OML, which stands for Open, Monetizable, and Loyal AI.
OML is an approach designed to democratize AI development.
The key innovation of our work is the introduction of a new scientific field: AI-native cryptography.
arXiv Detail & Related papers (2024-11-01T18:46:03Z)
- Large Language Models for Base Station Siting: Intelligent Deployment based on Prompt or Agent [62.16747639440893]
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering.
This approach entails the strategic use of well-crafted prompts to infuse human experience and knowledge into these sophisticated LLMs.
This integration represents the future paradigm of artificial intelligence (AI) as a service and easier-to-use AI.
arXiv Detail & Related papers (2024-08-07T08:43:32Z)
- Federated TrustChain: Blockchain-Enhanced LLM Training and Unlearning [22.33179965773829]
We propose a novel blockchain-based federated learning framework for Large Language Models (LLMs).
Our framework leverages blockchain technology to create a tamper-proof record of each model's contributions and introduces an innovative unlearning function that seamlessly integrates with the federated learning mechanism.
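To illustrate the idea of a tamper-evident record of each model's contributions, here is a sketch under our own assumptions rather than the paper's implementation: each client's weight update is hashed for an on-chain commitment, while the aggregator combines the updates with standard federated averaging weighted by local dataset size:

```python
import hashlib

def update_digest(update):
    """Hash a client's weight update (rounded for a stable encoding)
    so it can be committed to a blockchain as a contribution record."""
    canonical = repr([round(w, 8) for w in update])
    return hashlib.sha256(canonical.encode()).hexdigest()

def fed_avg(updates, sizes):
    """Federated averaging: weighted mean of client updates,
    with weights proportional to each client's local dataset size."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i in range(dim)]
```

The digests, not the updates themselves, would go on-chain; anyone can later recompute a digest from an archived update and detect substitution.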
arXiv Detail & Related papers (2024-06-06T13:44:44Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models [71.25225058845324]
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
RA-LLMs have emerged to harness external and authoritative knowledge bases, rather than relying on the model's internal knowledge.
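A minimal sketch of the RAG pattern, our own simplification that uses term overlap in place of a real dense retriever: retrieve the top-k documents for a query from an external corpus and prepend them to the prompt as grounding context:

```python
def overlap_score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query terms appearing in the document.
    A real RA-LLM would use embedding similarity against a vector index."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Rank the corpus by relevance and prepend the top-k documents
    as grounding context for the language model."""
    ranked = sorted(corpus, key=lambda d: overlap_score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in ranked)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")
```

Grounding the model in retrieved, authoritative text is what lets RA-LLMs answer from up-to-date external knowledge instead of parametric memory alone.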
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
arXiv Detail & Related papers (2023-10-16T13:23:54Z)
- AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap [46.98582021477066]
The rise of powerful large language models (LLMs) brings about tremendous opportunities for innovation but also looming risks for individuals and society at large.
We have reached a pivotal moment for ensuring that LLMs and LLM-infused applications are developed and deployed responsibly.
It is paramount to pursue new approaches to provide transparency for LLMs.
arXiv Detail & Related papers (2023-06-02T22:51:26Z)
- Voluminous yet Vacuous? Semantic Capital in an Age of Large Language Models [0.0]
Large Language Models (LLMs) have emerged as transformative forces in the realm of natural language processing, wielding the power to generate human-like text.
This paper explores the evolution, capabilities, and limitations of these models, while highlighting ethical concerns they raise.
arXiv Detail & Related papers (2023-05-29T09:26:28Z)
- APPFLChain: A Privacy Protection Distributed Artificial-Intelligence Architecture Based on Federated Learning and Consortium Blockchain [6.054775780656853]
We propose a new system architecture called APPFLChain.
It is an integrated architecture of a Hyperledger Fabric-based blockchain and a federated-learning paradigm.
Our new system can maintain a high degree of security and privacy as users do not need to share sensitive personal information with the server.
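One way such privacy can be obtained, sketched here purely for illustration (APPFLChain's actual mechanism is a Hyperledger Fabric blockchain plus federated learning; this pairwise-masking scheme is our own simplified stand-in for secure aggregation): clients add masks that cancel in the sum, so the server learns only the aggregate update, never any individual client's data:

```python
import random

def mask_updates(updates, seed=0):
    """Pairwise additive masking: each pair of clients shares a random mask
    that one adds and the other subtracts, so the masks cancel in the sum.
    The server sees only masked vectors, yet their sum equals the true sum."""
    rng = random.Random(seed)  # stand-in for a pairwise-agreed shared secret
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for d in range(dim):
                m = rng.uniform(-1.0, 1.0)
                masked[i][d] += m  # client i adds the shared mask
                masked[j][d] -= m  # client j subtracts it, so it cancels
    return masked

def aggregate(masked):
    """Server-side elementwise sum of the masked client vectors."""
    dim = len(masked[0])
    return [sum(v[d] for v in masked) for d in range(dim)]
```

In a production system the pairwise masks would be derived from key agreement between clients rather than a shared seed; the sketch only shows why the server's view reveals the sum and nothing more.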
arXiv Detail & Related papers (2022-06-26T05:30:07Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.