Infusion of Blockchain to Establish Trustworthiness in AI Supported Software Evolution: A Systematic Literature Review
- URL: http://arxiv.org/abs/2601.20918v2
- Date: Mon, 02 Feb 2026 12:08:01 GMT
- Title: Infusion of Blockchain to Establish Trustworthiness in AI Supported Software Evolution: A Systematic Literature Review
- Authors: Mohammad Naserameri, Juergen Rilling
- Abstract summary: Most studies focus on integrating AI in software engineering (SE), with only 31% explicitly addressing trustworthiness.
Our review highlights six recent studies exploring blockchain-based approaches to reinforce reliability, transparency, and accountability in AI-assisted SE tasks.
Future work must develop measurable, reproducible trust frameworks to enable reliable, secure, and compliant AI-driven SE ecosystems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context: Blockchain and AI are increasingly explored to enhance trustworthiness in software engineering (SE), particularly in supporting software evolution tasks. Method: We conducted a systematic literature review (SLR) using a predefined protocol with clear eligibility criteria to ensure transparency, reproducibility, and minimized bias, synthesizing research on blockchain-enabled trust in AI-driven SE tools and processes. Results: Most studies focus on integrating AI in SE, with only 31% explicitly addressing trustworthiness. Our review highlights six recent studies exploring blockchain-based approaches to reinforce reliability, transparency, and accountability in AI-assisted SE tasks. Conclusion: Blockchain enhances trust by ensuring data immutability, model transparency, and lifecycle accountability, including federated learning with blockchain consensus and private data verification. However, inconsistent definitions of trust and limited real-world testing remain major challenges. Future work must develop measurable, reproducible trust frameworks to enable reliable, secure, and compliant AI-driven SE ecosystems, including applications involving large language models.
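The abstract's central claim is that blockchain enhances trust through data immutability and lifecycle accountability. A minimal sketch of the underlying mechanism is an append-only, hash-chained ledger of AI-assisted SE actions: each record embeds the previous record's hash, so any retroactive edit is detectable. The record fields and class names below are illustrative, not taken from the paper:

```python
import hashlib
import json


def _hash(entry: dict) -> str:
    # Deterministic digest over a canonically serialized entry (sorted keys).
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


class AuditChain:
    """Append-only log: every record stores the previous record's hash,
    so tampering with any record invalidates all subsequent hashes."""

    def __init__(self):
        self.records = []

    def append(self, payload: dict) -> str:
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        h = _hash({"prev": prev, "payload": payload})
        self.records.append({"prev": prev, "payload": payload, "hash": h})
        return h

    def verify(self) -> bool:
        prev = "0" * 64
        for r in self.records:
            if r["prev"] != prev:
                return False
            if r["hash"] != _hash({"prev": r["prev"], "payload": r["payload"]}):
                return False
            prev = r["hash"]
        return True


# Hypothetical AI-assisted software-evolution events being logged:
chain = AuditChain()
chain.append({"tool": "code-assistant", "commit": "abc123", "action": "refactor"})
chain.append({"tool": "code-assistant", "commit": "def456", "action": "test-gen"})
assert chain.verify()

chain.records[0]["payload"]["commit"] = "evil"  # tamper with history...
assert not chain.verify()                       # ...and verification fails
```

A real blockchain adds distributed consensus and replication on top of this hash-chaining; the sketch only shows why the resulting record is tamper-evident.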
Related papers
- AI-Governed Agent Architecture for Web-Trustworthy Tokenization of Alternative Assets [3.0801485631077457]
Alternative asset tokenization is transforming how non-traditional financial instruments are represented and traded on the web.
This paper proposes an AI-governed agent architecture that integrates intelligent agents with blockchain to achieve web-trustworthy tokenization of alternative assets.
arXiv Detail & Related papers (2025-06-30T11:28:51Z)
- Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems.
ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z)
- Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, and users, modelling their strategic choices under different regulatory scenarios.
Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
arXiv Detail & Related papers (2025-04-11T15:41:21Z)
- Enhancing Trust in AI Marketplaces: Evaluating On-Chain Verification of Personalized AI models using zk-SNARKs [8.458944388986067]
This paper addresses the challenge of verifying personalized AI models in decentralized environments.
We propose a novel framework that integrates zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) with Chainlink decentralized oracles.
Our results indicate the framework's efficacy, with key metrics including an average proof generation time of 233.63 seconds and a verification time of 61.50 seconds.
arXiv Detail & Related papers (2025-04-07T07:38:29Z)
- Blockchain As a Platform For Artificial Intelligence (AI) Transparency [0.0]
The "black box" problem in AI decision-making limits stakeholders' ability to understand, trust, and verify outcomes.
This paper explores the integration of blockchain with AI to improve decision traceability, data provenance, and model accountability.
Findings suggest that blockchain could be a technology for ensuring AI systems remain accountable, ethical, and aligned with regulatory standards.
arXiv Detail & Related papers (2025-03-07T01:57:26Z)
- SoK: Decentralized AI (DeAI) [4.651101982820699]
We present a Systematization of Knowledge (SoK) for blockchain-based DeAI solutions.
Based on this taxonomy, we provide a structured way to clarify the landscape of DeAI protocols.
We investigate how blockchain features contribute to enhancing the security, transparency, and trustworthiness of AI processes.
arXiv Detail & Related papers (2024-11-26T14:28:25Z)
- Meta-Sealing: A Revolutionizing Integrity Assurance Protocol for Transparent, Tamper-Proof, and Trustworthy AI System [0.0]
This research introduces Meta-Sealing, a cryptographic framework that fundamentally changes integrity verification in AI systems.
The framework combines advanced cryptography with distributed verification, delivering tamper-evident guarantees that achieve both mathematical rigor and computational efficiency.
arXiv Detail & Related papers (2024-10-31T15:31:22Z)
- Generative AI-enabled Blockchain Networks: Fundamentals, Applications, and Case Study [73.87110604150315]
Generative Artificial Intelligence (GAI) has emerged as a promising solution to address the challenges of blockchain technology.
In this paper, we first introduce GAI techniques, outline their applications, and discuss existing solutions for integrating GAI into blockchains.
arXiv Detail & Related papers (2024-01-28T10:46:17Z)
- BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large Language Models [6.867309936992639]
Large language models (LLMs) serve people in the form of AI-generated content (AIGC).
It is difficult to guarantee the authenticity and reliability of AIGC learning data.
There are also hidden dangers of privacy disclosure in distributed AI training.
arXiv Detail & Related papers (2023-10-10T03:18:26Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and Communicating the Uncertainty of AI [49.64037266892634]
We describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
The goal of this toolkit is twofold: first, to provide a broad range of capabilities to streamline as well as foster the common practices of quantifying, evaluating, improving, and communicating uncertainty in the AI application development lifecycle; second, to encourage further exploration of UQ's connections to other pillars of trustworthy AI.
arXiv Detail & Related papers (2021-06-02T18:29:04Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in the training data are among the most prominent limitations of current AI systems.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
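Several entries above verify AI artifacts against an on-chain record. A minimal illustration of the underlying commit-and-verify pattern (not the zk-SNARK construction itself, which additionally proves properties of a model without revealing its weights) is a salted hash commitment that binds a provider to exact model weights. All names and values here are illustrative:

```python
import hashlib


def commit_model(weights: bytes, salt: bytes) -> str:
    # Publish this digest (e.g., on-chain); it binds the provider
    # to these exact weights without revealing them up front.
    return hashlib.sha256(salt + weights).hexdigest()


def verify_model(weights: bytes, salt: bytes, commitment: str) -> bool:
    # Anyone given the revealed weights and salt can recheck the commitment.
    return commit_model(weights, salt) == commitment


weights = b"\x01\x02\x03"   # stand-in for serialized model parameters
salt = b"nonce-42"          # random salt, stored alongside the weights
commitment = commit_model(weights, salt)

assert verify_model(weights, salt, commitment)        # genuine weights pass
assert not verify_model(b"\x09\x09", salt, commitment)  # swapped weights fail
```

Unlike a zk-SNARK, this scheme requires revealing the weights at verification time; it only demonstrates the binding property that on-chain verification builds on.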
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.