Resilient Federated Chain: Transforming Blockchain Consensus into an Active Defense Layer for Federated Learning
- URL: http://arxiv.org/abs/2602.21841v1
- Date: Wed, 25 Feb 2026 12:20:47 GMT
- Title: Resilient Federated Chain: Transforming Blockchain Consensus into an Active Defense Layer for Federated Learning
- Authors: Mario García-Márquez, Nuria Rodríguez-Barroso, M. Victoria Luzón, Francisco Herrera
- Abstract summary: This paper introduces Resilient Federated Chain (RFC), a novel blockchain-enabled Federated Learning framework. RFC builds upon the existing Proof of Federated Learning architecture by repurposing the redundancy of its Pooled Mining mechanism. RFC significantly improves robustness compared to baseline methods, providing a viable solution for securing decentralized learning environments.
- Score: 3.189189590825304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) has emerged as a key paradigm for building Trustworthy AI systems by enabling privacy-preserving, decentralized model training. However, FL is highly susceptible to adversarial attacks that compromise model integrity and data confidentiality, a vulnerability exacerbated by the fact that conventional data inspection methods are incompatible with its decentralized design. While integrating FL with Blockchain technology has been proposed to address some limitations, its potential for mitigating adversarial attacks remains largely unexplored. This paper introduces Resilient Federated Chain (RFC), a novel blockchain-enabled FL framework designed specifically to enhance resilience against such threats. RFC builds upon the existing Proof of Federated Learning architecture by repurposing the redundancy of its Pooled Mining mechanism as an active defense layer that can be combined with robust aggregation rules. Furthermore, the framework introduces a flexible evaluation function in its consensus mechanism, allowing for adaptive defense against different attack strategies. Extensive experimental evaluation on image classification tasks under various adversarial scenarios demonstrates that RFC significantly improves robustness compared to baseline methods, providing a viable solution for securing decentralized learning environments.
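The abstract's core idea, redundant candidate updates from Pooled Mining filtered by an evaluation function before a robust aggregation rule, can be illustrated with a minimal sketch. The scoring function, the acceptance threshold, and the choice of coordinate-wise median are illustrative assumptions for this sketch, not the paper's actual design:

```python
import numpy as np

def evaluate(update, val_x, val_y):
    """Hypothetical evaluation function: score a candidate update by the
    validation accuracy of a simple linear classifier whose weights are
    the update itself."""
    preds = (val_x @ update > 0).astype(int)
    return (preds == val_y).mean()

def rfc_style_aggregate(candidate_updates, val_x, val_y, threshold=0.5):
    """Sketch of a redundancy-based defense layer: each pooled miner
    submits a candidate update, low-scoring candidates are rejected, and
    a robust aggregation rule (coordinate-wise median here) combines the
    survivors."""
    scores = [evaluate(u, val_x, val_y) for u in candidate_updates]
    accepted = [u for u, s in zip(candidate_updates, scores) if s >= threshold]
    if not accepted:  # fall back to the single best-scoring candidate
        accepted = [candidate_updates[int(np.argmax(scores))]]
    return np.median(np.stack(accepted), axis=0)
```

In this toy setting a sign-flipped (poisoned) update scores near zero on the validation set, so it is dropped before aggregation, while the median additionally bounds the influence of any poisoned update that slips past the filter.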
Related papers
- Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation [71.86087908416255]
We introduce a payoff allocation framework based on the least core (LC) concept. Unlike traditional methods, the LC prioritizes the cohesion of the federation by minimizing the maximum dissatisfaction. Case studies in federated intrusion detection demonstrate that our mechanism correctly identifies pivotal contributors and strategic alliances.
arXiv Detail & Related papers (2026-02-03T11:10:50Z)
- From Consensus to Chaos: A Vulnerability Assessment of the RAFT Algorithm [0.0]
This paper presents a systematic security analysis of the RAFT protocol. It focuses on its susceptibility to security threats such as message replay attacks and message forgery attacks. To address these vulnerabilities, a novel approach based on cryptography, authenticated message verification, and a freshness check is proposed.
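The countermeasures named above, authenticated message verification plus a freshness check, can be sketched with standard HMAC primitives. The message layout, the per-sender monotonic nonce, and all names below are illustrative assumptions, not the paper's exact construction:

```python
import hmac
import hashlib

def sign_message(key: bytes, term: int, payload: bytes, nonce: int) -> bytes:
    """Attach an HMAC-SHA256 tag binding the RAFT term, the payload, and a
    monotonically increasing nonce, so forged messages fail verification."""
    msg = term.to_bytes(8, "big") + nonce.to_bytes(8, "big") + payload
    return hmac.new(key, msg, hashlib.sha256).digest()

class FreshnessVerifier:
    """Rejects messages whose nonce is not strictly newer than the last
    accepted one from the same sender (replay defense)."""

    def __init__(self, key: bytes):
        self.key = key
        self.last_nonce = {}  # sender id -> highest accepted nonce

    def verify(self, sender: str, term: int, payload: bytes,
               nonce: int, tag: bytes) -> bool:
        expected = sign_message(self.key, term, payload, nonce)
        if not hmac.compare_digest(expected, tag):
            return False  # forged or tampered message
        if nonce <= self.last_nonce.get(sender, -1):
            return False  # replayed message
        self.last_nonce[sender] = nonce
        return True
```

A replayed message carries a valid tag but a stale nonce, so it is rejected by the freshness check rather than by the cryptographic one.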
arXiv Detail & Related papers (2026-01-01T09:25:53Z)
- Blockchain-Enabled Federated Learning [15.579343834528231]
BCFL addresses challenges of trust, privacy, and coordination in AI systems. This chapter provides a comprehensive architectural analysis of BCFL systems. We analyze design patterns ranging from blockchain-verified centralized coordination to fully decentralized peer-to-peer networks.
arXiv Detail & Related papers (2025-08-08T15:47:55Z)
- Poster: FedBlockParadox -- A Framework for Simulating and Securing Decentralized Federated Learning [5.585625844344932]
FedBlockParadox is a modular framework for modeling and evaluating decentralized federated learning systems built on blockchain technologies. It supports multiple consensus protocols, validation methods, aggregation strategies, and adversarial attack models. By enabling controlled experiments, FedBlockParadox provides a valuable resource for researchers developing secure, decentralized learning solutions.
arXiv Detail & Related papers (2025-06-03T09:25:06Z)
- Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z)
- Integrating Identity-Based Identification against Adaptive Adversaries in Federated Learning [0.0]
Federated Learning (FL) has emerged as a promising paradigm for privacy-preserving, distributed machine learning. One such threat is the presence of Reconnecting Malicious Clients (RMCs), which exploit FL's open connectivity by reconnecting to the system with modified attack strategies. We propose the integration of Identity-Based Identification (IBI) as a security measure within FL environments.
arXiv Detail & Related papers (2025-04-03T22:58:27Z)
- Byzantine-Resilient Over-the-Air Federated Learning under Zero-Trust Architecture [68.83934802584899]
We propose a novel Byzantine-robust FL paradigm for over-the-air transmissions, referred to as federated learning with secure adaptive clustering (FedSAC). FedSAC aims to protect a portion of the devices from attacks through zero-trust architecture (ZTA) based Byzantine identification and adaptive device clustering. Numerical results substantiate the superiority of the proposed FedSAC over existing methods in terms of both test accuracy and convergence rate.
arXiv Detail & Related papers (2025-03-24T01:56:30Z)
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm. Recent research has revealed that private ground-truth data can be recovered through a gradient technique known as Deep Leakage. This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
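Consensus-based validation with an adaptive threshold, as described in the entry above, can be sketched as outlier filtering on validator scores. The median/MAD rule and the deviation factor `k` below are illustrative stand-ins, not the paper's actual mechanism:

```python
import numpy as np

def adaptive_consensus_filter(update_scores, k=2.0):
    """Sketch of adaptive-threshold validation: validators assign each
    client update a consensus score (e.g. change in validation loss);
    updates deviating more than k median-absolute-deviations (MAD) from
    the median score are flagged as suspicious (e.g. label-flipping)
    and excluded from aggregation."""
    scores = np.asarray(update_scores, dtype=float)
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) or 1e-12  # guard against MAD = 0
    return np.abs(scores - med) <= k * mad  # boolean keep-mask
```

Because the threshold is derived from the current round's score distribution rather than a fixed constant, it adapts as the global model converges and honest scores tighten.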
arXiv Detail & Related papers (2024-03-05T20:54:56Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.