Distributed and Decentralised Training: Technical Governance Challenges in a Shifting AI Landscape
- URL: http://arxiv.org/abs/2507.07765v1
- Date: Thu, 10 Jul 2025 13:43:15 GMT
- Title: Distributed and Decentralised Training: Technical Governance Challenges in a Shifting AI Landscape
- Authors: Jakub Kryś, Yashvardhan Sharma, Janet Egan
- Abstract summary: Low-communication training algorithms are enabling a shift from centralised model training to compute setups that are either distributed across multiple clusters or decentralised via community-driven contributions. This paper distinguishes these two scenarios - distributed and decentralised training - which are little understood and often conflated in policy discourse. We discuss how they could impact technical AI governance through an increased risk of compute structuring, capability proliferation, and the erosion of detectability and shutdownability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in low-communication training algorithms are enabling a shift from centralised model training to compute setups that are either distributed across multiple clusters or decentralised via community-driven contributions. This paper distinguishes these two scenarios - distributed and decentralised training - which are little understood and often conflated in policy discourse. We discuss how they could impact technical AI governance through an increased risk of compute structuring, capability proliferation, and the erosion of detectability and shutdownability. While these trends foreshadow a possible new paradigm that could challenge key assumptions of compute governance, we emphasise that certain policy levers, like export controls, remain relevant. We also acknowledge potential benefits of decentralised AI, including privacy-preserving training runs that could unlock access to more data, and mitigating harmful power concentration. Our goal is to support more precise policymaking around compute, capability proliferation, and decentralised AI development.
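The low-communication algorithms the abstract refers to typically follow a local-update pattern: each worker trains independently for many steps on its own data and only periodically synchronises with the others, so communication happens once per round rather than once per gradient step. A minimal illustrative sketch of this idea on a toy linear-regression problem (synthetic data and a simple averaging scheme chosen for illustration; not the specific algorithms surveyed in the paper):

```python
import numpy as np

def local_sgd(workers_data, w0, lr=0.1, local_steps=10, rounds=5):
    """Illustrative local SGD: each worker runs `local_steps` gradient
    updates on its own shard, then all workers average their weights.
    Communication occurs once per round instead of once per step."""
    w_global = np.array(w0, dtype=float)
    for _ in range(rounds):
        local_weights = []
        for X, y in workers_data:  # each worker's private shard
            w = w_global.copy()
            for _ in range(local_steps):
                grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
                w -= lr * grad
            local_weights.append(w)
        # single synchronisation step: average the local models
        w_global = np.mean(local_weights, axis=0)
    return w_global

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
shards = []
for _ in range(4):  # four workers, each with a private data shard
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    shards.append((X, y))
w = local_sgd(shards, w0=[0.0, 0.0])
print(np.round(w, 2))
```

With 10 local steps per round, this sketch communicates 10x less often than synchronous data-parallel SGD, which is the property that makes training feasible across geographically separated clusters or volunteer nodes.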
Related papers
- Reducing Variance Caused by Communication in Decentralized Multi-agent Deep Reinforcement Learning [2.1461517065527445]
We study the variance caused by communication in policy gradients and propose modular techniques to reduce it during training. The results show that decentralized MADRL communication methods benefit from being extended with our proposed techniques.
arXiv Detail & Related papers (2025-02-10T08:53:13Z) - Towards Responsible Governing AI Proliferation [0.0]
The paper introduces the 'Proliferation' paradigm, which anticipates the rise of smaller, decentralized, open-sourced AI models. It posits that these developments are probable and likely to introduce both benefits and novel risks.
arXiv Detail & Related papers (2024-12-18T13:10:35Z) - Protocol Learning, Decentralized Frontier Risk and the No-Off Problem [56.74434512241989]
We identify a third paradigm - Protocol Learning - where models are trained across decentralized networks of incentivized participants. This approach has the potential to aggregate orders of magnitude more computational resources than any single centralized entity. It also introduces novel challenges: heterogeneous and unreliable nodes, malicious participants, the need for unextractable models to preserve incentives, and complex governance dynamics.
arXiv Detail & Related papers (2024-12-10T19:53:50Z) - Decentralized Intelligence Network (DIN) [0.0]
Decentralized Intelligence Network (DIN) is a theoretical framework designed to address challenges in AI development.
The framework supports effective AI training by allowing Participants to maintain control over their data, benefit financially, and contribute to a decentralized, scalable ecosystem.
arXiv Detail & Related papers (2024-07-02T17:40:06Z) - Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents. Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z) - Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z) - DePAint: A Decentralized Safe Multi-Agent Reinforcement Learning Algorithm considering Peak and Average Constraints [1.1549572298362787]
We propose a momentum-based decentralized gradient policy method, DePAint, to solve the problem.
This is the first privacy-preserving fully decentralized multi-agent reinforcement learning algorithm that considers both peak and average constraints.
arXiv Detail & Related papers (2023-10-22T16:36:03Z) - Networked Communication for Decentralised Agents in Mean-Field Games [59.01527054553122]
We introduce networked communication to the mean-field game framework. We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases. We show that our networked approach has significant advantages over both alternatives in terms of robustness to update failures and to changes in population size.
arXiv Detail & Related papers (2023-06-05T10:45:39Z) - Consensus Control for Decentralized Deep Learning [72.50487751271069]
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.
We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart.
Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop.
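The consensus distance in the entry above measures how far workers' local models have drifted from their common average; decentralized training typically shrinks it via gossip averaging over a communication graph. A small hypothetical sketch of both ideas (the ring topology and mixing weights are illustrative choices, not the paper's setup):

```python
import numpy as np

def consensus_distance(params):
    """Mean squared distance of each worker's parameter vector
    from the average over all workers (the consensus point)."""
    mean = params.mean(axis=0)
    return float(np.mean(np.sum((params - mean) ** 2, axis=1)))

def gossip_round(params, W):
    """One gossip-averaging step: each worker replaces its parameters
    with a weighted average of its neighbours', via mixing matrix W
    (doubly stochastic, one row and column per worker)."""
    return W @ params

# ring of 4 workers, each averaging with its two neighbours
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
params = np.random.default_rng(1).normal(size=(4, 3))  # 4 workers, 3 params
d0 = consensus_distance(params)
for _ in range(20):
    params = gossip_round(params, W)
print(consensus_distance(params) < d0)  # gossip shrinks consensus distance
```

Because W is doubly stochastic, repeated gossip rounds drive all workers toward the same parameter vector, which is the mechanism behind the paper's claim that keeping the consensus distance below a critical threshold preserves centralized-rate convergence.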
arXiv Detail & Related papers (2021-02-09T13:58:33Z) - Adaptive Serverless Learning [114.36410688552579]
We propose a novel adaptive decentralized training approach, which can compute the learning rate from data dynamically.
Our theoretical results reveal that the proposed algorithm can achieve linear speedup with respect to the number of workers.
To reduce the communication overhead, we further propose a communication-efficient adaptive decentralized training approach.
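Computing the learning rate from data dynamically, as the entry above describes, can be illustrated with an AdaGrad-style rule, where accumulated squared gradients set the per-coordinate step size (a standard adaptive scheme used here only as an illustration, not necessarily the paper's algorithm):

```python
import numpy as np

def adagrad_step(w, grad, accum, base_lr=0.5, eps=1e-8):
    """AdaGrad-style update: the effective learning rate for each
    coordinate shrinks as squared gradients accumulate, so the step
    size is derived from the data rather than fixed in advance."""
    accum = accum + grad ** 2
    w = w - base_lr * grad / (np.sqrt(accum) + eps)
    return w, accum

# minimise f(w) = (w - 3)^2 with a data-dependent step size
w = np.array([0.0])
accum = np.zeros(1)
for _ in range(200):
    grad = 2 * (w - 3.0)
    w, accum = adagrad_step(w, grad, accum)
print(np.round(w, 1))
```

In a decentralized setting, each worker would maintain its own accumulator over local gradients, so no central coordinator is needed to schedule the step size.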
arXiv Detail & Related papers (2020-08-24T13:23:02Z) - Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.