Model Agnostic Hybrid Sharding For Heterogeneous Distributed Inference
- URL: http://arxiv.org/abs/2407.19775v1
- Date: Mon, 29 Jul 2024 08:18:48 GMT
- Title: Model Agnostic Hybrid Sharding For Heterogeneous Distributed Inference
- Authors: Claudio Angione, Yue Zhao, Harry Yang, Ahmad Farhan, Fielding Johnston, James Buban, Patrick Colangelo
- Abstract summary: Nesa introduces a model-agnostic sharding framework designed for decentralized AI inference.
Our framework uses blockchain-based deep neural network sharding to distribute computational tasks across a diverse network of nodes.
Our results highlight the potential to democratize access to cutting-edge AI technologies.
- Score: 11.39873199479642
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid growth of large-scale AI models, particularly large language models, has brought significant challenges in data privacy, computational resources, and accessibility. Traditional centralized architectures often struggle to meet the required data security and scalability needs, which hinders the democratization of AI systems. Nesa introduces a model-agnostic sharding framework designed for decentralized AI inference. Our framework uses blockchain-based sequential deep neural network sharding to distribute computational tasks across a diverse network of nodes based on a personalised heuristic and routing mechanism. This enables efficient distributed training and inference for recent large-scale models even on consumer-grade hardware. We use compression techniques like dynamic blockwise quantization and mixed matrix decomposition to reduce data transfer and memory needs. We also integrate robust security measures, including hardware-based trusted execution environments, to ensure data integrity and confidentiality. Evaluating our system across various natural language processing and vision tasks shows that these compression strategies do not compromise model accuracy. Our results highlight the potential to democratize access to cutting-edge AI technologies by enabling secure and efficient inference on a decentralized network.
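The abstract names two concrete mechanisms, sequential layer sharding across heterogeneous nodes and dynamic blockwise quantization of the tensors exchanged between them, but does not include reference code. The minimal Python/NumPy sketch below only illustrates these two ideas under stated assumptions: the greedy capacity-proportional heuristic, the absmax block scheme, and every function name are illustrative stand-ins, not Nesa's implementation.

```python
# Illustrative sketch only: the names, heuristics, and APIs below are assumptions,
# not Nesa's published implementation.
import numpy as np

def assign_shards(layer_costs, node_capacities):
    """Greedily split a sequential model into contiguous shards whose total cost
    is roughly proportional to each node's capacity (a stand-in for the paper's
    personalised routing heuristic)."""
    total_cost = sum(layer_costs)
    total_cap = sum(node_capacities)
    shards, start = [], 0
    for i, cap in enumerate(node_capacities):
        budget = total_cost * cap / total_cap
        end, acc = start, 0.0
        while end < len(layer_costs) and (acc < budget or i == len(node_capacities) - 1):
            acc += layer_costs[end]
            end += 1
        shards.append((start, end))
        start = end
    return shards  # list of (first_layer, last_layer_exclusive) per node

def blockwise_quantize(x, block_size=64):
    """Dynamic blockwise absmax quantization to int8: each block keeps its own
    scale, so an outlier only affects its local block."""
    flat = x.reshape(-1)
    pad = (-len(flat)) % block_size
    flat = np.pad(flat, (0, pad))
    blocks = flat.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0
    q = np.clip(np.round(blocks / scales), -127, 127).astype(np.int8)
    return q, scales, x.shape, pad

def blockwise_dequantize(q, scales, shape, pad):
    """Invert blockwise_quantize on the receiving node."""
    flat = (q.astype(np.float32) * scales).reshape(-1)
    if pad:
        flat = flat[:-pad]
    return flat.reshape(shape)

if __name__ == "__main__":
    # Example: 12 equal-cost layers spread over 3 nodes of unequal capacity.
    print(assign_shards([1.0] * 12, [1, 2, 1]))
    activations = np.random.randn(4, 256).astype(np.float32)
    q, s, shape, pad = blockwise_quantize(activations)
    err = np.abs(activations - blockwise_dequantize(q, s, shape, pad)).max()
    print(f"max reconstruction error: {err:.4f}")
```

Per-block scales bound the error an outlier can introduce to its own block, which is why blockwise schemes can be applied to inter-node activations with little accuracy loss.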
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the resource constraints of IoVT devices by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework that jointly optimizes the neural network architecture and its edge deployment.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Complete Security and Privacy for AI Inference in Decentralized Systems [14.526663289437584]
Large models are crucial for tasks like diagnosing diseases but tend to be delicate and not very scalable.
Nesa solves these challenges with a comprehensive framework using multiple techniques to protect data and model outputs.
Nesa's state-of-the-art proofs and principles demonstrate the framework's effectiveness.
arXiv Detail & Related papers (2024-07-28T05:09:17Z) - LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets; an illustrative sketch of the shared-weights, member-specific low-rank scheme appears after this list.
arXiv Detail & Related papers (2024-05-23T11:10:32Z) - From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks [23.928893359202753]
Deep neural networks (DNNs) have been widely used in many artificial intelligence (AI) tasks.
deploying them brings significant challenges due to the huge cost of memory, energy, and computation.
Recently, there has been a surge in research of compression methods to achieve model efficiency while retaining the performance.
arXiv Detail & Related papers (2024-05-09T18:17:25Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
The proposed FLEKD (ensemble knowledge distillation-based federated learning) enables a more flexible aggregation method than conventional model fusion techniques; an illustrative sketch of distillation-based aggregation appears after this list.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with attention mechanism, we can effectively boost performance without huge computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z) - Advancing Federated Learning in 6G: A Trusted Architecture with Graph-based Analysis [6.192092124154705]
Federated Learning (FL) is a promising paradigm, facilitating decentralized AI model training across a diverse range of devices under the coordination of a central server.
This work proposes a trusted architecture for supporting FL, which utilizes Distributed Ledger Technology (DLT) and Graph Neural Network (GNN).
The feasibility of the novel architecture is validated through simulations, demonstrating improved performance in anomalous model detection and global model accuracy compared to relevant baselines.
arXiv Detail & Related papers (2023-09-11T15:10:41Z) - Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for solving the aforementioned problems.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z) - Evaluating Distribution System Reliability with Hyperstructures Graph Convolutional Nets [74.51865676466056]
We show how graph convolutional networks and a hyperstructures representation learning framework can be employed for accurate, reliable, and computationally efficient distribution grid planning.
Our numerical experiments show that the proposed Hyper-GCNNs approach yields substantial gains in computational efficiency.
arXiv Detail & Related papers (2022-11-14T01:29:09Z) - RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference via re-thinking the distribution strategy.
We formulate this methodology as an optimization in which we establish a trade-off between the latency of co-inference and the privacy level of the data.
arXiv Detail & Related papers (2022-08-27T14:50:00Z) - Robust, Deep, and Reinforcement Learning for Management of Communication and Power Networks [6.09170287691728]
The present thesis first develops principled methods to make generic machine learning models robust against distributional uncertainties and adversarial data.
We then build on this robust framework to design robust methods for semi-supervised learning over graphs.
The second part of this thesis aspires to fully unleash the potential of next-generation wired and wireless networks.
arXiv Detail & Related papers (2022-02-08T05:49:06Z)
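The LoRA-Ensemble entry above describes a single pre-trained self-attention network whose weights are shared across all ensemble members, with member-specific low-rank matrices trained for the attention projections. The sketch below is a minimal NumPy illustration of that weight-sharing scheme; the class name, the rank, and the simple output-averaging rule are assumptions made for illustration, not the authors' code.

```python
# Illustrative sketch of the LoRA-Ensemble idea: one frozen shared projection,
# plus per-member low-rank (A_m, B_m) updates. Names are illustrative assumptions.
import numpy as np

class LoRAEnsembleProjection:
    def __init__(self, d_model, n_members, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        # Shared, frozen base weight (stands in for a pre-trained attention projection).
        self.W = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        # Member-specific low-rank factors: only these would be trained.
        self.A = rng.standard_normal((n_members, d_model, rank)) * 0.01
        self.B = np.zeros((n_members, rank, d_model))

    def forward(self, x, member):
        """Apply the shared projection plus this member's low-rank update."""
        delta = self.A[member] @ self.B[member]  # (d_model, d_model), rank-limited
        return x @ (self.W + delta)

    def ensemble_forward(self, x):
        """Average the members' outputs, as an explicit ensemble would."""
        outs = [self.forward(x, m) for m in range(self.A.shape[0])]
        return np.mean(outs, axis=0)

if __name__ == "__main__":
    proj = LoRAEnsembleProjection(d_model=64, n_members=4, rank=4)
    x = np.random.randn(8, 64)
    print(proj.ensemble_forward(x).shape)  # (8, 64)
```

Because only the low-rank factors differ per member, the added parameter count grows as n_members * 2 * d_model * rank rather than n_members full projection matrices.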
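The intrusion-detection entry above contrasts FLEKD's aggregation with conventional model fusion. As a generic illustration of ensemble knowledge distillation in a federated setting (not the paper's exact procedure), the sketch below distills the averaged soft predictions of several client models on a shared unlabeled set into the global model instead of averaging their weights; the toy linear models, the public set, and all names are assumptions.

```python
# Illustrative sketch only: a generic ensemble-knowledge-distillation aggregation
# step for federated learning, with toy linear softmax "models".
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_aggregate(client_weights, global_weights, public_x, lr=0.5, steps=50):
    """Distill the ensemble of client predictions on a shared public set into
    the global model, rather than averaging client weights directly."""
    # Teacher: average of the clients' soft predictions (the "ensemble").
    teacher = np.mean([softmax(public_x @ w) for w in client_weights], axis=0)
    w = global_weights.copy()
    for _ in range(steps):
        student = softmax(public_x @ w)
        # Cross-entropy gradient between teacher and student for a linear model.
        grad = public_x.T @ (student - teacher) / len(public_x)
        w -= lr * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 16, 4                                  # feature dim, number of classes
    clients = [rng.standard_normal((d, k)) for _ in range(5)]
    global_w = np.zeros((d, k))
    public_x = rng.standard_normal((256, d))      # unlabeled shared data
    print(kd_aggregate(clients, global_w, public_x).shape)  # (16, 4)
```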