A Machine Learning Framework for Distributed Functional Compression over
Wireless Channels in IoT
- URL: http://arxiv.org/abs/2201.09483v2
- Date: Mon, 1 May 2023 02:40:42 GMT
- Title: A Machine Learning Framework for Distributed Functional Compression over
Wireless Channels in IoT
- Authors: Yashas Malur Saidutta, Afshin Abdi, Faramarz Fekri
- Abstract summary: IoT devices generate enormous data and state-of-the-art machine learning techniques together will revolutionize cyber-physical systems.
Traditional cloud-based methods that focus on transferring data to a central location either for training or inference place enormous strain on network resources.
We develop, to the best of our knowledge, the first machine learning framework for distributed functional compression over both the Gaussian Multiple Access Channel (GMAC) and AWGN channels.
- Score: 13.385373310554327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: IoT devices generating enormous data and state-of-the-art machine learning
techniques together will revolutionize cyber-physical systems. In many diverse
fields, from autonomous driving to augmented reality, distributed IoT devices
need to compute specific target functions that lack simple closed forms, such
as obstacle detection, object recognition, etc. Traditional cloud-based methods that focus on
transferring data to a central location either for training or inference place
enormous strain on network resources. To address this, we develop, to the best
of our knowledge, the first machine learning framework for distributed
functional compression over both the Gaussian Multiple Access Channel (GMAC)
and orthogonal AWGN channels. Due to the Kolmogorov-Arnold representation
theorem, our machine learning framework can, by design, compute any arbitrary
function for the desired functional compression task in IoT. Importantly, the
raw sensory data are never transferred to a central node for training or
inference, thus reducing communication. For these algorithms, we provide
theoretical convergence guarantees and upper bounds on communication. Our
simulations show that the learned encoders and decoders for functional
compression perform significantly better than traditional approaches and are
robust to changes in channel conditions and to sensor outages. Compared to the
cloud-based scenario, our algorithms reduce channel use by two orders of
magnitude.
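The fit between the GMAC and the Kolmogorov-Arnold structure can be made concrete with a toy sketch (not the paper's learned model): on a GMAC, simultaneously transmitted signals superpose, so the receiver observes the sum of the transmissions. If each sensor applies an encoder psi to its reading and the fusion center applies an outer map phi to the superposed signal, the system computes phi(sum_p psi(x_p)) in a single channel use without ever sharing raw data. The encoder/decoder pair below (log/exp, so the channel sum yields a product) is a hypothetical stand-in for the learned networks:

```python
import numpy as np

def encoder(x):
    """Per-sensor encoder psi: log, so the channel's sum becomes a product."""
    return np.log(x)

def decoder(y):
    """Fusion-center decoder phi: exp inverts the log-sum into a product."""
    return np.exp(y)

def gmac(signals, noise_std=0.0, rng=None):
    """GMAC superposition: simultaneous transmissions add, plus optional AWGN."""
    rng = rng or np.random.default_rng(0)
    return np.sum(signals) + noise_std * rng.normal()

readings = [2.0, 3.0, 4.0]                 # positive sensor readings
y = gmac([encoder(x) for x in readings])   # one channel use; raw x never sent
print(decoder(y))                          # ~24.0 == 2 * 3 * 4
```

In the framework described above, psi and phi are instead trained neural networks, which by the Kolmogorov-Arnold representation theorem suffices to express arbitrary target functions of the distributed inputs.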
Related papers
- Dynamic Encoding and Decoding of Information for Split Learning in
Mobile-Edge Computing: Leveraging Information Bottleneck Theory [1.1151919978983582]
Split learning is a privacy-preserving distributed learning paradigm in which an ML model is split into two parts (i.e., an encoder and a decoder).
In mobile-edge computing, network functions can be trained via split learning where an encoder resides in a user equipment (UE) and a decoder resides in the edge network.
We present a new framework and training mechanism to enable a dynamic balancing of the transmission resource consumption with the informativeness of the shared latent representations.
arXiv Detail & Related papers (2023-09-06T07:04:37Z)
- Decentralized Learning over Wireless Networks: The Effect of Broadcast
with Random Access [56.91063444859008]
We investigate the impact of broadcast transmission and probabilistic random access policy on the convergence performance of D-SGD.
Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating the system convergence.
arXiv Detail & Related papers (2023-05-12T10:32:26Z)
- Causal Semantic Communication for Digital Twins: A Generalizable
Imitation Learning Approach [74.25870052841226]
A digital twin (DT) leverages a virtual representation of the physical world, along with communication (e.g., 6G), computing, and artificial intelligence (AI) technologies to enable many connected intelligence services.
Wireless systems can exploit the paradigm of semantic communication (SC) for facilitating informed decision-making under strict communication constraints.
A novel framework called causal semantic communication (CSC) is proposed for DT-based wireless systems.
arXiv Detail & Related papers (2023-04-25T00:15:00Z)
- Communication-Efficient Federated Learning via Predictive Coding [38.778944321534084]
Federated learning can enable remote workers to collaboratively train a shared machine learning model.
The communication overhead is a critical bottleneck due to limited power and bandwidth.
We propose a predictive coding based communication scheme for federated learning.
arXiv Detail & Related papers (2021-08-02T14:12:19Z)
- Federated Learning over Wireless Device-to-Device Networks: Algorithms
and Convergence Analysis [46.76179091774633]
This paper studies federated learning (FL) over wireless device-to-device (D2D) networks.
First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms.
Second, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations.
arXiv Detail & Related papers (2021-01-29T17:42:26Z)
- Federated Learning in Unreliable and Resource-Constrained Cellular
Wireless Networks [35.80470886180477]
We propose a federated learning algorithm that is suitable for cellular wireless networks.
We prove its convergence, and provide the optimal scheduling policy that maximizes the convergence rate.
arXiv Detail & Related papers (2020-12-09T16:16:43Z)
- PowerGossip: Practical Low-Rank Communication Compression in
Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by the PowerSGD for centralized deep learning, this algorithm uses power steps to maximize the information transferred per bit.
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
- Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution to privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.