Compressed Regression over Adaptive Networks
- URL: http://arxiv.org/abs/2304.03638v1
- Date: Fri, 7 Apr 2023 13:41:08 GMT
- Title: Compressed Regression over Adaptive Networks
- Authors: Marco Carpentiero, Vincenzo Matta, Ali H. Sayed
- Abstract summary: We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
- Score: 58.79251288443156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we derive the performance achievable by a network of distributed
agents that solve, adaptively and in the presence of communication constraints,
a regression problem. Agents employ the recently proposed ACTC
(adapt-compress-then-combine) diffusion strategy, where the signals exchanged
locally by neighboring agents are encoded with randomized differential
compression operators. We provide a detailed characterization of the
mean-square estimation error, which is shown to comprise a term related to the
error that agents would achieve without communication constraints, plus a term
arising from compression. The analysis reveals quantitative relationships
between the compression loss and fundamental attributes of the distributed
regression problem, in particular, the stochastic approximation error caused by
the gradient noise and the network topology (through the Perron eigenvector).
We show that knowledge of such relationships is critical for allocating the
communication resources optimally across the agents, taking into account their
individual attributes, such as the quality of their data or their degree of
centrality in the network topology. We devise an optimized allocation strategy
where the parameters necessary for the optimization can be learned online by
the agents. Illustrative examples show that a significant performance
improvement, as compared to a blind (i.e., uniform) resource allocation, can be
achieved by optimizing the allocation by means of the provided
mean-square-error formulas.
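Since the ACTC strategy is central to the result, here is a minimal NumPy sketch of the adapt-compress-then-combine pattern under simplifying assumptions: a ring of agents running LMS-type adaptation, a subtractively dithered uniform quantizer standing in for the randomized differential compression operator, and a left-stochastic combination matrix. The step size, the damping coefficient zeta, the topology, and exactly which state enters the combination step are illustrative choices and may differ from the paper's exact recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

K, M = 10, 5            # number of agents, parameter dimension
mu, zeta = 0.01, 0.5    # adaptation step size, combination damping (illustrative)

# Left-stochastic combination matrix for a ring topology: column k holds the
# weights agent k assigns to its neighbors {k-1, k, k+1}.
A = np.zeros((K, K))
for k in range(K):
    for l in (k - 1, k, k + 1):
        A[l % K, k] = 1.0 / 3.0

w_star = rng.standard_normal(M)   # common regression vector to be estimated
w = np.zeros((K, M))              # local iterates
q = np.zeros((K, M))              # quantized states shared over the network

def quantize(x, step=0.05):
    """Subtractively dithered uniform quantizer: an unbiased randomized
    compression operator with bounded distortion (one possible choice)."""
    u = rng.uniform(-0.5, 0.5, size=x.shape)
    return step * (np.round(x / step + u) - u)

for i in range(5000):
    # Streaming data: d_k(i) = u_k(i)^T w_star + noise.
    U = rng.standard_normal((K, M))
    d = U @ w_star + 0.1 * rng.standard_normal(K)

    # Adapt: local stochastic-gradient (LMS) step.
    psi = w + mu * (d - np.einsum('km,km->k', U, w))[:, None] * U

    # Compress: only the quantized innovation psi - q goes on the wire;
    # every agent then updates its copy of the neighbors' shared states.
    q = q + np.vstack([quantize(psi[k] - q[k]) for k in range(K)])

    # Combine: mix the reconstructed neighbor states into the local iterate.
    w = psi + zeta * (A.T @ q - q)

print("mean-square deviation:", np.mean((w - w_star) ** 2))
```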
Related papers
- Differential error feedback for communication-efficient decentralized learning [48.924131251745266]
We propose a new decentralized communication-efficient learning approach that blends differential quantization with error feedback.
We show that the resulting communication-efficient strategy is stable both in terms of mean-square error and average bit rate.
The results establish that, in the small step-size regime and with a finite number of bits, it is possible to attain the performance achievable in the absence of compression.
arXiv Detail & Related papers (2024-06-26T15:11:26Z)
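As a rough illustration of the mechanism described in the entry above, the sketch below blends differential quantization (transmitting only the innovation with respect to the last reconstructed state) with error feedback (storing the quantization residual and re-injecting it at the next round). The function names and the plain uniform quantizer are hypothetical choices, not taken from the paper.

```python
import numpy as np

def quantize(x, step=0.1):
    """Plain uniform quantizer; its error is compensated by the feedback loop."""
    return step * np.round(x / step)

def send_with_error_feedback(psi, q_prev, e_prev, step=0.1):
    """One communication round of differential quantization + error feedback.

    psi:    current local iterate to be communicated
    q_prev: state the receivers reconstructed at the previous round
    e_prev: accumulated quantization error not yet delivered
    """
    innovation = psi - q_prev + e_prev    # difference signal plus stored error
    msg = quantize(innovation, step)      # what is actually transmitted
    e_next = innovation - msg             # residual kept for the next round
    q_next = q_prev + msg                 # receivers update their local copy
    return msg, q_next, e_next
```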
- Distributed Event-Based Learning via ADMM [11.461617927469316]
We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network.
Our approach has two distinct features: (i) It substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data-distribution among the different agents.
arXiv Detail & Related papers (2024-05-17T08:30:28Z)
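The event-based idea in the entry above can be stated compactly: broadcast only when the local iterate has drifted from the last transmitted value by more than a threshold. The decaying-threshold rule below is a generic sketch, not the paper's specific trigger.

```python
import numpy as np

def maybe_broadcast(x, x_last_sent, k, c=1.0, rho=0.97):
    """Event trigger: send only when the deviation from the last broadcast
    exceeds a geometrically decaying threshold c * rho**k (illustrative)."""
    if np.linalg.norm(x - x_last_sent) > c * rho**k:
        return x.copy(), True      # transmit and reset the reference point
    return x_last_sent, False      # stay silent; neighbors keep the old copy
```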
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
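For intuition on the sampling/remote-estimation setup above, here is a classical threshold policy for an AR(1) source: the receiver predicts forward with the known dynamics, and the sensor transmits only when the resulting estimation error exceeds a threshold. The AR coefficient and the threshold are illustrative values, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

a, theta, T = 0.9, 0.5, 1000   # AR(1) coefficient, trigger threshold, horizon
x = x_hat = 0.0
transmissions, sq_err = 0, 0.0

for t in range(T):
    x = a * x + rng.standard_normal()   # source: x(t+1) = a x(t) + w(t)
    x_hat = a * x_hat                   # receiver predicts with the dynamics
    if abs(x - x_hat) > theta:          # threshold sampling policy
        x_hat = x                       # transmit the exact sample
        transmissions += 1
    sq_err += (x - x_hat) ** 2

print(f"transmission rate = {transmissions / T:.2f}, mse = {sq_err / T:.3f}")
```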
- Communication-Efficient Zeroth-Order Distributed Online Optimization: Algorithm, Theory, and Applications [9.045332526072828]
This paper focuses on a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking.
The proposed solution is further analyzed in terms of errors in two relevant applications.
arXiv Detail & Related papers (2023-06-09T03:51:45Z)
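Zeroth-order methods, as in the entry above, replace the gradient with finite-difference probes of function values. The standard two-point estimator below conveys the idea; the paper's exact estimator and step-size schedule may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def zo_gradient(f, x, delta=1e-3):
    """Two-point zeroth-order gradient estimate along a random unit direction.
    Its expectation approaches the true gradient as delta -> 0."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

# Usage: a few descent steps on a quadratic, using only function evaluations.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(500):
    x -= 0.05 * zo_gradient(f, x)
print(x)   # approaches the minimizer (1, ..., 1)
```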
- Distributed Finite-Sum Constrained Optimization subject to Nonlinearity on the Node Dynamics [6.211043407287827]
We consider a distributed finite-sum (or fixed-sum) allocation technique to solve convex optimization problems over multi-agent networks (MANs).
This paper discusses how various nonlinearity constraints on the optimization problem can be addressed for different applications via a distributed setup (over a network).
arXiv Detail & Related papers (2022-03-28T06:47:01Z)
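A common way to enforce the fixed-sum constraint mentioned above in a distributed fashion is a Laplacian-based update in which each agent reacts to gradient disagreements with its neighbors; with symmetric weights the total sum of the allocations is invariant at every step, so feasibility is preserved throughout. The sketch below shows this standard scheme, which may differ from the paper's node dynamics.

```python
import numpy as np

def allocation_step(x, grads, W, alpha=0.05):
    """One sum-preserving step for distributed resource allocation.

    x:     current allocations, shape (K,)
    grads: local gradients f_i'(x_i), shape (K,)
    W:     symmetric nonnegative adjacency weights, shape (K, K)

    Moving along -L @ grads (L the graph Laplacian, with 1^T L = 0 for
    symmetric W) keeps sum(x) constant while driving the local gradients
    toward consensus, the optimality condition for
    min sum_i f_i(x_i) subject to sum_i x_i = C.
    """
    g = np.asarray(grads, dtype=float)
    L = np.diag(W.sum(axis=1)) - W    # graph Laplacian
    return x - alpha * (L @ g)
```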
- Learning Resilient Radio Resource Management Policies with Graph Neural Networks [124.89036526192268]
We formulate a resilient radio resource management problem with per-user minimum-capacity constraints.
We show that we can parameterize the user selection and power control policies using a finite set of parameters.
The resulting method achieves a superior tradeoff between the average rate and the 5th-percentile rate.
arXiv Detail & Related papers (2022-03-07T19:40:39Z)
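The "finite set of parameters" in the entry above typically means the policy is a graph filter acting on the channel matrix, so the same few coefficients generalize across network sizes. The one-layer filter below is a generic illustration of this parameterization; the paper's architecture (layers, nonlinearity, constraint handling) may differ.

```python
import numpy as np

def graph_filter_power(H, x, theta, p_max=1.0):
    """Power-control policy parameterized by graph-filter taps theta:
    p = p_max * sigmoid(sum_k theta_k H^k x). The channel matrix H acts as
    the graph shift operator, so the parameter count is len(theta),
    independent of the number of users (illustrative sketch)."""
    z = np.zeros_like(x, dtype=float)
    Hk = np.eye(H.shape[0])
    for t in theta:
        z += t * (Hk @ x)   # accumulate the k-th shifted input
        Hk = Hk @ H
    return p_max / (1.0 + np.exp(-z))
```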
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- Implicit Distributional Reinforcement Learning [61.166030238490634]
We propose an implicit distributional actor-critic (IDAC) built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe that IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)