Data Sharing and Compression for Cooperative Networked Control
- URL: http://arxiv.org/abs/2109.14675v1
- Date: Wed, 29 Sep 2021 19:14:55 GMT
- Title: Data Sharing and Compression for Cooperative Networked Control
- Authors: Jiangnan Cheng, Marco Pavone, Sachin Katti, Sandeep Chinchali, Ao Tang
- Abstract summary: We present a solution to learn succinct, highly-compressed forecasts that are co-designed with a modular controller's task objective.
Our simulations with real cellular, Internet-of-Things (IoT), and electricity load data show we can improve a model predictive controller's performance by at least $25%$ while transmitting $80%$ less data than the competing method.
- Score: 28.19172672710827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sharing forecasts of network timeseries data, such as cellular or electricity
load patterns, can improve independent control applications ranging from
traffic scheduling to power generation. Typically, forecasts are designed
without knowledge of a downstream controller's task objective, and thus simply
optimize for mean prediction error. However, such task-agnostic representations
are often too large to stream over a communication network and do not emphasize
salient temporal features for cooperative control. This paper presents a
solution to learn succinct, highly-compressed forecasts that are co-designed
with a modular controller's task objective. Our simulations with real cellular,
Internet-of-Things (IoT), and electricity load data show we can improve a model
predictive controller's performance by at least $25\%$ while transmitting
$80\%$ less data than the competing method. Further, we present theoretical
compression results for a networked variant of the classical linear quadratic
regulator (LQR) control problem.
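The paper's theoretical results build on the classical finite-horizon LQR problem. As background, here is a minimal numpy sketch of the standard backward Riccati recursion on a toy double-integrator system; this is generic textbook LQR, not the paper's networked variant, and all matrices are illustrative values:

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, T):
    """Finite-horizon LQR gains via backward Riccati recursion.

    Minimizes sum_t (x_t^T Q x_t + u_t^T R u_t) subject to
    x_{t+1} = A x_t + B u_t, with u_t = -K_t x_t.
    """
    P = Q.copy()
    gains = []
    for _ in range(T):
        # K = (R + B^T P B)^{-1} B^T P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P = Q + A^T P (A - B K)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[t] is the feedback matrix at step t

# Toy double integrator: state = [position, velocity], scalar force input
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

gains = lqr_finite_horizon(A, B, Q, R, T=50)
x = np.array([[5.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)  # apply u_t = -K_t x_t
print(float(np.linalg.norm(x)))  # state driven toward the origin
```

In the networked setting studied by the paper, the controller would additionally receive a compressed forecast of an exogenous disturbance; the recursion above covers only the disturbance-free baseline.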
Related papers
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
arXiv Detail & Related papers (2024-10-15T06:28:21Z) - GRLinQ: An Intelligent Spectrum Sharing Mechanism for Device-to-Device Communications with Graph Reinforcement Learning [36.37521131173745]
Device-to-device (D2D) spectrum sharing in wireless communications is a challenging non-convex optimization problem.
We propose a novel model/data-driven spectrum sharing mechanism with graph reinforcement learning for link scheduling (GRLinQ).
GRLinQ demonstrates superior performance to the existing model-based link scheduling and/or power control methods.
arXiv Detail & Related papers (2024-08-18T07:39:01Z) - Time-Series JEPA for Predictive Remote Control under Capacity-Limited Networks [31.408649975934008]
We propose a Time-Series Joint Embedding Predictive Architecture (TSEPA) and a semantic actor trained through self-supervised learning.
arXiv Detail & Related papers (2024-06-07T11:35:15Z) - Fed-CVLC: Compressing Federated Learning Communications with
Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
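To illustrate why variable-length codes help here, the following is a hypothetical sketch, not Fed-CVLC's actual scheme: quantized model updates concentrate near zero, so an Elias-gamma-style variable-length code spends fewer bits on the common small levels than a fixed-length code does. The quantizer and the Laplace-distributed synthetic update are illustrative assumptions:

```python
import numpy as np

def quantize(update, n_levels=8):
    # Uniform quantization of a model update to signed integer levels
    scale = np.abs(update).max() / (n_levels - 1)
    return np.round(update / scale).astype(int), scale

def elias_gamma_bits(k):
    # Bits needed to encode a positive integer k with Elias-gamma coding
    return 2 * int(np.floor(np.log2(k))) + 1

def vlc_cost_bits(levels):
    # Map level magnitude 0..N to 1..N+1 for gamma coding; +1 sign bit each
    return sum(elias_gamma_bits(abs(v) + 1) + 1 for v in levels.ravel())

rng = np.random.default_rng(0)
update = rng.laplace(scale=0.1, size=10_000)  # updates concentrate near 0
levels, scale = quantize(update)

vlc = vlc_cost_bits(levels)
fixed = levels.size * 4  # fixed-length baseline: 4 bits per signed 8-level value
print(vlc, fixed)        # variable-length total is smaller on skewed data
```

Fed-CVLC goes further by adapting the code length across training rounds as the update statistics change; the static code above only shows the basic win over fixed-length quantization.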
arXiv Detail & Related papers (2024-02-06T07:25:21Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to address the communication and computation constraints of wireless edge devices.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
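The global/personalized split described above can be sketched as follows. The layer names and the magnitude-pruning rule are illustrative assumptions, not the paper's exact method: only the global part is pruned and uploaded, while the personalized head stays on-device:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    # Zero out the smallest-magnitude fraction of weights
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

rng = np.random.default_rng(1)
model = {
    "global.body": rng.normal(size=(64, 64)),    # shared part: pruned, then uploaded
    "personal.head": rng.normal(size=(64, 10)),  # personalized part: kept on-device
}

# Only the pruned global part is communicated to the server
upload = {name: magnitude_prune(w, sparsity=0.5)
          for name, w in model.items() if name.startswith("global")}
```

Pruning the shared part before upload reduces both the uplink payload and the server-side aggregation cost, while the personalized head is fine-tuned locally against device-specific data.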
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Hierarchical Federated Learning in Wireless Networks: Pruning Tackles Bandwidth Scarcity and System Heterogeneity [32.321021292376315]
We propose a pruning-enabled hierarchical federated learning (PHFL) in heterogeneous networks (HetNets)
We first derive an upper bound of the convergence rate that clearly demonstrates the impact of the model pruning and wireless communications.
We validate the effectiveness of our proposed PHFL algorithm in terms of test accuracy, wall clock time, energy consumption and bandwidth requirement.
arXiv Detail & Related papers (2023-08-03T07:03:33Z) - Towards Long-Term Time-Series Forecasting: Feature, Pattern, and
Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity, but the self-attention mechanism is computationally expensive.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing LTTF methods in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z) - Cross-network transferable neural models for WLAN interference
estimation [8.519313977400735]
In this paper, we adopt a principled approach to interference estimation in WLANs.
We first use real data to characterize the factors that impact it, and derive a set of relevant synthetic workloads.
We find, unsurprisingly, that Graph Convolutional Networks (GCNs) yield the best performance overall.
arXiv Detail & Related papers (2022-11-25T11:01:43Z) - Time-to-Green predictions for fully-actuated signal control systems with
supervised learning [56.66331540599836]
This paper proposes a time series prediction framework using aggregated traffic signal and loop detector data.
We utilize state-of-the-art machine learning models to predict future signal phases' duration.
Results based on an empirical data set from a fully-actuated signal control system in Zurich, Switzerland, show that machine learning models outperform conventional prediction methods.
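As a toy illustration of the supervised setup, here is a least-squares predictor of green-phase duration from aggregated detector features. The data are synthetic stand-ins, not the Zurich dataset, and the feature names and coefficients are invented for the sketch:

```python
import numpy as np

# Synthetic stand-in: predict next green-phase duration (seconds) from
# aggregated loop-detector features via ordinary least squares.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(500, 3))             # e.g. occupancy, flow, queue proxy
true_w = np.array([12.0, 5.0, -3.0])             # invented ground-truth effects
y = 20.0 + X @ true_w + rng.normal(0, 0.5, 500)  # base green time + noise

A = np.c_[np.ones(len(X)), X]                    # prepend an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)        # fit [intercept, weights]
pred = A @ w                                     # predicted green durations
```

The paper's models are more capable than this linear baseline, but the input/output structure (aggregated signal and detector features in, phase duration out) is the same.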
arXiv Detail & Related papers (2022-08-24T07:50:43Z) - Communication Topology Co-Design in Graph Recurrent Neural Network Based
Distributed Control [4.492630871726495]
We introduce a compact but expressive graph recurrent neural network (GRNN) parameterization of distributed controllers.
Our proposed parameterization enjoys a local and distributed architecture, similar to previous Graph Neural Network (GNN)-based parameterizations.
We show that our method allows for performance/communication density tradeoff curves to be efficiently approximated.
arXiv Detail & Related papers (2021-04-28T16:30:02Z) - Toward fast and accurate human pose estimation via soft-gated skip
connections [97.06882200076096]
This paper is on highly accurate and highly efficient human pose estimation.
We re-analyze this design choice in the context of improving both the accuracy and the efficiency over the state-of-the-art.
Our model achieves state-of-the-art results on the MPII and LSP datasets.
arXiv Detail & Related papers (2020-02-25T18:51:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.