From Pixels to CSI: Distilling Latent Dynamics For Efficient Wireless Resource Management
- URL: http://arxiv.org/abs/2506.16216v1
- Date: Thu, 19 Jun 2025 11:08:20 GMT
- Title: From Pixels to CSI: Distilling Latent Dynamics For Efficient Wireless Resource Management
- Authors: Charbel Bou Chaaya, Abanoub M. Girgis, Mehdi Bennis
- Abstract summary: We propose a novel machine learning technique to jointly model and predict the dynamics of the control system. We show that our proposed approach reduces transmit power by over 50% while maintaining control performance comparable to baseline methods.
- Score: 31.080933663717257
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we aim to optimize the radio resource management of a communication system between a remote controller and its device, whose state is represented through image frames, without compromising the performance of the control task. We propose a novel machine learning (ML) technique to jointly model and predict the dynamics of the control system as well as the wireless propagation environment in latent space. Our method leverages two coupled joint-embedding predictive architectures (JEPAs): a control JEPA models the control dynamics and guides the predictions of a wireless JEPA, which captures the dynamics of the device's channel state information (CSI) through cross-modal conditioning. We then train a deep reinforcement learning (RL) algorithm to derive a control policy from latent control dynamics and a power predictor to estimate scheduling intervals with favorable channel conditions based on latent CSI representations. As such, the controller minimizes the usage of radio resources by utilizing the coupled JEPA networks to imagine the device's trajectory in latent space. We present simulation results on synthetic multimodal data and show that our proposed approach reduces transmit power by over 50% while maintaining control performance comparable to baseline methods that do not account for wireless optimization.
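To make the coupled-JEPA idea above concrete, the following is a minimal PyTorch-style sketch, assuming simple MLP encoders and predictors, stop-gradient latent targets, and cross-modal conditioning implemented as a latent concatenation; the module names, dimensions, and loss form are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of two coupled JEPAs: a control JEPA over image-frame
# embeddings and a wireless JEPA over CSI embeddings, where the control latent
# conditions the CSI prediction (cross-modal conditioning). Shapes and modules
# are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the next latent from the current latent (plus optional conditioning)."""
    def __init__(self, latent_dim, cond_dim=0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))
    def forward(self, z, cond=None):
        if cond is not None:
            z = torch.cat([z, cond], dim=-1)
        return self.net(z)

latent_dim = 64
ctrl_enc = Encoder(in_dim=3 * 64 * 64, latent_dim=latent_dim)   # image frames -> control latent
ctrl_pred = Predictor(latent_dim)
csi_enc = Encoder(in_dim=128, latent_dim=latent_dim)            # CSI vector -> wireless latent
csi_pred = Predictor(latent_dim, cond_dim=latent_dim)           # conditioned on the control latent
power_head = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def jepa_loss(frames_t, frames_tp1, csi_t, csi_tp1):
    """Predict next-step latents in both modalities; the wireless JEPA is guided
    by the control latent through cross-modal conditioning."""
    z_c, z_c_next = ctrl_enc(frames_t), ctrl_enc(frames_tp1).detach()   # stop-grad target
    z_w, z_w_next = csi_enc(csi_t), csi_enc(csi_tp1).detach()
    loss_ctrl = nn.functional.mse_loss(ctrl_pred(z_c), z_c_next)
    loss_csi = nn.functional.mse_loss(csi_pred(z_w, cond=z_c), z_w_next)
    return loss_ctrl + loss_csi

# Toy usage with random tensors standing in for image frames and CSI samples.
B = 8
loss = jepa_loss(torch.randn(B, 3 * 64 * 64), torch.randn(B, 3 * 64 * 64),
                 torch.randn(B, 128), torch.randn(B, 128))
tx_power = power_head(csi_enc(torch.randn(B, 128)))  # latent CSI -> predicted power / schedule score
```

In this sketch, a downstream RL policy would act on the control latents, while the power head scores latent CSI representations to flag scheduling intervals with favorable channel conditions.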
Related papers
- AI-in-the-Loop Sensing and Communication Joint Design for Edge Intelligence [65.29835430845893]
We propose a framework that enhances edge intelligence through AI-in-the-loop joint sensing and communication. A key contribution of our work is establishing an explicit relationship between validation loss and the system's tunable parameters. We show that our framework reduces communication energy consumption by up to 77 percent and sensing costs, measured by the number of samples, by up to 52 percent.
arXiv Detail & Related papers (2025-02-14T14:56:58Z) - Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications, such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
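As a rough illustration of handling a hybrid action space (a generic sketch, not this paper's algorithm), a policy network can emit a discrete scheduling head alongside a continuous control head:

```python
# Illustrative hybrid-action policy: a categorical head for scheduling decisions
# and a Gaussian head for continuous control inputs. Generic sketch only.
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    def __init__(self, obs_dim, n_channels, ctrl_dim):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.sched_logits = nn.Linear(128, n_channels)   # discrete: which channel/loop to serve
        self.ctrl_mean = nn.Linear(128, ctrl_dim)        # continuous: control input
        self.ctrl_logstd = nn.Parameter(torch.zeros(ctrl_dim))

    def forward(self, obs):
        h = self.trunk(obs)
        sched = torch.distributions.Categorical(logits=self.sched_logits(h))
        ctrl = torch.distributions.Normal(self.ctrl_mean(h), self.ctrl_logstd.exp())
        return sched, ctrl

policy = HybridPolicy(obs_dim=32, n_channels=4, ctrl_dim=2)
sched_dist, ctrl_dist = policy(torch.randn(1, 32))
action = (sched_dist.sample(), ctrl_dist.sample())
log_prob = sched_dist.log_prob(action[0]) + ctrl_dist.log_prob(action[1]).sum(-1)
```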
arXiv Detail & Related papers (2024-10-15T06:28:21Z) - Time-Series JEPA for Predictive Remote Control under Capacity-Limited Networks [31.408649975934008]
We propose a Time-Series Joint Embedding Predictive Architecture (TS-JEPA) and a semantic actor trained through self-supervised learning.
arXiv Detail & Related papers (2024-06-07T11:35:15Z) - Model Based Residual Policy Learning with Applications to Antenna Control [5.01069065110753]
Non-differentiable controllers and rule-based policies are widely used for controlling real systems such as telecommunication networks and robots.
Motivated by the antenna tilt control problem, we introduce Model-Based Residual Policy Learning (MBRPL), a practical reinforcement learning (RL) method.
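The residual-policy idea can be illustrated in a few lines (a generic sketch assuming a fixed rule-based base controller; not the MBRPL implementation):

```python
# Generic residual policy learning sketch: the applied action is the rule-based
# controller's output plus a learned correction. Base policy and names are
# illustrative assumptions.
import numpy as np

def rule_based_tilt(state):
    # Placeholder non-differentiable base controller (e.g. a fixed tilt rule).
    return np.clip(0.1 * state[0], -1.0, 1.0)

def residual_action(state, residual_net, scale=0.2):
    # Final action = base action + bounded learned residual.
    return rule_based_tilt(state) + scale * residual_net(state)

# Example with a trivial function standing in for the learned residual network.
action = residual_action(np.array([2.0, 0.5]), residual_net=lambda s: np.tanh(s[1]))
```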
arXiv Detail & Related papers (2022-11-16T09:48:14Z) - Sensor Control for Information Gain in Dynamic, Sparse and Partially Observed Environments [1.5402666674186938]
We present an approach for autonomous sensor control for information gathering under partially observable, dynamic and sparsely sampled environments.
We extend the Deep Anticipatory Network (DAN) reinforcement learning framework by improving exploration in sparse, non-stationary environments through a novel information-gain reward.
We also extend this problem to settings where sampling from the intended RF spectrum/field is limited, and propose a model-based version of the original RL algorithm that fine-tunes the controller using a model iteratively improved from the limited field samples.
arXiv Detail & Related papers (2022-11-03T00:03:14Z) - Deep Learning for Wireless Networked Systems: a joint
Estimation-Control-Scheduling Approach [47.29474858956844]
A wireless networked control system (WNCS), connecting sensors, controllers, and actuators via wireless communications, is a key enabling technology for the highly scalable and low-cost deployment of control systems in the Industry 4.0 era.
Despite the tight interaction of control and communications in WNCSs, most existing works adopt separate design approaches.
We propose a novel deep reinforcement learning (DRL)-based algorithm for joint controller and scheduler optimization, utilizing both model-free and model-based data.
arXiv Detail & Related papers (2022-10-03T01:29:40Z) - Age of Semantics in Cooperative Communications: To Expedite Simulation
Towards Real via Offline Reinforcement Learning [53.18060442931179]
We propose the age of semantics (AoS) for measuring the semantic freshness of status updates in a cooperative relay communication system.
We derive an online deep actor-critic (DAC) learning scheme under the on-policy temporal difference learning framework.
We then put forward a novel offline DAC scheme, which estimates the optimal control policy from a previously collected dataset.
arXiv Detail & Related papers (2022-09-19T11:55:28Z) - Learning Resilient Radio Resource Management Policies with Graph Neural
Networks [124.89036526192268]
We formulate a resilient radio resource management problem with per-user minimum-capacity constraints.
We show that we can parameterize the user selection and power control policies using a finite set of parameters.
Thanks to such adaptation, our proposed method achieves a superior tradeoff between the average rate and the 5th percentile rate.
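In a generic form (an assumed formulation following standard constrained resource-management notation, not necessarily the paper's exact problem), per-user minimum-capacity constraints can be softened with slack variables so the learned policy degrades gracefully when the constraints become infeasible:

```latex
\max_{p(\cdot),\ \mathbf{z} \ge \mathbf{0}} \;
\mathbb{E}\!\left[\sum_i r_i\!\big(p(\mathbf{H}), \mathbf{H}\big)\right] - \lambda \,\|\mathbf{z}\|^2
\quad \text{s.t.} \quad
\mathbb{E}\!\left[r_i\!\big(p(\mathbf{H}), \mathbf{H}\big)\right] \ge f_{\min} - z_i \;\; \forall i,
```

where p(·) is the user-selection/power-control policy (here parameterized by a graph neural network), H the random channel state, r_i the rate of user i, f_min the per-user minimum-capacity target, and z_i the slack that adapts the constraint.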
arXiv Detail & Related papers (2022-03-07T19:40:39Z) - Deep Reinforcement Learning for Wireless Scheduling in Distributed Networked Control [37.10638636086814]
We consider a joint uplink and downlink scheduling problem of a fully distributed wireless networked control system (WNCS) with a limited number of frequency channels. We develop a deep reinforcement learning (DRL) based framework for solving it. To tackle the challenges of a large action space in DRL, we propose novel action space reduction and action embedding methods.
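As a rough sketch of the action-embedding idea for large discrete action spaces (generic, not the paper's exact method), each discrete scheduling action can be mapped to a learned continuous embedding and selected by nearest-neighbor lookup against the policy's continuous output:

```python
# Generic action-embedding sketch for a large discrete scheduling action space:
# the policy outputs a continuous "proto-action" and the executed action is its
# nearest neighbor among learned action embeddings. Illustrative only.
import torch
import torch.nn as nn

n_actions, embed_dim, obs_dim = 10_000, 16, 32
action_table = nn.Embedding(n_actions, embed_dim)            # learned embedding per discrete action
actor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))

def select_action(obs):
    proto = actor(obs)                                        # continuous proto-action
    dists = torch.cdist(proto, action_table.weight)           # distance to every action embedding
    return dists.argmin(dim=-1)                               # nearest discrete action id

action_id = select_action(torch.randn(1, obs_dim))
```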
arXiv Detail & Related papers (2021-09-26T11:27:12Z) - Learning Event-triggered Control from Data through Joint Optimization [7.391641422048646]
We present a framework for model-free learning of event-triggered control strategies.
We propose a novel algorithm based on hierarchical reinforcement learning.
The resulting algorithm is shown to achieve high-performance control alongside resource savings and to scale seamlessly to nonlinear and high-dimensional systems.
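A minimal sketch of the event-triggered control loop that such methods learn jointly (the trigger threshold, controller, and dynamics here are toy placeholders, not the paper's learned policies):

```python
# Generic event-triggered control loop: recompute and transmit a control input
# only when a (learned) trigger fires; otherwise hold the last input.
import numpy as np

def trigger(state, last_sent_state, threshold=0.1):
    # In learned event-triggered control this decision is itself a policy.
    return np.linalg.norm(state - last_sent_state) > threshold

def run_episode(steps=100):
    state, last_sent, u, comms = np.zeros(2), np.zeros(2), 0.0, 0
    for _ in range(steps):
        if trigger(state, last_sent):
            u = -0.8 * state[0]          # recompute control only on triggering events
            last_sent = state.copy()
            comms += 1
        # Toy double-integrator-like dynamics with small process noise.
        state = np.array([state[0] + 0.1 * state[1],
                          state[1] + 0.1 * u]) + 0.01 * np.random.randn(2)
    return comms

print("communication events:", run_episode())
```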
arXiv Detail & Related papers (2020-08-11T14:15:38Z) - Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
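For context, the underlying joint beamforming problem has a standard form (an assumed textbook formulation, not necessarily the exact one used in the paper): minimize the AP transmit power over the active beamformer w and the IRS phase shifts θ, subject to a receiver SNR target γ:

```latex
\min_{\mathbf{w},\,\boldsymbol{\theta}} \ \|\mathbf{w}\|^2
\quad \text{s.t.} \quad
\frac{\big|\big(\mathbf{h}_d^{H} + \mathbf{h}_r^{H}\,\mathrm{diag}(\boldsymbol{\theta})\,\mathbf{G}\big)\mathbf{w}\big|^2}{\sigma^2} \ge \gamma,
\qquad |\theta_n| = 1 \ \ \forall n,
```

where h_d is the direct AP-receiver channel, G and h_r the AP-IRS and IRS-receiver channels, and σ² the receiver noise power.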
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.