Integrated Push-and-Pull Update Model for Goal-Oriented Effective Communication
- URL: http://arxiv.org/abs/2407.14092v1
- Date: Fri, 19 Jul 2024 07:57:31 GMT
- Title: Integrated Push-and-Pull Update Model for Goal-Oriented Effective Communication
- Authors: Pouya Agheli, Nikolaos Pappas, Petar Popovski, Marios Kountouris
- Abstract summary: We consider an end-to-end status update system where a sensing agent observes a source, then generates and transmits updates to an actuation agent.
We integrate the push- and pull-based update communication models to obtain a push-and-pull model.
Our results show the proposed push-and-pull model outperforms models solely based on push- or pull-based updates.
- Score: 40.57990979803115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies decision-making for goal-oriented effective communication. We consider an end-to-end status update system where a sensing agent (SA) observes a source, generates and transmits updates to an actuation agent (AA), while the AA takes actions to accomplish a goal at the endpoint. We integrate the push- and pull-based update communication models to obtain a push-and-pull model, which allows the transmission controller at the SA to decide to push an update to the AA and the query controller at the AA to pull updates by raising queries at specific time instances. To gauge effectiveness, we utilize a grade of effectiveness (GoE) metric incorporating updates' freshness, usefulness, and timeliness of actions as qualitative attributes. We then derive effect-aware policies to maximize the expected discounted sum of updates' effectiveness subject to induced costs. The effect-aware policy at the SA considers the potential effectiveness of communicated updates at the endpoint, while at the AA, it accounts for the probabilistic evolution of the source and importance of generated updates. Our results show the proposed push-and-pull model outperforms models solely based on push- or pull-based updates both in terms of efficiency and effectiveness. Additionally, using effect-aware policies at both agents enhances effectiveness compared to periodic and/or probabilistic effect-agnostic policies at either or both agents.
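As a rough illustration of the model, the sketch below (hypothetical names, weights, and thresholds; not the paper's exact policies) shows one time slot in which the SA's transmission controller pushes when an update's estimated grade of effectiveness justifies the cost, while the AA's query controller pulls when the source has likely evolved:
```python
import random

# Hypothetical effect-aware push-and-pull step; the weights, thresholds, and
# the GoE form below are illustrative assumptions, not the paper's exact model.

def grade_of_effectiveness(freshness, usefulness, timeliness,
                           w=(0.4, 0.4, 0.2)):
    """Scalar GoE combining the three qualitative attributes in [0, 1]."""
    return w[0] * freshness + w[1] * usefulness + w[2] * timeliness

def sa_push_decision(goe_estimate, tx_cost, threshold=0.5):
    """SA pushes when expected effectiveness at the endpoint beats the cost."""
    return goe_estimate - tx_cost > threshold

def aa_pull_decision(change_prob, query_cost, threshold=0.6):
    """AA raises a query when the source has likely evolved enough to matter."""
    return change_prob - query_cost > threshold

# One time slot: an update is communicated if it is pushed or pulled.
goe = grade_of_effectiveness(freshness=0.9, usefulness=0.7, timeliness=0.8)
pushed = sa_push_decision(goe, tx_cost=0.1)
pulled = aa_pull_decision(change_prob=random.random(), query_cost=0.05)
print(f"GoE={goe:.2f}, pushed={pushed}, pulled={pulled}, "
      f"update delivered={pushed or pulled}")
```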
Related papers
- Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization [75.1240295759264]
We propose an effective framework for Bridging and Modeling Correlations in pairwise data, named BMC.
We increase the consistency and informativeness of the pairwise preference signals through targeted modifications.
We identify that DPO alone is insufficient to model these correlations and capture nuanced variations.
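For context, a minimal NumPy sketch of the standard DPO objective that BMC builds on; the correlation-aware modifications themselves are not reproduced here, and the toy log-probabilities are illustrative:
```python
import numpy as np

# Standard DPO loss on one (chosen, rejected) pair; BMC's targeted
# modifications to the pairwise signal are not reproduced here.
def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             beta=0.1):
    """-log sigmoid(beta * (policy margin - reference margin))."""
    margin = (logp_chosen - logp_rejected) - (ref_logp_chosen - ref_logp_rejected)
    return -np.log(1.0 / (1.0 + np.exp(-beta * margin)))

# Toy values: the policy prefers the chosen response more strongly than the
# reference model does, so the loss comes out below log 2.
print(dpo_loss(-10.0, -14.0, -11.0, -12.0))
```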
arXiv Detail & Related papers (2024-08-14T11:29:47Z)
- Push- and Pull-based Effective Communication in Cyber-Physical Systems [15.079887992932692]
We propose an analytical model for push- and pull-based communication in CPSs, observing that policy optimality coincides with maximizing the Value of Information (VoI).
Our results also highlight that, despite providing a better optimal solution, implementable push-based communication strategies may underperform even in relatively simple scenarios.
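A toy sketch of the push-based side of such a model, using squared estimation error as a common stand-in for VoI (the threshold and source dynamics are illustrative assumptions):
```python
import numpy as np

# Threshold policy sketch: push when the receiver's estimation error (a common
# proxy for the Value of Information) exceeds a threshold. Illustrative only.
rng = np.random.default_rng(0)
x, x_hat, threshold = 0.0, 0.0, 1.0
for t in range(20):
    x += rng.normal()                # source evolves as a random walk
    voi = (x - x_hat) ** 2           # VoI proxy: squared error at the receiver
    if voi > threshold:              # push-based update
        x_hat = x
        print(f"t={t}: pushed update, VoI was {voi:.2f}")
```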
arXiv Detail & Related papers (2024-01-15T10:06:17Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, which rely on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
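For reference, a toy adversarial-training step of the "direct iterative update" kind the paper improves on, using FGSM on logistic regression; LAST's proxy guidance and self-distillation are not reproduced here:
```python
import numpy as np

# Toy adversarial-training loop: craft an FGSM example against the current
# model, then update the weights on that example. Illustrative only.
rng = np.random.default_rng(1)
w = rng.normal(size=3)
x, y = rng.normal(size=3), 1.0          # one example, label in {0, 1}
eps, lr = 0.1, 0.5

for step in range(5):
    p = 1.0 / (1.0 + np.exp(-w @ x))         # sigmoid prediction
    x_adv = x + eps * np.sign((p - y) * w)   # FGSM: ascend the loss w.r.t. x
    p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
    w -= lr * (p_adv - y) * x_adv            # train on the adversarial example
    print(f"step {step}: clean p={p:.3f}, adv p={p_adv:.3f}")
```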
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- Adversarial Policy Optimization in Deep Reinforcement Learning [16.999444076456268]
The policy represented by a deep neural network can overfit, which prevents a reinforcement learning agent from learning an effective policy.
Data augmentation can provide a performance boost to RL agents by mitigating the effect of overfitting.
We propose a novel RL algorithm to mitigate the above issue and improve the efficiency of the learned policy.
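A minimal sketch of the generic mechanism involved, observation augmentation applied before a policy update (the Gaussian perturbation is an illustrative choice):
```python
import numpy as np

# Sketch: perturb observations before a policy update, the generic
# augmentation mechanism the paper builds on; names are illustrative.
def augment(obs, rng, noise=0.01):
    """Return a perturbed copy of the observation batch (Gaussian noise here)."""
    return obs + rng.normal(scale=noise, size=obs.shape)

rng = np.random.default_rng(0)
obs_batch = rng.normal(size=(4, 8))          # 4 observations, 8 features
aug_batch = augment(obs_batch, rng)
# The policy is then trained on aug_batch instead of (or alongside) obs_batch,
# which regularizes it against overfitting to exact observations.
print(np.abs(aug_batch - obs_batch).max())
```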
arXiv Detail & Related papers (2023-04-27T21:01:08Z)
- Age of Semantics in Cooperative Communications: To Expedite Simulation Towards Real via Offline Reinforcement Learning [53.18060442931179]
We propose the age of semantics (AoS) for measuring the semantic freshness of status updates in a cooperative relay communication system.
We derive an online deep actor-critic (DAC) learning scheme under the on-policy temporal difference learning framework.
We then put forward a novel offline DAC scheme, which estimates the optimal control policy from a previously collected dataset.
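A minimal sketch of the on-policy temporal-difference building block underlying such a DAC scheme, with a linear critic as an illustrative assumption:
```python
import numpy as np

# One-step temporal-difference critic update, the on-policy building block of
# actor-critic schemes like DAC; the linear critic and values are illustrative.
def td_update(w, phi_s, phi_s_next, reward, gamma=0.99, lr=0.1):
    """TD(0) update for a linear value function V(s) = w @ phi(s)."""
    td_error = reward + gamma * (w @ phi_s_next) - (w @ phi_s)
    return w + lr * td_error * phi_s, td_error

w = np.zeros(4)
phi_s, phi_s2 = np.array([1.0, 0, 0, 0]), np.array([0, 1.0, 0, 0])
w, delta = td_update(w, phi_s, phi_s2, reward=1.0)
print(w, delta)
```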
arXiv Detail & Related papers (2022-09-19T11:55:28Z)
- Fully Decentralized Model-based Policy Optimization for Networked Systems [23.46407780093797]
This work aims to improve data efficiency of multi-agent control by model-based learning.
We consider networked systems where agents are cooperative and communicate only locally with their neighbors.
In our method, each agent learns a dynamics model to predict future states and broadcasts its predictions to its neighbors, and the policies are then trained on the model rollouts.
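A toy sketch of that rollout loop, with linear per-agent dynamics models and all-to-all neighborhoods as illustrative assumptions:
```python
import numpy as np

# Sketch of the local model-rollout idea: each agent predicts its next state
# with its own learned model, broadcasts the prediction, and mixes in what
# its neighbors broadcast. Linear models and the mixing rule are assumptions.
rng = np.random.default_rng(0)
n_agents, dim = 3, 2
states = rng.normal(size=(n_agents, dim))
models = rng.normal(scale=0.1, size=(n_agents, dim, dim))  # learned dynamics

for step in range(3):
    # Each agent broadcasts its prediction; neighbors here are all other agents.
    preds = np.stack([models[i] @ states[i] for i in range(n_agents)])
    neighbor_mean = (preds.sum(0) - preds) / (n_agents - 1)
    states = preds + 0.1 * neighbor_mean     # rollout under the learned models
print(states)
```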
arXiv Detail & Related papers (2022-07-13T23:52:14Z)
- Differential Assessment of Black-Box AI Agents [29.98710357871698]
We propose a novel approach to differentially assess black-box AI agents that have drifted from their previously known models.
We leverage sparse observations of the drifted agent's current behavior and knowledge of its initial model to generate an active querying policy.
Empirical evaluation shows that our approach is much more efficient than re-learning the agent model from scratch.
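A toy sketch of the querying intuition, prioritizing states where sparse observations contradict the known initial model (tabular policies are an illustrative assumption):
```python
import numpy as np

# Sketch: probe states where the known initial model and sparse observations
# of the drifted agent disagree most. State->action tables are an assumption.
rng = np.random.default_rng(0)
n_states = 10
initial_model = rng.integers(0, 4, size=n_states)     # known initial policy
observed = {2: (initial_model[2] + 1) % 4,            # contradicts the model
            7: int(initial_model[7])}                 # matches the model

# Prioritize contradicted states, then never-observed ones; skip matches.
def query_priority(s):
    if s in observed:
        return 0 if observed[s] == initial_model[s] else 2
    return 1                                          # unobserved: worth a query

queries = sorted(range(n_states), key=query_priority, reverse=True)
print("query order:", queries)
```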
arXiv Detail & Related papers (2022-03-24T17:48:58Z)
- Offline Reinforcement Learning with Implicit Q-Learning [85.62618088890787]
Current offline reinforcement learning methods need to query the value of unseen actions during training to improve the policy.
We propose an offline RL method that never needs to evaluate actions outside of the dataset.
This method enables the learned policy to improve substantially over the best behavior in the data through generalization.
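Concretely, IQL fits the value function with an expectile regression loss over dataset actions (a detail from the paper, not the summary above); a minimal sketch:
```python
import numpy as np

# Expectile regression loss used by Implicit Q-Learning to fit a value
# function from dataset actions only: with tau > 0.5 it penalizes
# underestimation more, approximating a max over in-dataset actions.
def expectile_loss(q_values, v_estimate, tau=0.7):
    u = q_values - v_estimate
    weight = np.where(u > 0, tau, 1 - tau)
    return np.mean(weight * u ** 2)

q = np.array([1.0, 2.0, 3.0])       # Q(s, a) for actions seen in the dataset
for v in (1.0, 2.0, 2.5):
    print(v, expectile_loss(q, v))  # the loss favors v near the upper values
```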
arXiv Detail & Related papers (2021-10-12T17:05:05Z)
- APS: Active Pretraining with Successor Features [96.24533716878055]
We show that by reinterpreting and combining successor features (Hansen et al.) with nonparametric entropy maximization, the intractable mutual information can be efficiently optimized.
The proposed method, Active Pretraining with Successor Features (APS), explores the environment via nonparametric entropy maximization, and the explored data can be efficiently leveraged to learn behavior.
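A toy version of the particle-based entropy proxy such exploration maximizes, using distances to the k-th nearest neighbor in representation space (the representations here are random placeholders):
```python
import numpy as np

# Particle-based nonparametric entropy proxy of the kind APS maximizes for
# exploration: the distance to the k-th nearest neighbor in representation
# space; larger average log-distances mean broader coverage. Illustrative.
def knn_entropy_proxy(reps, k=3):
    dists = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=-1)
    dists.sort(axis=1)                          # column 0 is self-distance 0
    return np.log(dists[:, k] + 1e-6).mean()

rng = np.random.default_rng(0)
spread = rng.normal(scale=2.0, size=(50, 4))
tight = rng.normal(scale=0.1, size=(50, 4))
print(knn_entropy_proxy(spread), knn_entropy_proxy(tight))  # spread > tight
```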
arXiv Detail & Related papers (2021-08-31T16:30:35Z)
- Active Feature Acquisition with Generative Surrogate Models [11.655069211977464]
In this work, we consider models that perform active feature acquisition (AFA) and query the environment for unobserved features.
Our work reformulates the Markov decision process (MDP) that underlies the AFA problem as a generative modeling task.
We propose learning a generative surrogate model (GSM) that captures the dependencies among input features to assess potential information gain from acquisitions.
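A toy sketch of greedy acquisition driven by a surrogate's predictive uncertainty, a simple stand-in for the information-gain assessment described above (the surrogate outputs are hypothetical):
```python
import numpy as np

# Greedy acquisition sketch: query the unobserved feature whose surrogate-
# predicted value is most uncertain (uncertainty as a stand-in for
# information gain). The surrogate outputs are hypothetical, not the GSM.
rng = np.random.default_rng(0)
n_features = 5
observed = {0: 1.3}                  # features acquired so far
# Hypothetical surrogate output: predictive std for each unobserved feature.
pred_std = {f: rng.uniform(0.1, 2.0)
            for f in range(n_features) if f not in observed}

next_feature = max(pred_std, key=pred_std.get)   # highest uncertainty first
print("acquire feature", next_feature, "std =", round(pred_std[next_feature], 2))
```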
arXiv Detail & Related papers (2020-10-06T02:10:06Z)