Learning-augmented Online Minimization of Age of Information and
Transmission Costs
- URL: http://arxiv.org/abs/2403.02573v1
- Date: Tue, 5 Mar 2024 01:06:25 GMT
- Title: Learning-augmented Online Minimization of Age of Information and
Transmission Costs
- Authors: Zhongdong Liu, Keyuan Zhang, Bin Li, Yin Sun, Y. Thomas Hou, and Bo Ji
- Abstract summary: We develop an online algorithm to minimize the sum of transmission and staleness costs, ensuring a worst-case performance guarantee.
While online algorithms are robust, they are usually overly conservative and may have a poor average performance in typical scenarios.
We show that our learning-augmented algorithm achieves both consistency and robustness.
- Score: 24.873041306990288
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider a discrete-time system where a resource-constrained source (e.g.,
a small sensor) transmits its time-sensitive data to a destination over a
time-varying wireless channel. Each transmission incurs a fixed transmission
cost (e.g., energy cost), and no transmission results in a staleness cost
represented by the Age-of-Information. The source must balance the tradeoff
between transmission and staleness costs. To address this challenge, we develop
a robust online algorithm to minimize the sum of transmission and staleness
costs, ensuring a worst-case performance guarantee. While online algorithms are
robust, they are usually overly conservative and may have a poor average
performance in typical scenarios. In contrast, by leveraging historical data
and prediction models, machine learning (ML) algorithms perform well in average
cases. However, they typically lack worst-case performance guarantees. To
achieve the best of both worlds, we design a learning-augmented online
algorithm that exhibits two desired properties: (i) consistency: closely
approximating the optimal offline algorithm when the ML prediction is accurate
and trusted; (ii) robustness: ensuring a worst-case performance guarantee even when
ML predictions are inaccurate. Finally, we perform extensive simulations to show
that our online algorithm performs well empirically and that our
learning-augmented algorithm achieves both consistency and robustness.
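As a concrete illustration of the model described in the abstract, the Python sketch below simulates a toy version of the tradeoff: in each slot the source either pays a fixed transmission cost (when the time-varying channel allows it) or lets the Age-of-Information grow and pays the corresponding staleness cost. The break-even threshold rule and all names and parameters here (run_threshold_policy, tx_cost, p_channel_on) are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch only (not the paper's algorithm): a toy discrete-time
# model of the transmission-vs-staleness tradeoff described in the abstract.
import random


def run_threshold_policy(horizon=50, tx_cost=5.0, p_channel_on=0.8, seed=0):
    """Simulate a break-even rule: transmit once the staleness cost accrued
    since the last update reaches the fixed transmission cost, provided the
    time-varying channel happens to be ON in the current slot."""
    rng = random.Random(seed)
    aoi = 1                  # Age-of-Information at the destination
    accrued_staleness = 0.0  # staleness cost paid since the last transmission
    total_cost = 0.0

    for _ in range(horizon):
        channel_on = rng.random() < p_channel_on  # time-varying wireless channel

        # Break-even rule (reminiscent of ski-rental): pay the fixed
        # transmission cost only after staleness has cost roughly as much.
        if channel_on and accrued_staleness >= tx_cost:
            total_cost += tx_cost
            aoi = 1
            accrued_staleness = 0.0
        else:
            total_cost += aoi        # staleness cost of this slot (the AoI)
            accrued_staleness += aoi
            aoi += 1

    return total_cost


if __name__ == "__main__":
    print(f"total cost over the horizon: {run_threshold_policy():.1f}")
```

The threshold rule here is only a baseline heuristic; the paper's contribution is an online algorithm with a worst-case guarantee and a learning-augmented variant that additionally exploits ML predictions.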
Related papers
- Learning-augmented Online Algorithm for Two-level Ski-rental Problem [8.381344684943212]
We study the two-level ski-rental problem, where a user needs to fulfill a sequence of demands for multiple items by choosing one of three payment options.
We develop a learning-augmented algorithm (LADTSR) by integrating Machine Learning predictions into the robust online algorithm.
arXiv Detail & Related papers (2024-02-09T16:10:54Z)
- Handling Delayed Feedback in Distributed Online Optimization: A Projection-Free Approach [1.9797215742507548]
Learning at the edges has become increasingly important as large quantities of data are continually generated locally.
We propose two projection-free algorithms for centralised and distributed settings, each carefully designed to achieve a regret bound of $O(\sqrt{B})$, where $B$ is the sum of the delays.
We provide an extensive theoretical study and experimentally validate the performance of our algorithms by comparing them with existing ones on real-world problems.
arXiv Detail & Related papers (2024-02-03T10:43:22Z)
- Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting [11.029214459961114]
Balancing the trade-off between accuracy and robustness is a long-standing challenge in time series forecasting.
We study a wide array of perturbation scenarios and propose novel defense mechanisms against adversarial attacks using real-world telecom data.
arXiv Detail & Related papers (2023-11-16T11:10:38Z)
- Robust Learning for Smoothed Online Convex Optimization with Feedback Delay [43.85262428603507]
We propose a novel machine learning (ML)-augmented online algorithm, Robustness-Constrained Learning (RCL).
RCL combines untrusted ML predictions with a trusted expert online algorithm via constrained projection to robustify the ML prediction (a simplified sketch of this pattern appears after this list).
RCL is the first ML-augmented algorithm with a provable robustness guarantee in the case of multi-step switching cost and feedback delay.
arXiv Detail & Related papers (2023-10-31T00:22:55Z)
- Robustified Learning for Online Optimization with Memory Costs [28.737193318136725]
We propose a novel expert-robustified learning (ERL) approach, achieving both good average performance and robustness.
For any $\lambda \geq 1$, ERL is $\lambda$-competitive against the expert algorithm and $\lambda \cdot C$-competitive against the optimal offline algorithm.
arXiv Detail & Related papers (2023-05-01T06:12:01Z)
- TransPath: Learning Heuristics For Grid-Based Pathfinding via Transformers [64.88759709443819]
We suggest learning the instance-dependent proxies that are supposed to notably increase the efficiency of the search.
The first proxy we suggest to learn is the correction factor, i.e., the ratio between the instance-independent cost-to-go estimate and the perfect one.
The second proxy is the path probability, which indicates how likely the grid cell is to lie on the shortest path.
arXiv Detail & Related papers (2022-12-22T14:26:11Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching that of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries for the MNIST and ImageNet datasets, respectively.
arXiv Detail & Related papers (2021-08-11T20:21:30Z)
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose algoname, a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ over the current best single threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that, for any $k$, our algorithm achieves an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.
arXiv Detail & Related papers (2020-10-22T04:51:01Z)
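The RCL and ERL entries in the list above describe a common robustification pattern: follow untrusted ML predictions only while the resulting cumulative cost stays within a $\lambda$ factor of a trusted expert online algorithm. The Python sketch below captures that pattern for a hypothetical per-slot cost with no switching or memory costs, so it is much simpler than the settings those papers actually analyze; robustified_run and the toy cost function are assumptions for illustration.

```python
# Hedged illustration of expert-robustification: take the ML action only while
# the cumulative cost stays within lambda times the expert's cumulative cost.
from typing import Callable, Sequence


def robustified_run(
    ml_actions: Sequence[float],
    expert_actions: Sequence[float],
    cost_fn: Callable[[int, float], float],
    lam: float = 1.5,
) -> float:
    """Return the total cost of a policy that is lambda-competitive against
    the expert by construction (for this simplified per-slot cost model)."""
    assert lam >= 1.0
    our_cost = 0.0
    expert_cost = 0.0
    for t, (ml_a, ex_a) in enumerate(zip(ml_actions, expert_actions)):
        c_ml = cost_fn(t, ml_a)
        c_ex = cost_fn(t, ex_a)
        expert_cost += c_ex
        # Take the ML action only if the robustness constraint still holds.
        if our_cost + c_ml <= lam * expert_cost:
            our_cost += c_ml
        else:
            our_cost += c_ex  # fall back to the trusted expert
    return our_cost


if __name__ == "__main__":
    # Toy usage with a made-up quadratic tracking cost.
    target = [0.5, 1.0, 0.2, 0.8]
    cost = lambda t, a: (a - target[t]) ** 2
    ml = [0.4, 2.0, 0.1, 0.9]      # occasionally wildly wrong prediction
    expert = [0.5, 0.5, 0.5, 0.5]  # conservative but safe baseline
    print(robustified_run(ml, expert, cost))
```

By construction, the invariant our_cost <= lam * expert_cost holds after every slot, which is the simplified analogue of the $\lambda$-competitiveness against the expert cited above; handling switching costs, memory, or feedback delay is what makes the cited algorithms substantially more involved.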