Generative Diffusion Models for Resource Allocation in Wireless Networks
- URL: http://arxiv.org/abs/2504.20277v2
- Date: Thu, 17 Jul 2025 16:38:32 GMT
- Title: Generative Diffusion Models for Resource Allocation in Wireless Networks
- Authors: Yigit Berkay Uslu, Samar Hadou, Shirin Saeedi Bidokhti, Alejandro Ribeiro
- Abstract summary: We train a policy to imitate an expert and generate new samples from the optimal distribution. We achieve near-optimal performance through the sequential execution of the generated samples. We present numerical results in a case study of power control.
- Score: 77.36145730415045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a supervised training algorithm for learning stochastic resource allocation policies with generative diffusion models (GDMs). We formulate the allocation problem as the maximization of an ergodic utility function subject to ergodic Quality of Service (QoS) constraints. Given samples from a stochastic expert policy that yields a near-optimal solution to the constrained optimization problem, we train a GDM policy to imitate the expert and generate new samples from the optimal distribution. We achieve near-optimal performance through the sequential execution of the generated samples. To enable generalization to a family of network configurations, we parameterize the backward diffusion process with a graph neural network (GNN) architecture. We present numerical results in a case study of power control.
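The constrained problem the abstract describes can be written compactly. A generic form (the notation here is illustrative, not copied from the paper) is:

```latex
% Ergodic resource allocation: maximize expected utility over channel
% states H subject to ergodic QoS constraints.
\begin{aligned}
\max_{p(\cdot)} \quad & \mathbb{E}_{H}\big[\, u\big(p(H), H\big) \,\big] \\
\text{s.t.} \quad     & \mathbb{E}_{H}\big[\, g_i\big(p(H), H\big) \,\big] \ \ge\ c_i,
                        \qquad i = 1, \dots, m .
\end{aligned}
```

Below is a minimal sketch of the two ingredients the abstract names: a GNN-parameterized denoiser for the backward diffusion process, and DDPM-style ancestral sampling of allocations. The architecture, noise schedule, and shapes are assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class GNNDenoiser(nn.Module):
    """Graph-convolutional noise predictor for per-node power allocations,
    conditioned on the channel graph (illustrative architecture)."""
    def __init__(self, hidden: int = 64, n_layers: int = 3):
        super().__init__()
        self.inp = nn.Linear(2, hidden)  # per-node input: [x_t, t]
        self.mix = nn.ModuleList([nn.Linear(2 * hidden, hidden) for _ in range(n_layers)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, x_t, t, adj):
        # x_t: (n, 1) noisy allocation; t: (1, 1) diffusion time; adj: (n, n) channel gains
        h = torch.relu(self.inp(torch.cat([x_t, t.expand_as(x_t)], dim=-1)))
        for layer in self.mix:
            h = torch.relu(layer(torch.cat([h, adj @ h], dim=-1)))  # neighborhood aggregation
        return self.out(h)

@torch.no_grad()
def sample_allocation(denoiser, adj, steps=50):
    """DDPM-style backward diffusion: start from Gaussian noise and denoise
    step by step into one allocation sample."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas, abars = 1.0 - betas, torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(adj.shape[0], 1)
    for k in reversed(range(steps)):
        eps = denoiser(x, torch.tensor([[k / steps]]), adj)
        x = (x - betas[k] / torch.sqrt(1.0 - abars[k]) * eps) / torch.sqrt(alphas[k])
        if k > 0:
            x = x + torch.sqrt(betas[k]) * torch.randn_like(x)
    return x
```

Near-optimal performance then comes from executing freshly drawn samples sequentially over time, as the abstract describes.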
Related papers
- GFlowNets for Active Learning Based Resource Allocation in Next Generation Wireless Networks [34.19521408051132]
We consider the radio resource allocation problem in a wireless system with various integrated functionalities, such as communication, sensing and computing. We propose a novel active learning framework where resource allocation patterns are drawn sequentially, evaluated in the environment, and then used to iteratively update a surrogate model of the environment. GFlowNet generates diverse and high-return resource management designs that update the surrogate model and swiftly discover suitable solutions.
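The loop the summary describes is easy to sketch. Everything below (the ridge surrogate, toy utility, and top-k proposal rule) is a hypothetical stand-in for the paper's GFlowNet sampler and learned environment model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_surrogate(X, y):
    """Ridge-regression surrogate of the environment's return."""
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
    return lambda x: float(x @ w)

def propose(surrogate, batch, dim):
    """Draw candidate allocations; once a surrogate exists, keep the
    highest-scoring draws (a crude proxy for reward-proportional sampling)."""
    pool = rng.random((10 * batch, dim))
    if surrogate is None:
        return list(pool[:batch])
    scores = np.array([surrogate(x) for x in pool])
    return list(pool[np.argsort(scores)[-batch:]])

def evaluate(x):
    """Toy environment: concave utility of a resource allocation vector."""
    return float(np.sum(np.sqrt(x)) - 0.5 * np.sum(x))

X, y, surrogate = [], [], None
for _ in range(20):                      # draw, evaluate, refit, repeat
    for c in propose(surrogate, batch=8, dim=10):
        X.append(c); y.append(evaluate(c))
    surrogate = fit_surrogate(np.array(X), np.array(y))
best = X[int(np.argmax(y))]
```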
arXiv Detail & Related papers (2025-05-08T13:16:40Z)
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization. In this study, we first explore the intrinsic characteristics of generative models. We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative approaches to network optimization.
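One intuition behind the claimed advantage is easy to demonstrate: a generative solver can draw several candidate solutions and keep the best, whereas a discriminative regressor commits to a single point. This toy best-of-n snippet (an illustration, not the paper's analysis) shows the gap on a multimodal objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Toy multimodal objective with two symmetric optima at +/-1."""
    return -min((x - 1.0) ** 2, (x + 1.0) ** 2)

# Discriminative baseline: regressing to the mean of the two optima gives x = 0.
print("regressed point:", objective(0.0))
# Generative solver: sample candidates and keep the best one found.
samples = rng.uniform(-2.0, 2.0, size=32)
print("best of 32 samples:", max(objective(s) for s in samples))
```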
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Generative diffusion models are popular in various cross-domain applications.
These models hold promise in tackling complex network optimization problems.
We propose a new framework for generative diffusion models called Diffusion Model-based Solution Generation (DiffSG).
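On the training side, solution generation typically reduces to the standard denoising objective: corrupt a known good solution with the forward process and regress the injected noise. A minimal step, reusing the GNNDenoiser sketch above (again an assumption, not DiffSG's exact recipe):

```python
import torch

def denoising_step(denoiser, opt, x0, adj, steps=50):
    """One DDPM training step on a reference solution x0."""
    betas = torch.linspace(1e-4, 0.02, steps)
    abars = torch.cumprod(1.0 - betas, dim=0)
    k = int(torch.randint(0, steps, (1,)))          # random diffusion time
    eps = torch.randn_like(x0)                      # forward-process noise
    x_t = torch.sqrt(abars[k]) * x0 + torch.sqrt(1.0 - abars[k]) * eps
    loss = ((denoiser(x_t, torch.tensor([[k / steps]]), adj) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```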
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Thompson sampling for improved exploration in GFlowNets [75.89693358516944]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy.
We show in two domains that TS-GFN yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
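The posterior-sampling principle that TS-GFN lifts to GFlowNet policies can be seen on a Gaussian bandit in a few lines (an illustration of the exploration mechanism, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

def thompson_bandit(true_means, horizon=1000):
    """Each round: sample one plausible model from the posterior, act
    greedily on it, then update the posterior with the observed reward."""
    n = len(true_means)
    counts, sums = np.ones(n), np.zeros(n)          # crude Normal posterior stats
    for _ in range(horizon):
        draws = rng.normal(sums / counts, 1.0 / np.sqrt(counts))
        arm = int(np.argmax(draws))                 # act on the sampled model
        sums[arm] += rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
    return int(np.argmax(counts))                   # most-played arm

print(thompson_bandit(np.array([0.1, 0.5, 0.9])))   # usually prints 2
```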
arXiv Detail & Related papers (2023-06-30T14:19:44Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning. Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
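Algorithm unrolling in miniature, with the stochastic twist the summary mentions: each unrolled layer applies one gradient step with a learnable step size and consumes a mini-batch rather than the whole dataset (a generic sketch, not SURF's federated architecture):

```python
import torch
import torch.nn as nn

class UnrolledGD(nn.Module):
    """L unrolled least-squares gradient steps with learnable step sizes."""
    def __init__(self, n_layers: int = 10):
        super().__init__()
        self.steps = nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, w, batches):
        # w: (d,) initial parameters; batches: one (X, y) mini-batch per layer
        for step, (X, y) in zip(self.steps, batches):
            grad = 2.0 * X.T @ (X @ w - y) / len(y)   # stochastic gradient
            w = w - step * grad
        return w
```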
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
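A message-passing layer whose weights are shared across nodes runs unchanged on any topology and network size, which is the property the summary highlights. A generic sketch (not the paper's exact DMPNN computations):

```python
import torch
import torch.nn as nn

class MPLayer(nn.Module):
    """One message-passing round: aggregate neighbour messages, update state."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.msg = nn.Linear(dim, dim)       # message produced by each node
        self.upd = nn.Linear(2 * dim, dim)   # update from [own state, aggregate]

    def forward(self, h, adj):
        m = adj @ self.msg(h)                # weighted sum over neighbours
        return torch.relu(self.upd(torch.cat([h, m], dim=-1)))
```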
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Attentive Gaussian processes for probabilistic time-series generation [4.94950858749529]
We propose a computationally efficient attention-based network combined with Gaussian process regression to generate real-valued sequences. We develop a block-wise training algorithm that allows mini-batch training of the network while the GP is trained using the full batch. The algorithm is proven to converge and finds solutions of comparable, if not better, quality.
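The GP half of the hybrid model is standard exact regression; a compact numpy version with an RBF kernel (the attention network that would supply features, and the block-wise training schedule, are omitted):

```python
import numpy as np

def gp_posterior(X, y, Xs, lengthscale=1.0, noise=1e-2):
    """Exact GP regression: posterior mean and variance at test inputs Xs."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)
    K = rbf(X, X) + noise * np.eye(len(X))     # noisy train-train covariance
    Ks = rbf(Xs, X)                            # test-train covariance
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```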
arXiv Detail & Related papers (2021-02-10T01:19:15Z)
- Graph Neural Networks for Scalable Radio Resource Management: Architecture Design and Theoretical Analysis [31.372548374969387]
We propose to apply graph neural networks (GNNs) to solve large-scale radio resource management problems.
The proposed method is highly scalable and can solve the beamforming problem in an interference channel with 1,000 transceiver pairs within 6 milliseconds on a single GPU.
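The scalability claim rests on the fact that GNN parameters do not grow with the number of transceiver pairs. Reusing the MPLayer sketch above, the same weights evaluate on interference graphs of any size:

```python
import torch

layer = MPLayer(dim=16)                      # fixed parameter count
for n in (10, 100, 1000):                    # number of transceiver pairs
    h = torch.randn(n, 16)                   # per-link features
    adj = torch.rand(n, n)                   # interference-strength graph
    print(n, layer(h, adj).shape)            # same weights, any network size
```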
arXiv Detail & Related papers (2020-07-15T11:43:32Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
A primal-dual learning algorithm is developed to train the GNN in a model-free manner, where knowledge of the system model is not required.
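A single model-free primal-dual iteration, in the generic form such training schemes take (the utility and QoS callables, the constraint threshold, and the step sizes are hypothetical placeholders):

```python
import torch

def primal_dual_step(policy, opt, mu, H, utility, qos, c_min, lr_dual=0.01):
    """Ascend the Lagrangian in the policy parameters, then descend in the
    dual variable mu enforcing qos >= c_min."""
    p = policy(H)                                   # allocation for channel state H
    g = qos(p, H)                                   # scalar QoS measurement
    lagrangian = utility(p, H) + mu * (g - c_min)
    opt.zero_grad(); (-lagrangian).backward(); opt.step()    # primal ascent
    return max(0.0, mu - lr_dual * (float(g) - c_min))       # projected dual descent
```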
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study a distributed variant of large-scale AUC maximization with a deep neural network as the predictive model. Our method requires far fewer communication rounds while retaining, in theory, the same order of convergence. Experiments on several benchmark datasets confirm the theory and demonstrate the effectiveness of the method.
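The communication saving comes from the familiar local-update pattern: several optimization steps per worker between synchronizations. A generic sketch of that pattern (the paper's AUC min-max objective is replaced here by an abstract per-worker gradient oracle):

```python
import numpy as np

def local_updates_with_averaging(grad, w0, n_workers=4, local_steps=10, rounds=5, lr=0.1):
    """Each round: every worker takes local_steps steps without communicating,
    then all workers average parameters (one communication round)."""
    workers = [w0.copy() for _ in range(n_workers)]
    for _ in range(rounds):
        for i in range(n_workers):
            for _ in range(local_steps):
                workers[i] -= lr * grad(i, workers[i])   # local-only progress
        avg = sum(workers) / n_workers                   # the only communication
        workers = [avg.copy() for _ in range(n_workers)]
    return workers[0]
```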
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Large Scale Many-Objective Optimization Driven by Distributional Adversarial Networks [1.2461503242570644]
We propose a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring. The proposed algorithm is tested on nine benchmark problems from the large-scale multi-objective problem (LSMOP) test suite.
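The core move, generating offspring from a learned model of good parents rather than by crossover and mutation, can be mimicked with a simple estimation-of-distribution stand-in for the adversarially trained generator (DAN itself, and the RVEA selection step, are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def model_based_offspring(parents, n_children):
    """Fit a Gaussian to the elite parents and sample new candidates."""
    mu = parents.mean(axis=0)
    cov = np.cov(parents, rowvar=False) + 1e-6 * np.eye(parents.shape[1])
    return rng.multivariate_normal(mu, cov, size=n_children)
```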
arXiv Detail & Related papers (2020-03-16T04:14:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.