Neural Quantile Optimization for Edge-Cloud Computing
- URL: http://arxiv.org/abs/2307.05170v1
- Date: Tue, 11 Jul 2023 11:05:10 GMT
- Title: Neural Quantile Optimization for Edge-Cloud Computing
- Authors: Bin Du, He Zhang, Xiangle Cheng, Lei Zhang
- Abstract summary: We seek the best traffic allocation scheme for the edge-cloud computing network that satisfies constraints and minimizes the cost based on burstable billing.
We introduce the Gumbel-softmax sampling network to solve the optimization problems via unsupervised learning.
The trained network works as an efficient traffic allocation scheme sampler, remarkably outperforming the random strategy in feasibility and cost function value.
- Score: 17.57923005428252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We seek the best traffic allocation scheme for the edge-cloud computing
network that satisfies constraints and minimizes the cost based on burstable
billing. First, for a fixed network topology, we formulate a family of integer
programming problems with random parameters describing the various traffic
demands. Then, to overcome the difficulty caused by the discrete nature of the
problem, we generalize the Gumbel-softmax reparameterization method to induce
an unconstrained continuous optimization problem as a regularized continuation
of the discrete problem. Finally, we introduce the Gumbel-softmax sampling
network to solve the optimization problems via unsupervised learning. The
network structure reflects the edge-cloud computing topology and is trained to
minimize the expectation of the cost function for unconstrained continuous
optimization problems. The trained network works as an efficient traffic
allocation scheme sampler, remarkably outperforming the random strategy in
feasibility and cost function value. Besides testing the quality of the output
allocation scheme, we examine the generalization property of the network by
increasing the time steps and the number of users. We also feed the solution to
existing integer optimization solvers as initial conditions and verify that the
warm starts accelerate the short-time iteration process. The framework is
general with solid performance, and the decoupled structure of the random
neural networks makes it well suited to practical implementations.
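As a rough, hypothetical sketch of the Gumbel-softmax relaxation described above (this is not the authors' code: the toy demands, per-node costs, capacity penalty, and the small network below are illustrative assumptions, and the expectation over random demands is collapsed to a single fixed instance), one could sample soft allocations and train a small network to minimize a penalized cost:

```python
# Hypothetical sketch of Gumbel-softmax traffic-allocation sampling (not the paper's code).
# Assumed toy setting: each of N_USERS users routes its demand to one of N_NODES nodes;
# all numbers below are made-up placeholders for illustration.
import torch
import torch.nn.functional as F

N_USERS, N_NODES = 8, 3
torch.manual_seed(0)
demand = torch.rand(N_USERS) + 0.5          # random traffic demands (toy parameters)
unit_cost = torch.tensor([1.0, 2.0, 3.5])   # per-unit cost of each node (toy values)
capacity = torch.tensor([4.0, 4.0, 6.0])    # node capacities (toy constraint)

# Small network producing allocation logits from the demand vector.
net = torch.nn.Sequential(
    torch.nn.Linear(N_USERS, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, N_USERS * N_NODES),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def penalized_cost(logits, tau=0.5, penalty=10.0):
    # Gumbel-softmax yields a differentiable, nearly one-hot allocation matrix.
    alloc = F.gumbel_softmax(logits.view(N_USERS, N_NODES), tau=tau, hard=False)
    load = alloc.t() @ demand                  # traffic arriving at each node
    cost = (load * unit_cost).sum()            # linear cost as a stand-in for burstable billing
    violation = F.relu(load - capacity).sum()  # soft penalty for capacity constraints
    return cost + penalty * violation

for step in range(500):
    opt.zero_grad()
    loss = penalized_cost(net(demand.unsqueeze(0)).squeeze(0))
    loss.backward()
    opt.step()

# After training, hard samples serve as candidate allocation schemes (e.g. warm starts).
with torch.no_grad():
    hard_alloc = F.gumbel_softmax(net(demand.unsqueeze(0)).view(N_USERS, N_NODES),
                                  tau=0.5, hard=True)
print(hard_alloc.argmax(dim=1))  # node assigned to each user
```

In the paper the cost is based on burstable (quantile) billing and the network structure mirrors the edge-cloud topology; here a linear cost and a generic MLP stand in for both, and the hard samples at the end only illustrate how trained outputs could be fed to an integer solver as warm starts.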
Related papers
- Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
arXiv Detail & Related papers (2024-05-28T10:28:45Z)
- FALCON: FLOP-Aware Combinatorial Optimization for Neural Network Pruning [17.60353530072587]
Network pruning offers a solution to reduce model size and computational cost while maintaining performance.
Most current pruning methods focus primarily on improving sparsity by reducing the number of nonzero parameters.
We propose FALCON, a novel optimization-based framework for network pruning that jointly accounts for model accuracy (fidelity), FLOPs, and sparsity constraints.
arXiv Detail & Related papers (2024-03-11T18:40:47Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Does a sparse ReLU network training problem always admit an optimum? [0.0]
We show that the existence of an optimal solution is not always guaranteed, especially in the context of sparse ReLU neural networks.
In particular, we first show that optimization problems involving deep networks with certain sparsity patterns do not always have optimal parameters.
arXiv Detail & Related papers (2023-06-05T08:01:50Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce computation time by 71.6% with only a 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Recurrent Neural Networks for Stochastic Control Problems with Delay [0.76146285961466]
We propose and systematically study deep neural network-based algorithms to solve stochastic control problems with delay features.
Specifically, we employ neural networks for sequence modeling to parameterize the policy and optimize the objective function.
The proposed algorithms are tested on three benchmark examples: a linear-quadratic problem, optimal consumption with fixed finite delay, and portfolio optimization with complete memory.
arXiv Detail & Related papers (2021-01-05T07:18:47Z)
- Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach [0.0]
We propose a multiobjective perspective on the training of neural networks by treating prediction accuracy and network complexity as two individual objective functions.
Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in the complexity of neural networks with negligible loss of accuracy are possible.
arXiv Detail & Related papers (2020-08-31T13:28:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.