Learning from Images: Proactive Caching with Parallel Convolutional
Neural Networks
- URL: http://arxiv.org/abs/2108.06817v1
- Date: Sun, 15 Aug 2021 21:32:47 GMT
- Title: Learning from Images: Proactive Caching with Parallel Convolutional
Neural Networks
- Authors: Yantong Wang, Ye Hu, Zhaohui Yang, Walid Saad, Kai-Kit Wong, Vasilis
Friderikos
- Abstract summary: A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce computation time by 71.6% with only 0.8% additional performance cost.
- Score: 94.85780721466816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the continuous trend of data explosion, delivering packets from data
servers to end users causes increased stress on both the fronthaul and backhaul
traffic of mobile networks. To mitigate this problem, caching popular content
closer to the end-users has emerged as an effective method for reducing network
congestion and improving user experience. To find the optimal locations for
content caching, many conventional approaches construct various mixed integer
linear programming (MILP) models. However, such methods may fail to support
online decision making due to the inherent curse of dimensionality. In this
paper, a novel framework for proactive caching is proposed. This framework
merges model-based optimization with data-driven techniques by transforming an
optimization problem into a grayscale image. For parallel training and simple
design purposes, the proposed MILP model is first decomposed into a number of
sub-problems and, then, convolutional neural networks (CNNs) are trained to
predict content caching locations of these sub-problems. Furthermore, since the
MILP model decomposition neglects the internal effects among sub-problems, the
CNNs' outputs risk being infeasible solutions. Therefore, two
algorithms are provided: the first uses predictions from CNNs as an extra
constraint to reduce the number of decision variables; the second employs CNNs'
outputs to accelerate local search. Numerical results show that the proposed
scheme can reduce computation time by 71.6% with only a 0.8% additional performance
cost compared to the MILP solution, thereby providing high-quality decision making
in real time.
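To make the pipeline above concrete, the following is a minimal, hedged sketch: a toy caching instance (a content-by-node cost matrix) is encoded as a grayscale image, a small CNN scores each placement, and the high-confidence scores are added as extra constraints that fix binary variables before the remaining MILP is solved. The matrix layout, CNN architecture, 0.9/0.1 thresholds, and the toy PuLP model are illustrative assumptions, not the authors' exact formulation.
```python
import torch
import torch.nn as nn
import pulp

# 1. Encode a toy caching instance as a grayscale "image":
#    rows = contents, cols = candidate cache nodes, pixel = normalized placement cost.
#    (Illustrative layout; the paper's exact image construction is not reproduced here.)
costs = torch.rand(1, 1, 8, 8)                     # batch x channel x contents x nodes

# 2. A small CNN that scores each (content, node) placement.
class PlacementCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)                         # probability of caching content i at node j

cnn = PlacementCNN()                               # assume pretrained on solved MILP instances
with torch.no_grad():
    probs = cnn(costs).squeeze()                   # 8 x 8 placement probabilities

# 3. Build a toy MILP and use the confident predictions as extra constraints.
milp = pulp.LpProblem("proactive_caching", pulp.LpMinimize)
x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(8)] for i in range(8)]

# Objective: total placement cost.
milp += pulp.lpSum([costs[0, 0, i, j].item() * x[i][j] for i in range(8) for j in range(8)])
# Toy feasibility constraint: every content must be cached somewhere.
for i in range(8):
    milp += pulp.lpSum([x[i][j] for j in range(8)]) >= 1

# Fix variables the CNN is confident about, shrinking the search space.
for i in range(8):
    for j in range(8):
        p = probs[i, j].item()
        if p > 0.9:
            milp += x[i][j] == 1
        elif p < 0.1:
            milp += x[i][j] == 0

milp.solve()                                       # the remaining, much smaller MILP
```
Because the decomposition ignores coupling between sub-problems, fixing variables this way can render the reduced model infeasible; that is exactly the risk noted in the abstract and the motivation for the second algorithm, which instead uses the CNN outputs only to accelerate a local search.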
Related papers
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
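As a rough illustration of the shared-backbone/multi-head pattern referenced in this entry, here is a minimal sketch; the backbone width, head count, ensembling rule, and toy inputs are assumptions made for illustration, not details taken from the paper.
```python
import torch
import torch.nn as nn

class MultiHeadModel(nn.Module):
    """Shared backbone with several prediction heads whose outputs are ensembled."""
    def __init__(self, in_dim=32, hidden=64, out_dim=4, num_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, out_dim) for _ in range(num_heads)])

    def forward(self, x):
        features = self.backbone(x)                 # shared representation
        # Simple ensemble: average the heads' predictions.
        return torch.stack([head(features) for head in self.heads]).mean(dim=0)

model = MultiHeadModel()
scores = model(torch.randn(8, 32))                  # one row of toy task/state features per sample
```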
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning [16.8717239856441]
We propose a model-splitting allowed FL (SFL) framework to alleviate the shortage of computing power faced by clients in training deep neural networks (DNNs) using federated learning (FL).
Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency for the clients to complete a local training session.
To solve this mixed integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut layer and other parameters of an AI model, and thus transform the training latency minimization problem (TLMP) into a continuous problem.
arXiv Detail & Related papers (2023-07-21T12:26:42Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on the fly by a small amount proportional to the magnitude scale.
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes it more stable and accurate.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that foregoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
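For readers unfamiliar with deep equilibrium models, the hedged sketch below shows only the core idea: the layer output is the fixed point z* = f(z*, x) of a single nonlinear transformation, found here by naive fixed-point iteration on a toy contractive map. The paper's actual solver, layer parameterization, and joint optimization over inputs are more sophisticated and are not reproduced.
```python
import torch

torch.manual_seed(0)
# Toy single "layer": f(z, x) = tanh(W z + U x + b), with W scaled so the map is contractive.
W = 0.25 * torch.randn(16, 16) / 16 ** 0.5
U = torch.randn(16, 8)
b = torch.zeros(16)

def f(z, x):
    return torch.tanh(z @ W.T + x @ U.T + b)

def deq_forward(x, iters=100, tol=1e-6):
    """The 'network output' is the fixed point z* = f(z*, x), found by naive iteration."""
    z = torch.zeros(x.shape[0], 16)
    for _ in range(iters):
        z_next = f(z, x)
        if (z_next - z).abs().max() < tol:
            break
        z = z_next
    return z

x = torch.randn(4, 8)
z_star = deq_forward(x)
print((z_star - f(z_star, x)).abs().max())   # ~0: z_star is (approximately) a fixed point
```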
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- Multi-objective Evolutionary Approach for Efficient Kernel Size and Shape for CNN [12.697368516837718]
State-of-the-art CNN topologies, such as VGGNet and ResNet, have become increasingly accurate.
These networks are computationally expensive involving billions of arithmetic operations and parameters.
This paper considers optimising the computational resource consumption by reducing the size and number of kernels in convolutional layers.
arXiv Detail & Related papers (2021-06-28T14:47:29Z)
- Partitioning sparse deep neural networks for scalable training and inference [8.282177703075453]
State-of-the-art deep neural networks (DNNs) have significant computational and data management requirements.
Sparsification and pruning methods are shown to be effective in removing a large fraction of connections in DNNs.
The resulting sparse networks present unique challenges to further improve the computational efficiency of training and inference in deep learning.
arXiv Detail & Related papers (2021-04-23T20:05:52Z)
- CNN Acceleration by Low-rank Approximation with Quantized Factors [9.654865591431593]
Modern convolutional neural networks, although they achieve great results on complex computer vision tasks, still cannot be used effectively on mobile and embedded devices.
To address this problem, a novel approach is proposed that combines two known methods: low-rank tensor approximation in Tucker format and quantization of weights and feature maps (activations).
The efficiency of our method is demonstrated for ResNet18 and ResNet34 on the CIFAR-10, CIFAR-100, and ImageNet classification tasks.
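To make the two ingredients named in this entry concrete, the sketch below combines a truncated higher-order SVD (one simple way to obtain a Tucker-format approximation of a convolutional weight tensor) with uniform 8-bit quantization of the resulting core. The rank choice, quantization scheme, and the absence of fine-tuning are illustrative simplifications, not the paper's method.
```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def tucker_hosvd(t, ranks):
    """Truncated HOSVD: a simple, non-iterative Tucker approximation (core + factor matrices)."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for u in factors:
        # Contract each original mode with its factor; contracted modes cycle to the back,
        # so after all modes are processed the core axes come out in the original order.
        core = np.tensordot(core, u, axes=([0], [0]))
    return core, factors

def quantize_uint8(a):
    """Uniform affine quantization to 8 bits; returns the codes and a dequantized copy."""
    lo, hi = float(a.min()), float(a.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((a - lo) / scale).astype(np.uint8)
    return q, q.astype(np.float32) * scale + lo

# Toy conv weight: (out_channels, in_channels, kH, kW).
w = np.random.randn(64, 32, 3, 3).astype(np.float32)
core, factors = tucker_hosvd(w, ranks=(16, 8, 3, 3))
core_q, core_dq = quantize_uint8(core)
print(core.shape, core_q.dtype)   # (16, 8, 3, 3) uint8
```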
arXiv Detail & Related papers (2020-06-16T02:28:05Z)
- A Privacy-Preserving-Oriented DNN Pruning and Mobile Acceleration Framework [56.57225686288006]
Weight pruning of deep neural networks (DNNs) has been proposed to accommodate the limited storage and computing capability of mobile edge devices.
Previous pruning methods mainly focus on reducing the model size and/or improving performance without considering the privacy of user data.
We propose a privacy-preserving-oriented pruning and mobile acceleration framework that does not require the private training dataset.
arXiv Detail & Related papers (2020-03-13T23:52:03Z)