Online Learning of Network Bottlenecks via Minimax Paths
- URL: http://arxiv.org/abs/2109.08467v1
- Date: Fri, 17 Sep 2021 11:11:50 GMT
- Title: Online Learning of Network Bottlenecks via Minimax Paths
- Authors: Niklas Åkerblom, Fazeleh Sadat Hoseini, Morteza Haghir Chehreghani
- Abstract summary: We study bottleneck identification in networks via extracting minimax paths.
We then devise an alternative problem formulation which approximates the original objective.
We experimentally evaluate the performance of Thompson Sampling with the approximate formulation on real-world directed and undirected networks.
- Score: 6.316693022958221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study bottleneck identification in networks via extracting
minimax paths. Many real-world networks have stochastic weights for which full
knowledge is not available in advance. Therefore, we model this task as a
combinatorial semi-bandit problem to which we apply a combinatorial version of
Thompson Sampling and establish an upper bound on the corresponding Bayesian
regret. Due to the computational intractability of the problem, we then devise
an alternative problem formulation which approximates the original objective.
Finally, we experimentally evaluate the performance of Thompson Sampling with
the approximate formulation on real-world directed and undirected networks.
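For intuition, here is a minimal sketch of the setup, assuming a directed graph with independent Gaussian edge weights, an independent Normal posterior per edge, and an exact minimax-path oracle implemented as a Dijkstra-style search that minimizes the maximum edge weight along a path. All names (`minimax_path`, `thompson_sampling_minimax`, `noise_sd`) are illustrative; this is not the authors' implementation and it omits the paper's approximate problem formulation.

```python
# Illustrative sketch only: Gaussian edge weights, Normal posteriors,
# exact minimax-path oracle. Not the paper's algorithm.
import heapq
import math
import random
from collections import defaultdict


def minimax_path(adj, source, target):
    """Dijkstra-like search minimizing the maximum edge weight (the
    'bottleneck') along a source-target path. `adj` maps a node to a
    list of (neighbor, weight) pairs."""
    best = {source: -math.inf}        # best bottleneck value found per node
    parent = {source: None}
    heap = [(-math.inf, source)]
    while heap:
        bottleneck, u = heapq.heappop(heap)
        if u == target:               # reconstruct the path on arrival
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return bottleneck, path[::-1]
        if bottleneck > best.get(u, math.inf):
            continue                  # stale heap entry
        for v, w in adj.get(u, []):
            cand = max(bottleneck, w)
            if cand < best.get(v, math.inf):
                best[v] = cand
                parent[v] = u
                heapq.heappush(heap, (cand, v))
    return math.inf, []               # target unreachable


def thompson_sampling_minimax(edges, source, target, true_means,
                              rounds=200, noise_sd=1.0):
    """Combinatorial Thompson Sampling with semi-bandit feedback: sample a
    weight per edge from its posterior, plan a minimax path on the sampled
    weights, traverse it, observe noisy weights on the traversed edges
    only, and update their posteriors."""
    post = {e: {"mu": 0.0, "n": 0} for e in edges}  # weak Normal prior per edge
    for _ in range(rounds):
        # 1. Sample plausible edge weights from the current posteriors.
        sampled = {e: random.gauss(p["mu"], noise_sd / math.sqrt(p["n"] + 1))
                   for e, p in post.items()}
        # 2. Plan a minimax (bottleneck) path under the sampled weights.
        adj = defaultdict(list)
        for (u, v), w in sampled.items():
            adj[u].append((v, w))
        _, path = minimax_path(adj, source, target)
        # 3. Traverse the path and update the posterior of each used edge.
        for u, v in zip(path, path[1:]):
            obs = random.gauss(true_means[(u, v)], noise_sd)
            p = post[(u, v)]
            p["mu"] = (p["mu"] * p["n"] + obs) / (p["n"] + 1)
            p["n"] += 1
    return post
```

For example, calling `thompson_sampling_minimax({(0, 1), (1, 2), (0, 2)}, 0, 2, {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 5.0})` would, as the posteriors concentrate over the rounds, increasingly favor the path 0 → 1 → 2, whose bottleneck (largest edge weight) is smaller than that of the direct edge.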
Related papers
- Sparsest Models Elude Pruning: An Exposé of Pruning's Current Capabilities [4.842973374883628]
Pruning has emerged as a promising approach for compressing large-scale models, yet its effectiveness in recovering the sparsest of models has not yet been explored.
We conducted an extensive series of 485,838 experiments, applying a range of state-of-the-art pruning algorithms to a synthetic dataset we created, named the Cubist Spiral.
Our findings reveal a significant gap in performance compared to ideal sparse networks, which we identified through a novel search algorithm.
arXiv Detail & Related papers (2024-07-04T17:33:15Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - A Bayesian approach to multi-task learning with network lasso [0.0]
We propose a Bayesian approach to solve multi-task learning problems by network lasso.
The effectiveness of the proposed method is shown in a simulation study and a real data analysis.
arXiv Detail & Related papers (2021-10-18T06:25:38Z) - Mixed-Privacy Forgetting in Deep Networks [114.3840147070712]
We show that the influence of a subset of the training samples can be removed from the weights of a network trained on large-scale image classification tasks.
Inspired by real-world applications of forgetting techniques, we introduce a novel notion of forgetting in a mixed-privacy setting.
We show that our method allows forgetting without having to trade off the model accuracy.
arXiv Detail & Related papers (2020-12-24T19:34:56Z) - Neural Thompson Sampling [94.82847209157494]
We propose a new algorithm, called Neural Thompson Sampling, which adapts deep neural networks for both exploration and exploitation.
At the core of our algorithm is a novel posterior distribution of the reward, whose mean is the neural network approximator and whose variance is built upon the neural tangent features of the corresponding neural network.
arXiv Detail & Related papers (2020-10-02T07:44:09Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z) - Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z) - Ensemble Sampling [18.85309520133554]
This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks.
We establish a theoretical basis that supports the approach and present computational results that offer further insight.
arXiv Detail & Related papers (2017-05-20T19:36:36Z)
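As a side note on the last entry above, here is a toy sketch of the ensemble-sampling idea for a simple K-armed bandit with Gaussian rewards; the class name `EnsembleSampler`, the perturbation scheme, and all parameters are illustrative assumptions rather than the authors' implementation.

```python
import random


class EnsembleSampler:
    """Toy ensemble sampling for a K-armed Gaussian bandit: keep several
    perturbed point estimates instead of an exact posterior, and act
    greedily with respect to a member chosen uniformly at random."""

    def __init__(self, n_arms, n_models=10, noise_sd=1.0):
        self.noise_sd = noise_sd
        # Each member starts from its own perturbed prior sample, which
        # plays the role of one pseudo-observation.
        self.estimates = [[random.gauss(0.0, 1.0) for _ in range(n_arms)]
                          for _ in range(n_models)]
        self.counts = [[1] * n_arms for _ in range(n_models)]

    def select_arm(self):
        # Choosing a random member and playing its greedy arm mimics
        # drawing a model from an approximate posterior.
        member = random.choice(self.estimates)
        return max(range(len(member)), key=member.__getitem__)

    def update(self, arm, reward):
        # Every member receives its own noise-perturbed copy of the reward,
        # keeping the ensemble spread out like posterior samples.
        for est, cnt in zip(self.estimates, self.counts):
            perturbed = reward + random.gauss(0.0, self.noise_sd)
            cnt[arm] += 1
            est[arm] += (perturbed - est[arm]) / cnt[arm]
```

The random choice among ensemble members stands in for sampling a model from the posterior, which is what makes the scheme a tractable stand-in for Thompson Sampling when maintaining an exact posterior is impractical.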