Multi-layered Network Exploration via Random Walks: From Offline
Optimization to Online Learning
- URL: http://arxiv.org/abs/2106.05065v1
- Date: Wed, 9 Jun 2021 13:35:39 GMT
- Title: Multi-layered Network Exploration via Random Walks: From Offline
Optimization to Online Learning
- Authors: Xutong Liu, Jinhang Zuo, Xiaowei Chen, Wei Chen, John C.S. Lui
- Abstract summary: The multi-layered network exploration (MuLaNE) problem is an important problem abstracted from many applications.
In MuLaNE, there are multiple network layers where each node has an importance weight and each layer is explored by a random walk.
We systematically study this problem from offline optimization to online learning.
- Score: 33.450042829375086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The multi-layered network exploration (MuLaNE) problem is an important problem
abstracted from many applications. In MuLaNE, there are multiple network layers
where each node has an importance weight and each layer is explored by a random
walk. The MuLaNE task is to allocate a total random walk budget $B$ across the
network layers so that the total weight of the unique nodes visited by the
random walks is maximized. We systematically study this problem from offline
optimization to online learning. For the offline optimization setting where the
network structure and node weights are known, we provide greedy-based
constant-ratio approximation algorithms for overlapping networks, and greedy or
dynamic-programming-based optimal solutions for non-overlapping networks. For
the online learning setting, neither the network structure nor the node weights
are known initially. We adapt the combinatorial multi-armed bandit framework
and design algorithms to learn random walk related parameters and node weights
while optimizing the budget allocation in multiple rounds, and prove that they
achieve logarithmic regret bounds. Finally, we conduct experiments on a
real-world social network dataset to validate our theoretical results.
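The abstract frames the offline problem as spending a discrete budget $B$ across layers to maximize the expected total weight of unique visited nodes. As a rough illustration only (not the paper's algorithm or its guarantees), the sketch below allocates the budget one unit at a time to the layer with the largest Monte Carlo estimated marginal gain; all names (`random_walk`, `estimated_cover`, `greedy_allocate`) and the toy layers are hypothetical.

```python
# Hedged sketch: unit-greedy budget allocation for a MuLaNE-style problem.
# Illustrative only; the paper's greedy/DP algorithms and guarantees differ.
import random
from typing import Dict, List


def random_walk(adj: Dict[int, List[int]], start: int, steps: int) -> set:
    """Nodes visited by a simple random walk of the given length."""
    visited, node = {start}, start
    for _ in range(steps):
        neighbors = adj.get(node, [])
        if not neighbors:
            break
        node = random.choice(neighbors)
        visited.add(node)
    return visited


def estimated_cover(layers, weights, alloc, trials=200) -> float:
    """Monte Carlo estimate of the expected total weight of unique nodes
    visited when layer i receives alloc[i] random-walk steps."""
    total = 0.0
    for _ in range(trials):
        visited = set()
        for (adj, start), budget in zip(layers, alloc):
            if budget > 0:
                visited |= random_walk(adj, start, budget)
        total += sum(weights.get(v, 0.0) for v in visited)
    return total / trials


def greedy_allocate(layers, weights, B, trials=200) -> List[int]:
    """Give each unit of budget to the layer with the largest estimated
    marginal gain in covered weight (a heuristic stand-in for the paper's
    greedy approximation)."""
    alloc = [0] * len(layers)
    for _ in range(B):
        base = estimated_cover(layers, weights, alloc, trials)
        gains = []
        for i in range(len(layers)):
            trial = alloc[:]
            trial[i] += 1
            gains.append(estimated_cover(layers, weights, trial, trials) - base)
        alloc[gains.index(max(gains))] += 1
    return alloc


if __name__ == "__main__":
    # Two toy overlapping layers over node ids 0..4, each explored from node 0.
    layer1 = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    layer2 = {0: [3, 4], 3: [0, 4], 4: [0, 3]}
    weights = {0: 1.0, 1: 2.0, 2: 1.5, 3: 1.0, 4: 3.0}
    print(greedy_allocate([(layer1, 0), (layer2, 0)], weights, B=6))
```

A unit-by-unit greedy like this leans on the diminishing-returns structure of coverage; the paper's algorithms additionally distinguish overlapping from non-overlapping layers and come with constant-ratio or exact-optimality guarantees, which this heuristic does not attempt to reproduce.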
Related papers
- Why Line Search when you can Plane Search? SO-Friendly Neural Networks allow Per-Iteration Optimization of Learning and Momentum Rates for Every Layer [9.849498498869258]
We introduce the class of SO-friendly neural networks, which include several models used in practice.
Performing a precise line search to set the step size has the same cost during full-batch training as using a fixed learning rate.
For the same cost, a plane search can be used to set both the learning and momentum rates on each step.
arXiv Detail & Related papers (2024-06-25T22:06:40Z)
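The per-iteration step-size idea summarized above can be illustrated with a generic full-batch backtracking (Armijo) line search; this is a simplified stand-in, not the paper's SO-friendly construction or its plane search over learning and momentum rates, and the names and toy problem below are hypothetical.

```python
# Hedged sketch: per-iteration backtracking line search on the step size for
# full-batch gradient descent. Generic Armijo backtracking, not the paper's
# exact (SO-friendly) line/plane search.
import numpy as np


def backtracking_step(loss, grad, w, eta0=1.0, beta=0.5, c=1e-4, max_tries=30):
    """One gradient step whose step size is shrunk until the Armijo
    sufficient-decrease condition holds."""
    g = grad(w)
    f0 = loss(w)
    eta = eta0
    for _ in range(max_tries):
        if loss(w - eta * g) <= f0 - c * eta * np.dot(g, g):
            break
        eta *= beta
    return w - eta * g, eta


if __name__ == "__main__":
    # Toy least-squares problem: loss(w) = 0.5 * ||A w - b||^2.
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
    loss = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
    grad = lambda w: A.T @ (A @ w - b)
    w, eta = np.zeros(10), None
    for _ in range(20):
        w, eta = backtracking_step(loss, grad, w)
    print(f"final loss {loss(w):.4f}, last accepted step size {eta:.4g}")
```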
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching efficient architectures for devices with different resource constraints.
We aim to give one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- Entanglement Routing Based on Fidelity Curves [0.0]
We consider quantum networks where each link is characterized by a trade-off between the entanglement generation rate and fidelity.
Two entanglement distribution models are considered: one where entangled qubits are distributed one at a time, and a flow model where a large number of entangled qubits are distributed simultaneously.
arXiv Detail & Related papers (2023-03-22T18:59:36Z)
- Combining Optimal Path Search With Task-Dependent Learning in a Neural Network [4.1712273169097305]
We show that one can define a neural network representation of path finding problems by transforming cost values into synaptic weights.
We show that network learning mechanisms can adapt the weights in the network, augmenting the resulting paths according to the task at hand.
arXiv Detail & Related papers (2022-01-26T18:29:00Z)
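The cost-to-weight idea summarized above can be illustrated, very loosely, by storing edge costs as connection weights and reading shortest-path distances off an iterated min-plus relaxation (plain Bellman-Ford); this generic sketch is not the paper's network construction or its learning mechanism, and the names and toy graph are hypothetical.

```python
# Hedged sketch: edge costs become entries of a "weight" matrix W, and
# shortest-path distances from a source emerge from n-1 passes of min-plus
# propagation through those weights (Bellman-Ford relaxation).
import numpy as np

INF = np.inf


def shortest_distances(W: np.ndarray, source: int) -> np.ndarray:
    """W[i, j] is the cost of edge i->j (np.inf if the edge is absent)."""
    n = W.shape[0]
    d = np.full(n, INF)
    d[source] = 0.0
    for _ in range(n - 1):                      # one relaxation pass per "layer"
        d = np.minimum(d, (d[:, None] + W).min(axis=0))
    return d


if __name__ == "__main__":
    # 4-node toy graph with directed edges and costs.
    W = np.full((4, 4), INF)
    W[0, 1], W[1, 2], W[0, 2], W[2, 3] = 1.0, 1.0, 5.0, 2.0
    print(shortest_distances(W, source=0))      # [0. 1. 2. 4.]
```

In the paper's setting such weights live inside a neural representation whose learning mechanism can then bias the recovered paths toward a given task; the sketch only covers the cost-to-weight and propagation part.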
- Layer Adaptive Node Selection in Bayesian Neural Networks: Statistical Guarantees and Implementation Details [0.5156484100374059]
Sparse deep neural networks have proven to be efficient for predictive model building in large-scale studies.
We propose a Bayesian sparse solution using spike-and-slab Gaussian priors to allow for node selection during training.
We establish the fundamental result of variational posterior consistency together with the characterization of prior parameters.
arXiv Detail & Related papers (2021-08-25T00:48:07Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Joint Coding and Scheduling Optimization for Distributed Learning over Wireless Edge Networks [21.422040036286536]
This article addresses the straggler and learning-delay problems in distributed learning over wireless edge networks by leveraging recent advances in coded computing and the deep dueling neural network architecture.
By introducing coded structures/redundancy, a distributed learning task can be completed without waiting for straggling nodes.
Simulations show that the proposed framework reduces the average learning delay in wireless edge computing by up to 66% compared with other DL approaches.
arXiv Detail & Related papers (2021-03-07T08:57:09Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed path through the network, DG-Net aggregates features dynamically at each node, which gives the network greater representational capacity.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.