Learning Bayesian Networks Under Sparsity Constraints: A Parameterized
Complexity Analysis
- URL: http://arxiv.org/abs/2004.14724v3
- Date: Mon, 6 Sep 2021 11:43:39 GMT
- Title: Learning Bayesian Networks Under Sparsity Constraints: A Parameterized
Complexity Analysis
- Authors: Niels Gr\"uttemeier and Christian Komusiewicz
- Abstract summary: We study the problem of learning the structure of an optimal Bayesian network when additional constraints are posed on the network or on its moralized graph.
We show that learning an optimal network with at most $k$ edges in the moralized graph presumably has no $f(k)\cdot |I|^{O(1)}$-time algorithm.
- Score: 7.99536002595393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of learning the structure of an optimal Bayesian network
when additional constraints are posed on the network or on its moralized graph.
More precisely, we consider the constraint that the network or its moralized
graph are close, in terms of vertex or edge deletions, to a sparse graph class
$\Pi$. For example, we show that learning an optimal network whose moralized
graph has vertex deletion distance at most $k$ from a graph with maximum degree
1 can be computed in polynomial time when $k$ is constant. This extends
previous work that gave an algorithm with such a running time for the vertex
deletion distance to edgeless graphs [Korhonen & Parviainen, NIPS 2015]. We
then show that further extensions or improvements are presumably impossible.
For example, we show that learning optimal networks where the network or its
moralized graph have maximum degree $2$ or connected components of size at most
$c$, $c\ge 3$, is NP-hard. Finally, we show that learning an optimal network
with at most $k$ edges in the moralized graph presumably has no $f(k)\cdot
|I|^{O(1)}$-time algorithm and that, in contrast, an optimal network with at
most $k$ arcs can be computed in $2^{O(k)}\cdot |I|^{O(1)}$ time where $|I|$ is
the total input size.
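
As a quick illustration of the central object here, below is a minimal sketch of graph moralization, the operation the paper's sparsity constraints are posed on. This is an assumption-laden illustration, not the paper's code: it uses networkx, and the example DAG is hypothetical.

```python
# Minimal sketch of moralization: the undirected skeleton of a DAG plus
# "marriage" edges between every pair of parents that share a child.
# Hypothetical example for illustration; not code from the paper.
from itertools import combinations
import networkx as nx

def moralize(dag: nx.DiGraph) -> nx.Graph:
    moral = nx.Graph()
    moral.add_nodes_from(dag.nodes())
    moral.add_edges_from(dag.edges())  # drop arc directions
    for child in dag.nodes():
        # connect ("marry") every pair of co-parents of the same child
        for u, v in combinations(dag.predecessors(child), 2):
            moral.add_edge(u, v)
    return moral

dag = nx.DiGraph([("A", "C"), ("B", "C"), ("C", "D")])
print(sorted(tuple(sorted(e)) for e in moralize(dag).edges()))
# [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')] -- A-B is the marriage edge
```

Note that the moralized graph can have more edges than the network has arcs (four versus three in this toy example), which illustrates why bounding the number of edges in the moralized graph and bounding the number of arcs in the network are genuinely different parameterizations.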
Related papers
- Most Neural Networks Are Almost Learnable [52.40331776572531]
We show that for any fixed $\epsilon>0$ and depth $i$, there is a poly-time algorithm that learns random Xavier networks of depth $i$.
The algorithm runs in time and sample complexity of $(\bar{d})^{\mathrm{poly}(\epsilon^{-1})}$, where $\bar{d}$ is the size of the network.
For some cases of sigmoid and ReLU-like activations, the bound can be improved to $(\bar{d})^{\mathrm{polylog}(\epsilon^{-1})}$.
arXiv Detail & Related papers (2023-05-25T22:27:42Z)
- Random Projection Forest Initialization for Graph Convolutional Networks [0.6554326244334866]
Graph convolutional networks (GCNs) were a great step towards extending deep learning to unstructured data such as graphs.
We present a new way to construct the graph and initialize the GCN using random projection forests (rpForest).
rpForest enables us to assign varying weights to edges indicating varying importance, which enhances learning.
arXiv Detail & Related papers (2023-02-22T11:49:19Z)
- Improved High-Probability Regret for Adversarial Bandits with Time-Varying Feedback Graphs [62.52390282012508]
We study high-probability regret bounds for adversarial $K$-armed bandits with time-varying feedback graphs over $T$ rounds.
We develop an algorithm that achieves the optimal regret $\widetilde{\mathcal{O}}\big((\sum_{t=1}^{T}\alpha_t)^{1/2}+\max_{t\in[T]}\alpha_t\big)$ with high probability, where $\alpha_t$ is the independence number of the feedback graph at round $t$.
We also develop the first algorithm that achieves the optimal high-probability regret bound for weakly observable graphs.
arXiv Detail & Related papers (2022-10-04T04:36:15Z)
- Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs [34.37963000493442]
This study considers online learning with general directed feedback graphs.
We present best-of-both-worlds algorithms that achieve nearly tight regret bounds for adversarial environments.
arXiv Detail & Related papers (2022-06-02T05:01:40Z)
- A Near-Optimal Best-of-Both-Worlds Algorithm for Online Learning with Feedback Graphs [21.563733343861713]
We consider online learning with feedback graphs, a sequential decision-making framework where the learner's feedback is determined by a directed graph over the action set.
We present a computationally efficient algorithm for learning in this framework that simultaneously achieves near-optimal regret bounds in both stochastic and adversarial environments.
arXiv Detail & Related papers (2022-06-01T15:14:32Z)
- Distributed Saddle-Point Problems Under Similarity [173.19083235638104]
We prove lower bounds on the number of communication rounds needed to reach a given suboptimality $\epsilon>0$ over master/workers networks.
We then propose algorithms matching the lower bounds for both types of networks (up to logarithmic factors).
We assess effectiveness of the proposed algorithms on a robust logistic regression problem.
arXiv Detail & Related papers (2021-07-22T14:25:16Z)
- Maximizing Influence of Leaders in Social Networks [15.05158252504978]
We consider the edge addition problem for the DeGroot model of opinion dynamics in a social network with $n$ nodes and $m$ edges.
We show that our algorithm is efficient and effective, scaling to large networks with more than a million nodes.
arXiv Detail & Related papers (2021-06-11T02:31:46Z)
- Accelerated Gradient Tracking over Time-varying Graphs for Decentralized Optimization [59.65871549878937]
We prove that the practical single-loop accelerated gradient tracking needs $O\big((\frac{\gamma}{1-\sigma_\gamma})^2\sqrt{\frac{L}{\epsilon}}\big)$ iterations.
Our convergence rates improve significantly over the ones of $O(\frac{1}{\epsilon^{5/7}})$ and $O\big((\frac{L}{\mu})^{5/7}\frac{1}{(1-\sigma)^{1.5}}\log\frac{1}{\epsilon}\big)$.
arXiv Detail & Related papers (2021-04-06T15:34:14Z)
- Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z)
- Random Graph Matching with Improved Noise Robustness [2.294014185517203]
We propose a new algorithm for graph matching under probabilistic models.
Our algorithm recovers the underlying matching with high probability when $\alpha \le 1/(\log\log n)^{C}$.
This improves on the condition $\alpha \le 1/(\log n)^{C}$ achieved in previous work.
arXiv Detail & Related papers (2021-01-28T02:39:27Z)
- Scalable Deep Generative Modeling for Sparse Graphs [105.60961114312686]
Existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix.
We develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix.
During training, this autoregressive model can be parallelized with $O(\log n)$ synchronization stages.
arXiv Detail & Related papers (2020-06-28T04:37:57Z)