Learning nonparametric DAGs with incremental information via high-order
HSIC
- URL: http://arxiv.org/abs/2308.05969v2
- Date: Thu, 14 Sep 2023 08:42:33 GMT
- Title: Learning nonparametric DAGs with incremental information via high-order
HSIC
- Authors: Yafei Wang, Jianguo Liu
- Abstract summary: We present an identifiability condition based on a determined subset of parents to identify the underlying DAG.
In the optimal phase, an optimization problem based on the first-order Hilbert-Schmidt independence criterion (HSIC) gives an estimated skeleton as the initial determined parents subset.
In the tuning phase, the skeleton is locally tuned by deletion, addition and DAG-formalization strategies.
- Score: 13.061477915002767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Score-based methods for learning Bayesian networks (BN) aim to maximize
global score functions. However, if local variables have direct and indirect
dependence simultaneously, the global optimization on score functions misses
edges between variables with an indirect dependence relationship, whose scores
are smaller than those of variables with a direct dependence relationship. In this
paper, we present an identifiability condition based on a determined subset of
parents to identify the underlying DAG. Based on this identifiability condition,
we develop a two-phase algorithm, namely the optimal-tuning (OT) algorithm, to
locally amend the global optimization. In the optimal phase, an optimization
problem based on the first-order Hilbert-Schmidt independence criterion (HSIC)
gives an estimated skeleton as the initial determined parents subset. In the
tuning phase, the skeleton is locally tuned by deletion, addition, and
DAG-formalization strategies using the theoretically proved incremental
properties of high-order HSIC. Numerical experiments on different synthetic and
real-world datasets show that the OT algorithm outperforms existing methods. In
particular, on the Sigmoid Mix model with graph size ${\rm\bf d=40}$, the
structural intervention distance (SID) of the OT algorithm is 329.7 smaller than
that obtained by CAM, which indicates that the graph estimated by the OT
algorithm misses fewer edges than CAM does. Source code of the OT algorithm
is available at https://github.com/YafeiannWang/optimal-tune-algorithm.
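For context on the quantity the optimal phase builds on, here is a minimal sketch of the biased empirical HSIC estimator with Gaussian kernels, plus a naive marginal-HSIC thresholding step that merely illustrates how such scores could seed an undirected skeleton. This is not the paper's OT algorithm: the median-heuristic bandwidth, the threshold, and all function names are illustrative assumptions.

```python
import numpy as np

def _gaussian_gram(x):
    """Gaussian-kernel Gram matrix of a 1-D sample (median-heuristic bandwidth)."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    sq_dists = (x - x.T) ** 2
    positive = sq_dists[sq_dists > 0]
    sigma2 = 0.5 * np.median(positive) if positive.size else 1.0
    return np.exp(-sq_dists / (2.0 * sigma2))

def hsic(x, y):
    """Biased empirical HSIC: trace(K H L H) / n^2."""
    n = len(x)
    K, L = _gaussian_gram(x), _gaussian_gram(y)
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / n ** 2

def pairwise_hsic_skeleton(data, threshold):
    """Connect i--j whenever the marginal HSIC exceeds `threshold`.

    `data` has shape (n_samples, d). This crude marginal screening only
    illustrates how HSIC scores could seed a skeleton; the OT algorithm
    instead solves an optimization problem over first-order HSIC terms
    and refines the result with high-order HSIC in the tuning phase.
    """
    d = data.shape[1]
    adj = np.zeros((d, d), dtype=bool)
    for i in range(d):
        for j in range(i + 1, d):
            if hsic(data[:, i], data[:, j]) > threshold:
                adj[i, j] = adj[j, i] = True
    return adj

# Toy check: X0 -> X1 through a nonlinear link, X2 independent of both.
# The (X0, X1) score should be markedly larger than the other two.
rng = np.random.default_rng(0)
x0 = rng.normal(size=300)
x1 = np.sin(x0) + 0.1 * rng.normal(size=300)
x2 = rng.normal(size=300)
for name, (a, b) in {"01": (x0, x1), "02": (x0, x2), "12": (x1, x2)}.items():
    print(name, round(hsic(a, b), 5))
```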
Related papers
- Optimality of Message-Passing Architectures for Sparse Graphs [13.96547777184641]
We study the node classification problem on feature-decorated graphs in the sparse setting, i.e., when the expected degree of a node is $O(1)$ in the number of nodes.
We introduce a notion of Bayes optimality for node classification tasks, called local Bayes optimality.
We show that the optimal message-passing architecture interpolates between a standard MLP in the regime of low graph signal and a typical graph convolution in the regime of high graph signal.
arXiv Detail & Related papers (2023-05-17T17:31:20Z) - An Algebraically Converging Stochastic Gradient Descent Algorithm for
Global Optimization [14.336473214524663]
A key component in the algorithm is the randomness based on the value of the objective function.
We prove the convergence of the algorithm with an algebraic rate, both in probability and in the parameter space.
We present several numerical examples to demonstrate the efficiency and robustness of the algorithm.
arXiv Detail & Related papers (2022-04-12T16:27:49Z) - Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD
for Communication Efficient Nonconvex Distributed Learning [58.79085525115987]
Local methods are one of the promising approaches to reduce communication time.
We show that the communication complexity is better than that of non-local methods when the heterogeneity of the local datasets is smaller than the smoothness of the local loss.
arXiv Detail & Related papers (2022-02-12T15:12:17Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning (an illustrative sketch of such a pairwise update appears after this list).
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - The Minimax Complexity of Distributed Optimization [0.0]
I present the "graph oracle model", an extension of the classic oracle framework that can be applied to distributed optimization.
I focus on the specific case of the "intermittent communication setting".
I analyze the theoretical properties of the popular Local Stochastic Gradient Descent (SGD) algorithm in the convex setting.
arXiv Detail & Related papers (2021-09-01T15:18:33Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity
to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z) - Convergence of Online Adaptive and Recurrent Optimization Algorithms [0.0]
We prove local convergence of several notable descent algorithms used in machine learning.
We adopt an "ergodic" rather than probabilistic viewpoint, working with empirical time averages instead of probability distributions.
arXiv Detail & Related papers (2020-05-12T09:48:52Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets show the effectiveness of our method and also confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic AdaGrad algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
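As referenced in the pairwise-learning entry above, here is a minimal illustrative sketch (not taken from any of the listed papers) of an online gradient descent update on a pairwise loss: a logistic AUC surrogate over pairs of oppositely labeled examples, with a small reservoir buffer standing in for the stream's history. The buffer size, step size, and toy data are arbitrary choices for this sketch.

```python
import numpy as np

def pairwise_ogd(stream, dim, step_size=0.1, buffer_size=50):
    """Online gradient descent on a pairwise logistic (AUC surrogate) loss."""
    rng = np.random.default_rng(0)
    w = np.zeros(dim)
    buffer = []  # (x, y) examples seen so far, capped at buffer_size
    for t, (x, y) in enumerate(stream, start=1):
        # Pair the new example with buffered examples of the opposite label.
        opposite = [(xb, yb) for xb, yb in buffer if yb != y]
        if opposite:
            grad = np.zeros(dim)
            for xb, yb in opposite:
                diff = (x - xb) if y > yb else (xb - x)  # positive minus negative
                margin = w @ diff
                # Gradient of log(1 + exp(-margin)) with respect to w.
                grad += -diff / (1.0 + np.exp(margin))
            w -= (step_size / np.sqrt(t)) * grad / len(opposite)
        # Reservoir-style buffer update so memory stays bounded on a stream.
        if len(buffer) < buffer_size:
            buffer.append((x, y))
        elif rng.random() < buffer_size / t:
            buffer[rng.integers(buffer_size)] = (x, y)
    return w

# Toy stream: two Gaussian classes separated along the first coordinate.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
X[:, 0] += np.where(rng.random(2000) < 0.5, 2.0, -2.0)
y = (X[:, 0] > 0).astype(int)
w = pairwise_ogd(zip(X, y), dim=5)
print(w)  # the first coordinate should carry most of the weight
```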