Learning the hub graphical Lasso model with the structured sparsity via
an efficient algorithm
- URL: http://arxiv.org/abs/2308.08852v1
- Date: Thu, 17 Aug 2023 08:24:28 GMT
- Title: Learning the hub graphical Lasso model with the structured sparsity via
an efficient algorithm
- Authors: Chengjing Wang, Peipei Tang, Wenling He, Meixia Lin
- Abstract summary: We introduce a two-phase algorithm to estimate hub graphical models.
The proposed algorithm first generates a good initial point via a dual alternating direction method of multipliers.
It then warm starts a semismooth Newton (SSN) based augmented Lagrangian method (ALM) to compute a solution that is accurate enough for practical tasks.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Graphical models have demonstrated strong performance in numerous
tasks, ranging from biological analysis to recommender systems. However,
graphical models with hub nodes are computationally difficult to fit,
particularly when the dimension of the data is large. To estimate hub
graphical models efficiently, we introduce a two-phase algorithm. The proposed
algorithm first generates a good initial point via a dual alternating
direction method of multipliers (ADMM), and then warm starts a semismooth
Newton (SSN) based augmented Lagrangian method (ALM) to compute a solution
that is accurate enough for practical tasks. The sparsity structure of the
generalized Jacobian ensures that the algorithm obtains a high-quality
solution very efficiently. Comprehensive experiments on both synthetic and
real data show that it clearly outperforms existing state-of-the-art
algorithms. In particular, on some high-dimensional tasks it saves more than
70% of the execution time while still achieving a high-quality estimate.
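The two-phase idea of the abstract, a cheap first-order method to produce a warm start before a fast second-order polish, can be illustrated on the plain graphical lasso. The sketch below implements only a first-phase-style ADMM loop with soft-thresholding; it is a minimal illustration, not the paper's dual ADMM or its SSN-based ALM for the hub model, and all function names are mine.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def graphical_lasso_admm(S, lam, rho=1.0, n_iter=500):
    """ADMM for the plain graphical lasso:
        min_{Theta > 0}  -logdet(Theta) + <S, Theta> + lam * ||Theta||_1,
    with the l1 penalty applied to all entries for simplicity.
    The Theta-update is an eigenvalue problem; the Z-update is soft-thresholding.
    """
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        # Theta-update: solve rho*Theta - Theta^{-1} = rho*(Z - U) - S
        # via an eigendecomposition of the right-hand side.
        w, V = np.linalg.eigh(rho * (Z - U) - S)
        theta_eigs = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)
        Theta = V @ np.diag(theta_eigs) @ V.T
        # Z-update: proximal step on the l1 penalty.
        Z = soft_threshold(Theta + U, lam / rho)
        # Scaled dual update.
        U = U + Theta - Z
    return Z
```

In the paper's setting, an iterate like `Z` would then warm start a Newton-type ALM rather than being returned directly.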
Related papers
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- A Weighted K-Center Algorithm for Data Subset Selection [70.49696246526199]
Subset selection is a fundamental problem that can play a key role in identifying smaller portions of the training data.
We develop a novel factor 3-approximation algorithm to compute subsets based on the weighted sum of both k-center and uncertainty sampling objective functions.
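As context for the weighted objective above, the classic greedy (Gonzalez) algorithm is the textbook 2-approximation for unweighted k-center and a common building block for such subset-selection methods. The sketch below is that textbook routine, not the paper's factor-3 algorithm; the function name is mine.

```python
import numpy as np

def greedy_k_center(X, k, seed=0):
    """Greedy (Gonzalez) 2-approximation for k-center:
    repeatedly pick the point farthest from the centers chosen so far."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [int(rng.integers(n))]
    # Distance of every point to its nearest chosen center.
    dists = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))  # farthest point becomes the next center
        centers.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return centers, dists.max()  # chosen indices and the covering radius
```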
arXiv Detail & Related papers (2023-12-17T04:41:07Z)
- Large-scale Bayesian Structure Learning for Gaussian Graphical Models using
Marginal Pseudo-likelihood [0.26249027950824516]
We introduce two novel Markov chain Monte Carlo (MCMC) search algorithms with a significantly lower computational cost than leading Bayesian approaches.
These algorithms can deliver reliable results in mere minutes on standard computers, even for large-scale problems with one thousand variables.
We also illustrate the practical utility of our methods on medium and large-scale applications from human and mice gene expression studies.
arXiv Detail & Related papers (2023-06-30T20:37:40Z)
- ParlayANN: Scalable and Deterministic Parallel Graph-Based Approximate
Nearest Neighbor Search Algorithms [5.478671305092084]
We introduce ParlayANN, a library of deterministic and parallel graph-based approximate nearest neighbor search algorithms.
We develop novel parallel implementations for four state-of-the-art graph-based ANNS algorithms that scale to billion-scale datasets.
arXiv Detail & Related papers (2023-05-07T19:28:23Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient
and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Accelerating ERM for data-driven algorithm design using output-sensitive
techniques [26.32088674030797]
We study techniques to develop efficient learning algorithms for data-driven algorithm design.
Our approach involves two novel ingredients, including an output-sensitive algorithm for enumerating polytopes induced by a set of hyperplanes.
We illustrate our techniques by giving algorithms for pricing problems, linkage-based clustering and dynamic-programming based sequence alignment.
arXiv Detail & Related papers (2022-04-07T17:27:18Z)
- Efficient proximal gradient algorithms for joint graphical lasso
[9.752101654013053]
We consider learning an undirected graphical model from sparse data.
We propose proximal gradient procedures with and without a backtracking option for the joint graphical lasso (JGL)
The proposed algorithms can achieve high accuracy and precision, and their efficiency is competitive with state-of-the-art algorithms.
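A proximal gradient step with a backtracking option, as described above, can be shown on a plain l1-penalized least-squares problem; this is a generic stand-in for the smooth-plus-nonsmooth splitting that JGL solvers use, not the paper's algorithm, and the names are mine.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_grad_lasso(A, b, lam, t0=1.0, beta=0.5, n_iter=100):
    """Proximal gradient with backtracking line search for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    f = lambda v: 0.5 * np.sum((A @ v - b) ** 2)
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        t = t0
        while True:
            x_new = soft_threshold(x - t * g, t * lam)
            d = x_new - x
            # Backtrack until the quadratic upper-bound test holds.
            if f(x_new) <= f(x) + g @ d + 0.5 / t * (d @ d):
                break
            t *= beta
        x = x_new
    return x
```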
arXiv Detail & Related papers (2021-07-16T09:59:13Z)
- Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient
Descent [8.714458129632158]
Kolmogorov model (KM) is an interpretable and predictable representation approach to learning the underlying probabilistic structure of a set of random variables.
We propose a computationally scalable KM learning algorithm, based on the regularized dual optimization combined with enhanced gradient descent (GD) method.
It is shown that the accuracy of logical relation mining for interpretability using the proposed KM learning algorithm exceeds 80%.
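The "enhanced gradient descent" family referenced above builds on plain gradient descent with acceleration terms. As a hedged, generic sketch (not the paper's KM learning algorithm; the function name and test objective are mine), here is gradient descent with a momentum term applied to a regularized quadratic:

```python
import numpy as np

def gd_momentum(grad, x0, lr=0.1, mom=0.9, n_iter=500):
    """Gradient descent with a heavy-ball momentum term:
    v <- mom * v - lr * grad(x);  x <- x + v."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iter):
        v = mom * v - lr * grad(x)
        x = x + v
    return x
```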
arXiv Detail & Related papers (2021-07-11T10:33:02Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
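Heuristic-guided tree search of the kind described above is built on best-first expansion ordered by f = g + h, where a learned model could supply the heuristic h. The sketch below is a generic A*-style search, not the HATS algorithm or simplified memory-bounded A*; all names are mine.

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """A*-style best-first search: expand nodes in order of f = g + h.
    `neighbors(n)` yields (successor, edge_cost); `heuristic(n)` estimates
    the remaining cost to the goal (a learned model could supply it)."""
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {}  # cheapest known cost-to-reach per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue  # already expanded more cheaply
        best_g[node] = g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```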
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Temporal Attention-Augmented Graph Convolutional Network for Efficient
Skeleton-Based Human Action Recognition [97.14064057840089]
Graph convolutional networks (GCNs) have been very successful in modeling non-Euclidean data structures.
Most GCN-based action recognition methods use deep feed-forward networks with high computational complexity to process all skeletons in an action.
We propose a temporal attention module (TAM) for increasing the efficiency in skeleton-based action recognition.
arXiv Detail & Related papers (2020-10-23T08:01:55Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.