Directed Acyclic Graphs With Tears
- URL: http://arxiv.org/abs/2302.02160v1
- Date: Sat, 4 Feb 2023 13:00:52 GMT
- Title: Directed Acyclic Graphs With Tears
- Authors: Zhichao Chen, Zhiqiang Ge
- Abstract summary: DAGs with Tears is a new type of structure learning method based on mixed-integer programming.
In this work, the reason for challenge 1) is analyzed theoretically, and a novel method named DAGs with Tears is proposed based on mixed-integer programming.
In addition, prior knowledge can be incorporated into the newly proposed method, making structure learning more practical and useful in industrial processes.
- Score: 8.774590352292932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian networks are a frequently used method for fault detection and
diagnosis in industrial processes. The basis of a Bayesian network is structure
learning, which learns a directed acyclic graph (DAG) from data. However, the
search space scales super-exponentially with the number of process variables,
which makes data-driven structure learning a challenging problem. To this end,
the DAGs with NO TEARS (NOTEARS) methods have been well studied, not only for
their conversion of the discrete optimization into a continuous optimization
problem but also for their compatibility with deep learning frameworks.
Nevertheless, challenges remain for NOTEARS-based methods: 1) infeasible
solutions resulting from the gradient descent-based optimization paradigm; 2)
the truncation operation required to ensure the learned graph is acyclic. In
this work, the reason for challenge 1) is analyzed theoretically, and a novel
method named DAGs with Tears is proposed based on mixed-integer programming to
alleviate challenge 2). In addition, prior knowledge can be incorporated into
the proposed method, making structure learning more practical and useful in
industrial processes. Finally, a numerical example and an industrial example
are adopted as case studies to demonstrate the superiority of the developed
method.
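For context, here is a minimal sketch (not the paper's code) of the NOTEARS-style continuous acyclicity measure and the post-hoc truncation step that the abstract identifies as challenge 2); the threshold value is an illustrative assumption.

```python
# Illustrative sketch, not the authors' implementation.
import numpy as np
from scipy.linalg import expm

def acyclicity(W: np.ndarray) -> float:
    """NOTEARS measure h(W) = tr(e^{W*W}) - d; equals 0 exactly when W encodes a DAG."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

def truncate(W: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Post-hoc truncation: zero out small weights so the learned graph becomes acyclic."""
    W_trunc = W.copy()
    W_trunc[np.abs(W_trunc) < threshold] = 0.0
    return W_trunc
```

Because gradient descent only drives h(W) toward zero approximately, the returned W can remain slightly cyclic, which is why a truncation step like the one above is needed and why the paper replaces it with a mixed-integer formulation.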
Related papers
- $ψ$DAG: Projected Stochastic Approximation Iteration for DAG Structure Learning [6.612096312467342]
Learning the structure of Directed Acyclic Graphs (DAGs) presents a significant challenge due to the vast search space of possible graphs, which scales with the number of nodes.
Recent advancements have redefined this problem as a continuous optimization task by incorporating differentiable acyclicity constraints.
We present a novel framework for learning DAGs, employing a Stochastic Approximation approach integrated with Stochastic Gradient Descent (SGD)-based optimization techniques.
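As a rough illustration of this gradient-descent paradigm, here is a generic penalized least-squares step for linear DAG learning (not $ψ$DAG's projected iteration; the penalty weight and learning rate are assumptions).

```python
# Hedged sketch of one SGD-style update for linear DAG learning.
import numpy as np
from scipy.linalg import expm

def sgd_step(W, X_batch, lam=1.0, lr=1e-3):
    """One stochastic step on (1/2n)*||X - XW||_F^2 + lam * (tr(e^{W*W}) - d)."""
    n = X_batch.shape[0]
    resid = X_batch - X_batch @ W
    grad_fit = -(X_batch.T @ resid) / n      # gradient of the least-squares fit
    grad_acyc = expm(W * W).T * (2.0 * W)    # gradient of the acyclicity penalty
    return W - lr * (grad_fit + lam * grad_acyc)
```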
arXiv Detail & Related papers (2024-10-31T12:13:11Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - On the efficiency of Stochastic Quasi-Newton Methods for Deep Learning [0.0]
We study the behaviour of stochastic quasi-Newton training algorithms for deep neural networks.
We show that quasi-Newton methods are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer.
arXiv Detail & Related papers (2022-05-18T20:53:58Z) - GLAN: A Graph-based Linear Assignment Network [29.788755291070462]
We propose a learnable linear assignment solver based on deep graph networks.
The experimental results on a synthetic dataset reveal that our method outperforms state-of-the-art baselines.
We also embed the proposed solver into a popular multi-object tracking (MOT) framework to train the tracker in an end-to-end manner.
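For context, the classical (non-learned) linear assignment problem that such a learnable solver targets can be solved with SciPy's Hungarian-style routine; the cost matrix below is a made-up example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
rows, cols = linear_sum_assignment(cost)          # minimum-cost one-to-one matching
print(list(zip(rows, cols)), cost[rows, cols].sum())
```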
arXiv Detail & Related papers (2022-01-05T13:18:02Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
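A minimal sketch of an OGD update for pairwise learning follows (a pairwise hinge loss is assumed here; this is not the paper's exact algorithm).

```python
import numpy as np

def ogd_pairwise(pairs, labels, dim, lr=0.1):
    """pairs: iterable of (x_i, x_j); labels: +1 if x_i should rank above x_j, else -1."""
    w = np.zeros(dim)
    for (xi, xj), y in zip(pairs, labels):
        if y * w @ (xi - xj) < 1.0:     # pairwise hinge loss is active
            w += lr * y * (xi - xj)     # gradient step on this single pair
    return w
```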
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph neural networks.
We propose two restoration methods based on convex-independent deep algorithm unrolling of ADMM (the alternating direction method of multipliers).
The parameters in the proposed restoration methods are trainable in an end-to-end manner.
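To make the unrolled iteration concrete, here is a toy (hand-derived, not the paper's) ADMM loop for denoising a graph signal y under a Laplacian smoothness prior; in algorithm unrolling, quantities such as lam and rho would become trainable per-iteration parameters.

```python
import numpy as np

def admm_graph_denoise(y, L, lam=1.0, rho=1.0, iters=50):
    """Solve min_x 0.5*||x - y||^2 + 0.5*lam*x^T L x via the splitting x = z."""
    d = y.shape[0]
    x, z, u = y.copy(), y.copy(), np.zeros(d)
    A = lam * L + rho * np.eye(d)                # z-update system matrix
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)    # x-update: data-fidelity proximal step
        z = np.linalg.solve(A, rho * (x + u))    # z-update: graph-smoothness step
        u = u + x - z                            # dual variable update
    return z
```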
arXiv Detail & Related papers (2021-06-30T08:57:01Z) - DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints.
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
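The acyclicity-by-construction idea can be sketched as follows (a hedged illustration of a potential-based parameterization, not the authors' full model): if every edge points from a node with lower potential to one with higher potential, no directed cycle can exist.

```python
import numpy as np

def dag_from_potential(A: np.ndarray, p: np.ndarray) -> np.ndarray:
    """A: unconstrained weights; p: node potentials. Returns a weighted adjacency
    matrix that is acyclic by construction, since edges only go 'uphill' in potential."""
    gradient = p[None, :] - p[:, None]        # grad(p)_ij = p_j - p_i
    return A * np.maximum(gradient, 0.0)      # keep only edges with p_j > p_i
```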
arXiv Detail & Related papers (2021-06-14T07:11:36Z) - Integer Programming for Causal Structure Learning in the Presence of
Latent Variables [28.893119229428713]
We propose a novel exact score-based method that solves an integer programming (IP) formulation and returns a score-maximizing ancestral ADMG for a set of continuous variables.
In particular, we generalize the state-of-the-art IP model for DAG learning problems and derive new classes of valid inequalities to formalize the IP-based ADMG learning model.
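As a flavour of how integer programming encodes acyclicity exactly, here is a minimal ordering-based MIP for score-based DAG selection (a generic textbook-style formulation, not the paper's ADMG model; the edge scores and the PuLP solver call are assumptions).

```python
import pulp

d = 3
s = [[0, 2.0, -1.0], [0.5, 0, 1.5], [-0.5, 1.0, 0]]   # hypothetical edge scores

prob = pulp.LpProblem("dag_selection", pulp.LpMaximize)
e = {(i, j): pulp.LpVariable(f"e_{i}_{j}", cat="Binary")
     for i in range(d) for j in range(d) if i != j}
o = [pulp.LpVariable(f"o_{i}", lowBound=0, upBound=d - 1) for i in range(d)]

prob += pulp.lpSum(s[i][j] * e[i, j] for (i, j) in e)  # maximise the total edge score
for (i, j) in e:                                       # an edge i->j forces o_j >= o_i + 1,
    prob += o[j] >= o[i] + 1 - d * (1 - e[i, j])       # which rules out directed cycles

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: int(v.value()) for k, v in e.items() if v.value() > 0.5})
```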
arXiv Detail & Related papers (2021-02-05T12:10:16Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the Dice score by 1% for the pancreas and 2% for the spleen.
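For reference, the Dice score behind the reported 1-2% improvements is computed as follows (a standard definition, not code from the paper).

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom > 0 else 1.0
```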
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.