Permutation-Invariant Subgraph Discovery
- URL: http://arxiv.org/abs/2104.01063v1
- Date: Fri, 2 Apr 2021 14:28:21 GMT
- Title: Permutation-Invariant Subgraph Discovery
- Authors: Raghvendra Mall, Shameem A. Parambath, Han Yufei, Ting Yu and Sanjay
Chawla
- Abstract summary: We introduce Permutation and Structured Perturbation Inference (PSPI).
PSPI is a new problem formulation that abstracts many graph matching tasks that arise in systems biology.
We propose an ADMM algorithm (STEPD) to solve a relaxed version of the PSPI problem.
- Score: 16.380476734531513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Permutation and Structured Perturbation Inference (PSPI), a new
problem formulation that abstracts many graph matching tasks that arise in
systems biology. PSPI can be viewed as a robust formulation of permutation
inference or graph matching, where the objective is to find a permutation
between two graphs under the assumption that a set of edges may have undergone
a perturbation due to an underlying cause. For example, suppose there are two
gene regulatory networks X and Y from a diseased and a normal tissue,
respectively. Then, the PSPI problem can be used to detect whether there has been a
structural change between the two networks, which can serve as a signature of
the disease. Besides the new problem formulation, we propose an ADMM algorithm
(STEPD) to solve a relaxed version of the PSPI problem. An extensive case study
on comparative gene regulatory networks (GRNs) is used to demonstrate that
STEPD is able to accurately infer structured perturbations and thus provides a
tool for computational biologists to identify novel prognostic signatures. A
spectral analysis confirms that STEPD can recover small clique-like
perturbations, making it a useful tool for detecting permutation-invariant
changes in graphs.
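The abstract does not spell out the STEPD updates or the exact objective, so the following is only a minimal sketch of one common relaxation of a PSPI-style problem, not the paper's algorithm: the permutation is relaxed to a doubly stochastic matrix P (projected with Sinkhorn normalization), the structured perturbation is modeled as a sparse matrix S with an l1 penalty, and the two are updated alternately. The function names (relaxed_pspi, sinkhorn, soft_threshold) and the penalty weight lam are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sinkhorn(M, n_iters=50):
    """Approximately project a nonnegative matrix onto the doubly
    stochastic matrices by alternating row/column normalization."""
    P = np.maximum(M, 1e-12)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)   # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)   # columns sum to 1
    return P

def soft_threshold(A, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def relaxed_pspi(X, Y, lam=0.1, step=1e-2, n_iters=500, seed=0):
    """Illustrative sketch (not STEPD): alternately update a doubly
    stochastic relaxation P of the permutation and a sparse perturbation S
    so that P @ X @ P.T is close to Y + S."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    P = sinkhorn(np.full((n, n), 1.0 / n) + 0.01 * rng.random((n, n)))
    S = np.zeros((n, n))
    for _ in range(n_iters):
        R = P @ X @ P.T - Y - S                   # residual (S held fixed)
        grad_P = R @ P @ X.T + R.T @ P @ X        # gradient of 0.5*||R||_F^2 w.r.t. P
        P = sinkhorn(np.maximum(P - step * grad_P, 0.0))
        S = soft_threshold(P @ X @ P.T - Y, lam)  # prox step on the l1 penalty
    return P, S
```

A hard permutation can be rounded from the relaxed P afterwards (e.g. with scipy.optimize.linear_sum_assignment, maximizing the entries of P), and the nonzero pattern of S then flags candidate perturbed edges; the paper's ADMM-based STEPD and its spectral analysis are more principled than this gradient/Sinkhorn heuristic.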
Related papers
- Machines and Mathematical Mutations: Using GNNs to Characterize Quiver Mutation Classes [4.229995708813431]
We use graph neural networks and AI explainability techniques to discover mutation equivalence criteria for quivers of type $\tilde{D}_n$.
We also show that our model captures structure within its hidden representation that allows us to reconstruct known criteria from type $D_n$.
arXiv Detail & Related papers (2024-11-12T01:09:41Z) - CSGDN: Contrastive Signed Graph Diffusion Network for Predicting Crop Gene-phenotype Associations [6.5678927417916455]
We propose a Contrastive Signed Graph Diffusion Network, CSGDN, to learn robust node representations from fewer training samples and achieve higher link prediction accuracy.
We conduct experiments to validate the performance of CSGDN on three crop datasets: Gossypium hirsutum, Brassica napus, and Triticum turgidum.
arXiv Detail & Related papers (2024-10-10T01:01:10Z) - Predicting Genetic Mutation from Whole Slide Images via Biomedical-Linguistic Knowledge Enhanced Multi-label Classification [119.13058298388101]
We develop a Biological-knowledge enhanced PathGenomic multi-label Transformer (BPGT) to improve genetic mutation prediction performance.
BPGT first establishes a novel gene encoder that constructs gene priors with two carefully designed modules.
BPGT then designs a label decoder that performs genetic mutation prediction with two tailored modules.
arXiv Detail & Related papers (2024-06-05T06:42:27Z) - Cell Graph Transformer for Nuclei Classification [78.47566396839628]
We develop a cell graph transformer (CGT) that treats nodes and edges as input tokens to enable learnable adjacency and information exchange among all nodes.
Poorly initialized features can lead to noisy self-attention scores and inferior convergence.
We propose a novel topology-aware pretraining method that leverages a graph convolutional network (GCN) to learn a feature extractor.
arXiv Detail & Related papers (2024-02-20T12:01:30Z) - Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - Discrete Graph Auto-Encoder [52.50288418639075]
We introduce a new framework named Discrete Graph Auto-Encoder (DGAE)
We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations.
In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model.
arXiv Detail & Related papers (2023-06-13T12:40:39Z) - Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for
Multi-Agent Learning [7.22614468437919]
We introduce a Permutation-Invariant Set Autoencoder (PISA); see the generic permutation-invariant encoder sketch after this list.
PISA produces encodings with significantly lower reconstruction error than existing baselines.
We demonstrate its usefulness in a multi-agent application.
arXiv Detail & Related papers (2023-02-24T18:59:13Z) - DynGFN: Towards Bayesian Inference of Gene Regulatory Networks with
GFlowNets [81.75973217676986]
Gene regulatory networks (GRN) describe interactions between genes and their products that control gene expression and cellular function.
Existing methods focus either on identifying cyclic structure from dynamics or on learning complex Bayesian posteriors over DAGs, but not both.
In this paper we leverage the fact that it is possible to estimate the "velocity" of gene expression with RNA velocity techniques to develop an approach that addresses both challenges.
arXiv Detail & Related papers (2023-02-08T16:36:40Z) - A step towards neural genome assembly [0.0]
We train the MPNN model with max-aggregator to execute several algorithms for graph simplification.
We show that the algorithms were learned successfully and can be scaled to graphs up to 20 times larger than those used in training.
arXiv Detail & Related papers (2020-11-10T10:12:19Z) - Learn to Predict Sets Using Feed-Forward Neural Networks [63.91494644881925]
This paper addresses the task of set prediction using deep feed-forward neural networks.
We present a novel approach for learning to predict sets with unknown permutation and cardinality.
We demonstrate the validity of our set formulations on relevant vision problems.
arXiv Detail & Related papers (2020-01-30T01:52:07Z)
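Several of the related entries above (e.g. DGAE and PISA) hinge on permutation-invariant or permutation-equivariant set encoders. Their abstracts do not describe the architectures, so the snippet below is only a generic DeepSets-style illustration of the idea (a shared per-element map followed by sum pooling); the weights and the function name permutation_invariant_encode are hypothetical, not from those papers.

```python
import numpy as np

def permutation_invariant_encode(node_feats, W_phi, W_rho):
    """DeepSets-style encoder: apply a shared map to every element, then
    sum-pool so the output does not depend on the ordering of the set."""
    h = np.maximum(node_feats @ W_phi, 0.0)   # per-element map, shared weights
    pooled = h.sum(axis=0)                    # order-independent aggregation
    return np.maximum(pooled @ W_rho, 0.0)    # fixed-size set embedding

# Invariance check: shuffling the elements leaves the embedding unchanged.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                   # 5 set elements, 8 features each
W_phi = rng.normal(size=(8, 16))
W_rho = rng.normal(size=(16, 4))
z1 = permutation_invariant_encode(X, W_phi, W_rho)
z2 = permutation_invariant_encode(X[rng.permutation(5)], W_phi, W_rho)
assert np.allclose(z1, z2)
```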
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content listed here (including all information) and is not responsible for any consequences.