Promotion/Inhibition Effects in Networks: A Model with Negative
Probabilities
- URL: http://arxiv.org/abs/2307.07738v2
- Date: Thu, 17 Aug 2023 01:03:16 GMT
- Title: Promotion/Inhibition Effects in Networks: A Model with Negative
Probabilities
- Authors: Anqi Dong, Tryphon T. Georgiou and Allen Tannenbaum
- Abstract summary: Biological networks often encapsulate promotion/inhibition as signed edge-weights of a graph.
We address the inverse problem to determine network edge-weights based on a sign-indefinite adjacency and expression levels at the nodes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biological networks often encapsulate promotion/inhibition as signed
edge-weights of a graph. Nodes may correspond to genes assigned expression
levels (mass) of respective proteins. The promotion/inhibition nature of
co-expression between nodes is encoded in the sign of the corresponding entry
of a sign-indefinite adjacency matrix, though the strength of such
co-expression (i.e., the precise value of edge weights) cannot typically be
directly measured. Herein we address the inverse problem to determine network
edge-weights based on a sign-indefinite adjacency and expression levels at the
nodes. While our motivation originates in gene networks, the framework applies
to networks where promotion/inhibition dictates a stationary mass distribution
at the nodes. In order to identify suitable edge-weights we adopt a framework
of "negative probabilities," advocated by P. Dirac and R. Feynman, and we
set up a likelihood formalism to obtain values for the sought edge-weights. The
proposed optimization problem can be solved via a generalization of the
well-known Sinkhorn algorithm; in our setting the Sinkhorn-type "diagonal
scalings" are multiplicative or inverse-multiplicative, depending on the sign
of the respective entries in the adjacency matrix, with value computed as the
positive root of a quadratic polynomial.
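To make the sign-dependent scaling concrete, below is a minimal Python sketch of one plausible form of such an update, not the paper's exact algorithm: per node, positive entries are scaled by x and negative entries by 1/x, with x the positive root of a quadratic that enforces the prescribed mass at that node. The function name, the alternating row/column loop, and the positivity assumptions are illustrative choices of ours.

```python
import numpy as np

def signed_scaling_pass(W, mass):
    """One row pass of a Sinkhorn-type diagonal scaling on a sign-indefinite
    matrix: positive entries of row i are multiplied by x, negative entries
    divided by x, where x > 0 solves P*x^2 - m*x - N = 0 so the signed row
    sum equals the target mass m. Assumes mass > 0 and that every row has
    at least one positive entry (P > 0)."""
    W = W.copy()
    for i in range(W.shape[0]):
        pos, neg = W[i] > 0, W[i] < 0
        P = W[i, pos].sum()           # total weight on promoting edges
        N = -W[i, neg].sum()          # total magnitude on inhibiting edges
        m = mass[i]
        x = (m + np.sqrt(m * m + 4.0 * P * N)) / (2.0 * P)  # positive root
        W[i, pos] *= x                # multiplicative scaling
        W[i, neg] /= x                # inverse-multiplicative scaling
    return W

# Illustrative use: alternate row and column passes, as in classical Sinkhorn.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))       # sign-indefinite adjacency
mass = np.ones(4)                     # stationary mass at the nodes
W = A.copy()
for _ in range(100):
    W = signed_scaling_pass(W, mass)          # rebalance rows
    W = signed_scaling_pass(W.T, mass).T      # rebalance columns
```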
Related papers
- Residual Connections and Normalization Can Provably Prevent Oversmoothing in GNNs [30.003409099607204]
We provide a formal and precise characterization of (linearized) graph neural networks (GNNs) with residual connections and normalization layers.
We show that the centering step of a normalization layer alters the graph signal in message-passing in such a way that relevant information can become harder to extract.
We introduce a novel, principled normalization layer called GraphNormv2 in which the centering step is learned such that it does not distort the original graph signal in an undesirable way.
arXiv Detail & Related papers (2024-06-05T06:53:16Z)
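The learned-centering idea above can be illustrated with a short PyTorch sketch; the class name and the exact parameterization (a per-feature coefficient on the subtracted mean) are assumptions in the spirit of the summary, not the paper's definition of GraphNormv2.

```python
import torch
import torch.nn as nn

class LearnedCenterNorm(nn.Module):
    """Normalization with a learnable centering step (hypothetical sketch):
    alpha controls how much of the graph-wide mean is subtracted, so the
    layer can learn not to distort the original graph signal."""
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))   # learned centering weight
        self.gamma = nn.Parameter(torch.ones(dim))   # affine scale
        self.beta = nn.Parameter(torch.zeros(dim))   # affine shift
        self.eps = eps

    def forward(self, x):                            # x: [num_nodes, dim]
        mean = x.mean(dim=0, keepdim=True)
        centered = x - self.alpha * mean             # alpha -> 0: no centering
        std = centered.std(dim=0, keepdim=True)
        return self.gamma * centered / (std + self.eps) + self.beta
```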
- Subspace Node Pruning [2.3125457626961263]
Node pruning is the art of removing computational units such as neurons, filters, attention heads, or even entire layers to significantly reduce inference time while retaining network performance.
In this work, we propose the projection of unit activations to an orthogonal subspace in which there is no redundant activity and within which we may prune nodes while simultaneously recovering the impact of lost units.
Our proposed method reaches the state of the art when pruning ImageNet-trained VGG-16 and rivals more complex state-of-the-art methods when pruning ResNet-50 networks.
arXiv Detail & Related papers (2024-05-26T14:27:26Z)
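A minimal NumPy sketch of the general idea, under assumptions of ours rather than the paper's exact procedure: orthogonalize the activation matrix to rank units by non-redundant activity, then compensate the next layer for pruned units via least squares.

```python
import numpy as np

def redundancy_ranking(acts):
    """acts: [samples, units]. QR orthogonalizes the activations; a small
    |R[i, i]| means unit i adds little activity beyond the earlier units
    (note: this ranking depends on the column order)."""
    _, R = np.linalg.qr(acts)
    return np.argsort(-np.abs(np.diag(R)))           # most informative first

def compensate_next_layer(acts, W_next, keep):
    """Recover the impact of pruned units: reconstruct all units from the
    kept ones by least squares and fold that map into the next layer.
    acts: [samples, units]; W_next: [units, out]; keep: indices to retain."""
    coef, *_ = np.linalg.lstsq(acts[:, keep], acts, rcond=None)
    return coef @ W_next                             # [len(keep), out]
```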
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, node-wise architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
arXiv Detail & Related papers (2023-10-14T07:02:54Z)
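The node-wise recipe is easy to sketch; the dot-product decoder below is a standard illustrative choice, not the paper's proposal.

```python
import torch

def node_wise_scores(z, pairs):
    """Node-wise link prediction: z holds pre-computed node embeddings
    ([num_nodes, dim]); a cheap pairwise decoder (here a dot product)
    scores candidate links in pairs ([k, 2]). Edge-wise methods would
    instead build a subgraph representation per candidate pair."""
    return (z[pairs[:, 0]] * z[pairs[:, 1]]).sum(dim=-1)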
- Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge splitting technique to specify the use of each edge, so that every edge serves solely as either topology or supervision.
To emphasize the differences between pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight those that reflect the differences.
arXiv Detail & Related papers (2022-12-25T23:19:56Z)
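A hedged sketch of the edge-splitting step (the split ratio and function name are ours): each observed edge is used either for message passing or as a positive label, never both, which keeps the supervision signal from leaking through the topology.

```python
import numpy as np

def split_edges(edges, sup_frac=0.3, seed=0):
    """edges: [k, 2] array of observed edges. Randomly assign each edge a
    single role: 'topology' (the GNN may pass messages over it) or
    'supervision' (a positive training pair for the loss only)."""
    rng = np.random.default_rng(seed)
    is_sup = rng.random(len(edges)) < sup_frac
    return edges[~is_sup], edges[is_sup]             # topology, supervision
```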
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Pseudo-Euclidean Attract-Repel Embeddings for Undirected Graphs [73.0261182389643]
Dot product embeddings take a graph and construct vectors for nodes such that dot products between two vectors give the strength of the edge.
We remove the transitivity assumption by embedding nodes into a pseudo-Euclidean space.
Pseudo-Euclidean embeddings can compress networks efficiently, allow for multiple notions of nearest neighbors each with their own interpretation, and can be slotted into existing models.
arXiv Detail & Related papers (2021-06-17T17:23:56Z)
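The pseudo-Euclidean score admits a one-line sketch: each node carries an "attract" block and a "repel" block, and edge strength is the indefinite inner product of the two (the block sizes below are arbitrary).

```python
import numpy as np

def attract_repel_score(a, r, u, v):
    """a: [n, d_a] attract embeddings, r: [n, d_r] repel embeddings.
    The indefinite inner product a_u.a_v - r_u.r_v can express
    intransitive structure that a plain dot product cannot."""
    return a[u] @ a[v] - r[u] @ r[v]
```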
- Spectral clustering under degree heterogeneity: a case for the random walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
arXiv Detail & Related papers (2021-05-03T16:36:27Z)
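A minimal sketch of the embedding itself, assuming a connected graph; clustering (e.g., k-means) on the returned coordinates is the usual follow-up step.

```python
import numpy as np

def random_walk_embedding(A, k):
    """Spectral embedding with the random-walk Laplacian L = I - D^{-1} A.
    Its eigenvectors are degree-corrected; the k nontrivial ones with the
    smallest eigenvalues serve as node coordinates for clustering."""
    d = A.sum(axis=1)
    L = np.eye(A.shape[0]) - A / d[:, None]          # D^{-1} A row-normalizes
    vals, vecs = np.linalg.eig(L)                    # L is not symmetric
    order = np.argsort(vals.real)
    return vecs[:, order[1:k + 1]].real              # drop the trivial vector
```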
- Controllable Orthogonalization in Training DNNs [96.1365404059924]
Orthogonality is widely used for training deep neural networks (DNNs) due to its ability to maintain all singular values of the Jacobian close to 1.
This paper proposes a computationally efficient and numerically stable orthogonalization method using Newton's iteration (ONI).
We show that our method improves the performance of image classification networks by effectively controlling the orthogonality to provide an optimal tradeoff between optimization benefits and representational capacity reduction.
We also show that ONI stabilizes the training of generative adversarial networks (GANs) by maintaining the Lipschitz continuity of a network, similar to spectral normalization.
arXiv Detail & Related papers (2020-04-02T10:14:27Z)
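Orthogonalization by Newton's iteration can be sketched with the classical Newton-Schulz scheme below; ONI's actual formulation (e.g., how it pre-scales and where it is applied in the network) may differ.

```python
import torch

def newton_schulz_orthogonalize(W, iters=5):
    """Drive W toward the nearest (semi-)orthogonal matrix with the
    Newton-Schulz iteration Y <- 1.5*Y - 0.5*Y @ Y.T @ Y. Pre-scaling by
    the Frobenius norm keeps all singular values inside the iteration's
    convergence region."""
    Y = W / W.norm()
    for _ in range(iters):
        Y = 1.5 * Y - 0.5 * Y @ Y.T @ Y
    return Y
```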
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.