Structured Information Loss in Network Embeddings
- URL: http://arxiv.org/abs/2509.12396v1
- Date: Mon, 15 Sep 2025 19:41:24 GMT
- Title: Structured Information Loss in Network Embeddings
- Authors: Gabriel Chuang, Augustin Chaintreau
- Abstract summary: We analyze a simple algorithm for network embedding, explicitly characterizing conditions under which the learned representation encodes the graph's generative model fully, partially, or not at all. We show implications for community detection and link prediction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We analyze a simple algorithm for network embedding, explicitly characterizing conditions under which the learned representation encodes the graph's generative model fully, partially, or not at all. In cases where the embedding loses some information (i.e., is not invertible), we describe the equivalence classes of graphons that map to the same embedding, finding that these classes preserve community structure but lose substantial density information. Finally, we show implications for community detection and link prediction. Our results suggest strong limitations on the effectiveness of link prediction based on embeddings alone, and we show common conditions under which naive link prediction adds edges in a disproportionate manner that can either mitigate or exacerbate structural biases.
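The "naive link prediction based on embeddings alone" setting the abstract refers to can be illustrated with a minimal sketch. The paper analyzes a specific embedding algorithm; the code below instead uses a generic spectral embedding with inner-product scoring on a toy two-block stochastic block model (all parameter values are illustrative assumptions), showing how thresholding such scores tends to add within-community edges disproportionately.

```python
import numpy as np

def spectral_embedding(A, d):
    """Embed nodes using the top-d eigenpairs (by magnitude) of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)            # A is symmetric, so eigh applies
    top = np.argsort(np.abs(vals))[::-1][:d]  # largest-magnitude eigenvalues first
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

def link_scores(X):
    """Naive link prediction: score every node pair by the inner product of embeddings."""
    return X @ X.T

# Toy two-block stochastic block model: dense within blocks, sparse across.
rng = np.random.default_rng(0)
n, p_in, p_out = 40, 0.8, 0.05
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops

X = spectral_embedding(A, d=2)
S = link_scores(X)
same = blocks[:, None] == blocks[None, :]
# Within-community pairs score higher on average, so ranking pairs by S and
# adding the top-scored ones disproportionately adds within-community edges.
```

This is only a sketch of the phenomenon, not the paper's characterization of when the embedding is invertible.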
Related papers
- A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs [54.62268052283014]
We present a unified theoretical perspective based on the framework of signed graphs. We show that many existing strategies implicitly introduce negative edges that alter message passing to resist oversmoothing. We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity.
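The intuition that negative edges resist oversmoothing can be seen in a small sketch. This is not SBP's actual update rule (which the abstract does not specify); it is a generic one-step propagation over a signed adjacency matrix, with the mixing weight `alpha` an illustrative assumption.

```python
import numpy as np

def signed_propagation(H, A_signed, alpha=0.5):
    """Mix each node's features with a signed neighbor average:
    positive edges pull representations together, negative edges push them apart."""
    deg = np.abs(A_signed).sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    return (1 - alpha) * H + alpha * (A_signed @ H) / deg

H = np.array([[1.0], [-1.0]])                 # two nodes with opposite features
pos_edge = np.array([[0.0, 1.0], [1.0, 0.0]])
neg_edge = np.array([[0.0, -1.0], [-1.0, 0.0]])

smoothed = signed_propagation(H, pos_edge)    # the feature gap shrinks (oversmoothing)
preserved = signed_propagation(H, neg_edge)   # the negative edge keeps the gap intact
```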
arXiv Detail & Related papers (2025-02-17T03:25:36Z)
- Conformal Prediction for Federated Graph Neural Networks with Missing Neighbor Information [2.404163279345609]
This study extends the applicability of Conformal Prediction to federated graph learning.
We tackle the missing links issue in distributed subgraphs to minimize its adverse effects on CP set sizes.
We introduce a Variational Autoencoder-based approach for reconstructing missing neighbors to mitigate the negative impact of missing data.
arXiv Detail & Related papers (2024-10-17T20:22:25Z)
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
arXiv Detail & Related papers (2024-09-09T12:56:02Z)
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
We show that minimizing point-prediction losses does not guarantee proper learning of latent relational information. We propose a sampling-based method that solves this joint learning task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
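A semantic loss of the kind described above can be computed directly for a small constraint: the negative log of the weighted model count of the satisfying assignments. The sketch below uses an "exactly one variable is true" constraint as an illustrative example (the choice of constraint and probability values are assumptions, not taken from the paper).

```python
import math

def semantic_loss_exactly_one(probs):
    """Semantic loss for the 'exactly one is true' constraint:
    -log( sum_i p_i * prod_{j != i} (1 - p_j) )."""
    wmc = 0.0  # weighted model count over satisfying assignments
    for i, p_i in enumerate(probs):
        term = p_i
        for j, p_j in enumerate(probs):
            if j != i:
                term *= 1.0 - p_j
        wmc += term
    return -math.log(wmc)

# A confident, constraint-consistent prediction incurs lower loss than a
# uniform one, and the loss depends only on the semantics, not on which
# symbol carries the probability mass.
confident = semantic_loss_exactly_one([0.97, 0.01, 0.02])
uniform = semantic_loss_exactly_one([1 / 3, 1 / 3, 1 / 3])
```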
arXiv Detail & Related papers (2024-05-12T22:18:25Z)
- Revisiting Robustness in Graph Machine Learning [1.5293427903448025]
Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure.
We introduce a more principled notion of an adversarial graph, which is aware of semantic content change.
We find that including the label-structure of the training graph into the inference process of GNNs significantly reduces over-robustness.
arXiv Detail & Related papers (2023-05-01T14:39:55Z)
- DTU-Net: Learning Topological Similarity for Curvilinear Structure Segmentation [2.9398911304923447]
We present DTU-Net, a dual-decoder and topology-aware deep neural network consisting of two sequential light-weight U-Nets.
The texture net makes a coarse prediction using image texture information.
The topology net learns topological information from the coarse prediction by employing a triplet loss trained to recognize false and missed splits.
arXiv Detail & Related papers (2022-05-23T08:15:26Z)
- BSAL: A Framework of Bi-component Structure and Attribute Learning for Link Prediction [33.488229191263564]
We propose a bi-component structural and attribute learning framework (BSAL) designed to adaptively leverage information from both topology and feature spaces.
BSAL constructs a semantic topology from the node attributes and then learns embeddings with respect to this semantic view.
It provides a flexible and easy-to-implement solution to adaptively incorporate the information carried by the node attributes.
arXiv Detail & Related papers (2022-04-18T03:12:13Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- CSNE: Conditional Signed Network Embedding [77.54225346953069]
Signed networks encode positive and negative relations between entities such as friend/foe or trust/distrust.
Existing embedding methods for sign prediction generally enforce different notions of status or balance theories in their optimization function.
We introduce conditional signed network embedding (CSNE).
Our probabilistic approach models structural information about the signs in the network separately from fine-grained detail.
arXiv Detail & Related papers (2020-05-19T19:14:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.