Privacy-Preserving Representation Learning for Text-Attributed Networks with Simplicial Complexes
- URL: http://arxiv.org/abs/2302.04383v1
- Date: Thu, 9 Feb 2023 00:32:06 GMT
- Title: Privacy-Preserving Representation Learning for Text-Attributed Networks with Simplicial Complexes
- Authors: Huixin Zhan, Victor S. Sheng
- Abstract summary: I will study learning network representations with text attributes for simplicial complexes (RT4SC) via simplicial neural networks (SNNs).
I will conduct research on two potential attacks on the representation outputs from SNNs.
I will study a privacy-preserving deterministic differentially private alternating direction method of multipliers to learn secure representation outputs from SNNs.
- Score: 24.82096971322501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although recent network representation learning (NRL) works in
text-attributed networks have demonstrated superior performance on various graph
inference tasks, learning network representations can raise privacy
concerns when nodes represent people or human-related variables. Moreover,
standard NRLs that leverage structural information from a graph proceed by
first encoding pairwise relationships into learned representations and then
analysing their properties. This approach is fundamentally misaligned with
problems where the relationships involve multiple points, and topological
structure must be encoded beyond pairwise interactions. Fortunately, the
machinery of topological data analysis (TDA) and, in particular, simplicial
neural networks (SNNs) offer a mathematically rigorous framework to learn
higher-order interactions between nodes. It is critical to investigate if the
representation outputs from SNNs are more vulnerable than regular
representation outputs from graph neural networks (GNNs), which are learned via
pairwise interactions. In my dissertation, I will first study learning the
representations with text attributes for simplicial complexes (RT4SC) via SNNs.
Then, I will conduct research on two potential attacks on the representation
outputs from SNNs: (1) membership inference attacks, which infer whether a
certain node of a graph is inside the training data of the target model; and (2)
graph reconstruction attacks, which infer the confidential edges of a
text-attributed network. Finally, I will study a privacy-preserving
deterministic differentially private alternating direction method of multipliers
to learn secure representation outputs from SNNs that capture multi-scale
relationships and facilitate the passage from local structure to global
invariant features on text-attributed networks.
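For readers new to SNNs, here is a minimal sketch of the building block the abstract refers to: message passing on a simplicial complex through the Hodge Laplacian, assembled from boundary matrices. The toy complex, feature dimensions, and weights are illustrative assumptions, not the RT4SC architecture from the paper.

```python
import numpy as np

def hodge_laplacian(B_k, B_kp1):
    """k-th Hodge Laplacian L_k = B_k^T B_k + B_{k+1} B_{k+1}^T.
    B_k maps k-simplices to (k-1)-simplices; B_kp1 maps (k+1)- to k-simplices."""
    return B_k.T @ B_k + B_kp1 @ B_kp1.T

def snn_layer(H, L_k, W):
    """One simplicial convolution: features on k-simplices are diffused
    through the Hodge Laplacian, linearly transformed, then activated."""
    return np.tanh(L_k @ H @ W)

# Toy complex: a filled triangle {0,1,2} plus a dangling edge {2,3}.
# Edges (1-simplices): e0=(0,1), e1=(0,2), e2=(1,2), e3=(2,3).
B1 = np.array([[-1, -1,  0,  0],   # signed node-edge incidence
               [ 1,  0, -1,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]])
B2 = np.array([[1], [-1], [1], [0]])  # signed edge-triangle incidence

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))           # one 8-dim feature per edge
W = 0.1 * rng.normal(size=(8, 8))
print(snn_layer(H, hodge_laplacian(B1, B2), W).shape)  # -> (4, 8)
```

Stacking such layers is what lets SNN outputs encode interactions among edges, triangles, and higher-order simplices rather than node pairs alone.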
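The first attack named above, membership inference, is commonly instantiated as a confidence-thresholding test: training members tend to receive more confident predictions than non-members. Below is a generic sketch on synthetic confidence scores; the distributions and numbers are assumptions for illustration, not results or methods from the paper.

```python
import numpy as np

def is_member(conf, threshold):
    """Flag a node as a training member when the target model's
    confidence on it meets the threshold."""
    return conf >= threshold

# Hypothetical attacker calibration data: per-node confidences on known
# members vs. known non-members (e.g. obtained from a shadow model).
rng = np.random.default_rng(1)
member_conf = rng.beta(8, 2, size=500)     # members: higher confidence
nonmember_conf = rng.beta(4, 3, size=500)  # non-members: lower confidence

# Choose the threshold with the best balanced accuracy.
thresholds = np.linspace(0.0, 1.0, 101)
acc = [0.5 * (is_member(member_conf, t).mean()
              + (~is_member(nonmember_conf, t)).mean())
       for t in thresholds]
best = thresholds[int(np.argmax(acc))]
print(f"threshold {best:.2f}, balanced attack accuracy {max(acc):.2%}")
```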
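The second attack, graph reconstruction, typically exploits the smoothness of learned representations: nodes joined by an edge tend to end up with similar embeddings. A minimal similarity-ranking sketch on random stand-in embeddings follows; the decision rule and data are assumptions, not the paper's attack.

```python
import numpy as np

def reconstruct_edges(H, top_k):
    """Rank all node pairs by cosine similarity of their released
    representations and return the top_k pairs as guessed edges."""
    Hn = H / np.maximum(np.linalg.norm(H, axis=1, keepdims=True), 1e-12)
    sim = Hn @ Hn.T
    iu = np.triu_indices(sim.shape[0], k=1)    # each unordered pair once
    order = np.argsort(sim[iu])[::-1][:top_k]  # most similar pairs first
    return list(zip(iu[0][order].tolist(), iu[1][order].tolist()))

H = np.random.default_rng(2).normal(size=(20, 16))  # stand-in embeddings
print(reconstruct_edges(H, top_k=5))  # guessed confidential edges
```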
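The proposed defense is a deterministic differentially private ADMM, whose details the abstract does not spell out. As a generic illustration of the differential-privacy building block such methods rest on, this sketch applies the standard Gaussian mechanism with per-row norm clipping to released embeddings; the clipping bound and (epsilon, delta) budget are arbitrary choices for the example.

```python
import numpy as np

def release_dp_embeddings(H, clip_norm, epsilon, delta, seed=0):
    """(epsilon, delta)-DP release of node embeddings via the Gaussian
    mechanism: clip each row's L2 norm to clip_norm, then add noise
    calibrated to that sensitivity (analysis valid for epsilon < 1)."""
    norms = np.maximum(np.linalg.norm(H, axis=1, keepdims=True), 1e-12)
    H_clip = H * np.minimum(1.0, clip_norm / norms)
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return H_clip + np.random.default_rng(seed).normal(0.0, sigma, H.shape)

H = np.random.default_rng(3).normal(size=(100, 16))
H_private = release_dp_embeddings(H, clip_norm=1.0, epsilon=0.5, delta=1e-5)
print(np.linalg.norm(H_private - H) > 0)  # noise was added
```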
Related papers
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority on the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Interpretable Neural Networks with Random Constructive Algorithm [3.1200894334384954]
This paper introduces an Interpretable Neural Network (INN) incorporating spatial information to tackle the opaque parameterization process of random weighted neural networks.
It devises a geometric relationship strategy using a pool of candidate nodes and established relationships to select node parameters conducive to network convergence.
arXiv Detail & Related papers (2023-07-01T01:07:20Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Simplicial Attention Networks [4.401427499962144]
Simplicial Neural Networks (SNNs) naturally model interactions by performing message passing on simplicial complexes.
We propose Simplicial Attention Networks (SAT), a new type of simplicial network that dynamically weighs the interactions between neighbouring simplices.
We demonstrate that SAT outperforms existing convolutional SNNs and GNNs on two image and trajectory classification tasks.
arXiv Detail & Related papers (2022-04-20T13:41:50Z)
- BScNets: Block Simplicial Complex Neural Networks [79.81654213581977]
Simplicial neural networks (SNNs) have recently emerged as the newest direction in graph learning.
We present Block Simplicial Complex Neural Networks (BScNets) model for link prediction.
BScNets outperforms state-of-the-art models by a significant margin while maintaining low costs.
arXiv Detail & Related papers (2021-12-13T17:35:54Z)
- Simplicial Neural Networks [0.0]
We present simplicial neural networks (SNNs).
SNNs are a generalization of graph neural networks to data that live on a class of topological spaces called simplicial complexes.
We test the SNNs on the task of imputing missing data on coauthorship complexes.
arXiv Detail & Related papers (2020-10-07T20:15:01Z)
- Beyond Localized Graph Neural Networks: An Attributed Motif Regularization Framework [6.790281989130923]
InfoMotif is a new semi-supervised, motif-regularized learning framework over graphs.
We overcome two key limitations of message passing in graph neural networks (GNNs).
We show significant gains (3-10% accuracy) across six diverse, real-world datasets.
arXiv Detail & Related papers (2020-09-11T02:03:09Z)
- SNoRe: Scalable Unsupervised Learning of Symbolic Node Representations [0.0]
The proposed SNoRe algorithm is capable of learning symbolic, human-understandable representations of individual network nodes.
SNoRe's interpretable features are suitable for direct explanation of individual predictions.
The vectorized implementation of SNoRe scales to large networks, making it suitable for contemporary network learning and analysis tasks.
arXiv Detail & Related papers (2020-09-08T08:13:21Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)