Graph Neural Network Surrogates of Fair Graph Filtering
- URL: http://arxiv.org/abs/2303.08157v2
- Date: Thu, 16 Mar 2023 10:56:37 GMT
- Title: Graph Neural Network Surrogates of Fair Graph Filtering
- Authors: Emmanouil Krasanakis, Symeon Papadopoulos
- Abstract summary: We introduce a filter-aware universal approximation framework for posterior objectives.
This defines appropriate graph neural networks trained at runtime to be similar to filters.
We show that our approach performs equally well or better than alternatives in meeting parity constraints.
- Score: 13.854091527965297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph filters that transform prior node values to posterior scores via edge
propagation often support graph mining tasks affecting humans, such as
recommendation and ranking. Thus, it is important to make them fair in terms of
satisfying statistical parity constraints between groups of nodes (e.g.,
distribute score mass between genders proportionally to their representation).
To achieve this while minimally perturbing the original posteriors, we
introduce a filter-aware universal approximation framework for posterior
objectives. This defines appropriate graph neural networks trained at runtime
to be similar to filters but also locally optimize a large class of objectives,
including fairness-aware ones. Experiments on a collection of 8 filters and 5
graphs show that our approach performs equally well or better than alternatives
in meeting parity constraints while preserving the AUC of score-based community
member recommendation and creating minimal utility loss in prior diffusion.
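To make the setting concrete, the sketch below runs a personalized PageRank filter over a toy graph and then crudely rescales the posteriors so each group's score mass matches its representation. This is intuition only, not the authors' method (the paper trains a GNN surrogate at runtime rather than rescaling); all names (ppr_filter, parity_rescale, alpha) are illustrative.

```python
# Illustrative sketch only -- not the paper's implementation.
import numpy as np

def ppr_filter(adj, priors, alpha=0.85, iters=50):
    """Propagate prior node values to posterior scores over edges."""
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0
    trans = adj / deg[:, None]              # row-normalized adjacency
    post = priors.copy()
    for _ in range(iters):
        post = alpha * trans.T @ post + (1 - alpha) * priors
    return post

def parity_rescale(post, groups):
    """Crudely rescale so each group's score mass equals its share of
    nodes (a stand-in for the paper's learned surrogate objective)."""
    post, total = post.copy(), post.sum()
    for g in np.unique(groups):
        mask = groups == g
        post[mask] *= mask.mean() / (post[mask].sum() / total)
    return post / post.sum()

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0],
                [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
priors = np.array([1.0, 0.0, 0.0, 0.0])
groups = np.array([0, 0, 1, 1])             # two demographic groups
fair_posteriors = parity_rescale(ppr_filter(adj, priors), groups)
```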
Related papers
- How Powerful is Graph Filtering for Recommendation [7.523823738965443]
We show two limitations that suppress the power of graph filtering.
Because noise distributions vary, graph filters fail to denoise sparse data, where noise is scattered across all frequencies.
Supervised training yields worse performance on dense data, where noise concentrates in middle frequencies that graph filters can remove without training.
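For intuition about training-free low-pass filtering in recommendation, here is a minimal sketch (assumed setup: a small dense user-item matrix R; lowpass_scores is an illustrative name, not the paper's method):

```python
# Illustrative sketch of a training-free low-pass graph filter.
import numpy as np

def lowpass_scores(R):
    """Smooth user-item interactions over the normalized item-item
    graph (one application of a low-pass filter)."""
    d_u = np.maximum(R.sum(axis=1, keepdims=True), 1.0)
    d_i = np.maximum(R.sum(axis=0, keepdims=True), 1.0)
    R_norm = R / np.sqrt(d_u) / np.sqrt(d_i)  # symmetric normalization
    return R_norm @ (R_norm.T @ R_norm)       # filtered item scores

R = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]], dtype=float)
print(lowpass_scores(R).round(3))
```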
arXiv Detail & Related papers (2024-06-13T05:37:54Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes the representation centers of demographic groups together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
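The center-pushing step can be pictured as a penalty on the distance between group representation centers, added to the task loss; a minimal sketch under that assumption (not the FMP implementation):

```python
# Illustrative fairness penalty: distance between group centers.
import numpy as np

def group_center_gap(H, groups):
    """Squared distance between the two groups' representation means."""
    c0 = H[groups == 0].mean(axis=0)
    c1 = H[groups == 1].mean(axis=0)
    return float(((c0 - c1) ** 2).sum())

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(3, 4))
H = (adj + np.eye(3)) @ X / (adj.sum(axis=1, keepdims=True) + 1)  # aggregate
groups = np.array([0, 1, 1])
penalty = group_center_gap(H, groups)   # would be added to the task loss
```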
arXiv Detail & Related papers (2023-12-19T18:00:15Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
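A minimal sketch of the pre-computation idea (illustrative names and pipeline, not RpHGNN itself): propagate features once per hop, compress each hop with a Gaussian random projection, and concatenate into a regular-shaped tensor.

```python
# Illustrative pre-computation with random projection.
import numpy as np

def precompute_projected_hops(adj, X, hops=2, dim=16, seed=0):
    rng = np.random.default_rng(seed)
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    trans = adj / deg                       # row-normalized propagation
    feats, H = [], X
    for _ in range(hops):
        H = trans @ H                       # one-time message passing
        P = rng.normal(size=(H.shape[1], dim)) / np.sqrt(dim)
        feats.append(H @ P)                 # random projection per hop
    return np.concatenate(feats, axis=1)    # regular-shaped tensor
```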
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- DiP-GNN: Discriminative Pre-Training of Graph Neural Networks [49.19824331568713]
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs.
One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them.
In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges.
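The masking setup can be sketched as follows (illustrative, not DiP-GNN's code): hide a fraction of edges, train a generator to recover them, and train a discriminator to tell original edges from generated ones.

```python
# Illustrative edge masking for generator/discriminator pre-training.
import numpy as np

def mask_edges(edges, mask_ratio=0.3, seed=0):
    """Split an edge list into visible and masked parts."""
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)
    idx = rng.permutation(len(edges))
    n_mask = int(len(edges) * mask_ratio)
    return edges[idx[n_mask:]], edges[idx[:n_mask]]

visible, masked = mask_edges([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
# generator: predict `masked` from `visible`; discriminator: classify
# edges of the reconstructed graph as original vs. generated.
```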
arXiv Detail & Related papers (2022-09-15T17:41:50Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
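The low-rank idea can be illustrated with a truncated SVD of the propagation matrix, so each propagation costs O(nkd) rather than O(n^2 d); a sketch under that assumption (not the paper's exact model):

```python
# Illustrative low-rank propagation via truncated SVD.
import numpy as np

def lowrank_propagate(adj, X, k=2):
    U, s, Vt = np.linalg.svd(adj, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k]    # rank-k factors of adj
    return Uk @ (sk[:, None] * (Vk @ X))    # cheap (n,k)-sized products
```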
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Graph filtering over expanding graphs [14.84852576248587]
We propose a filter learning scheme for data over expanding graphs.
We show near-optimal performance compared with baselines relying on the exact topology.
These findings lay the foundation for learning representations over expanding graphs by relying only on the connectivity model.
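One way to picture filtering without the exact topology (a loose illustration, not the paper's scheme): when a new node arrives, use its expected attachment probabilities under a connectivity model in place of its true edges.

```python
# Illustrative: filter a new node's value from expected attachments.
import numpy as np

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
x = np.array([1.0, 2.0, 0.5])               # existing node values
deg = adj.sum(axis=1)
p = deg / deg.sum()                         # preferential-attachment model
x_new = p @ x                               # expected filtered value
```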
arXiv Detail & Related papers (2022-03-15T16:50:54Z)
- Scaling Up Graph Neural Networks Via Graph Coarsening [18.176326897605225]
Scalability of graph neural networks (GNNs) is one of the major challenges in machine learning.
In this paper, we propose to use graph coarsening for scalable training of GNNs.
We show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without a noticeable downgrade in classification accuracy.
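A minimal sketch of the coarsen-then-train idea (illustrative; the paper relies on off-the-shelf coarsening methods): collapse nodes into clusters with a one-hot assignment matrix P, giving a coarsened adjacency P^T A P and cluster-averaged features.

```python
# Illustrative coarsening: merge nodes by a given cluster assignment.
import numpy as np

def coarsen(adj, X, assign):
    P = np.eye(assign.max() + 1)[assign]    # one-hot assignment matrix
    adj_c = P.T @ adj @ P                   # coarsened adjacency
    X_c = (P.T @ X) / np.maximum(P.sum(axis=0)[:, None], 1.0)  # mean feats
    return adj_c, X_c

adj = np.array([[0, 1, 1, 0], [1, 0, 0, 1],
                [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))
assign = np.array([0, 0, 1, 1])             # e.g., from METIS/clustering
adj_c, X_c = coarsen(adj, X, assign)        # train the GNN on this graph
```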
arXiv Detail & Related papers (2021-06-09T15:46:17Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework, AdaGNN, with a smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
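The adaptive response can be pictured as the per-channel update X <- X - L X diag(phi) with a learnable phi; a minimal sketch with fixed, illustrative phi values (not learned, and not the released AdaGNN code):

```python
# Illustrative per-channel adaptive frequency response step.
import numpy as np

def adaptive_filter_layer(adj, X, phi):
    """One step of X <- X - L X diag(phi) on the normalized Laplacian."""
    d_inv = 1.0 / np.sqrt(np.maximum(adj.sum(axis=1), 1.0))
    L = np.eye(len(adj)) - d_inv[:, None] * adj * d_inv[None, :]
    return X - (L @ X) * phi                # phi broadcasts per channel

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(3, 2))
phi = np.array([0.9, 0.1])                  # smooth channel 0 more than 1
X_out = adaptive_filter_layer(adj, X, phi)
```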
arXiv Detail & Related papers (2021-04-26T19:31:21Z)