Pipeline Interventions
- URL: http://arxiv.org/abs/2002.06592v2
- Date: Fri, 28 Aug 2020 19:49:29 GMT
- Title: Pipeline Interventions
- Authors: Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, Juba Ziani
- Abstract summary: We present the \emph{pipeline intervention} problem, defined by a layered directed acyclic graph and a set of matrices governing transitions between successive layers.
The graph is a stylized model for how people from different populations are presented opportunities, eventually leading to some reward.
We consider two objectives: social welfare, and a fairness-motivated maximin objective that seeks to maximize the value to the population with the \emph{least} expected value.
- Score: 15.27567660320295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the \emph{pipeline intervention} problem, defined by a layered
directed acyclic graph and a set of stochastic matrices governing transitions
between successive layers. The graph is a stylized model for how people from
different populations are presented opportunities, eventually leading to some
reward. In our model, individuals are born into an initial position (i.e. some
node in the first layer of the graph) according to a fixed probability
distribution, and then stochastically progress through the graph according to
the transition matrices, until they reach a node in the final layer of the
graph; each node in the final layer has a \emph{reward} associated with it. The
pipeline intervention problem asks how to best make costly changes to the
transition matrices governing people's stochastic transitions through the
graph, subject to a budget constraint. We consider two objectives: social
welfare maximization, and a fairness-motivated maximin objective that seeks to
maximize the value to the population (starting node) with the \emph{least}
expected value. We consider two variants of the maximin objective that turn out
to be distinct, depending on whether we demand a deterministic solution or
allow randomization. For each objective, we give an efficient approximation
algorithm (an additive FPTAS) for constant width networks. We also tightly
characterize the "price of fairness" in our setting: the ratio between the
highest achievable social welfare and the highest social welfare consistent
with a maximin optimal solution. Finally we show that for polynomial width
networks, even approximating the maximin objective to any constant factor is
NP-hard, even for networks with constant depth. This shows that the restriction on
the width in our positive results is essential.
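The core quantities in the abstract (per-population expected value, social welfare, and the maximin objective) follow directly from the layered model: back-propagate the terminal rewards through the transition matrices, then aggregate. A minimal sketch, with function names, matrix shapes, and toy numbers that are illustrative assumptions rather than anything from the paper:

```python
import numpy as np

def expected_values(transitions, rewards):
    """Expected terminal reward for each node of the first layer.

    transitions: list of row-stochastic matrices, one per pair of
    successive layers. rewards: reward of each final-layer node.
    """
    v = np.asarray(rewards, dtype=float)
    # Back-propagate values layer by layer: v_l = T_l @ v_{l+1}.
    for T in reversed(transitions):
        v = np.asarray(T, dtype=float) @ v
    return v

def social_welfare(transitions, rewards, initial_dist):
    """Average expected value under the birth distribution."""
    return float(np.dot(initial_dist, expected_values(transitions, rewards)))

def maximin_value(transitions, rewards):
    """Value of the worst-off starting node (the maximin objective)."""
    return float(np.min(expected_values(transitions, rewards)))

# Toy 3-layer pipeline with two nodes per layer.
T1 = [[0.7, 0.3],
      [0.2, 0.8]]
T2 = [[0.5, 0.5],
      [0.1, 0.9]]
rewards = [1.0, 0.0]

print(expected_values([T1, T2], rewards))   # per-start-node values
print(social_welfare([T1, T2], rewards, [0.5, 0.5]))
print(maximin_value([T1, T2], rewards))
```

The intervention problem itself then asks how to perturb the `transitions` entries, under a budget, to improve one of these two objectives; that optimization (the additive FPTAS for constant-width networks) is not sketched here.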
Related papers
- The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs [59.03660013787925]
We introduce the Heterophily Snowflake Hypothesis and provide an effective solution to guide and facilitate research on heterophilic graphs.
Our observations show that our framework acts as a versatile operator for diverse tasks.
It can be integrated into various GNN frameworks, boosting performance in-depth and offering an explainable approach to choosing the optimal network depth.
arXiv Detail & Related papers (2024-06-18T12:16:00Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - Graph Neural Network Surrogates of Fair Graph Filtering [13.854091527965297]
We introduce a filter-aware universal approximation framework for posterior objectives.
This defines appropriate graph neural networks trained at runtime to be similar to filters.
We show that our approach performs equally well or better than alternatives in meeting parity constraints.
arXiv Detail & Related papers (2023-03-14T18:14:40Z) - Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains, and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z) - High-dimensional limit theorems for SGD: Effective dynamics and critical scaling [6.950316788263433]
We prove limit theorems for the trajectories of summary statistics of stochastic gradient descent (SGD).
We show a critical scaling regime for the step-size, below which the effective ballistic dynamics matches gradient flow for the population loss.
Near the fixed points of this effective dynamics, the corresponding diffusive limits can be quite complex and even degenerate.
arXiv Detail & Related papers (2022-06-08T17:42:18Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm can successfully improve the fairness of all models up to a small or negligible drop in accuracy.
arXiv Detail & Related papers (2021-04-29T08:59:36Z) - Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method learns a representative graph with a clear cluster structure, which can then be used to solve clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Better Bounds on the Adaptivity Gap of Influence Maximization under Full-adoption Feedback [15.533908352376853]
We look for a set of $k$ nodes that maximize the expected number of nodes that are reached by an influence cascade.
We show that the adaptivity gap is upper-bounded by $\lceil n \rceil$, where $n$ is the number of nodes in the graph.
We also show that in $0$-bounded graphs, i.e. undirected graphs, the adaptivity gap is at most $\frac{3e^3}{e^3-1} \approx 3.16$.
arXiv Detail & Related papers (2020-06-27T14:43:34Z)
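The constant $3e^3/(e^3-1) \approx 3.16$ quoted above for undirected graphs can be sanity-checked numerically (a quick arithmetic check, not part of the paper):

```python
import math

# Evaluate the stated adaptivity-gap bound for undirected graphs:
# 3 e^3 / (e^3 - 1).
bound = 3 * math.e**3 / (math.e**3 - 1)
print(round(bound, 2))  # 3.16
```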
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.