Fairness constraints can help exact inference in structured prediction
- URL: http://arxiv.org/abs/2007.00218v1
- Date: Wed, 1 Jul 2020 04:11:29 GMT
- Title: Fairness constraints can help exact inference in structured prediction
- Authors: Kevin Bello and Jean Honorio
- Abstract summary: We study a generative model with an undirected connected graph $G$ and a true vector of binary labels.
We find that, in contrast to the known trade-offs between fairness and model performance, the addition of the fairness constraint improves the probability of exact recovery.
- Score: 37.76221231305701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many inference problems in structured prediction can be modeled as maximizing
a score function on a space of labels, where graphs are a natural
representation to decompose the total score into a sum of unary (nodes) and
pairwise (edges) scores. Given a generative model with an undirected connected
graph $G$ and a true vector of binary labels, it has been previously shown that
when $G$ has good expansion properties, such as complete graphs or $d$-regular
expanders, one can exactly recover the true labels (with high probability and
in polynomial time) from a single noisy observation of each edge and node. We
analyze the previously studied generative model by Globerson et al. (2015)
under a notion of statistical parity. That is, given a fair binary node
labeling, we ask whether it is possible to recover the fair
assignment, with high probability and in polynomial time, from single edge and
node observations. We find that, in contrast to the known trade-offs between
fairness and model performance, the addition of the fairness constraint
improves the probability of exact recovery. We explain this
phenomenon and empirically show how graphs with poor expansion properties, such
as grids, can now achieve exact recovery with high probability.
Finally, as a byproduct of our analysis, we provide a tighter
minimum-eigenvalue bound than that of Weyl's inequality.
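To make the setup concrete, below is a minimal Python sketch (not the authors' code) of a Globerson et al. (2015)-style generative model and of brute-force inference with and without the statistical-parity constraint. The flip probabilities `p_edge` and `p_node`, the grid size, and the protected-group split are illustrative assumptions rather than values from the paper, and statistical parity is taken here to mean equal proportions of +1 labels inside and outside the protected group.

```python
import itertools
import random

def grid_graph(rows, cols):
    """Edge list of a rows x cols grid graph (a graph with poor expansion)."""
    index = {(r, c): r * cols + c for r in range(rows) for c in range(cols)}
    edges = []
    for (r, c), i in index.items():
        if r + 1 < rows:
            edges.append((i, index[(r + 1, c)]))
        if c + 1 < cols:
            edges.append((i, index[(r, c + 1)]))
    return rows * cols, edges

def sample_observations(labels, edges, p_edge=0.1, p_node=0.3):
    """One noisy observation of each edge (y_i * y_j) and each node (y_i)."""
    edge_obs = {(i, j): labels[i] * labels[j] * (-1 if random.random() < p_edge else 1)
                for i, j in edges}
    node_obs = [y * (-1 if random.random() < p_node else 1) for y in labels]
    return edge_obs, node_obs

def score(candidate, edge_obs, node_obs):
    """Total score = sum of pairwise (edge) and unary (node) agreements."""
    pairwise = sum(obs * candidate[i] * candidate[j] for (i, j), obs in edge_obs.items())
    unary = sum(obs * y for obs, y in zip(node_obs, candidate))
    return pairwise + unary

def is_fair(candidate, group):
    """Statistical parity: same fraction of +1 labels inside and outside the group."""
    inside = [y for y, g in zip(candidate, group) if g]
    outside = [y for y, g in zip(candidate, group) if not g]
    return inside.count(1) * len(outside) == outside.count(1) * len(inside)

# Toy usage: exhaustive MAP inference on a 2x4 grid, with and without the
# statistical-parity constraint. The group assignment is hypothetical.
n, edges = grid_graph(2, 4)
truth = [1, 1, -1, -1, 1, 1, -1, -1]   # a labeling that is fair w.r.t. `group`
group = [True, False] * 4              # hypothetical protected attribute
edge_obs, node_obs = sample_observations(truth, edges)
candidates = [list(c) for c in itertools.product([-1, 1], repeat=n)]
best_any = max(candidates, key=lambda c: score(c, edge_obs, node_obs))
best_fair = max((c for c in candidates if is_fair(c, group)),
                key=lambda c: score(c, edge_obs, node_obs))
print("unconstrained MAP matches truth:", best_any == truth)
print("fairness-constrained MAP matches truth:", best_fair == truth)
```

Since the constraint only removes candidates while the fair ground truth stays feasible, the constrained maximizer fails only when an unfair labeling would otherwise have outscored the truth; this is a rough intuition (not the paper's formal argument) for why the constraint can improve exact recovery.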
Related papers
- Identifying General Mechanism Shifts in Linear Causal Representations [58.6238439611389]
We consider the linear causal representation learning setting where we observe a linear mixing of $d$ unknown latent factors.
Recent work has shown that it is possible to recover the latent factors as well as the underlying structural causal model over them.
We provide a surprising identifiability result: under some very mild standard assumptions, it is indeed possible to identify the set of shifted nodes.
arXiv Detail & Related papers (2024-10-31T15:56:50Z)
- Matching Correlated Inhomogeneous Random Graphs using the $k$-core Estimator [5.685589351789462]
We study the so-called $k$-core estimator, which outputs a correspondence that induces a large, common subgraph of both graphs.
We specialize our general framework to derive new results on exact and partial recovery in correlated block models, correlated Chung-Lu geometric graphs, and correlated random graphs.
arXiv Detail & Related papers (2023-02-10T18:21:35Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, BGS-IMC, which models continuous Gaussian noise in the graph Fourier domain (a toy illustration of the graph Fourier transform appears after this list).
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- Collaborative likelihood-ratio estimation over graphs [55.98760097296213]
We develop this idea into a concrete non-parametric method that we call Graph-based Relative Unconstrained Least-squares Importance Fitting (GRULSIF).
We derive convergence rates for our collaborative approach that highlight the role played by variables such as the number of available observations per node, the size of the graph, and how accurately the graph structure encodes the similarity between tasks.
arXiv Detail & Related papers (2022-05-28T15:37:03Z)
- An Interpretable Graph Generative Model with Heterophily [38.59200985962146]
We propose the first edge-independent graph generative model that is expressive enough to capture heterophily.
Our experiments demonstrate the effectiveness of our model for a variety of important application tasks.
arXiv Detail & Related papers (2021-11-04T17:34:39Z)
- Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that overall convexity is guaranteed by expressing the graph Laplacian through a linear transform of the vector corresponding to its upper triangular part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z)
- A Thorough View of Exact Inference in Graphs from the Degree-4 Sum-of-Squares Hierarchy [37.34153902687548]
We tackle the problem of exactly recovering an unknown ground-truth binary labeling of the nodes from a single corrupted observation of each edge.
We apply a hierarchy of relaxations known as the sum-of-squares hierarchy to the problem.
We show that the solution of the dual of the relaxed problem is related to finding edge weights of the Johnson and Kneser graphs.
arXiv Detail & Related papers (2021-02-16T08:36:19Z)
- Settling the Sharp Reconstruction Thresholds of Random Graph Matching [19.54246087326288]
We study the problem of recovering the hidden correspondence between two edge-correlated random graphs.
For dense graphs with $p = n^{-o(1)}$, we prove that there exists a sharp threshold.
For sparse Erdős-Rényi graphs with $p = n^{-\Theta(1)}$, we show that the all-or-nothing phenomenon no longer holds.
arXiv Detail & Related papers (2021-01-29T21:49:50Z)
- Residual Correlation in Graph Neural Network Regression [39.54530450932135]
We show that the conditional independence assumption severely limits predictive power.
We address this problem with an interpretable and efficient framework.
Our framework achieves substantially higher accuracy than competing baselines.
arXiv Detail & Related papers (2020-02-19T16:32:54Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
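As a side note on the "graph Fourier domain" referenced in the Graph Signal Sampling entry above, the sketch below (not the BGS-IMC implementation; the item-item adjacency and the ratings vector are made-up examples) shows the standard construction, in which the eigenvectors of the graph Laplacian act as Fourier modes for a signal defined on the vertices.

```python
import numpy as np

# Toy illustration only: the item-item adjacency matrix and the ratings
# vector are fabricated examples, and this is not the BGS-IMC method itself.
adjacency = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
laplacian = np.diag(adjacency.sum(axis=1)) - adjacency

# Eigenvectors of the Laplacian play the role of Fourier modes;
# the eigenvalues are the corresponding graph "frequencies".
frequencies, modes = np.linalg.eigh(laplacian)

ratings = np.array([5.0, 4.0, 0.0, 1.0])   # one user's ratings as a vertex signal
spectrum = modes.T @ ratings               # forward graph Fourier transform
reconstructed = modes @ spectrum           # inverse transform recovers the signal

print("graph frequencies:", np.round(frequencies, 3))
assert np.allclose(reconstructed, ratings)
```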