A General Framework for Distributed Inference with Uncertain Models
- URL: http://arxiv.org/abs/2011.10669v1
- Date: Fri, 20 Nov 2020 22:17:12 GMT
- Title: A General Framework for Distributed Inference with Uncertain Models
- Authors: James Z. Hare, Cesar A. Uribe, Lance Kaplan, Ali Jadbabaie
- Abstract summary: We study the problem of distributed classification with a network of heterogeneous agents.
We build upon the concept of uncertain models to incorporate the agents' uncertainty in the likelihoods.
- Score: 14.8884251609335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the problem of distributed classification with a network
of heterogeneous agents. The agents seek to jointly identify the underlying
target class that best describes a sequence of observations. The problem is
first abstracted to a hypothesis-testing framework, where we assume that the
agents seek to agree on the hypothesis (target class) that best matches the
distribution of observations. Non-Bayesian social learning theory provides a
framework that solves this problem in an efficient manner by allowing the
agents to sequentially communicate and update their beliefs for each hypothesis
over the network. Most existing approaches assume that agents have access to
exact statistical models for each hypothesis. However, in many practical
applications, agents learn the likelihood models based on limited data, which
induces uncertainty in the likelihood function parameters. In this work, we
build upon the concept of uncertain models to incorporate the agents'
uncertainty in the likelihoods by identifying a broad set of parametric
distributions that allow the agents' beliefs to converge to the same result as
a centralized approach. Furthermore, we empirically explore extensions to
non-parametric models to provide a generalized framework of uncertain models in
non-Bayesian social learning.
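The belief dynamics the abstract refers to can be illustrated with a standard non-Bayesian social learning update using geometric (log-linear) pooling: at each step, every agent mixes its neighbors' log-beliefs through the network and then applies a Bayesian update with its private observation's likelihood. The sketch below is a minimal simulation under assumed Gaussian likelihoods and a hand-picked doubly stochastic mixing matrix; it uses exact likelihoods, whereas the paper's contribution is to replace these with uncertain models learned from limited data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_hyp = 3, 2

# Doubly stochastic mixing matrix of a connected 3-agent network (illustrative).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

means = np.array([0.0, 1.0])  # hypothesis-conditional means; hypothesis 0 is true
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

for t in range(200):
    # Each agent draws a private observation from the true distribution.
    obs = rng.normal(loc=means[0], scale=1.0, size=n_agents)
    # Gaussian likelihood of each observation under each hypothesis.
    lik = np.exp(-0.5 * (obs[:, None] - means[None, :]) ** 2)
    # Geometric pooling of neighbors' beliefs, then a Bayesian update
    # with the private likelihood.
    log_post = A @ np.log(np.clip(beliefs, 1e-300, 1.0)) + np.log(lik)
    # Normalize in a numerically stable way.
    beliefs = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs.round(3))  # every agent's belief concentrates on hypothesis 0
```

Because the mean log-likelihood ratio favors the true hypothesis, the pooled beliefs of all agents concentrate on it, mirroring the consensus behavior the abstract describes for the centralized solution.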
Related papers
- Causal modelling without introducing counterfactuals or abstract distributions [7.09435109588801]
In this paper, we construe causal inference as treatment-wise predictions for finite populations where all assumptions are testable.
The new framework highlights the model-dependence of causal claims as well as the difference between statistical and scientific inference.
arXiv Detail & Related papers (2024-07-24T16:07:57Z)
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve the optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well performing models.
Our findings suggest that such unfairness can be readily found in real life and it may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z)
- A Survey on Evidential Deep Learning For Single-Pass Uncertainty Estimation [0.0]
This survey familiarizes the reader with an alternative class of models based on the concept of Evidential Deep Learning: for unfamiliar data, they admit "what they don't know" and fall back onto a prior belief.
arXiv Detail & Related papers (2021-10-06T20:13:57Z)
- Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Achieving Equalized Odds by Resampling Sensitive Attributes [13.114114427206678]
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.
This differentiable functional is used as a penalty driving the model parameters towards equalized odds.
We develop a formal hypothesis test to detect whether a prediction rule violates this property, the first such test in the literature.
arXiv Detail & Related papers (2020-06-08T00:18:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.