On Constraint Definability in Tractable Probabilistic Models
- URL: http://arxiv.org/abs/2001.11349v1
- Date: Wed, 29 Jan 2020 16:05:56 GMT
- Title: On Constraint Definability in Tractable Probabilistic Models
- Authors: Ioannis Papantonis, Vaishak Belle
- Abstract summary: A wide variety of problems require predictions to be integrated with reasoning about constraints.
We consider a mathematical inquiry on how the learning of tractable probabilistic models, such as sum-product networks, is possible while incorporating constraints.
- Score: 12.47276164048813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incorporating constraints is a major concern in probabilistic machine
learning. A wide variety of problems require predictions to be integrated with
reasoning about constraints, from modelling routes on maps to approving loan
predictions. In the former, we may require the prediction model to respect the
presence of physical paths between the nodes on the map, and in the latter, we
may require that the prediction model respect fairness constraints that ensure
that outcomes are not subject to bias. Broadly speaking, constraints may be
probabilistic, logical or causal, but the overarching challenge is to determine
if and how a model can be learnt that handles all the declared constraints. To
the best of our knowledge, this is largely an open problem. In this paper, we
consider a mathematical inquiry on how the learning of tractable probabilistic
models, such as sum-product networks, is possible while incorporating
constraints.
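To make the notion of tractability concrete, here is a minimal Python sketch of a sum-product network over two binary variables; it is an illustration only, and the structure, variable names and weights are assumptions rather than anything taken from the paper. The point is that both joint and marginal queries are answered in a single bottom-up pass, which is what "tractable" refers to in the abstract.

```python
# Minimal, illustrative sum-product network (SPN) over two binary variables.
# Not the paper's construction: structure and weights are arbitrary examples.

class Leaf:
    """Bernoulli leaf over a single variable."""
    def __init__(self, var, p_true):
        self.var, self.p_true = var, p_true

    def value(self, evidence):
        # A variable absent from the evidence is marginalised out: the leaf returns 1.
        if self.var not in evidence:
            return 1.0
        return self.p_true if evidence[self.var] else 1.0 - self.p_true

class Product:
    """Product node: children have disjoint scopes."""
    def __init__(self, children):
        self.children = children

    def value(self, evidence):
        result = 1.0
        for child in self.children:
            result *= child.value(evidence)
        return result

class Sum:
    """Sum node: weighted mixture of children over the same scope."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, child)

    def value(self, evidence):
        return sum(w * child.value(evidence) for w, child in self.weighted_children)

# A mixture of two product distributions over (X1, X2); mixture weights sum to 1.
component_a = Product([Leaf("X1", 0.9), Leaf("X2", 0.8)])
component_b = Product([Leaf("X1", 0.2), Leaf("X2", 0.3)])
spn = Sum([(0.6, component_a), (0.4, component_b)])

print(spn.value({"X1": True, "X2": False}))  # joint P(X1=1, X2=0) = 0.164
print(spn.value({"X1": True}))               # marginal P(X1=1) = 0.62, X2 summed out
```

Both queries cost one traversal of the network, linear in its size; the open question raised by the abstract is how such learnt models can also be made to respect declared probabilistic, logical or causal constraints.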
Related papers
- On the nonconvexity of some push-forward constraints and its
consequences in machine learning [0.0]
The push-forward operation enables one to redistribute a probability measure through a map.
It plays a key role in statistics and optimization: many learning problems, notably those arising from optimal transport, involve constraints on the push-forward measure.
The paper studies the nonconvexity of such constraints and its consequences for learning predictors subject to them.
arXiv Detail & Related papers (2024-03-12T10:06:48Z) - Model Complexity of Program Phases [0.5439020425818999]
In resource limited computing systems, sequence prediction models must operate under tight constraints.
Various models are available that cater to prediction under these conditions that in some way focus on reducing the cost of implementation.
These resource constrained sequence prediction models, in practice, exhibit a fundamental tradeoff between the cost of implementation and the quality of their predictions.
arXiv Detail & Related papers (2023-10-05T19:50:15Z) - On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage (a schematic contrast between these two strategies is sketched after this list).
arXiv Detail & Related papers (2023-07-08T03:39:22Z) - Uncertainty estimation of pedestrian future trajectory using Bayesian
approximation [137.00426219455116]
In dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify forecasting uncertainty, which deterministic approaches fail to capture, using Bayesian approximation.
The effect of dropout weights and long-term prediction horizons on future-state uncertainty is also studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble based methods and generative model based methods, and explain their pros and cons when used in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - Chance-Constrained Active Inference [21.592135424253826]
Active Inference (ActInf) is an emerging theory that explains perception and action in biological agents.
We propose an alternative approach through chance constraints, which allow for a (typically small) probability of constraint violation.
We show how chance-constrained ActInf weights all imposed (prior) constraints on the generative model, allowing a trade-off between robust control and empirical chance constraint violation.
arXiv Detail & Related papers (2021-02-17T14:36:40Z) - Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems and presents a theorem on how close approximate solutions can be to the optimal solution.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z) - RegFlow: Probabilistic Flow-based Regression for Future Prediction [21.56753543722155]
We introduce a robust and flexible probabilistic framework that allows one to model future predictions with virtually no constraints on the modality or underlying probability distribution.
The resulting method dubbed RegFlow achieves state-of-the-art results on several benchmark datasets, outperforming competing approaches by a significant margin.
arXiv Detail & Related papers (2020-11-30T08:45:37Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z) - Estimation with Uncertainty via Conditional Generative Adversarial
Networks [3.829070379776576]
We propose a predictive probabilistic neural network model, which corresponds to a different manner of using the generator in a conditional Generative Adversarial Network (cGAN).
By reversing the input and output of ordinary cGAN, the model can be successfully used as a predictive model.
In addition, to measure the uncertainty of predictions, we introduce the entropy and relative entropy for regression problems and classification problems.
arXiv Detail & Related papers (2020-07-01T08:54:17Z) - An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
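The distinction drawn in "On Regularization and Inference with Label Constraints" between regularization with constraints and constrained inference can be made concrete with a short sketch. The snippet below is not the authors' formulation; the constraint (y1 implies y2), the penalty weight and the probabilities are hypothetical choices made only for illustration.

```python
# Illustrative contrast between two ways of using a label constraint:
# (1) penalising violations during training, (2) repairing outputs at inference.
# The constraint "y1 = 1 implies y2 = 1" and all numbers are assumptions.

def violation(p1, p2):
    """Soft degree to which P(y1=1)=p1 and P(y2=1)=p2 violate 'y1 implies y2'."""
    return max(0.0, p1 - p2)

# (1) Regularization with constraints: the training loss pays for violations,
# steering learning away from models inconsistent with the constraint.
def regularized_loss(data_loss, p1, p2, lam=1.0):
    return data_loss + lam * violation(p1, p2)

# (2) Constrained inference: decode freely, then correct a violating prediction
# by flipping the label the model is least confident about.
def constrained_decode(p1, p2):
    y1, y2 = p1 >= 0.5, p2 >= 0.5
    if y1 and not y2:
        if p1 - 0.5 < 0.5 - p2:
            y1 = False
        else:
            y2 = True
    return y1, y2

print(regularized_loss(0.30, p1=0.9, p2=0.4))  # 0.8: loss inflated by the violation
print(constrained_decode(p1=0.9, p2=0.4))      # (True, True): output repaired
```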
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.