Graphical Modelling without Independence Assumptions for Uncentered Data
- URL: http://arxiv.org/abs/2408.02393v1
- Date: Mon, 5 Aug 2024 11:40:23 GMT
- Title: Graphical Modelling without Independence Assumptions for Uncentered Data
- Authors: Bailey Andrew, David R. Westhead, Luisa Cutillo
- Abstract summary: We show how the zero-mean assumption can cause egregious errors in modelling,
and propose a relaxation of the zero-mean assumption that avoids such errors.
- Score: 0.30723404270319693
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The independence assumption is a useful tool to increase the tractability of one's modelling framework. However, this assumption does not match reality; failing to take dependencies into account can cause models to fail dramatically. The field of multi-axis graphical modelling (also called multi-way or Kronecker-separable modelling) has seen growth over the past decade, but these models require that the data have zero mean. In the multi-axis case, inference is typically done in the single-sample scenario, making mean inference impossible. In this paper, we demonstrate how the zero-mean assumption can cause egregious modelling errors, and propose a relaxation of the zero-mean assumption that avoids such errors. Specifically, we propose the "Kronecker-sum-structured mean" assumption, which leads to models with nonconvex-but-unimodal log-likelihoods that can be optimised efficiently with coordinate descent.
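To make the "Kronecker-sum-structured mean" idea concrete: for a single matrix-valued sample X, such a mean decomposes additively across axes, mean[i, j] = a[i] + b[j]. The sketch below (a toy illustration with hypothetical names, not the authors' estimator) fits a and b by coordinate descent on squared error, alternately updating one axis effect with the other held fixed.

```python
import numpy as np

def fit_kronecker_sum_mean(X, n_iters=50):
    """Fit mean[i, j] = a[i] + b[j] to a single matrix sample X
    by coordinate descent on squared error (illustrative toy)."""
    m, n = X.shape
    a = np.zeros(m)
    b = np.zeros(n)
    for _ in range(n_iters):
        # With b fixed, the optimal a[i] is the row mean of the residual.
        a = (X - b[None, :]).mean(axis=1)
        # With a fixed, the optimal b[j] is the column mean of the residual.
        b = (X - a[:, None]).mean(axis=0)
    # Resolve the additive indeterminacy (a + c, b - c) by centring b.
    shift = b.mean()
    return a + shift, b - shift

# Recover planted axis effects from a noisy single sample.
rng = np.random.default_rng(0)
a_true = rng.normal(size=5)
b_true = rng.normal(size=4)
b_true -= b_true.mean()          # match the centring convention above
X = a_true[:, None] + b_true[None, :] + 0.01 * rng.normal(size=(5, 4))
a_hat, b_hat = fit_kronecker_sum_mean(X)
```

Each coordinate update here has a closed form, which is what makes this additive structure estimable even from one sample, whereas an unrestricted per-entry mean would be unidentifiable.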
Related papers
- Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data [49.73114504515852]
We show that replacing the original real data with each generation's synthetic data does indeed tend towards model collapse.
We demonstrate that accumulating the successive generations of synthetic data alongside the original real data avoids model collapse.
arXiv Detail & Related papers (2024-04-01T18:31:24Z)
- Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
arXiv Detail & Related papers (2023-10-23T04:35:58Z)
- ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, "Denoising Diffusion Probabilistic Models" (DDPMs), for chirographic data.
Our model, named "ChiroDiff", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z)
- Uncertainty estimation under model misspecification in neural network regression [3.2622301272834524]
We study the effect of the model choice on uncertainty estimation.
We highlight that under model misspecification, aleatoric uncertainty is not properly captured.
arXiv Detail & Related papers (2021-11-23T10:18:41Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- Contextual Dropout: An Efficient Sample-Dependent Dropout Module [60.63525456640462]
Dropout has been demonstrated as a simple and effective module to regularize the training process of deep neural networks.
We propose contextual dropout with an efficient structural design as a simple and scalable sample-dependent dropout module.
Our experimental results show that the proposed method outperforms baseline methods in terms of both accuracy and quality of uncertainty estimation.
arXiv Detail & Related papers (2021-03-06T19:30:32Z)
- On Statistical Efficiency in Learning [37.08000833961712]
We address the challenge of model selection to strike a balance between model fitting and model complexity.
We propose an online algorithm that sequentially expands the model complexity to enhance selection stability and reduce cost.
Experimental studies show that the proposed method has desirable predictive power and significantly less computational cost than some popular methods.
arXiv Detail & Related papers (2020-12-24T16:08:29Z)
- not-MIWAE: Deep Generative Modelling with Missing not at Random Data [21.977065542645082]
We present an approach for building and fitting deep latent variable models (DLVMs) in cases where the missing process is dependent on the missing data.
Specifically, a deep neural network enables us to flexibly model the conditional distribution of the missingness pattern given the data.
We show on various kinds of data sets and missingness patterns that explicitly modelling the missing process can be invaluable.
arXiv Detail & Related papers (2020-06-23T10:06:21Z)
- Causal Inference with Deep Causal Graphs [0.0]
Parametric causal modelling techniques rarely provide functionality for counterfactual estimation.
Deep Causal Graphs is an abstract specification of the required functionality for a neural network to model causal distributions.
We demonstrate its expressive power in modelling complex interactions and showcase applications to machine learning explainability and fairness.
arXiv Detail & Related papers (2020-06-15T13:03:33Z)
- Nonparametric Estimation in the Dynamic Bradley-Terry Model [69.70604365861121]
We develop a novel estimator that relies on kernel smoothing to pre-process the pairwise comparisons over time.
We derive time-varying oracle bounds for both the estimation error and the excess risk in the model-agnostic setting.
arXiv Detail & Related papers (2020-02-28T21:52:49Z)
- Interventions and Counterfactuals in Tractable Probabilistic Models: Limitations of Contemporary Transformations [12.47276164048813]
We show that when transforming SPNs to a causal graph, interventional reasoning reduces to computing marginal distributions.
We first provide an algorithm for constructing a causal graph from a PSDD, which introduces augmented variables.
arXiv Detail & Related papers (2020-01-29T15:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.