Optimal statistical inference in the presence of systematic
uncertainties using neural network optimization based on binned Poisson
likelihoods with nuisance parameters
- URL: http://arxiv.org/abs/2003.07186v3
- Date: Wed, 13 Jan 2021 09:23:38 GMT
- Title: Optimal statistical inference in the presence of systematic
uncertainties using neural network optimization based on binned Poisson
likelihoods with nuisance parameters
- Authors: Stefan Wunsch and Simon Jörger and Roger Wolf and Günter Quast
- Abstract summary: This work presents a novel strategy that constructs the dimensionality reduction with neural networks for feature engineering and a differentiable formulation of histograms.
We discuss how this approach yields an estimate of the parameters of interest that is close to optimal.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data analysis in science, e.g., high-energy particle physics, is often
subject to an intractable likelihood if the observables and observations span a
high-dimensional input space. Typically the problem is solved by reducing the
dimensionality using feature engineering and histograms, whereby the latter
technique allows one to build the likelihood using Poisson statistics. However, in
the presence of systematic uncertainties represented by nuisance parameters in
the likelihood, the optimal dimensionality reduction with a minimal loss of
information about the parameters of interest is not known. This work presents a
novel strategy to construct the dimensionality reduction with neural networks
for feature engineering and a differentiable formulation of histograms so that
the full workflow can be optimized with the result of the statistical
inference, e.g., the variance of a parameter of interest, as objective. We
discuss how this approach yields an estimate of the parameters of interest that
is close to optimal, and we demonstrate the applicability of the technique with
a simple example based on pseudo-experiments and a more complex example from
high-energy particle physics.
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Probabilistic Iterative Hard Thresholding for Sparse Learning [2.5782973781085383]
We present an approach towards solving expectation objective optimization problems with cardinality constraints.
We prove convergence of the underlying process, and demonstrate the performance on two Machine Learning problems.
arXiv Detail & Related papers (2024-09-02T18:14:45Z)
- A Metaheuristic for Amortized Search in High-Dimensional Parameter Spaces [0.0]
We propose a new metaheuristic that drives dimensionality reductions from feature-informed transformations.
DR-FFIT implements an efficient sampling strategy that facilitates a gradient-free parameter search in high-dimensional spaces.
Our test data show that DR-FFIT improves the performance of random search and simulated annealing relative to well-established metaheuristics.
arXiv Detail & Related papers (2023-09-28T14:25:14Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow one to trade off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
- A Causality-Based Learning Approach for Discovering the Underlying Dynamics of Complex Systems from Partial Observations with Stochastic Parameterization [1.2882319878552302]
This paper develops a new iterative learning algorithm for complex turbulent systems with partial observations.
It alternates between identifying model structures, recovering unobserved variables, and estimating parameters.
Numerical experiments show that the new algorithm succeeds in identifying the model structure and providing suitable parameterizations for many complex nonlinear systems.
arXiv Detail & Related papers (2022-08-19T00:35:03Z)
- Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
The Dynamic Mode Decomposition has proved to be a very efficient technique to study dynamic data.
The application of this approach becomes problematic if the available data is incomplete because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
arXiv Detail & Related papers (2022-02-23T11:23:59Z)
- Low-rank statistical finite elements for scalable model-data synthesis [0.8602553195689513]
statFEM acknowledges a priori model misspecification by embedding forcing within the governing equations.
The method reconstructs the observed data-generating processes with minimal loss of information.
This article overcomes the associated scalability hurdle by embedding a low-rank approximation of the underlying dense covariance matrix.
arXiv Detail & Related papers (2021-09-10T09:51:43Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
- Misspecification-robust likelihood-free inference in high dimensions [13.934999364767918]
We introduce an extension of the popular Bayesian optimisation based approach to approximate discrepancy functions in a probabilistic manner.
Our approach achieves computational scalability for higher dimensional parameter spaces by using separate acquisition functions and discrepancies for each parameter.
The method successfully performs computationally efficient inference in a 100-dimensional space on canonical examples and compares favourably to existing modularised ABC methods.
arXiv Detail & Related papers (2020-02-21T16:06:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.