A machine learning approach for efficient multi-dimensional integration
- URL: http://arxiv.org/abs/2009.06697v1
- Date: Mon, 14 Sep 2020 19:11:14 GMT
- Title: A machine learning approach for efficient multi-dimensional integration
- Authors: Boram Yoon
- Abstract summary: We propose a novel multi-dimensional integration algorithm using a machine learning (ML) technique.
We show that the new algorithm provides integral estimates with more than an order of magnitude smaller uncertainties than those of the VEGAS algorithm in most of the test cases.
- Score: 3.42658286826597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel multi-dimensional integration algorithm using a machine
learning (ML) technique. After training a ML regression model to mimic a target
integrand, the regression model is used to evaluate an approximation of the
integral. Then, the difference between the approximation and the true answer is
calculated to correct the bias in the approximation of the integral induced by
a ML prediction error. Because of the bias correction, the final estimate of
the integral is unbiased and has a statistically correct error estimation. The
performance of the proposed algorithm is demonstrated on six different types of
integrands at various dimensions and integrand difficulties. The results show
that, for the same total number of integrand evaluations, the new algorithm
provides integral estimates with more than an order of magnitude smaller
uncertainties than those of the VEGAS algorithm in most of the test cases.
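The procedure the abstract describes (fit a surrogate to the integrand, integrate the surrogate, then Monte Carlo estimate the residual to remove the surrogate's bias) can be sketched in a few lines. The 1-D integrand, polynomial surrogate, and sample sizes below are illustrative stand-ins for the paper's ML regression models, not its actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Toy 1-D integrand on [0, 1]."""
    return np.exp(-4.0 * (x - 0.5) ** 2)

# Step 1: train a cheap surrogate on a small sample of integrand
# evaluations (a polynomial fit stands in for an ML regression model).
x_train = rng.uniform(0.0, 1.0, 64)
coeffs = np.polynomial.polynomial.polyfit(x_train, f(x_train), deg=6)
surrogate = np.polynomial.Polynomial(coeffs)

# Step 2: integrate the surrogate -- here exactly, via its anti-derivative.
anti = surrogate.integ()
approx = anti(1.0) - anti(0.0)

# Step 3: bias correction -- Monte Carlo estimate of E[f - surrogate].
# Adding it back makes the final estimate unbiased, and the residual's
# spread gives a statistically valid error bar.
x_mc = rng.uniform(0.0, 1.0, 10_000)
resid = f(x_mc) - surrogate(x_mc)
estimate = approx + resid.mean()
stderr = resid.std(ddof=1) / np.sqrt(len(resid))
```

Because the residual f - surrogate is much smaller than f itself, its Monte Carlo error is correspondingly smaller than that of a plain Monte Carlo estimate with the same number of integrand evaluations.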
Related papers
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variates functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the Walk-on-sphere algorithm.
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- A general error analysis for randomized low-rank approximation with application to data assimilation [42.57210316104905]
We propose a framework for the analysis of the low-rank approximation error in Frobenius norm for centered and non-standard matrices.
Under minimal assumptions, we derive accurate bounds in expectation and probability.
Our bounds have clear interpretations that enable us to derive properties and motivate practical choices.
arXiv Detail & Related papers (2024-05-08T04:51:56Z)
- Convergence of Expectation-Maximization Algorithm with Mixed-Integer Optimization [5.319361976450982]
This paper introduces a set of conditions that ensure the convergence of a specific class of EM algorithms.
Our results offer a new analysis technique for iterative algorithms that solve mixed-integer non-linear optimization problems.
arXiv Detail & Related papers (2024-01-31T11:42:46Z)
- Improving Accuracy Without Losing Interpretability: A ML Approach for Time Series Forecasting [4.025941501724274]
In time series forecasting, decomposition-based algorithms break aggregate data into meaningful components.
Recent algorithms often combine machine learning (hereafter ML) methodology with decomposition to improve prediction accuracy.
We propose the W-R algorithm, a hybrid algorithm that combines decomposition and ML from a novel perspective.
arXiv Detail & Related papers (2022-12-13T14:51:10Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian Variational Inference whose updates satisfy the positive definite constraint on the variational covariance matrix.
Due to its black-box nature, the resulting algorithm, MGVBP, stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z)
- Splitting numerical integration for matrix completion [0.0]
We propose a new algorithm for low rank matrix approximation.
The algorithm is an adaptation of classical gradient descent within the framework of optimization.
Experimental results show that our approach has good scalability for large-scale problems.
arXiv Detail & Related papers (2022-02-14T04:45:20Z)
- Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e. the empirical model error equals the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z)
- Efficient Consensus Model based on Proximal Gradient Method applied to Convolutional Sparse Problems [2.335152769484957]
We derive and detail a theoretical analysis of an efficient consensus algorithm based on the proximal gradient (PG) approach.
The proposed algorithm is also applied to another particular convolutional problem for the anomaly detection task.
arXiv Detail & Related papers (2020-11-19T20:52:48Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can learn a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
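The two Neural Control Variates entries above rest on the classical control-variate identity: if g approximates f and the integral of g is known in closed form, then the integral of f equals that of g plus the expectation of f - g under the sampling measure, and the residual f - g has lower variance than f itself. A minimal sketch with a hand-picked (not learned) quadratic control variate, chosen here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Integrand on [0, pi]; true integral is pi/2."""
    return np.sin(x) ** 2

def g(x):
    """Quadratic control variate approximating f on [0, pi]."""
    return (4.0 / np.pi**2) * x * (np.pi - x)

G = 2.0 * np.pi / 3.0  # closed-form integral of g over [0, pi]

x = rng.uniform(0.0, np.pi, 50_000)
plain = np.pi * f(x).mean()            # plain Monte Carlo estimate
cv = G + np.pi * (f(x) - g(x)).mean()  # control-variate estimate

# The residual f - g has a smaller standard deviation than f, so the
# control-variate estimate carries a proportionally smaller error bar.
```

The learned variants in the papers above replace the hand-picked g with a neural network, trained either by minimizing a variance-related loss or, in the automatic-integration approach, by modeling the anti-derivative so that the network's integral is available exactly.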
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.