Nonparametric estimation of continuous DPPs with kernel methods
- URL: http://arxiv.org/abs/2106.14210v1
- Date: Sun, 27 Jun 2021 11:57:14 GMT
- Title: Nonparametric estimation of continuous DPPs with kernel methods
- Authors: Michaël Fanuel and Rémi Bardenet
- Abstract summary: Parametric and nonparametric inference methods have been proposed in the finite case, i.e. when the point patterns live in a finite ground set.
We show that a restricted version of the nonparametric maximum likelihood (MLE) problem falls within the scope of a recent representer theorem for nonnegative functions in an RKHS.
We propose, analyze, and demonstrate a fixed point algorithm to solve this finite-dimensional problem.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Determinantal Point Processes (DPPs) are statistical models for repulsive point
patterns. Both sampling and inference are tractable for DPPs, a rare feature
among models with negative dependence that explains their popularity in machine
learning and spatial statistics. Parametric and nonparametric inference methods
have been proposed in the finite case, i.e. when the point patterns live in a
finite ground set. In the continuous case, only parametric methods have been
investigated, while nonparametric maximum likelihood for DPPs -- an
optimization problem over trace-class operators -- has remained an open
question. In this paper, we show that a restricted version of this maximum
likelihood (MLE) problem falls within the scope of a recent representer theorem
for nonnegative functions in an RKHS. This leads to a finite-dimensional
problem, with strong statistical ties to the original MLE. Moreover, we
propose, analyze, and demonstrate a fixed point algorithm to solve this
finite-dimensional problem. Finally, we also provide a controlled estimate of
the correlation kernel of the DPP, thus providing more interpretability.
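For reference, the finite-case objective mentioned in the abstract has a simple closed form: with an L-ensemble kernel matrix L, a subset A is observed with probability det(L_A)/det(L+I). The sketch below evaluates the corresponding log-likelihood with numpy; it is only an illustration of the finite-dimensional objective, not the paper's continuous, RKHS-restricted method, and all names and shapes are placeholders.

```python
# Illustrative sketch only: the finite (L-ensemble) DPP log-likelihood,
# not the continuous, RKHS-restricted problem studied in the paper.
import numpy as np

def dpp_log_likelihood(L, samples):
    """Log-likelihood of observed subsets under a finite L-ensemble DPP.

    P(A) = det(L_A) / det(L + I), so
    log-lik = sum_i log det(L_{A_i}) - n * log det(L + I).
    """
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    ll = 0.0
    for A in samples:                 # each A is a list/array of item indices
        L_A = L[np.ix_(A, A)]
        _, logdet_A = np.linalg.slogdet(L_A)
        ll += logdet_A
    return ll - len(samples) * logdet_norm

# Toy usage: a 5-point ground set with an RBF kernel and two observed subsets.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
L = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(dpp_log_likelihood(L, [[0, 2, 4], [1, 3]]))
```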
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - On Convergence Analysis of Policy Iteration Algorithms for Entropy-Regularized Stochastic Control Problems [19.742628365680353]
We investigate the convergence of the Policy Iteration Algorithm (PIA) for a class of general continuous-time entropy-regularized control problems.
We show that our approach can also be extended to the case where the diffusion contains the control, in the one-dimensional setting and without many extra constraints on the coefficients.
arXiv Detail & Related papers (2024-06-16T14:31:26Z) - Randomized Physics-Informed Machine Learning for Uncertainty
Quantification in High-Dimensional Inverse Problems [49.1574468325115]
We propose a physics-informed machine learning method for uncertainty quantification in high-dimensional inverse problems.
We show analytically and through comparison with Hamiltonian Monte Carlo that the rPICKLE posterior converges to the true posterior given by the Bayes rule.
arXiv Detail & Related papers (2023-12-11T07:33:16Z) - FaDIn: Fast Discretized Inference for Hawkes Processes with General
Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach estimates pattern latency more accurately than the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z) - Determinantal point processes based on orthogonal polynomials for
sampling minibatches in SGD [0.0]
Stochastic gradient descent (SGD) is a cornerstone of machine learning.
The default minibatch construction involves uniformly sampling a subset of the desired size.
We show how specific DPPs and a string of controlled approximations can lead to gradient estimators with a variance that decays faster with the batchsize than under uniform sampling.
arXiv Detail & Related papers (2021-12-11T15:09:19Z) - Gaussian Determinantal Processes: a new model for directionality in data [10.591948377239921]
In this work, we investigate a parametric family of Gaussian DPPs with a clearly interpretable effect of parametric modulation on the observed points.
We show that parameter modulation impacts the observed points by introducing directionality in their repulsion structure, and the principal directions correspond to the directions of maximal dependency.
This model readily yields a novel and viable alternative to Principal Component Analysis (PCA) as a dimension reduction tool that favors directions along which the data is most spread out.
arXiv Detail & Related papers (2021-11-19T00:57:33Z) - Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z) - Simple and Near-Optimal MAP Inference for Nonsymmetric DPPs [3.3504365823045044]
We study the problem of maximum a posteriori (MAP) inference for determinantal point processes defined by a nonsymmetric kernel matrix.
We obtain the first multiplicative approximation guarantee for this problem using local search.
Our approximation factor of $k^{O(k)}$ is nearly tight, and we show theoretically and experimentally that it compares favorably to the state-of-the-art methods for this problem (a generic greedy baseline for the underlying log-determinant maximization is sketched after this list).
arXiv Detail & Related papers (2021-02-10T09:34:44Z) - Determinantal Point Processes Implicitly Regularize Semi-parametric
Regression Problems [13.136143245702915]
We discuss the use of a finite Determinantal Point Process (DPP) for approximating semi-parametric models.
With the help of this formalism, we derive a key identity illustrating the implicit regularization effect of determinantal sampling.
Also, a novel projected Nyström approximation is defined and used to derive a bound on the expected risk for the corresponding approximation.
arXiv Detail & Related papers (2020-11-13T15:22:16Z) - Determinantal Point Processes in Randomized Numerical Linear Algebra [80.27102478796613]
Randomized Numerical Linear Algebra (RandNLA) uses randomness to develop improved algorithms for matrix problems that arise in scientific computing, data science, machine learning, etc.
Recent work has uncovered deep and fruitful connections between DPPs and RandNLA which lead to new guarantees and improved algorithms.
arXiv Detail & Related papers (2020-05-07T00:39:52Z) - A Robust Functional EM Algorithm for Incomplete Panel Count Data [66.07942227228014]
We propose a functional EM algorithm to estimate the counting process mean function under a missing-completely-at-random (MCAR) assumption.
The proposed algorithm wraps several popular panel count inference methods, seamlessly deals with incomplete counts and is robust to misspecification of the Poisson process assumption.
We illustrate the utility of the proposed algorithm through numerical experiments and an analysis of smoking cessation data.
arXiv Detail & Related papers (2020-03-02T20:04:38Z)
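As referenced in the MAP inference entry above, the underlying combinatorial problem is to maximize det(L_S) over subsets S of a fixed size k. The sketch below is a generic greedy baseline for this log-determinant maximization, shown only for illustration; it is not the local-search algorithm of that paper, and it assumes a symmetric PSD kernel rather than a nonsymmetric one.

```python
# Generic greedy baseline for DPP MAP inference (maximize det(L_S), |S| = k).
# Illustrative only; kernel and sizes below are placeholders.
import numpy as np

def greedy_dpp_map(L, k):
    """Greedily pick k items maximizing log det(L_S)."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if logdet > best_val:
                best_j, best_val = j, logdet
        selected.append(best_j)
    return selected

# Toy usage on a small PSD kernel.
rng = np.random.default_rng(1)
B = rng.normal(size=(6, 3))
L = B @ B.T + 1e-6 * np.eye(6)
print(greedy_dpp_map(L, 3))
```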
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.