Probabilistic size-and-shape functional mixed models
- URL: http://arxiv.org/abs/2411.18416v1
- Date: Wed, 27 Nov 2024 15:00:33 GMT
- Title: Probabilistic size-and-shape functional mixed models
- Authors: Fangyi Wang, Karthik Bharath, Oksana Chkrebtii, Sebastian Kurtek
- Abstract summary: We show that it is possible to recover the size-and-shape of a square-integrable $\mu$ under a functional mixed model.
A random object-level unitary transformation then captures size-and-shape \emph{preserving} deviations of $\mu$ from an individual function.
- Score: 6.424762079392286
- Abstract: The reliable recovery and uncertainty quantification of a fixed effect function $\mu$ in a functional mixed model, for modelling population- and object-level variability in noisily observed functional data, is a notoriously challenging task: variations along the $x$ and $y$ axes are confounded with additive measurement error, and cannot in general be disentangled. The question then as to what properties of $\mu$ may be reliably recovered becomes important. We demonstrate that it is possible to recover the size-and-shape of a square-integrable $\mu$ under a Bayesian functional mixed model. The size-and-shape of $\mu$ is a geometric property invariant to a family of space-time unitary transformations, viewed as rotations of the Hilbert space, that jointly transform the $x$ and $y$ axes. A random object-level unitary transformation then captures size-and-shape \emph{preserving} deviations of $\mu$ from an individual function, while a random linear term and measurement error capture size-and-shape \emph{altering} deviations. The model is regularized by appropriate priors on the unitary transformations, posterior summaries of which may then be suitably interpreted as optimal data-driven rotations of a fixed orthonormal basis for the Hilbert space. Our numerical experiments demonstrate utility of the proposed model, and superiority over the current state-of-the-art.
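A minimal numpy sketch of this generative structure is given below, assuming a truncated cosine basis for the Hilbert space and a near-identity random rotation acting on basis coefficients; the constructions and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative simulation of the generative structure in the abstract:
# each curve is a size-and-shape-preserving rotation of a fixed effect mu,
# plus a size-and-shape-altering linear term and additive measurement error.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)

K = 8  # truncation level of an orthonormal cosine basis for L^2[0, 1]
basis = np.stack([np.ones_like(x)] +
                 [np.sqrt(2) * np.cos(np.pi * k * x) for k in range(1, K)])

mu_coef = rng.normal(size=K)  # basis coefficients of the fixed effect mu

def small_random_rotation(k, eps=0.15):
    """Near-identity orthogonal matrix: QR of a perturbed identity."""
    q, r = np.linalg.qr(np.eye(k) + eps * rng.normal(size=(k, k)))
    return q * np.sign(np.diag(r))  # fix column signs so q stays near I

curves = []
for _ in range(5):
    U = small_random_rotation(K)          # object-level unitary: preserves ||mu||
    a, b = rng.normal(scale=0.2, size=2)  # random linear term: alters size/shape
    noise = rng.normal(scale=0.05, size=x.size)
    curves.append((U @ mu_coef) @ basis + a + b * x + noise)

print(np.round([np.linalg.norm(c) for c in curves], 2))
```

Note how the rotated term leaves the norm of $\mu$'s coefficient vector unchanged, while the linear term and noise do not; this is the preserving/altering split the abstract describes.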
Related papers
- Scaling Laws in Linear Regression: Compute, Parameters, and Data [86.48154162485712]
We study the theory of scaling laws in an infinite dimensional linear regression setup.
We show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$, where $M$ is the model size and $N$ the number of data samples.
Our theory is consistent with the empirical neural scaling laws and verified by numerical simulation.
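As a quick numerical illustration of this rate (reading $M$ as the parameter count and $N$ as the sample size, per the paper's title), a hypothetical sketch:

```python
import numpy as np

# Evaluate the reducible-error rate Theta(M^{-(a-1)} + N^{-(a-1)/a}):
# with exponent a > 1, the error is parameter-limited until M^{-(a-1)}
# drops below the data-limited floor N^{-(a-1)/a}.
a = 2.0
for M, N in [(1e2, 1e6), (1e3, 1e6), (1e4, 1e6), (1e5, 1e6)]:
    err = M ** (-(a - 1)) + N ** (-(a - 1) / a)
    print(f"M={M:>8.0f}  N={N:.0f}  rate ~ {err:.2e}")
```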
arXiv Detail & Related papers (2024-06-12T17:53:29Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
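For intuition, the sketch below computes a path/variation-type norm $\sum_j |a_j|\,\|w_j\|$ for a two-layer ReLU network, the kind of non-RKHS norm often studied in this literature; the network and constants are purely illustrative, not the paper's construction.

```python
import numpy as np

# A two-layer ReLU network and its path norm (illustrative stand-in for
# the bounded-norm function classes discussed in the paper).
rng = np.random.default_rng(1)
d, width = 10, 512
W = rng.normal(size=(width, d)) / np.sqrt(d)  # hidden-layer weights
a = rng.normal(size=width) / width            # output weights

def f(x):
    return np.maximum(W @ x, 0.0) @ a         # two-layer ReLU network

path_norm = np.sum(np.abs(a) * np.linalg.norm(W, axis=1))
print(f"path norm = {path_norm:.3f}, f(0.5*ones) = {f(0.5 * np.ones(d)):.3f}")
```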
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - A Bi-variant Variational Model for Diffeomorphic Image Registration with
Relaxed Jacobian Determinant Constraints [17.93018427389816]
We propose a new bi-variant diffeomorphic image registration model.
A soft constraint on the Jacobian equation $\det(\nabla\bm{\varphi}(\bm{x})) = f(\bm{x}) > 0$ allows local deformations to shrink and grow within a flexible range.
A positivity constraint is imposed on the optimization of the relaxation function $f(\bm{x})$, and a regularizer is used to ensure the smoothness of $f(\bm{x})$.
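A minimal finite-difference check of this pointwise constraint, with a hypothetical smooth deformation standing in for $\bm{\varphi}$:

```python
import numpy as np

# Check that a 2-D deformation phi satisfies det(grad phi) > 0 on a grid,
# i.e. is locally orientation-preserving (illustrative, not the paper's code).
n = 64
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Example deformation: identity plus a small sinusoidal displacement.
phi1 = X + 0.05 * np.sin(2 * np.pi * Y)
phi2 = Y + 0.05 * np.sin(2 * np.pi * X)

d1x, d1y = np.gradient(phi1, xs, xs)  # partials of phi1
d2x, d2y = np.gradient(phi2, xs, xs)  # partials of phi2
jac = d1x * d2y - d1y * d2x           # det(grad phi) pointwise

print(f"min det = {jac.min():.3f}, max det = {jac.max():.3f}")  # stays > 0
```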
arXiv Detail & Related papers (2023-08-04T15:47:06Z) - Regular Variation in Hilbert Spaces and Principal Component Analysis for
Functional Extremes [1.6734018640023431]
We place ourselves in a Peaks-Over-Threshold framework where a functional extreme is defined as an observation $X$ whose $L^2$-norm $\|X\|$ is comparatively large.
Our goal is to propose a dimension reduction framework resulting in finite-dimensional projections for such extreme observations.
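A toy version of this pipeline, keeping only the curves whose discrete $L^2$ norm exceeds a high empirical quantile and running PCA on them; the data and threshold are made up for illustration.

```python
import numpy as np

# Peaks-over-threshold for functions: threshold on the L2 norm, then
# reduce the extreme curves to a few principal components.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
X = (rng.standard_t(df=3, size=(500, 1)) * np.sin(2 * np.pi * t)
     + 0.3 * rng.normal(size=(500, t.size)))    # heavy-tailed functional sample

norms = np.linalg.norm(X, axis=1) / np.sqrt(t.size)  # discrete L2 norms
u = np.quantile(norms, 0.95)                         # high threshold
extremes = X[norms > u]

# PCA on the extreme observations (one could also use X/||X||, the angular part).
centered = extremes - extremes.mean(axis=0)
_, s, Vt = np.linalg.svd(centered, full_matrices=False)
print(f"{len(extremes)} extremes; top-3 variance share = "
      f"{(s[:3] ** 2).sum() / (s ** 2).sum():.2f}")
```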
arXiv Detail & Related papers (2023-08-02T09:12:03Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that, under these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees [11.841312820944774]
We propose a measure -- that we call $\textit{Stability}$ -- to quantify the robustness of counterfactuals to potential model changes for differentiable models.
Our main contribution is to show that counterfactuals with a sufficiently high value of $\textit{Stability}$ will remain valid after potential model changes with high probability.
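The Monte Carlo proxy below conveys the flavour of such a measure (mean score minus a multiple of its spread under small perturbations); it is an illustrative stand-in, not the paper's exact definition of $\textit{Stability}$.

```python
import numpy as np

# Illustrative stability-style score: a counterfactual x_cf is "stable" if
# the model still scores it on the desired side of the decision boundary
# under small random perturbations.
rng = np.random.default_rng(3)

def model(x, w=np.array([1.5, -2.0]), b=0.1):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # toy differentiable classifier

def stability(x_cf, sigma=0.1, k=1.0, n=1000):
    xs = x_cf + sigma * rng.normal(size=(n, x_cf.size))
    scores = model(xs)
    return scores.mean() - k * scores.std()    # high => robustly valid

x_cf = np.array([1.0, 0.2])                    # candidate counterfactual
print(f"score = {model(x_cf):.3f}, stability = {stability(x_cf):.3f}")
```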
arXiv Detail & Related papers (2023-05-19T20:48:05Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
The characterisation of brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
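A generic rejection-ABC loop shows the LFI idea in miniature; the exponential-decay simulator below is a made-up stand-in for the paper's dMRI forward model, and the paper itself uses modern neural LFI rather than plain rejection.

```python
import numpy as np

# Likelihood-free inference in the rejection-ABC style: keep prior draws
# whose simulated data land near the observed data.
rng = np.random.default_rng(4)

def simulator(theta, n=50):
    # Hypothetical forward model: exponential signal decay with rate theta.
    b = np.linspace(0.0, 3.0, n)               # stand-in for b-shell values
    return np.exp(-theta * b) + 0.02 * rng.normal(size=n)

observed = simulator(1.3)                      # pretend this is the data

prior = rng.uniform(0.1, 3.0, size=20000)
keep = [th for th in prior
        if np.linalg.norm(simulator(th) - observed) < 0.25]
print(f"posterior mean ~ {np.mean(keep):.2f} from {len(keep)} accepted draws")
```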
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Ridge regression with adaptive additive rectangles and other piecewise
functional templates [0.0]
We propose an $L_2$-based penalization algorithm for functional linear regression models.
We show how our algorithm alternates between approximating a suitable template and solving a convex ridge-like problem.
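The ridge-like inner step can be sketched as shrinkage toward a template $\tau$ rather than toward zero; the closed form below is a standard generalized-ridge identity, with a rectangle template chosen arbitrarily for illustration, not the authors' algorithm.

```python
import numpy as np

# Ridge-like problem with a piecewise template tau:
#   minimize ||y - X b||^2 + lam * ||b - tau||^2
# with closed form b = (X'X + lam I)^{-1} (X'y + lam tau).
rng = np.random.default_rng(5)
n, p = 100, 30
X = rng.normal(size=(n, p))

tau = np.zeros(p)
tau[10:20] = 1.0                     # rectangle template (illustrative)
b_true = tau + 0.1 * rng.normal(size=p)
y = X @ b_true + 0.5 * rng.normal(size=n)

lam = 5.0
b_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * tau)
print(f"error vs truth: {np.linalg.norm(b_hat - b_true):.3f}")
```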
arXiv Detail & Related papers (2020-11-02T15:28:54Z) - Tight Nonparametric Convergence Rates for Stochastic Gradient Descent
under the Noiseless Linear Model [0.0]
We analyze the convergence of single-pass, fixed step-size stochastic gradient descent on the least-squares risk under this model.
As a special case, we analyze an online algorithm for estimating a real function on the unit interval from the noiseless observation of its value at randomly sampled points.
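A minimal version of that special case, assuming a truncated cosine basis and an arbitrary step size (not the paper's exact construction):

```python
import numpy as np

# Single-pass, fixed step-size SGD estimating a function on [0, 1] from
# noiseless point evaluations, in a truncated cosine basis.
rng = np.random.default_rng(6)
K = 20

def features(t):
    k = np.arange(1, K)
    return np.concatenate(([1.0], np.sqrt(2) * np.cos(np.pi * k * t)))

f_star = lambda t: np.abs(t - 0.5)      # target function
theta = np.zeros(K)
step = 0.02

for _ in range(20000):                  # single pass: each sample used once
    t = rng.uniform()
    phi = features(t)
    resid = phi @ theta - f_star(t)     # noiseless observation of f*(t)
    theta -= step * resid * phi         # SGD step on the squared loss

grid = np.linspace(0, 1, 5)
print(np.round([features(t) @ theta - f_star(t) for t in grid], 3))
```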
arXiv Detail & Related papers (2020-06-15T08:25:50Z) - A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian
Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the dimension of the feature space $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
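For reference, the textbook RFF ridge-regression construction being analyzed looks as follows; this toy run illustrates the estimator itself, not the paper's asymptotic analysis.

```python
import numpy as np

# Random Fourier feature (RFF) ridge regression for a Gaussian-type kernel.
rng = np.random.default_rng(7)
n, p, N = 400, 5, 200                      # samples, input dim, feature dim

X = rng.normal(size=(n, p))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

gamma = 1.0                                # kernel bandwidth parameter
W = rng.normal(scale=np.sqrt(gamma), size=(p, N))
b = rng.uniform(0, 2 * np.pi, size=N)
Z = np.sqrt(2.0 / N) * np.cos(X @ W + b)   # RFF map approximating the kernel

lam = 1e-2
alpha = np.linalg.solve(Z.T @ Z + lam * np.eye(N), Z.T @ y)
print(f"train MSE = {np.mean((Z @ alpha - y) ** 2):.4f}")
```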
arXiv Detail & Related papers (2020-06-09T02:05:40Z)