Approximation properties of slice-matching operators
- URL: http://arxiv.org/abs/2310.10869v1
- Date: Mon, 16 Oct 2023 22:32:43 GMT
- Title: Approximation properties of slice-matching operators
- Authors: Shiying Li and Caroline Moosmueller
- Abstract summary: Iterative slice-matching procedures are efficient schemes for transferring a source measure to a target measure, especially in high dimensions.
We explore approximation properties related to a single step of such schemes by examining an associated slice-matching operator.
- Score: 3.408452800179907
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Iterative slice-matching procedures are efficient schemes for transferring a
source measure to a target measure, especially in high dimensions. These
schemes have been successfully used in applications such as color transfer and
shape retrieval, and are guaranteed to converge under regularity assumptions.
In this paper, we explore approximation properties related to a single step of
such iterative schemes by examining an associated slice-matching operator,
depending on a source measure, a target measure, and slicing directions. In
particular, we demonstrate an invariance property with respect to the source
measure, an equivariance property with respect to the target measure, and
Lipschitz continuity concerning the slicing directions. We furthermore
establish error bounds corresponding to approximating the target measure by one
step of the slice-matching scheme and characterize situations in which the
slice-matching operator recovers the optimal transport map between two
measures. We also investigate connections to affine registration problems with
respect to (sliced) Wasserstein distances. These connections can also be
viewed as extensions to the invariance and equivariance properties of the
slice-matching operator and illustrate the extent to which slice-matching
schemes incorporate affine effects.
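The abstract describes a single step of an iterative slice-matching scheme: project source and target measures onto slicing directions, solve the resulting one-dimensional transport problems, and push the source forward accordingly. The following is a minimal illustrative sketch of such a step on empirical measures, assuming equal sample counts and quantile matching via sorting; it is not the paper's exact operator, and all names are hypothetical.

```python
import numpy as np

def slice_matching_step(X, Y, thetas):
    """One step of a slice-matching scheme (illustrative sketch).

    X: (n, d) source samples; Y: (n, d) target samples;
    thetas: (k, d) unit slicing directions.
    For each direction, the sorted source projections are matched
    to the sorted target projections (1D optimal transport), and
    the resulting displacements are averaged over directions.
    """
    n, d = X.shape
    update = np.zeros_like(X)
    for theta in thetas:
        xs = X @ theta                # 1D projections of the source
        ys = Y @ theta                # 1D projections of the target
        order = np.argsort(xs)
        matched = np.sort(ys)         # 1D OT: match sorted projections
        disp = np.empty(n)
        disp[order] = matched - xs[order]
        update += np.outer(disp, theta)
    return X + update / len(thetas)
```

With a single slicing direction, one step already matches the projected source quantiles to the projected target quantiles exactly; with several directions, the averaged displacement only partially matches each slice, which is why such schemes are iterated.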
Related papers
- Canonical Variates in Wasserstein Metric Space [16.668946904062032]
We employ the Wasserstein metric to measure distances between distributions, which are then used by distance-based classification algorithms.
Central to our investigation is dimension reduction within the Wasserstein metric space to enhance classification accuracy.
We introduce a novel approach grounded in the principle of maximizing Fisher's ratio, defined as the ratio of between-class variation to within-class variation.
arXiv Detail & Related papers (2024-05-24T17:59:21Z) - Measure transfer via stochastic slicing and matching [1.4594704809280983]
This paper studies iterative schemes for measure transfer and approximation problems defined through a slicing-and-matching procedure.
The main contribution of this paper is an almost sure convergence proof for slicing-and-matching schemes.
arXiv Detail & Related papers (2023-07-11T18:12:30Z) - Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic
Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
arXiv Detail & Related papers (2021-10-12T22:04:19Z) - Model identification and local linear convergence of coordinate descent [74.87531444344381]
We show that cyclic coordinate descent achieves model identification in finite time for a wide class of functions.
We also prove explicit local linear convergence rates for coordinate descent.
arXiv Detail & Related papers (2020-10-22T16:03:19Z) - Optimal covariant quantum measurements [0.0]
We discuss symmetric quantum measurements and the associated covariant observables modelled, respectively, as instruments and positive-operator-valued measures.
The emphasis of this work is on the optimality properties of the measurements, namely, extremality, informational completeness, and the rank-1 property.
arXiv Detail & Related papers (2020-09-29T15:08:07Z) - The Advantage of Conditional Meta-Learning for Biased Regularization and
Fine-Tuning [50.21341246243422]
Biased regularization and fine-tuning are two recent meta-learning approaches.
We propose conditional meta-learning, inferring a conditioning function that maps a task's side information into a meta-parameter vector.
We then propose a convex meta-algorithm providing a comparable advantage also in practice.
arXiv Detail & Related papers (2020-08-25T07:32:16Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear
Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z) - Deep Hough Transform for Semantic Line Detection [70.28969017874587]
We focus on a fundamental task of detecting meaningful line structures, a.k.a. semantic lines, in natural scenes.
Previous methods neglect the inherent characteristics of lines, leading to sub-optimal performance.
We propose a one-shot end-to-end learning framework for line detection.
arXiv Detail & Related papers (2020-03-10T13:08:42Z) - Learning Flat Latent Manifolds with VAEs [16.725880610265378]
We propose an extension to the framework of variational auto-encoders, where the Euclidean metric is a proxy for the similarity between data points.
We replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one.
We evaluate our method on a range of data-sets, including a video-tracking benchmark.
arXiv Detail & Related papers (2020-02-12T09:54:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.