Asymmetric compressive learning guarantees with applications to
quantized sketches
- URL: http://arxiv.org/abs/2104.10061v1
- Date: Tue, 20 Apr 2021 15:37:59 GMT
- Title: Asymmetric compressive learning guarantees with applications to
quantized sketches
- Authors: Vincent Schellekens and Laurent Jacques
- Abstract summary: The compressive learning framework reduces the computational cost of training on large-scale datasets by compressing the data through a well-chosen feature map.
We study the relaxation where this feature map is allowed to be different for each phase.
We then instantiate this framework to the setting of quantized sketches, by proving that the LPD indeed holds for binary sketch contributions.
- Score: 15.814495790111323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The compressive learning framework reduces the computational cost of training
on large-scale datasets. In a sketching phase, the data is first compressed to
a lightweight sketch vector, obtained by mapping the data samples through a
well-chosen feature map, and averaging those contributions. In a learning
phase, the desired model parameters are then extracted from this sketch by
solving an optimization problem, which also involves a feature map. When the
feature map is identical during the sketching and learning phases, formal
statistical guarantees (excess risk bounds) have been proven.
However, the desirable properties of the feature map are different during
sketching and learning (e.g. quantized outputs, and differentiability,
respectively). We thus study the relaxation where this map is allowed to be
different for each phase. First, we prove that the existing guarantees carry
over to this asymmetric scheme, up to a controlled error term, provided some
Limited Projected Distortion (LPD) property holds. We then instantiate this
framework to the setting of quantized sketches, by proving that the LPD indeed
holds for binary sketch contributions. Finally, we further validate the
approach with numerical simulations, including a large-scale application in
audio event classification.
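To make the two phases concrete, the following is a minimal, hedged sketch of the pipeline, not the authors' code. Its assumptions are illustrative: dithered random Fourier features as the feature map, one-bit sign quantization of the contributions at sketching time, a single-centroid model, and the 4/pi first-harmonic gain of the sign nonlinearity to align the quantized sketching map with the smooth learning map.

```python
# A minimal illustration of asymmetric compressive learning (not the authors'
# code). Illustrative assumptions: dithered random Fourier features, one-bit
# sign quantization at sketching time, and a single-centroid model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, m, n = 2, 400, 20_000                 # data dim, sketch size, sample count

true_mu = np.array([1.5, -0.5])
X = true_mu + 0.1 * rng.standard_normal((n, d))   # toy single-cluster dataset

# Frequencies drawn at a coarse scale so the learning cost stays well-behaved.
Omega = 0.5 * rng.standard_normal((m, d))
xi = rng.uniform(0.0, 2 * np.pi, m)      # random dither, drawn once

def phi_smooth(x):
    # Smooth (differentiable) feature map: dithered cosines.
    return np.cos(x @ Omega.T + xi)

def phi_binary(x):
    # Quantized map, used only in the sketching phase: one-bit contributions.
    return np.sign(phi_smooth(x))

# Sketching phase: a single pass over the data, averaging the contributions.
z = phi_binary(X).mean(axis=0)           # quantized sketch of length m

# Learning phase: fit the centroid with the *smooth* map (asymmetric scheme).
# sign(cos t) = (4/pi)(cos t - cos(3t)/3 + ...), so the smooth model sketch is
# rescaled by 4/pi before being matched against the binary empirical sketch.
def sketch_mismatch(mu):
    return np.sum(((4 / np.pi) * phi_smooth(mu[None, :]).ravel() - z) ** 2)

mu_hat = minimize(sketch_mismatch, x0=np.zeros(d), method="Nelder-Mead").x
print("true mean:", true_mu, "recovered:", np.round(mu_hat, 3))
```

Replacing phi_binary with phi_smooth in the sketching line recovers the symmetric scheme covered by the existing guarantees; the discrepancy between the two maps is what the Limited Projected Distortion (LPD) property is meant to control.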
Related papers
- From Displacements to Distributions: A Machine-Learning Enabled
Framework for Quantifying Uncertainties in Parameters of Computational Models [0.09208007322096533]
This work presents novel extensions for combining two frameworks for quantifying uncertainties in the modeling of engineered systems.
The data-consistent iteration (DC) framework poses and solves an inverse problem for quantifying aleatoric uncertainties in terms of pullback and push-forward measures for a given Quantity of Interest (QoI) map.
The Learning Uncertain Quantities (LUQ) framework defines a formal three-step machine-learning enabled process for transforming noisy datasets into samples of a learned QoI map.
arXiv Detail & Related papers (2024-03-04T20:40:50Z) - Stochastic Gradient Descent for Nonparametric Regression [11.24895028006405]
This paper introduces an iterative algorithm for training nonparametric additive models.
We show that the resulting estimator satisfies an oracle inequality that allows for model mis-specification.
arXiv Detail & Related papers (2024-01-01T08:03:52Z) - Learning Implicit Feature Alignment Function for Semantic Segmentation [51.36809814890326]
Implicit Feature Alignment function (IFA) is inspired by the rapidly expanding topic of implicit neural representations.
We show that IFA implicitly aligns the feature maps at different levels and is capable of producing segmentation maps in arbitrary resolutions.
Our method can be combined with various architectures, and it achieves a state-of-the-art trade-off between accuracy and computation on common benchmarks.
arXiv Detail & Related papers (2022-06-17T09:40:14Z) - Neural Jacobian Fields: Learning Intrinsic Mappings of Arbitrary Meshes [38.157373733083894]
This paper introduces a framework designed to accurately predict piecewise linear mappings of arbitrary meshes via a neural network.
The framework is based on reducing the neural aspect to a prediction of a matrix for a single point, conditioned on a global shape descriptor.
By operating in the intrinsic gradient domain of each individual mesh, it allows the framework to predict highly-accurate mappings.
arXiv Detail & Related papers (2022-05-05T19:51:13Z) - Distributed Sketching for Randomized Optimization: Exact
Characterization, Concentration and Lower Bounds [54.51566432934556]
We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
arXiv Detail & Related papers (2022-03-18T05:49:13Z) - Smoothed Embeddings for Certified Few-Shot Learning [63.68667303948808]
We extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.
Our results are confirmed by experiments on different datasets.
arXiv Detail & Related papers (2022-02-02T18:19:04Z) - Multiway Non-rigid Point Cloud Registration via Learned Functional Map
Synchronization [105.14877281665011]
We present SyNoRiM, a novel way to register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds.
We demonstrate via extensive experiments that our method achieves state-of-the-art registration accuracy.
arXiv Detail & Related papers (2021-11-25T02:37:59Z) - Mean Nystr\"om Embeddings for Adaptive Compressive Learning [25.89586989444021]
We study the idea of performing sketching based on data-dependent Nystr"om approximation.
We show for k-means clustering and Gaussian modeling that for a fixed sketch size, Nystr"om sketches indeed outperform those built with random features.
arXiv Detail & Related papers (2021-10-21T09:05:58Z) - Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and accurate flow estimation can be achieved with only a fraction of elements in it.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z) - Accumulations of Projections--A Unified Framework for Random Sketches in
Kernel Ridge Regression [12.258887270632869]
Building a sketch of an n-by-n empirical kernel matrix is a common approach to accelerate the computation of many kernel methods.
We propose a unified framework of constructing sketching methods in kernel ridge regression.
arXiv Detail & Related papers (2021-03-06T05:02:17Z) - Model identification and local linear convergence of coordinate descent [74.87531444344381]
We show that cyclic coordinate descent achieves model identification in finite time for a wide class of functions.
We also prove explicit local linear convergence rates for coordinate descent.
arXiv Detail & Related papers (2020-10-22T16:03:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.