A Biased Estimator for MinMax Sampling and Distributed Aggregation
- URL: http://arxiv.org/abs/2404.17690v1
- Date: Fri, 26 Apr 2024 20:39:08 GMT
- Title: A Biased Estimator for MinMax Sampling and Distributed Aggregation
- Authors: Joel Wolfrath, Abhishek Chandra
- Abstract summary: We propose a biased MinMax estimation scheme, B-MinMax, which trades an increase in estimator bias for a reduction in variance.
B-MinMax is preferable when sample sizes are small or the number of aggregated vectors is limited.
- Score: 0.786519149320184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: MinMax sampling is a technique for downsampling a real-valued vector which minimizes the maximum variance over all vector components. This approach is useful for reducing the amount of data that must be sent over a constrained network link (e.g. in the wide-area). MinMax can provide unbiased estimates of the vector elements, along with unbiased estimates of aggregates when vectors are combined from multiple locations. In this work, we propose a biased MinMax estimation scheme, B-MinMax, which trades an increase in estimator bias for a reduction in variance. We prove that when no aggregation is performed, B-MinMax obtains a strictly lower MSE compared to the unbiased MinMax estimator. When aggregation is required, B-MinMax is preferable when sample sizes are small or the number of aggregated vectors is limited. Our experiments show that this approach can substantially reduce the MSE for MinMax sampling in many practical settings.
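The abstract does not spell out the B-MinMax construction, but the tradeoff it exploits follows from the decomposition $MSE(\hat{x}) = Bias(\hat{x})^2 + Var(\hat{x})$: accepting a small bias can buy a large variance reduction. The sketch below illustrates this with plain Bernoulli subsampling of a vector and a hypothetical shrinkage factor alpha applied to the usual Horvitz-Thompson style unbiased estimate; it is an illustrative assumption, not the paper's actual B-MinMax scheme or its MinMax sampling probabilities.
```python
# Illustrative sketch only (assumed setup, not the paper's B-MinMax algorithm):
# compare the MSE of an unbiased subsampled estimate with a shrunk, biased one.
import numpy as np

rng = np.random.default_rng(0)
d, p, trials = 100, 0.2, 20_000        # vector length, inclusion probability, Monte Carlo trials
x = rng.normal(size=d)                 # vector to be downsampled
alpha = 0.7                            # hypothetical shrinkage factor; alpha = 1 is the unbiased case

mse_unbiased = np.zeros(d)
mse_shrunk = np.zeros(d)
for _ in range(trials):
    keep = rng.random(d) < p            # Bernoulli sampling of components
    x_hat = np.where(keep, x / p, 0.0)  # Horvitz-Thompson style unbiased estimate
    mse_unbiased += (x_hat - x) ** 2
    mse_shrunk += (alpha * x_hat - x) ** 2  # biased but lower-variance estimate

print("mean MSE, unbiased:", mse_unbiased.mean() / trials)
print("mean MSE, shrunk  :", mse_shrunk.mean() / trials)
```
With these settings the shrunk estimate roughly halves the average MSE. The same accept-some-bias idea is what B-MinMax formalizes; when many independently sampled vectors are aggregated, their sampling variances partially average out while a systematic shrinkage bias accumulates, which is why the abstract notes that B-MinMax is preferable when the number of aggregated vectors is limited.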
Related papers
- Adaptive Sampled Softmax with Inverted Multi-Index: Methods, Theory and Applications [79.53938312089308]
The MIDX-Sampler is a novel adaptive sampling strategy based on an inverted multi-index approach.
Our method is backed by rigorous theoretical analysis, addressing key concerns such as sampling bias, gradient bias, convergence rates, and generalization error bounds.
arXiv Detail & Related papers (2025-01-15T04:09:21Z) - Minmax Trend Filtering: Generalizations of Total Variation Denoising via a Local Minmax/Maxmin Formula [4.07926531936425]
Total Variation Denoising (TVD) is a fundamental denoising and smoothing method.
In this article, we identify a new local minmax/maxmin formula producing two estimators.
We show how the proposed local definition of the TVD/MTF estimator makes it tractable to bound pointwise estimation errors.
arXiv Detail & Related papers (2024-10-03T23:15:35Z) - Minimax-optimal estimation for sparse multi-reference alignment with collision-free signals [1.3658544194443192]
We investigate minimax optimality for signal estimation under the MRA model for so-called collision-free signals.
We demonstrate that the minimax optimal rate of estimation for the sparse MRA problem in this setting is $\sigma^2/\sqrt{n}$, where $n$ is the sample size.
arXiv Detail & Related papers (2023-12-13T02:06:52Z) - Optimal minimax rate of learning interaction kernels [7.329333373512536]
We introduce a tamed least squares estimator (tLSE) that attains the optimal convergence rate for a broad class of exchangeable distributions.
Our findings reveal that provided the inverse problem in the large sample limit satisfies a coercivity condition, the left tail probability does not alter the bias-variance tradeoff.
arXiv Detail & Related papers (2023-11-28T15:01:58Z) - Optimal Estimation and Computational Limit of Low-rank Gaussian Mixtures [12.868722327487752]
We propose a low-rank Gaussian mixture model (LrMM) assuming each matrix-valued observation has a planted low-rank structure.
We prove the minimax optimality of a maximum likelihood estimator which, in general, is computationally infeasible.
Our results reveal multiple phase transitions in the minimax error rates and the statistical-to-computational gap.
arXiv Detail & Related papers (2022-01-22T12:43:25Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
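For intuition only, one generic way to encode such a bias constraint is to penalize the empirical bias alongside the squared error when several noisy measurements $y_1, \dots, y_m$ of the same unknown $\theta$ are available; the notation below ($h_w$, $\lambda$, batch size $m$) is ours and is not taken from the cited paper:
$$ \mathcal{L}(w) = \frac{1}{m}\sum_{j=1}^{m} \big\| h_w(y_j) - \theta \big\|^2 \;+\; \lambda \Big\| \frac{1}{m}\sum_{j=1}^{m} \big( h_w(y_j) - \theta \big) \Big\|^2, $$
where the second term estimates the squared bias at $\theta$ and drives it toward zero, which is what makes averaging multiple estimates of the same unknown worthwhile.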
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Maximum sampled conditional likelihood for informative subsampling [4.708378681950648]
Subsampling is a computationally effective approach to extract information from massive data sets when computing resources are limited.
We propose to use the maximum sampled conditional likelihood estimator (MSCLE) based on the sampled data.
arXiv Detail & Related papers (2020-11-11T16:01:17Z) - Consistent Structured Prediction with Max-Min Margin Markov Networks [84.60515484036239]
Max-margin methods for binary classification have been extended to the structured prediction setting under the name of max-margin Markov networks ($M^3N$), which, however, lack consistency guarantees.
We overcome such limitations by defining the learning problem in terms of a "max-min" margin formulation, naming the resulting method max-min margin Markov networks ($M^4N$).
Experiments on multi-class classification, ordinal regression, sequence prediction and ranking demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-02T10:48:42Z) - Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
arXiv Detail & Related papers (2020-06-19T22:49:42Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z) - Computationally efficient sparse clustering [67.95910835079825]
We provide a finite sample analysis of a new clustering algorithm based on PCA.
We show that it achieves the minimax optimal misclustering rate in the regime $\|\theta\| \to \infty$.
arXiv Detail & Related papers (2020-05-21T17:51:30Z) - Robust subgaussian estimation with VC-dimension [0.0]
This work proposes a new general way to bound the excess risk for MOM estimators.
The core technique is the use of VC-dimension (instead of Rademacher complexity) to measure the statistical complexity.
arXiv Detail & Related papers (2020-04-24T13:21:09Z) - Minimax Optimal Estimation of KL Divergence for Continuous Distributions [56.29748742084386]
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
One simple and effective estimator is based on the k nearest neighbor distances between these samples.
arXiv Detail & Related papers (2020-02-26T16:37:37Z)
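As a concrete illustration of the k-nearest-neighbor estimator mentioned in the entry above, the sketch below implements the classical construction (in the spirit of Wang, Kulkarni and Verdu, 2009); the function name and defaults are ours, and the cited paper's minimax-optimal estimator may differ in its exact form.
```python
# Hedged sketch of the classic k-NN KL divergence estimator D(p || q),
# assuming continuous distributions (so nearest-neighbor distances are nonzero).
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(x, y, k=1):
    """Estimate D(p || q) from samples x ~ p and y ~ q using k-NN distances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, d = x.shape
    m = y.shape[0]
    # Distance from each x_i to its k-th nearest neighbor within x, excluding itself
    rho = cKDTree(x).query(x, k=k + 1)[0][:, k]
    # Distance from each x_i to its k-th nearest neighbor within y
    nu = cKDTree(y).query(x, k=k)[0]
    nu = nu[:, k - 1] if k > 1 else nu
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

# Example: both samples from N(0, I) in 2D, so the true divergence is 0
rng = np.random.default_rng(0)
print(knn_kl_divergence(rng.normal(size=(2000, 2)), rng.normal(size=(2000, 2))))
```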