Maximum Consensus by Weighted Influences of Monotone Boolean Functions
- URL: http://arxiv.org/abs/2112.00953v1
- Date: Thu, 2 Dec 2021 03:16:10 GMT
- Title: Maximum Consensus by Weighted Influences of Monotone Boolean Functions
- Authors: Erchuan Zhang, David Suter, Ruwan Tennakoon, Tat-Jun Chin, Alireza
Bab-Hadiashar, Giang Truong, Syed Zulqarnain Gilani
- Abstract summary: This paper studies the concept of weighted influences for solving MaxCon.
We prove that, under this measure, the weighted influences of points belonging to larger structures are in general smaller than those of points belonging to smaller structures.
We also consider another "natural" family of sampling/weighting strategies, sampling with uniform measure concentrated on a particular (Hamming) level of the cube.
- Score: 40.38142264859375
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Robust model fitting is a fundamental problem in computer vision,
used to pre-process raw data in the presence of outliers. Maximisation of
Consensus (MaxCon) is one of the most popular robust criteria and is widely
used. Recently (Tennakoon et al., CVPR 2021), a connection was made between
MaxCon and the estimation of influences of a monotone Boolean function.
Equipping the Boolean cube with different measures and adopting different
sampling strategies (two sides of the same coin) can have differing effects,
which leads to the current
study. This paper studies the concept of weighted influences for solving
MaxCon. In particular, we study endowing the Boolean cube with the Bernoulli
measure and performing biased (as opposed to uniform) sampling. Theoretically,
we prove the weighted influences, under this measure, of points belonging to
larger structures are smaller than those of points belonging to smaller
structures in general. We also consider another "natural" family of
sampling/weighting strategies, sampling with uniform measure concentrated on a
particular (Hamming) level of the cube.
Based on weighted sampling, we modify the algorithm of Tennakoon et al. and
test it on both synthetic and real datasets. This paper is not promoting a new
approach per se, but rather studying the issue of weighted sampling.
Accordingly, we are not claiming to have produced a superior algorithm: rather
we show some modest gains of Bernoulli sampling, and we illuminate some of the
interactions between structure in data and weighted sampling.
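To make the two sampling notions in the abstract concrete, the sketch below shows (i) Monte Carlo estimation of the p-biased (Bernoulli-measure) influence of each coordinate of a monotone Boolean function, and (ii) uniform sampling from a single Hamming level of the cube. This is a minimal illustration, not the authors' implementation: the function names are our own, and the toy function stands in for the real MaxCon feasibility oracle, which returns 1 exactly when the subset of points selected by x cannot be fit within the inlier tolerance.

```python
import numpy as np

def estimate_biased_influences(f, n, p=0.5, n_samples=10_000, rng=None):
    """Monte Carlo estimate of the p-biased (Bernoulli) influence of each
    coordinate i of a monotone Boolean function f: {0,1}^n -> {0,1}:
    Inf_i^p(f) = Pr_{x ~ Bernoulli(p)^n}[ f(x with bit i flipped) != f(x) ]."""
    if rng is None:
        rng = np.random.default_rng()
    influences = np.zeros(n)
    for _ in range(n_samples):
        x = (rng.random(n) < p).astype(int)  # biased sample from the cube
        fx = f(x)
        for i in range(n):
            x[i] ^= 1                        # flip bit i
            influences[i] += f(x) != fx      # count if the output changed
            x[i] ^= 1                        # restore bit i
    return influences / n_samples

def sample_hamming_level(n, k, rng=None):
    """Draw a point uniformly from Hamming level k of the n-cube,
    i.e. a 0/1 vector with exactly k ones."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(n, dtype=int)
    x[rng.choice(n, size=k, replace=False)] = 1
    return x

# Hypothetical stand-in for the MaxCon feasibility oracle: "infeasible" (1)
# whenever at least 3 of the first 5 points are selected; monotone by design,
# so only the first 5 coordinates have nonzero influence.
toy_f = lambda x: int(x[:5].sum() >= 3)

print(estimate_biased_influences(toy_f, n=10, p=0.3, n_samples=5_000))
print(sample_hamming_level(n=10, k=4))
```

Here the Bernoulli sampler with bias p plays the role of the weighted measure under which the paper compares influences of points from large and small structures, while sampling at a fixed level k corresponds to the second, "natural" strategy mentioned in the abstract.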
Related papers
- The Polynomial Stein Discrepancy for Assessing Moment Convergence [1.0835264351334324]
We propose a novel method for measuring the discrepancy between a set of samples and a desired posterior distribution for Bayesian inference.
We show that the test has higher power than its competitors in several examples, and at a lower computational cost.
arXiv Detail & Related papers (2024-12-06T15:51:04Z)
- On the Sample Complexity of One Hidden Layer Networks with Equivariance, Locality and Weight Sharing [12.845681770287005]
Weight sharing, equivariance, and local filters are believed to contribute to the sample efficiency of neural networks.
We show that locality has generalization benefits; however, the uncertainty principle implies a trade-off between locality and expressivity.
arXiv Detail & Related papers (2024-11-21T16:36:01Z)
- Mini-batch Submodular Maximization [5.439020425819001]
We present the first mini-batch algorithm for maximizing a monotone decomposable submodular function, $F=\sum_{i=1}^{N} f_i$, under a set of constraints.
We consider two sampling approaches: uniform and weighted.
Surprisingly, our experimental results show that uniform sampling is superior to weighted sampling.
arXiv Detail & Related papers (2024-01-23T04:16:58Z)
- Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z)
- Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing [104.630875328668]
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample.
We present a novel yet simple Mixup variant that captures the best of both worlds.
arXiv Detail & Related papers (2021-12-16T11:27:48Z)
- Sensing Cox Processes via Posterior Sampling and Positive Bases [56.82162768921196]
We study adaptive sensing of point processes, a widely used model from spatial statistics.
We model the intensity function as a sample from a truncated Gaussian process, represented in a specially constructed positive basis.
Our adaptive sensing algorithms use Langevin dynamics and are based on posterior sampling (Cox-Thompson) and top-two posterior sampling (Top2) principles.
arXiv Detail & Related papers (2021-10-21T14:47:06Z)
- Consensus Maximisation Using Influences of Monotone Boolean Functions [40.86597150734384]
MaxCon aims to find the largest subset of data that fits the model within some tolerance level.
We show that, under certain conditions, the influences of points belonging to the largest structure in the data are generally smaller.
Results for both synthetic and real visual data experiments show that the MBF-based algorithm can generate a near-optimal solution relatively quickly.
arXiv Detail & Related papers (2021-03-06T22:01:06Z)
- Compressing Large Sample Data for Discriminant Analysis [78.12073412066698]
We consider the computational issues due to large sample size within the discriminant analysis framework.
We propose a new compression approach for reducing the number of training samples for linear and quadratic discriminant analysis.
arXiv Detail & Related papers (2020-05-08T05:09:08Z)
- The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime [52.38455827779212]
We propose a novel technique for analyzing adaptive sampling called the Simulator.
We prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log-factors.
Our new analysis inspires a simple and near-optimal algorithm for best-arm and top-k identification, the first practical algorithm of its kind for the latter problem.
arXiv Detail & Related papers (2017-02-16T23:42:02Z)