A New Robust Multivariate Mode Estimator for Eye-tracking Calibration
- URL: http://arxiv.org/abs/2107.08030v1
- Date: Fri, 16 Jul 2021 17:45:19 GMT
- Title: A New Robust Multivariate Mode Estimator for Eye-tracking Calibration
- Authors: Adrien Brilhault, Sergio Neuenschwander, Ricardo Araujo Rios
- Abstract summary: We propose a new method for estimating the main mode of multivariate distributions, with application to eye-tracking calibrations.
In this type of multimodal distribution, most central-tendency measures fail to estimate the principal fixation coordinates.
Here, we developed a new algorithm, named BRIL, to identify the first mode of multivariate distributions.
We obtained outstanding performance, even for distributions containing very high proportions of outliers, whether grouped in clusters or randomly distributed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose in this work a new method for estimating the main mode of
multivariate distributions, with application to eye-tracking calibrations. When
performing eye-tracking experiments with poorly cooperative subjects, such as
infants or monkeys, the calibration data generally suffer from high
contamination. Outliers are typically organized in clusters, corresponding to
the time intervals when subjects were not looking at the calibration points. In
this type of multimodal distribution, most central-tendency measures fail to
estimate the principal fixation coordinates (the first mode), resulting in
errors and inaccuracies when mapping gaze to screen coordinates. Here, we
developed a new algorithm, named BRIL, to identify the first mode of
multivariate distributions; it relies on recursive depth-based filtering. This
novel approach was tested on artificial mixtures of Gaussian and Uniform
distributions, and compared to existing methods (conventional depth medians,
robust estimators of location and scatter, and clustering-based approaches). We
obtained outstanding performance, even for distributions containing very high
proportions of outliers, whether grouped in clusters or randomly distributed.
Finally, we demonstrate the strength of our method in a real-world scenario
using experimental data from eye-tracking calibrations with Capuchin monkeys,
especially for distributions where other algorithms typically lack accuracy.
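To make the recursive depth-based filtering idea concrete, here is a minimal Python sketch. It is not the authors' BRIL implementation: the approximate projection depth, the 50% filtering fraction, and the stopping size are illustrative assumptions.

```python
import numpy as np

def approx_projection_depth(points, n_dirs=200, seed=0):
    """Approximate projection depth via random 1-D projections:
    outlyingness(x) = max over directions u of |<x,u> - median| / MAD,
    and depth = 1 / (1 + outlyingness)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, points.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = points @ dirs.T                               # (n_points, n_dirs)
    med = np.median(proj, axis=0)
    mad = np.median(np.abs(proj - med), axis=0) + 1e-12
    outly = np.max(np.abs(proj - med) / mad, axis=1)
    return 1.0 / (1.0 + outly)

def first_mode(points, keep=0.5, min_size=10):
    """Recursively retain the deepest fraction of points, then return the
    mean of the survivors as the estimate of the first (main) mode."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > min_size:
        depth = approx_projection_depth(pts)
        kept = pts[depth >= np.quantile(depth, 1.0 - keep)]
        if len(kept) == len(pts):                        # guard: no progress
            break
        pts = kept
    return pts.mean(axis=0)

# Toy calibration-like data: a main fixation cluster plus a spurious cluster
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0, 0], 0.5, size=(300, 2)),   # true fixation
                  rng.normal([5, 5], 1.0, size=(200, 2))])  # contamination
print(first_mode(data))   # expected near (0, 0), the principal fixation
```

On this toy data, roughly 40% of the points form a contaminating cluster, yet the deepest-point recursion settles on the main fixation cluster, mirroring the robustness claim in the abstract.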
Related papers
- Improving Distribution Alignment with Diversity-based Sampling [0.0]
Domain shifts are ubiquitous in machine learning and can substantially degrade a model's performance when it is deployed on real-world data.
This paper proposes to improve these estimates by inducing diversity in each sampled minibatch.
It simultaneously balances the data and reduces the variance of the gradients, thereby enhancing the model's generalisation ability. (A hedged sketch of one diversity-inducing sampler follows this entry.)
arXiv Detail & Related papers (2024-10-05T17:26:03Z)
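One common way to induce minibatch diversity is greedy farthest-point selection. The sketch below is a hypothetical illustration of that generic idea, not the paper's specific sampler.

```python
import numpy as np

def diverse_minibatch(X, batch_size, seed=0):
    """Greedy farthest-point selection: start from a random point, then
    repeatedly add the point farthest from everything selected so far.
    A generic way to induce diversity in a minibatch (illustrative)."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[idx[0]], axis=1)   # distance to nearest selected
    for _ in range(batch_size - 1):
        nxt = int(np.argmax(d))                 # farthest remaining point
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(idx)

# Usage: draw a 64-point diverse batch from 1000 feature vectors
X = np.random.default_rng(2).normal(size=(1000, 16))
batch = X[diverse_minibatch(X, 64)]
```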
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over meta-analysis-based methods as heterogeneity increases. (A sketch of the plain, single-source IPS estimator follows this entry.)
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
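The inverse propensity score (IPS) building block reduces to a short formula. Below is a minimal sketch of the standard single-source IPS (inverse propensity weighting) estimator of the average treatment effect, not the collaborative multi-source variant the paper proposes.

```python
import numpy as np

def ips_ate(y, t, e):
    """Standard IPS/IPW estimate of the average treatment effect:
    mean(T*Y / e(X)) - mean((1-T)*Y / (1-e(X))).
    y: outcomes, t: binary treatments, e: propensities P(T=1|X)."""
    e = np.clip(e, 1e-3, 1 - 1e-3)       # clip propensities for stability
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

# Toy check with a known treatment effect of 2 and confounding through x
rng = np.random.default_rng(3)
x = rng.normal(size=5000)
e = 1 / (1 + np.exp(-x))                 # propensity depends on the confounder
t = rng.binomial(1, e)
y = 2 * t + x + rng.normal(size=5000)
print(ips_ate(y, t, e))                  # close to the true effect, 2
```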
- A provable initialization and robust clustering method for general mixture models [6.806940901668607]
Clustering is a fundamental tool in statistical machine learning in the presence of heterogeneous data.
Most recent results focus on optimal mislabeling guarantees when data are distributed around centroids with sub-Gaussian errors.
arXiv Detail & Related papers (2024-01-10T22:56:44Z)
- Entropy-MCMC: Sampling from Flat Basins with Ease [10.764160559530849]
We introduce an auxiliary guiding variable, the stationary distribution of which resembles a smoothed posterior free from sharp modes, to lead the MCMC sampler to flat basins.
By integrating this guiding variable with the model parameter, we create a simple joint distribution that enables efficient sampling with minimal computational overhead.
Empirical results demonstrate that our method can successfully sample from flat basins of the posterior and outperforms all compared baselines on multiple benchmarks. (A toy sketch of the coupled-variable sampler follows this entry.)
arXiv Detail & Related papers (2023-10-09T04:40:20Z)
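The coupled-variable construction described in the summary can be illustrated with a 1-D Langevin toy. The energy, step sizes, and coupling width below are assumptions for illustration, not the paper's setup; the point is that the auxiliary variable's marginal is a Gaussian-smoothed version of the target, which drags the chain toward flat basins.

```python
import numpy as np

def grad_U(theta):
    """Gradient of a toy double-well energy U(x) = (x^2 - 1)^2 + 0.3 x."""
    return 4 * theta * (theta**2 - 1) + 0.3

def coupled_langevin(steps=20000, step=1e-3, eta=0.1, seed=4):
    """Langevin sampling of (theta, theta_a) under the joint energy
    U(theta) + ||theta - theta_a||^2 / (2 * eta). Marginally, theta_a
    follows the target convolved with a Gaussian of variance eta,
    i.e. a smoothed posterior free of sharp modes."""
    rng = np.random.default_rng(seed)
    theta = theta_a = 0.0
    samples = []
    for _ in range(steps):
        g_t = grad_U(theta) + (theta - theta_a) / eta   # model parameter
        g_a = (theta_a - theta) / eta                   # guiding variable
        theta = theta - step * g_t + np.sqrt(2 * step) * rng.normal()
        theta_a = theta_a - step * g_a + np.sqrt(2 * step) * rng.normal()
        samples.append(theta)
    return np.array(samples)

print(coupled_langevin().mean())
```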
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Robust Calibration with Multi-domain Temperature Scaling [86.07299013396059]
We develop a systematic calibration model to handle distribution shifts by leveraging data from multiple domains.
Our proposed method -- multi-domain temperature scaling -- leverages robustness across domains to improve calibration under distribution shift. (A sketch of the single-domain baseline follows this entry.)
arXiv Detail & Related papers (2022-06-06T17:32:12Z)
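For context: classic single-domain temperature scaling fits one scalar T on validation logits. The sketch below shows that baseline plus a hypothetical per-domain aggregation (the median across domains); the paper's actual multi-domain procedure may differ.

```python
import numpy as np

def nll(logits, labels, T):
    """Average negative log-likelihood of softmax(logits / T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)          # stabilize the softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 200)):
    """Classic temperature scaling: choose the scalar T minimizing
    validation NLL (grid search kept for clarity over speed)."""
    return grid[np.argmin([nll(logits, labels, T) for T in grid])]

def fit_multidomain_temperature(per_domain):
    """Hypothetical robust aggregate: fit T on each domain's validation
    split (logits, labels), then take the median across domains."""
    return np.median([fit_temperature(lg, lb) for lg, lb in per_domain])
```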
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts than other DRO approaches. (An illustrative reweighting sketch follows this entry.)
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
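To give a minimal sense of "DRO with a parametric likelihood ratio": an adversary parameterizes example weights with a simple exponential-tilt ratio and ascends the weighted loss while the model descends it. This is an illustrative sketch; the paper's parameterization and constraints differ (practical DRO bounds the ratio, e.g. with a KL penalty, omitted here).

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dro_logreg(X, y, steps=500, lr=0.05, lr_adv=0.05, seed=5):
    """Alternating min-max sketch: weights w_i proportional to exp(x_i . phi)
    form a parametric likelihood ratio over examples; the adversary (phi)
    ascends the weighted logistic loss, the model (w) descends it."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])    # model parameters
    phi = np.zeros(X.shape[1])                     # adversary parameters
    for _ in range(steps):
        p = sigmoid(X @ w)
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        a = np.exp(X @ phi)
        wts = a / a.sum()                          # normalized ratio weights
        w -= lr * (X.T @ (wts * (p - y)))          # descend weighted loss
        # ascend: gradient of sum_i wts_i * losses_i w.r.t. phi (softmax rule)
        phi += lr_adv * (X.T @ (wts * (losses - wts @ losses)))
    return w

# Usage on a toy problem
rng = np.random.default_rng(6)
X = rng.normal(size=(400, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) > 0).astype(float)
print(dro_logreg(X, y))
```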
- Deep Discriminative to Kernel Density Graph for In- and Out-of-distribution Calibrated Inference [7.840433908659846]
Deep discriminative approaches like random forests and deep neural networks have recently found applications in many important real-world scenarios.
However, deploying these learning algorithms in safety-critical applications raises concerns, particularly when it comes to ensuring confidence calibration for both in-distribution and out-of-distribution data points.
In this paper, we address ID and OOD calibration problems jointly.
arXiv Detail & Related papers (2022-01-31T05:07:16Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.