Accurate Estimation of Mutual Information in High Dimensional Data
- URL: http://arxiv.org/abs/2506.00330v1
- Date: Sat, 31 May 2025 01:06:18 GMT
- Title: Accurate Estimation of Mutual Information in High Dimensional Data
- Authors: Eslam Abdelaleem, K. Michael Martini, Ilya Nemenman
- Abstract summary: Mutual information (MI) is a measure of statistical dependencies between two variables, widely used in data analysis. Recently, promising machine learning-based MI estimation methods have emerged. We propose and validate a protocol for MI estimation that includes explicit checks ensuring reliability and statistical consistency.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mutual information (MI) is a measure of statistical dependencies between two variables, widely used in data analysis. Thus, accurate methods for estimating MI from empirical data are crucial. Such estimation is a hard problem, and there are provably no estimators that are universally good for finite datasets. Common estimators struggle with high-dimensional data, which is a staple of modern experiments. Recently, promising machine learning-based MI estimation methods have emerged. Yet it remains unclear if and when they produce accurate results, depending on dataset sizes, statistical structure of the data, and hyperparameters of the estimators, such as the embedding dimensionality or the duration of training. There are also no accepted tests to signal when the estimators are inaccurate. Here, we systematically explore these gaps. We propose and validate a protocol for MI estimation that includes explicit checks ensuring reliability and statistical consistency. Contrary to accepted wisdom, we demonstrate that reliable MI estimation is achievable even with severely undersampled, high-dimensional datasets, provided these data admit accurate low-dimensional representations. These findings broaden the potential use of machine learning-based MI estimation methods in real-world data analysis and provide new insights into when and why modern high-dimensional, self-supervised algorithms perform effectively.
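The paper's protocol targets machine learning-based estimators, but its core idea of explicit consistency checks can be illustrated with a classical plug-in estimator: estimate MI from a 2-D histogram, then verify that the estimate is stable under subsampling. A minimal sketch follows; the histogram estimator, bin count, and correlated-Gaussian test case are illustrative choices, not the paper's method:

```python
import math
import random

def mutual_information_binned(xs, ys, bins=16):
    """Plug-in MI estimate (in nats) from a 2-D histogram of the samples."""
    n = len(xs)
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)
    bx = lambda v: min(bins - 1, int((v - lo_x) / (hi_x - lo_x) * bins))
    by = lambda v: min(bins - 1, int((v - lo_y) / (hi_y - lo_y) * bins))
    joint, px, py = {}, [0] * bins, [0] * bins
    for x, y in zip(xs, ys):
        i, j = bx(x), by(y)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    # sum over occupied cells of p(x,y) * log(p(x,y) / (p(x) p(y)))
    return sum((c / n) * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in joint.items())

# Correlated Gaussians have known MI = -0.5 * log(1 - rho**2), ~0.51 nats here
random.seed(0)
rho = 0.8
xs, ys = [], []
for _ in range(50_000):
    x = random.gauss(0, 1)
    xs.append(x)
    ys.append(rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1))

true_mi = -0.5 * math.log(1 - rho ** 2)
est_full = mutual_information_binned(xs, ys)
# Consistency check in the spirit of the protocol: the estimate should be
# stable when the sample is halved
est_half = mutual_information_binned(xs[:25_000], ys[:25_000])
```

Because the ground-truth MI is known in this synthetic setting, one can verify both accuracy and subsample stability; on real data only the stability check is available.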
Related papers
- A Survey of Dimension Estimation Methods [0.0]
It is important to understand the real dimension of the data, and hence the complexity of the dataset at hand. This survey reviews a wide range of dimension estimation methods, categorising them by the geometric information they exploit. The paper evaluates the performance of these methods, as well as investigating varying responses to curvature and noise.
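As a simple point of contrast with the geometric estimators the survey covers, the most basic linear notion of dimension counts the principal components needed to explain most of the variance. A minimal sketch; the variance threshold and the toy low-dimensional data are illustrative assumptions:

```python
import numpy as np

def pca_dimension(X, var_threshold=0.99):
    """Smallest number of principal components explaining var_threshold of the variance."""
    evals = np.sort(np.linalg.eigvalsh(np.cov(X - X.mean(0), rowvar=False)))[::-1]
    ratios = np.cumsum(evals) / evals.sum()
    return int(np.searchsorted(ratios, var_threshold) + 1)

# 3-dimensional latent data linearly embedded in 10 dimensions, plus small noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 3))
X = latent @ rng.normal(size=(3, 10)) + 0.01 * rng.normal(size=(2000, 10))
dim = pca_dimension(X)
```

This linear estimate overcounts when the data lie on a curved manifold, which is exactly why the survey's nonlinear methods exist.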
arXiv Detail & Related papers (2025-07-18T13:05:42Z)
- DUPRE: Data Utility Prediction for Efficient Data Valuation [49.60564885180563]
Cooperative game theory-based data valuation, such as Data Shapley, requires evaluating the data utility and retraining the ML model for multiple data subsets. Our framework, DUPRE, takes an alternative yet complementary approach that reduces the cost per subset evaluation by predicting data utilities instead of evaluating them by model retraining. Specifically, given the evaluated data utilities of some data subsets, DUPRE fits a Gaussian process (GP) regression model to predict the utility of every other data subset.
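The core mechanic, fitting a GP to observed (subset, utility) pairs and predicting the rest, can be sketched with a hand-rolled posterior mean. Assumptions here: subsets are represented only by their size, and the RBF kernel and toy utility function are illustrative, not DUPRE's actual featurization:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.5):
    """Squared-exponential kernel on scalar inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    """GP regression posterior mean with a small jitter term for stability."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf_kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

# Toy utility of a subset as a function of its size; hold out size 5 and predict it
sizes = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 7.0, 8.0, 9.0, 10.0])
utilities = np.log1p(sizes)
pred = gp_posterior_mean(sizes, utilities, np.array([5.0]))
```

Each GP prediction costs a kernel solve instead of a full model retraining, which is the source of the claimed savings.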
arXiv Detail & Related papers (2025-02-22T08:53:39Z)
- Inference for Large Scale Regression Models with Dependent Errors [3.3160726548489015]
This work defines and proves the statistical properties of the Generalized Method of Wavelet Moments with Exogenous variables (GMWMX). It is a highly scalable, stable, and statistically valid method for estimating and delivering inference for linear models in the presence of data complexities such as latent dependence structures and missing data.
arXiv Detail & Related papers (2024-09-08T17:01:05Z)
- Evaluation of Missing Data Analytical Techniques in Longitudinal Research: Traditional and Machine Learning Approaches [11.048092826888412]
This study utilizes Monte Carlo simulations to assess and compare the effectiveness of six analytical techniques for missing data within the growth curve modeling framework.
We investigate the influence of sample size, missing data rate, missing data mechanism, and data distribution on the accuracy and efficiency of model estimation.
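One ingredient of such simulations, generating data whose missingness depends on an observed covariate and measuring the resulting bias of complete-case analysis, can be sketched in a few lines. The mechanism and effect size below are illustrative, not the study's actual simulation design:

```python
import random
import statistics

random.seed(1)
n = 20_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]   # true mean of y is 0

# Missing-at-random mechanism: y goes missing whenever the covariate x exceeds 0.5
observed_y = [yi for xi, yi in zip(x, y) if xi <= 0.5]

true_mean = statistics.fmean(y)                  # oracle mean over all values, ~0
complete_case_mean = statistics.fmean(observed_y)
# Complete-case analysis is biased downward under this mechanism,
# since the discarded rows have systematically larger y
```

Comparing the six techniques then amounts to repeating such draws many times and tabulating bias and variance of each method's estimates.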
arXiv Detail & Related papers (2024-06-19T20:20:30Z)
- Information Leakage Detection through Approximate Bayes-optimal Prediction [22.04308347355652]
Information leakage (IL) involves unintentionally exposing sensitive information to unauthorized parties. Conventional statistical approaches rely on estimating mutual information between observable and secret information for detecting ILs. We establish a theoretical framework using statistical learning theory and information theory to quantify and detect IL accurately.
arXiv Detail & Related papers (2024-01-25T16:15:27Z)
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs). Our proposed method first trains SOMs on unlabeled data; then a minimal number of available labeled data points are assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
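The two-stage idea, unsupervised SOM training followed by label propagation through best matching units, can be sketched with a tiny hand-rolled 1-D SOM. The grid size, annealing schedule, and toy clusters are illustrative choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D clusters; only one labeled point per cluster
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
truth = np.array([0] * 100 + [1] * 100)
labeled = {0: np.array([0.0, 0.0]), 1: np.array([3.0, 3.0])}

# Stage 1: train a tiny 1-D SOM (4 units) on the unlabeled points
units = rng.normal(1.5, 1.0, (4, 2))
for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)            # learning rate anneals to 0
    sigma = 2.0 * (1 - epoch / 50) + 0.1   # neighbourhood width shrinks
    for x in rng.permutation(X):
        bmu = int(np.argmin(((units - x) ** 2).sum(axis=1)))
        for j in range(len(units)):
            h = np.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
            units[j] += lr * h * (x - units[j])

# Stage 2: give each unit the label of its nearest labeled point,
# then classify every point by its best matching unit
unit_label = [min(labeled, key=lambda c: ((units[j] - labeled[c]) ** 2).sum())
              for j in range(len(units))]
pred = np.array([unit_label[int(np.argmin(((units - x) ** 2).sum(axis=1)))]
                 for x in X])
acc = (pred == truth).mean()
```

With only two labeled points, all two hundred samples are classified through the topology the SOM learned without labels.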
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
- Distributed Semi-Supervised Sparse Statistical Inference [6.685997976921953]
A debiased estimator is a crucial tool in statistical inference for high-dimensional model parameters.
Traditional methods require computing a debiased estimator on every machine.
An efficient multi-round distributed debiased estimator, which integrates both labeled and unlabeled data, is developed.
arXiv Detail & Related papers (2023-06-17T17:30:43Z)
- Conditional expectation with regularization for missing data imputation [19.254291863337347]
Missing data frequently occurs in datasets across various domains, such as medicine, sports, and finance.
We propose a new algorithm named "conditional Distribution-based Imputation of Missing Values with Regularization" (DIMV).
DIMV operates by determining the conditional distribution of a feature that has missing entries, using the information from the fully observed features as a basis.
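For jointly Gaussian features this conditional distribution has a closed form, and regularization can enter as a ridge term on the observed-block covariance. A minimal sketch; the simple ridge term and toy two-feature covariance are assumptions, not DIMV's exact formulation:

```python
import numpy as np

def conditional_impute(x, mean, cov, reg=1e-3):
    """Fill NaNs in x with the conditional Gaussian mean given the observed entries."""
    miss = np.isnan(x)
    obs = ~miss
    # Ridge-regularized observed-block covariance keeps the solve stable
    s_oo = cov[np.ix_(obs, obs)] + reg * np.eye(int(obs.sum()))
    s_mo = cov[np.ix_(miss, obs)]
    out = x.copy()
    out[miss] = mean[miss] + s_mo @ np.linalg.solve(s_oo, x[obs] - mean[obs])
    return out

# Toy: strongly correlated pair, so the observed x0 predicts the missing x1
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
filled = conditional_impute(np.array([2.0, np.nan]), mean, cov)
```

In practice the mean and covariance would themselves be estimated from the partially observed data, which is where the regularization earns its keep.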
arXiv Detail & Related papers (2023-02-02T06:59:15Z)
- Learning to be a Statistician: Learned Estimator for Number of Distinct Values [54.629042119819744]
Estimating the number of distinct values (NDV) in a column is useful for many tasks in database systems.
In this work, we focus on how to derive accurate NDV estimations from random (online/offline) samples.
We propose to formulate the NDV estimation task in a supervised learning framework, and aim to learn a model as the estimator.
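For contrast with the learned estimator, a classical sample-based NDV estimator such as Chao1 uses only the frequencies of rare values in the sample; a minimal sketch (Chao1 is a standard baseline, not the paper's method):

```python
from collections import Counter

def chao1_ndv(sample):
    """Chao1 lower-bound estimate of the number of distinct values,
    based on the counts of singletons (f1) and doubletons (f2)."""
    value_freqs = Counter(sample)
    freq_of_freqs = Counter(value_freqs.values())
    d = len(value_freqs)                 # distinct values seen in the sample
    f1 = freq_of_freqs.get(1, 0)
    f2 = freq_of_freqs.get(2, 0)
    if f2 > 0:
        return d + f1 * f1 / (2 * f2)
    return d + f1 * (f1 - 1) / 2         # bias-corrected fallback when f2 = 0

sample = [1, 1, 2, 3, 3, 4, 5, 5, 5, 6]
est = chao1_ndv(sample)                  # d=6, f1=3, f2=2 -> 6 + 9/4 = 8.25
```

Such closed-form estimators embody fixed statistical assumptions; the supervised-learning formulation instead lets a model learn the mapping from sample statistics to NDV.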
arXiv Detail & Related papers (2022-02-06T15:42:04Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds the threshold.
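The thresholding step can be sketched directly: choose t so that the fraction of source points with confidence above t matches the source accuracy, then report the fraction of target points above t. The calibrated and shifted toy confidence scores below are illustrative, not the paper's experimental setup:

```python
import numpy as np

def atc_predict_accuracy(conf_src, correct_src, conf_tgt):
    """ATC sketch: learn a confidence threshold on labeled source data,
    then predict target accuracy from unlabeled target confidences."""
    # Threshold so that the top src-accuracy fraction of source points lies above t
    t = np.quantile(conf_src, 1.0 - correct_src.mean())
    return float((conf_tgt > t).mean())

# Toy scores: confidence is calibrated on source, systematically lower on target
conf_src = np.linspace(0.0, 1.0, 1001)
correct_src = conf_src > 0.25           # source accuracy ~ 0.75
conf_tgt = np.linspace(0.0, 0.8, 1001)  # distribution shift degrades confidence
pred_acc = atc_predict_accuracy(conf_src, correct_src, conf_tgt)
```

The prediction uses no target labels at all, which is the point: only source labels and raw target confidences are required.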
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- RIFLE: Imputation and Robust Inference from Low Order Marginals [10.082738539201804]
We develop a statistical inference framework for regression and classification in the presence of missing data without imputation.
Our framework, RIFLE, estimates low-order moments of the underlying data distribution with corresponding confidence intervals to learn a distributionally robust model.
Our experiments demonstrate that RIFLE outperforms other benchmark algorithms when the percentage of missing values is high and/or when the number of data points is relatively small.
arXiv Detail & Related papers (2021-09-01T23:17:30Z)
- OR-Net: Pointwise Relational Inference for Data Completion under Partial Observation [51.083573770706636]
This work uses relational inference to fill in the incomplete data.
We propose Omni-Relational Network (OR-Net) to model the pointwise relativity in two aspects.
arXiv Detail & Related papers (2021-05-02T06:05:54Z)
- Neural Methods for Point-wise Dependency Estimation [129.93860669802046]
We focus on estimating point-wise dependency (PD), which quantitatively measures how likely two outcomes co-occur.
We demonstrate the effectiveness of our approaches in 1) MI estimation, 2) self-supervised representation learning, and 3) cross-modal retrieval task.
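Empirically, PD(x, y) = p(x, y) / (p(x) p(y)), and MI is the expectation of log PD under the joint distribution. A minimal count-based sketch for discrete outcomes (the paper's neural estimators handle the continuous case; the toy co-occurrence data here is illustrative):

```python
from collections import Counter
import math

def pointwise_dependency(pairs):
    """Empirical PD(x, y) = p(x, y) / (p(x) p(y)) from co-occurrence counts."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return {(x, y): (c / n) / ((px[x] / n) * (py[y] / n))
            for (x, y), c in joint.items()}

pairs = [("a", 1)] * 4 + [("b", 2)] * 4 + [("a", 2)] * 2
pd_vals = pointwise_dependency(pairs)

# MI is the expectation of log PD under the joint distribution
joint = Counter(pairs)
mi = sum((c / len(pairs)) * math.log(pd_vals[xy]) for xy, c in joint.items())
```

PD above 1 marks outcomes that co-occur more often than independence predicts; averaging log PD recovers MI, which links this entry back to the MI-estimation theme of the main paper.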
arXiv Detail & Related papers (2020-06-09T23:26:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.