In-Run Data Shapley for Adam Optimizer
- URL: http://arxiv.org/abs/2602.00329v2
- Date: Fri, 06 Feb 2026 15:27:57 GMT
- Title: In-Run Data Shapley for Adam Optimizer
- Authors: Meng Ding, Zeqing Zhang, Di Wang, Lijie Hu
- Abstract summary: We propose Adam-Aware In-Run Data Shapley, which restores additivity by redefining utility under a fixed-state assumption. Our method achieves near-perfect fidelity to ground-truth marginal contributions while retaining $\sim$95\% of standard training throughput.
- Score: 13.904612598915165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable data attribution is essential for mitigating bias and reducing computational waste in modern machine learning, with the Shapley value serving as the theoretical gold standard. While recent "In-Run" methods bypass the prohibitive cost of retraining by estimating contributions dynamically, they heavily rely on the linear structure of Stochastic Gradient Descent (SGD) and fail to capture the complex dynamics of adaptive optimizers like Adam. In this work, we demonstrate that data attribution is inherently optimizer-dependent: we show that SGD-based proxies diverge significantly from true contributions under Adam (Pearson $R \approx 0.11$), rendering them ineffective for modern training pipelines. To bridge this gap, we propose Adam-Aware In-Run Data Shapley. We derive a closed-form approximation that restores additivity by redefining utility under a fixed-state assumption and enable scalable computation via a novel Linearized Ghost Approximation. This technique linearizes the variance-dependent scaling term, allowing us to compute pairwise gradient dot-products without materializing per-sample gradients. Extensive experiments show that our method achieves near-perfect fidelity to ground-truth marginal contributions ($R > 0.99$) while retaining $\sim$95\% of standard training throughput. Furthermore, our Adam-aware attribution significantly outperforms SGD-based baselines in data attribution downstream tasks.
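As a hedged illustration of the kind of "ghost" computation the abstract describes (not the paper's implementation, and ignoring the Adam-specific variance linearization): for a single linear layer, each per-sample weight gradient is an outer product of the layer input and the backpropagated output gradient, so pairwise gradient dot-products factor into two small Gram matrices and never require materializing per-sample gradients. All names and shapes below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the "ghost" dot-product trick for one linear layer.
# Per-sample weight gradient: g_i = delta_i a_i^T (an outer product), so
# <g_i, g_j> = (delta_i . delta_j) * (a_i . a_j) -- no B x (d_out*d_in)
# per-sample gradient matrix is ever built.

rng = np.random.default_rng(0)
B, d_in, d_out = 4, 8, 3
A = rng.normal(size=(B, d_in))    # layer inputs (activations), one row per sample
D = rng.normal(size=(B, d_out))   # backpropagated output gradients, one row per sample

# Naive route: materialize every per-sample gradient, then take dot products.
G = np.einsum("bo,bi->boi", D, A).reshape(B, -1)
naive = G @ G.T

# Ghost route: elementwise product of two B x B Gram matrices.
ghost = (D @ D.T) * (A @ A.T)

assert np.allclose(naive, ghost)
```

The naive route needs O(B · d_out · d_in) memory for the stacked per-sample gradients; the factored route needs only the two B × B Gram matrices, which is what makes the approach scalable.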
Related papers
- Unbiased Dynamic Pruning for Efficient Group-Based Policy Optimization [60.87651283510059]
Group Relative Policy Optimization (GRPO) effectively scales LLM reasoning but incurs prohibitive computational costs. We propose Dynamic Pruning Policy Optimization (DPPO), a framework that enables dynamic pruning while preserving unbiased gradient estimation. To mitigate the data sparsity induced by pruning, we introduce Dense Prompt Packing, a window-based greedy strategy.
arXiv Detail & Related papers (2026-03-04T14:48:53Z) - Labels or Preferences? Budget-Constrained Learning with Human Judgments over AI-Generated Outputs [17.028710603629026]
We show how to optimally allocate a fixed annotation budget between ground-truth labels and pairwise preferences over AI-generated outputs. We introduce Preference-Calibrated Active Learning (PCAL), a novel robustness method that learns an optimal data acquisition strategy. This work provides a principled and statistically efficient approach for budget-constrained learning in modern AI.
arXiv Detail & Related papers (2026-01-19T23:23:29Z) - Algorithmic Guarantees for Distilling Supervised and Offline RL Datasets [16.403657943391188]
We develop and analyze an efficient dataset distillation algorithm for supervised learning. We prove that our algorithm needs only $\tilde{O}(d^2)$ sampled regressors to derive a synthetic dataset. We extend our algorithm to offline RL dataset distillation by matching the Bellman loss.
arXiv Detail & Related papers (2025-11-29T16:04:38Z) - Adam Reduces a Unique Form of Sharpness: Theoretical Insights Near the Minimizer Manifold [14.185079197889806]
We show that Adam implicitly reduces a unique form of sharpness measure shaped by its adaptive updates, leading to qualitatively different solutions from Gradient Descent. More specifically, when the loss is small, Adam wanders around the manifold of minimizers and takes semi-gradients to minimize this sharpness measure in an adaptive manner.
arXiv Detail & Related papers (2025-11-04T17:58:57Z) - Distributionally Robust Optimization with Adversarial Data Contamination [49.89480853499918]
We focus on optimizing Wasserstein-1 DRO objectives for generalized linear models with convex Lipschitz loss functions. Our primary contribution lies in a novel modeling framework that integrates robustness against training data contamination with robustness against distributional shifts. This work establishes the first rigorous guarantees, supported by efficient computation, for learning under the dual challenges of data contamination and distributional shifts.
arXiv Detail & Related papers (2025-07-14T18:34:10Z) - A Simplified Analysis of SGD for Linear Regression with Weight Averaging [64.2393952273612]
Recent work by Zou et al. (2021) provides sharp rates for SGD optimization in linear regression using a constant learning rate. We provide a simplified analysis recovering the same bias and variance bounds given in Zou et al. (2021), based on simple linear algebra tools. We believe our work makes the analysis of gradient descent on linear regression very accessible and will be helpful in further analyzing mini-batching and learning rate scheduling.
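A minimal sketch of the setting this entry analyzes: constant-step-size SGD on least squares with averaging over the tail iterates. The hyperparameters and variable names below are illustrative assumptions, not values from the cited analysis.

```python
import numpy as np

# Hedged sketch: single-sample SGD with a constant step size on linear
# regression, averaging only the second half of the iterates ("tail
# averaging") to suppress the stationary gradient noise.

rng = np.random.default_rng(1)
d, n, lr = 5, 20000, 0.05
w_true = rng.normal(size=d)

w = np.zeros(d)
avg, count = np.zeros(d), 0
for t in range(n):
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()   # noisy label
    w -= lr * (x @ w - y) * x             # one stochastic gradient step
    if t >= n // 2:                       # average only the tail iterates
        avg += w
        count += 1
avg /= count

# The averaged iterate sits much closer to w_true than any raw iterate.
print(np.linalg.norm(avg - w_true))
```

The constant step size keeps the raw iterates bouncing in a noise ball around the minimizer; tail averaging cancels that variance without slowing the initial bias decay, which is the trade-off the sharp bias/variance bounds quantify.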
arXiv Detail & Related papers (2025-06-18T15:10:38Z) - SIM-Shapley: A Stable and Computationally Efficient Approach to Shapley Value Approximation [10.009607907227293]
Shapley value (SV) methods provide a principled framework for feature attribution in complex models but incur high computational costs. We propose Iterative Momentum for Shapley Value Approximation (SIM-Shapley), a stable and efficient approximation method inspired by optimization. In our numerical experiments, SIM-Shapley reduces computation time by up to 85% relative to state-of-the-art baselines.
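A hedged toy sketch of the momentum idea (not the SIM-Shapley algorithm itself): smooth the per-permutation Monte Carlo marginal contributions with an exponential moving average. The game, its weights, and `beta` below are illustrative assumptions.

```python
import random

# Hedged sketch: permutation-sampling Shapley estimation where the running
# estimate is an exponential moving average (the "momentum" term) of the
# marginal contributions observed in each sampled permutation.

def value(coalition):
    # Toy additive game: the exact Shapley value of player i is weights[i].
    weights = {0: 1.0, 1: 2.0, 2: 3.0}
    return sum(weights[i] for i in coalition)

def sim_shapley(n=3, iters=2000, beta=0.9, seed=0):
    rng = random.Random(seed)
    est = [0.0] * n
    players = list(range(n))
    for _ in range(iters):
        perm = players[:]
        rng.shuffle(perm)
        coalition, prev = set(), 0.0
        marg = [0.0] * n
        for p in perm:                      # walk the permutation, record marginals
            coalition.add(p)
            v = value(coalition)
            marg[p] = v - prev
            prev = v
        # Momentum update: EMA of marginal contributions.
        est = [beta * e + (1 - beta) * m for e, m in zip(est, marg)]
    return est

print([round(x, 2) for x in sim_shapley()])  # → [1.0, 2.0, 3.0] for this additive game
```

For a non-additive game the marginals fluctuate across permutations, and the EMA is what damps that fluctuation into a stable estimate.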
arXiv Detail & Related papers (2025-05-13T03:23:10Z) - Online non-parametric likelihood-ratio estimation by Pearson-divergence functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric LRE (OLRE) for the setting where pairs of i.i.d. observations $(x_t \sim p, x'_t \sim q)$ are observed over time.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
arXiv Detail & Related papers (2023-11-03T13:20:11Z) - Uncovering the Hidden Cost of Model Compression [43.62624133952414]
Visual Prompting has emerged as a pivotal method for transfer learning in computer vision.
Model compression detrimentally impacts the performance of visual prompting-based transfer.
However, negative effects on calibration are not present when models are compressed via quantization.
arXiv Detail & Related papers (2023-08-29T01:47:49Z) - SIMPLE: A Gradient Estimator for $k$-Subset Sampling [42.38652558807518]
In this work, we fall back to discrete $k$-subset sampling on the forward pass.
We show that our gradient estimator, SIMPLE, exhibits lower bias and variance compared to state-of-the-art estimators.
Empirical results show improved performance on learning to explain and sparse linear regression.
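As a hedged illustration of discrete $k$-subset sampling on the forward pass (a standard Gumbel top-$k$ construction, not the SIMPLE estimator itself; the logits below are illustrative):

```python
import numpy as np

# Hedged sketch: exact discrete k-subset sampling via the Gumbel top-k trick.
# Perturb each logit with independent Gumbel noise and keep the k largest,
# which samples k items without replacement with probability proportional
# to exp(logit).

def gumbel_topk_subset(logits, k, rng):
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    mask = np.zeros_like(logits)
    mask[np.argsort(logits + gumbel)[-k:]] = 1.0   # hard k-hot mask
    return mask

rng = np.random.default_rng(0)
mask = gumbel_topk_subset(np.array([2.0, 0.0, 1.0, -1.0, 0.5]), k=2, rng=rng)
print(mask)  # binary vector with exactly two ones
```

The hard, discrete sample is what the forward pass uses; the gradient-estimation question (which SIMPLE addresses with lower bias and variance) is how to backpropagate through this non-differentiable top-$k$ step.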
arXiv Detail & Related papers (2022-10-04T22:33:16Z) - The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift [127.21287240963859]
We investigate a transfer learning approach with pretraining on the source data and finetuning based on the target data.
For a large class of linear regression instances, transfer learning with $O(N^2)$ source data is as effective as supervised learning with $N$ target data.
arXiv Detail & Related papers (2022-08-03T05:59:49Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.