DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and
Diffusion Models
- URL: http://arxiv.org/abs/2310.00902v3
- Date: Wed, 13 Mar 2024 14:27:46 GMT
- Title: DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and
Diffusion Models
- Authors: Yongchan Kwon, Eric Wu, Kevin Wu, James Zou
- Abstract summary: We propose DataInf, an efficient influence approximation method that is practical for large-scale generative AI models.
Our theoretical analysis shows that DataInf is particularly well-suited for parameter-efficient fine-tuning techniques such as LoRA.
In applications to RoBERTa-large, Llama-2-13B-chat, and stable-diffusion-v1.5 models, DataInf effectively identifies the most influential fine-tuning examples better than other approximate influence scores.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Quantifying the impact of training data points is crucial for understanding
the outputs of machine learning models and for improving the transparency of
the AI pipeline. The influence function is a principled and popular data
attribution method, but its computational cost often makes it challenging to
use. This issue becomes more pronounced in the setting of large language models
and text-to-image models. In this work, we propose DataInf, an efficient
influence approximation method that is practical for large-scale generative AI
models. Leveraging an easy-to-compute closed-form expression, DataInf
outperforms existing influence computation algorithms in terms of computational
and memory efficiency. Our theoretical analysis shows that DataInf is
particularly well-suited for parameter-efficient fine-tuning techniques such as
LoRA. Through systematic empirical evaluations, we show that DataInf accurately
approximates influence scores and is orders of magnitude faster than existing
methods. In applications to RoBERTa-large, Llama-2-13B-chat, and
stable-diffusion-v1.5 models, DataInf effectively identifies the most
influential fine-tuning examples better than other approximate influence
scores. Moreover, it can help to identify which data points are mislabeled.
Related papers
- Data Valuation using Neural Networks for Efficient Instruction Fine-Tuning [11.153153731598634]
Influence functions provide crucial insights into model training.
Existing methods suffer from large computational costs and limited generalization.
In this paper, we explore the use of small neural networks to estimate influence values, achieving up to 99% cost reduction.
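A hedged sketch of this amortization idea, assuming the small network regresses precomputed influence scores from fixed example embeddings (the paper's exact inputs and architecture may differ):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def amortized_influence(embeddings, exact_scores, labeled_idx):
    """Fit a small MLP on the subset whose influence was computed exactly,
    then predict influence for all remaining examples (the cost saving).

    exact_scores must align with embeddings[labeled_idx].
    """
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(embeddings[labeled_idx], exact_scores)
    return model.predict(embeddings)
```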
arXiv Detail & Related papers (2025-02-14T07:55:47Z)
- Efficient Multi-Agent System Training with Data Influence-Oriented Tree Search [59.75749613951193]
We propose Data Influence-oriented Tree Search (DITS) to guide both tree search and data selection.
By leveraging influence scores, we effectively identify the most impactful data for system improvement.
We derive influence score estimation methods tailored for non-differentiable metrics.
arXiv Detail & Related papers (2025-02-02T23:20:16Z)
- Capturing the Temporal Dependence of Training Data Influence [100.91355498124527]
We formalize the concept of trajectory-specific leave-one-out (LOO) influence, which quantifies the impact of removing a data point during training.
We propose data value embedding, a novel technique enabling efficient approximation of trajectory-specific LOO.
As data value embedding captures training data ordering, it offers valuable insights into model training dynamics.
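As a rough intuition for trajectory-specific influence, a first-order, TracIn-style caricature credits each example with the loss change its gradient steps induce along the actual training trajectory; the paper's data value embedding is a more faithful approximation, but this sketch shows why ordering matters:

```python
import numpy as np

def trajectory_influence(step_grads, test_grads, lrs):
    """First-order, TracIn-style caricature of trajectory-specific LOO.

    step_grads[t]: (n, d) per-example gradients at step t (zero rows for
                   examples not in that step's batch)
    test_grads[t]: (d,) test-loss gradient at the parameters of step t
    lrs[t]:        learning rate at step t
    The same example scores differently depending on *when* it is used,
    because later steps see different test gradients.
    """
    scores = np.zeros(step_grads[0].shape[0])
    for g_t, g_test, lr in zip(step_grads, test_grads, lrs):
        scores += lr * (g_t @ g_test)  # per-example first-order contribution
    return scores
```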
arXiv Detail & Related papers (2024-12-12T18:28:55Z)
- HyperINF: Unleashing the HyperPower of the Schulz's Method for Data Influence Estimation [37.62285675595782]
We propose HyperINF, an efficient and accurate influence function approximation method.
We incorporate the generalized Fisher information matrix (GFIM) as a low-rank approximation of the Hessian matrix.
On LoRA-tuned models, HyperINF achieves superior downstream performance with minimal memory and computational overhead.
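The named ingredient here, Schulz's method (the Newton-Schulz iteration), is a matrix-multiplication-only fixed point for the matrix inverse. A minimal sketch of the core iteration; HyperINF applies it to a damped low-rank GFIM rather than the raw matrix shown here:

```python
import numpy as np

def schulz_inverse(A, iters=30):
    """Newton-Schulz iteration X <- X(2I - AX), converging to A^{-1}.

    The scaled initialization guarantees ||I - A X0|| < 1, the standard
    convergence condition; the iteration then converges quadratically and
    uses only matrix multiplications (GPU-friendly, unlike np.linalg.inv).
    """
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X
```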
arXiv Detail & Related papers (2024-10-07T14:42:45Z)
- LESS: Selecting Influential Data for Targeted Instruction Tuning [64.78894228923619]
We propose LESS, an efficient algorithm to estimate data influences and perform Low-rank gradiEnt Similarity Search for instruction data selection.
We show that training on a LESS-selected 5% of the data can often outperform training on the full dataset across diverse downstream tasks.
Our method goes beyond surface-form cues to identify data that exemplifies the necessary reasoning skills for the intended downstream application.
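A minimal sketch of the selection step, using random projection as the low-rank gradient compression and plain cosine similarity (LESS itself works with LoRA gradients and optimizer-aware inner products; the names here are ours):

```python
import numpy as np

def less_style_selection(cand_grads, target_grad, k, proj_dim=1024, seed=0):
    """Select the k candidates whose compressed gradients align best
    with the target-task gradient direction."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((cand_grads.shape[1], proj_dim)) / np.sqrt(proj_dim)
    G, t = cand_grads @ P, target_grad @ P  # low-rank gradient features
    sims = (G @ t) / (np.linalg.norm(G, axis=1) * np.linalg.norm(t) + 1e-12)
    return np.argsort(-sims)[:k]
```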
arXiv Detail & Related papers (2024-02-06T19:18:04Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources.
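A toy sketch of one distribution-matching update, shrunk to a single linear random feature map so the gradient is analytic (the actual method matches embedding distributions across many randomly sampled networks, per class):

```python
import numpy as np

def dm_step(syn, real, W, lr=0.1):
    """Move synthetic points so their mean feature matches the real data's.

    Loss: || mean(W @ real) - mean(W @ syn) ||^2 ; with a linear map every
    synthetic point shares the gradient (2/m) * W^T (mu_syn - mu_real).
    """
    mu_real = (real @ W.T).mean(axis=0)
    mu_syn = (syn @ W.T).mean(axis=0)
    grad = (2.0 / len(syn)) * (mu_syn - mu_real) @ W
    return syn - lr * grad
```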
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Recommendation Unlearning via Influence Function [42.4931807753579]
We propose a new Influence Function-based Recommendation Unlearning (IFRU) framework, which efficiently updates the model without retraining.
IFRU achieves more than 250 times acceleration compared to retraining-based methods with recommendation performance comparable to full retraining.
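The mechanism behind retraining-free unlearning is the classical influence-function Newton step: removing a set of points shifts the optimum by roughly the inverse Hessian times their summed gradient. A minimal sketch under that assumption (IFRU adds its own efficiency machinery on top):

```python
import numpy as np

def unlearn_step(theta, hessian, forget_grads, n_train, damping=1e-3):
    """Approximate the parameters that retraining without the forget set
    would reach: theta' = theta + (1/n) H^{-1} sum_z grad(z)."""
    H = hessian + damping * np.eye(theta.shape[0])
    g = forget_grads.sum(axis=0)
    return theta + np.linalg.solve(H, g) / n_train
```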
arXiv Detail & Related papers (2023-07-05T09:42:51Z)
- Striving for data-model efficiency: Identifying data externalities on group performance [75.17591306911015]
Building trustworthy, effective, and responsible machine learning systems hinges on understanding how differences in training data and modeling decisions interact to impact predictive performance.
We focus on a particular type of data-model inefficiency, in which adding training data from some sources can actually lower performance evaluated on key sub-groups of the population.
Our results indicate that data-efficiency is a key component of both accurate and trustworthy machine learning.
arXiv Detail & Related papers (2022-11-11T16:48:27Z)
- FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging [112.19994766375231]
Influence functions approximate the 'influence' of training data points on test predictions.
We present FastIF, a set of simple modifications to influence functions that significantly improves their run-time.
Our experiments demonstrate the potential of influence functions in model interpretation and correcting model errors.
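One of FastIF's simple modifications is to narrow the candidate set with k-nearest neighbors in feature space before running the expensive inverse-Hessian-vector products. A minimal sketch of that preselection step (feature extraction and the downstream influence call are assumed):

```python
import numpy as np

def knn_candidates(train_feats, test_feat, k=100):
    """Return the k training points nearest to the test point; influence
    is then computed only for these instead of the whole training set."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    return np.argsort(dists)[:k]
```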
arXiv Detail & Related papers (2020-12-31T18:02:34Z)