Revisiting the Fragility of Influence Functions
- URL: http://arxiv.org/abs/2303.12922v2
- Date: Fri, 7 Apr 2023 15:46:06 GMT
- Title: Revisiting the Fragility of Influence Functions
- Authors: Jacob R. Epifano, Ravi P. Ramachandran, Aaron J. Masino, Ghulam Rasool
- Abstract summary: Influence functions, which approximate the effect that leave-one-out training has on the loss, have recently been shown to be fragile when used to verify the accuracy or faithfulness of deep learning explanations.
Here, we analyze the key metrics and procedures that are used to validate influence functions.
Our results indicate that the validation procedures themselves may cause the observed fragility.
- Score: 1.4699455652461724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the last few years, many works have tried to explain the predictions of
deep learning models. Few methods, however, have been proposed to verify the
accuracy or faithfulness of these explanations. Recently, influence functions,
which approximate the effect that leave-one-out training has on the loss
function, have been shown to be fragile. The proposed reason for
their fragility remains unclear. Although previous work suggests the use of
regularization to increase robustness, this does not hold in all cases. In this
work, we seek to investigate the experiments performed in the prior work in an
effort to understand the underlying mechanisms of influence function fragility.
First, we verify influence functions using procedures from the literature under
conditions where the convexity assumptions of influence functions are met.
Then, we relax these assumptions and study the effects of non-convexity by
using deeper models and more complex datasets. Here, we analyze the key metrics
and procedures that are used to validate influence functions. Our results
indicate that the validation procedures may cause the observed fragility.
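For context, here is a minimal sketch of the influence-function approximation the abstract refers to, written in the standard notation of the influence-function literature (the notation is an assumption; it is not given in this listing). For a model with parameters \hat{\theta} minimizing the average training loss over n points, the influence of a training point z on the loss at a test point z_test is

  \mathcal{I}(z, z_{\mathrm{test}}) = -\nabla_\theta L(z_{\mathrm{test}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_\theta L(z, \hat{\theta}),
  \qquad H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat{\theta}),

and the predicted leave-one-out change in test loss is

  L(z_{\mathrm{test}}, \hat{\theta}_{-z}) - L(z_{\mathrm{test}}, \hat{\theta}) \approx -\tfrac{1}{n}\, \mathcal{I}(z, z_{\mathrm{test}}).

Validation procedures in this literature typically compare these predicted changes against the changes measured by actually retraining the model with each point removed (e.g. via Pearson or Spearman correlation); the convexity assumption enters because the Hessian H_{\hat{\theta}} must be positive definite for its inverse to be well defined.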
Related papers
- Do Influence Functions Work on Large Language Models? [10.463762448166714]
Influence functions aim to quantify the impact of individual training data points on a model's predictions.
We evaluate influence functions across multiple tasks and find that they consistently perform poorly in most settings.
arXiv Detail & Related papers (2024-09-30T06:50:18Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- If Influence Functions are the Answer, Then What is the Question? [7.873458431535409]
Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters.
While influence estimates align well with leave-one-out retraining for linear models, recent works have shown this alignment is often poor in neural networks.
arXiv Detail & Related papers (2022-09-12T16:17:43Z)
- What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment [58.442274475425144]
We develop a robust inference algorithm that is efficient almost regardless of how and how fast these functions are learned.
We demonstrate our method in simulation studies and in a case study of career counseling for the unemployed.
arXiv Detail & Related papers (2022-05-20T17:36:33Z)
- Inf-CP: A Reliable Channel Pruning based on Channel Influence [4.692400531340393]
One of the most effective methods of channel pruning is to trim on the basis of the importance of each neuron.
Previous works have proposed to trim by considering the statistics of a single layer or of multiple successive layers of neurons.
We propose to use ensemble learning to train a model for different batches of data.
arXiv Detail & Related papers (2021-12-05T09:30:43Z)
- Causal Effect Estimation using Variational Information Bottleneck [19.6760527269791]
Causal inference aims to estimate the causal effect in a causal relationship when an intervention is applied.
We propose a method to estimate causal effects using a Variational Information Bottleneck (CEVIB).
arXiv Detail & Related papers (2021-10-26T13:46:12Z)
- Causal Inference Under Unmeasured Confounding With Negative Controls: A Minimax Learning Approach [84.29777236590674]
We study the estimation of causal parameters when not all confounders are observed and instead negative controls are available.
Recent work has shown how these can enable identification and efficient estimation via two so-called bridge functions.
arXiv Detail & Related papers (2021-03-25T17:59:19Z)
- Influence Functions in Deep Learning Are Fragile [52.31375893260445]
Influence functions approximate the effect of training samples on test-time predictions.
Influence estimates are fairly accurate for shallow networks.
Hessian regularization is important to obtain high-quality influence estimates.
arXiv Detail & Related papers (2020-06-25T18:25:59Z)
- Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals [53.484562601127195]
We point out the inability to infer behavioral conclusions from probing results.
We offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
arXiv Detail & Related papers (2020-06-01T15:00:11Z)
- Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data.
arXiv Detail & Related papers (2020-05-14T00:45:23Z)