Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement
- URL: http://arxiv.org/abs/2311.15303v1
- Date: Sun, 26 Nov 2023 14:00:14 GMT
- Title: Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement
- Authors: Avani Gupta, Saurabh Saini, P J Narayanan
- Abstract summary: Concept Activation Vectors (CAVs) estimate a model's sensitivity and possible biases to a given concept.
We extend CAVs from post-hoc analysis to ante-hoc training in order to reduce model bias through fine-tuning.
We show applications of concept-sensitive training to debias several classification problems.
- Score: 3.026365073195727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans use abstract concepts for understanding instead of hard features.
Recent interpretability research has focused on human-centered concept
explanations of neural networks. Concept Activation Vectors (CAVs) estimate a
model's sensitivity and possible biases to a given concept. In this paper, we
extend CAVs from post-hoc analysis to ante-hoc training in order to reduce
model bias through fine-tuning using an additional Concept Loss. Concepts were
defined on the final layer of the network in the past. We generalize this to
intermediate layers using class prototypes. This facilitates class learning in
the last convolution layer, which is known to be most informative. We also
introduce Concept Distillation to create richer concepts using a pre-trained
knowledgeable model as the teacher. Our method can sensitize or desensitize a
model towards concepts. We show applications of concept-sensitive training to
debias several classification problems. We also use concepts to induce prior
knowledge into Intrinsic Image Decomposition (IID), a reconstruction problem. Concept-sensitive training can
improve model interpretability, reduce biases, and induce prior knowledge.
Please visit https://avani17101.github.io/Concept-Distilllation/ for code and
more details.
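To make the recipe in the abstract concrete, the following is a minimal, illustrative sketch: a CAV is estimated as the normalized weight vector of a linear probe separating concept activations from random activations, and fine-tuning then adds a concept loss that penalizes the model's sensitivity along that direction. The toy network, the TCAV-style sensitivity penalty, and the weight `lambda_c` are assumptions for illustration, not the authors' exact Concept Loss or prototype-based formulation.

```python
# Illustrative sketch (not the paper's implementation):
# (1) estimate a Concept Activation Vector (CAV) at one layer,
# (2) fine-tune with an added concept loss that desensitizes the model
#     to that concept direction. Layer choice, loss form, and lambda_c
#     are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy backbone: `features` stands in for an intermediate layer, `head` for the classifier.
features = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
head = nn.Linear(16, 3)

# --- Step 1: estimate a CAV -------------------------------------------------
# Standard recipe: train a linear probe to separate activations of concept
# examples from activations of random examples; the probe's (normalized)
# weight vector is the CAV.
concept_x = torch.randn(64, 32) + 1.0   # stand-in for concept images
random_x = torch.randn(64, 32)          # stand-in for random images
with torch.no_grad():
    acts = torch.cat([features(concept_x), features(random_x)])
probe_labels = torch.cat([torch.ones(64), torch.zeros(64)])

probe = nn.Linear(16, 1)
probe_opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):
    probe_opt.zero_grad()
    probe_loss = F.binary_cross_entropy_with_logits(probe(acts).squeeze(1), probe_labels)
    probe_loss.backward()
    probe_opt.step()
cav = F.normalize(probe.weight.detach().squeeze(0), dim=0)  # concept direction

# --- Step 2: fine-tune with an additional concept loss ----------------------
# One plausible desensitization term: penalize alignment between the CAV and
# the gradient of the true-class logit w.r.t. the layer activations
# (a TCAV-style directional sensitivity).
lambda_c = 0.5
opt = torch.optim.Adam(list(features.parameters()) + list(head.parameters()), lr=1e-3)
x = torch.randn(8, 32)
y = torch.randint(0, 3, (8,))
for _ in range(10):
    opt.zero_grad()
    a = features(x)
    logits = head(a)
    task_loss = F.cross_entropy(logits, y)
    class_logit_sum = logits.gather(1, y[:, None]).sum()
    grads = torch.autograd.grad(class_logit_sum, a, create_graph=True)[0]
    concept_loss = F.cosine_similarity(grads, cav.unsqueeze(0), dim=1).abs().mean()
    (task_loss + lambda_c * concept_loss).backward()
    opt.step()
```

Flipping the sign of the concept term (rewarding rather than penalizing alignment) would sensitize instead of desensitize the model, which is the other direction the abstract mentions.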
Related papers
- How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization? [91.49559116493414]
We propose a novel Concept-Incremental text-to-image Diffusion Model (CIDM) that resolves catastrophic forgetting and concept neglect to learn new customization tasks in a concept-incremental manner.
Experiments validate that our CIDM surpasses existing custom diffusion models.
arXiv Detail & Related papers (2024-10-23T06:47:29Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Concept Bottleneck Models Without Predefined Concepts [26.156636891713745]
We introduce an input-dependent concept selection mechanism that ensures only a small subset of concepts is used across all classes.
We show that our approach improves downstream performance and narrows the performance gap to black-box models.
arXiv Detail & Related papers (2024-07-04T13:34:50Z)
- Restyling Unsupervised Concept Based Interpretable Networks with Generative Models [14.604305230535026]
We propose a novel method that relies on mapping the concept features to the latent space of a pretrained generative model.
We quantitatively ascertain the efficacy of our method in terms of accuracy of the interpretable prediction network, fidelity of reconstruction, as well as faithfulness and consistency of learnt concepts.
arXiv Detail & Related papers (2024-07-01T14:39:41Z)
- ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance [78.44823280247438]
We present ClassDiffusion, a technique that leverages a semantic preservation loss to explicitly regulate the concept space when learning the new concept.
Despite its simplicity, this helps avoid semantic drift when fine-tuning on the target concepts.
In response to the ineffective evaluation of the CLIP-T metric, we introduce the BLIP2-T metric.
arXiv Detail & Related papers (2024-05-27T17:50:10Z)
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
- Attributing Learned Concepts in Neural Networks to Training Data [5.930268338525991]
We find evidence for convergence, where removing the 10,000 top attributing images for a concept and retraining the model does not change the location of the concept in the network.
This suggests that the features that inform the development of a concept are spread in a more diffuse manner across its exemplars, implying robustness in concept formation.
arXiv Detail & Related papers (2023-10-04T20:26:59Z)
- Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts.
We proposed Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions.
We demonstrated CG outperforms CAV in both toy examples and real world datasets.
arXiv Detail & Related papers (2022-08-31T17:06:46Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Concept Bottleneck Model with Additional Unsupervised Concepts [0.5939410304994348]
We propose a novel interpretable model based on the concept bottleneck model (CBM).
CBM uses concept labels to train an intermediate layer as an additional visible layer.
By seamlessly training these two types of concepts while reducing the amount of computation, we obtain both supervised and unsupervised concepts simultaneously (see the minimal bottleneck sketch after this list).
arXiv Detail & Related papers (2022-02-03T08:30:51Z)
- MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks [10.06397994266945]
We propose MACE: a Model Agnostic Concept Extractor, which can explain the working of a convolutional network through smaller concepts.
We validate our framework using VGG16 and ResNet50 CNN architectures, and on datasets like Animals With Attributes 2 (AWA2) and Places365.
arXiv Detail & Related papers (2020-11-03T04:40:49Z)
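Several entries above build on Concept Bottleneck Models; as referenced in the entry on CBMs with additional unsupervised concepts, here is a minimal sketch of the bottleneck structure those papers share: inputs are mapped to a small vector of interpretable concept scores, and the final prediction is made only from that bottleneck. The toy sizes, equal loss weighting, and joint-training setup are illustrative assumptions, not any one paper's implementation.

```python
# Hedged sketch of a Concept Bottleneck Model: concept labels supervise an
# intermediate "visible" layer, and the task head sees only the concepts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckModel(nn.Module):
    def __init__(self, in_dim=64, n_concepts=8, n_classes=4):
        super().__init__()
        self.concept_net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                         nn.Linear(32, n_concepts))
        self.task_head = nn.Linear(n_concepts, n_classes)  # sees only concept scores

    def forward(self, x):
        concept_logits = self.concept_net(x)                    # interpretable layer
        class_logits = self.task_head(torch.sigmoid(concept_logits))
        return concept_logits, class_logits

# Joint training: concept labels supervise the bottleneck, class labels the head.
model = ConceptBottleneckModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)                      # stand-in inputs
c = torch.randint(0, 2, (32, 8)).float()     # per-example concept annotations
y = torch.randint(0, 4, (32,))               # class labels

for _ in range(20):
    opt.zero_grad()
    concept_logits, class_logits = model(x)
    loss = (F.binary_cross_entropy_with_logits(concept_logits, c)
            + F.cross_entropy(class_logits, y))  # equal weighting is an assumption
    loss.backward()
    opt.step()
```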
This list is automatically generated from the titles and abstracts of the papers on this site.