Learning Bottleneck Concepts in Image Classification
- URL: http://arxiv.org/abs/2304.10131v1
- Date: Thu, 20 Apr 2023 07:32:05 GMT
- Title: Learning Bottleneck Concepts in Image Classification
- Authors: Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara
- Abstract summary: Bottleneck Concept Learner (BotCL) represents an image solely by the presence/absence of concepts learned through training over the target task without explicit supervision over the concepts.
BotCL uses self-supervision and tailored regularizers so that learned concepts can be human-understandable.
- Score: 24.624603699966094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpreting and explaining the behavior of deep neural networks is critical
for many tasks. Explainable AI provides a way to address this challenge, mostly
by providing per-pixel relevance to the decision. Yet, interpreting such
explanations may require expert knowledge. Some recent attempts toward
interpretability adopt a concept-based framework, giving a higher-level
relationship between some concepts and model decisions. This paper proposes
Bottleneck Concept Learner (BotCL), which represents an image solely by the
presence/absence of concepts learned through training over the target task
without explicit supervision over the concepts. It uses self-supervision and
tailored regularizers so that learned concepts can be human-understandable.
Using some image classification tasks as our testbed, we demonstrate BotCL's
potential to rebuild neural networks for better interpretability. Code is
available at https://github.com/wbw520/BotCL and a simple demo is available at
https://botcl.liangzhili.com/.
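To make the bottleneck concrete, here is a minimal PyTorch sketch of the idea the abstract describes: spatial features are scored against learned concept prototypes, and the classifier sees only the resulting concept-presence vector. The class name, shapes, and max-pooled attention are illustrative assumptions, not BotCL's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Minimal sketch: an image is represented solely by the (soft)
    presence/absence of k learned concepts, and the classifier sees
    nothing else. Hypothetical parameterization, not BotCL's code."""

    def __init__(self, feat_dim=512, num_concepts=20, num_classes=10):
        super().__init__()
        # One learned prototype vector per concept.
        self.concepts = nn.Parameter(torch.randn(num_concepts, feat_dim))
        # Linear head over the concept-presence vector only.
        self.classifier = nn.Linear(num_concepts, num_classes, bias=False)

    def forward(self, feats):
        # feats: (batch, positions, feat_dim) spatial features from any backbone.
        attn = torch.einsum("bnd,kd->bnk", feats, self.concepts)  # concept-position scores
        presence = torch.sigmoid(attn.max(dim=1).values)          # (batch, num_concepts) in (0, 1)
        return self.classifier(presence), presence

# Example: logits, presence = ConceptBottleneck()(torch.randn(4, 49, 512))
```

BotCL's self-supervision and the tailored regularizers that make concepts consistent and mutually distinct are omitted from this sketch.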
Related papers
- Explainable Concept Generation through Vision-Language Preference Learning [7.736445799116692]
Concept-based explanations have become a popular choice for explaining deep neural networks post-hoc.
We devise a reinforcement learning-based preference optimization algorithm that fine-tunes the vision-language generative model.
In addition to showing the efficacy and reliability of our method, we show how our method can be used as a diagnostic tool for analyzing neural networks.
arXiv Detail & Related papers (2024-08-24T02:26:42Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
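As a rough sketch of the "name" step this summary describes, each discovered concept direction could be labeled with the vocabulary word whose text embedding, in a joint image-text space such as CLIP's, lies closest to it; the helper below and its inputs are hypothetical.

```python
import torch
import torch.nn.functional as F

def name_concepts(concept_dirs, vocab_embeds, vocab):
    """Label each discovered concept direction with the nearest word.
    concept_dirs: (k, d) directions found in the model's feature space
    (e.g., by a sparse autoencoder); vocab_embeds: (v, d) text embeddings
    of candidate words. Hypothetical helper, not the paper's code."""
    sims = F.normalize(concept_dirs, dim=-1) @ F.normalize(vocab_embeds, dim=-1).T
    return [vocab[i] for i in sims.argmax(dim=-1).tolist()]
```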
- Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement [3.026365073195727]
Concept Activation Vectors (CAVs) estimate a model's sensitivity and possible biases to a given concept.
We extend CAVs from post-hoc analysis to ante-hoc training in order to reduce model bias through fine-tuning.
We show applications of concept-sensitive training to debias several classification problems.
arXiv Detail & Related papers (2023-11-26T14:00:14Z)
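For background, a CAV is classically computed (as in the original TCAV work) as the normal of a linear boundary that separates concept examples from random examples in a layer's activation space. The sketch below shows only that classic post-hoc step, not the paper's ante-hoc, concept-sensitive fine-tuning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Classic CAV (TCAV-style): the normal of a linear boundary separating
    concept activations from random activations at a chosen layer."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

# Sensitivity of a class logit to the concept is then the directional
# derivative of that logit along the CAV at the same layer.
```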
- Identifying Interpretable Subspaces in Image Representations [54.821222487956355]
We propose a framework to explain features of image representations using Contrasting Concepts (FALCON).
For a target feature, FALCON captions its highly activating cropped images using a large captioning dataset and a pre-trained vision-language model like CLIP.
Each word among the captions is scored and ranked, leading to a small number of shared, human-understandable concepts.
arXiv Detail & Related papers (2023-07-20T00:02:24Z)
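A toy approximation of the scoring step might rank words that are frequent in captions of highly activating crops but rare in captions of baseline images; FALCON's actual procedure, which contrasts against low-activating images, is more involved, and everything below is simplified.

```python
from collections import Counter

def score_words(activating_captions, baseline_captions, top_k=10):
    """Toy ranking: favor words frequent in captions of highly activating
    crops and rare in baseline captions. Simplified stand-in for FALCON."""
    act = Counter(w for c in activating_captions for w in c.lower().split())
    base = Counter(w for c in baseline_captions for w in c.lower().split())
    scores = {w: n / (1 + base[w]) for w, n in act.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
```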
- Seeing in Words: Learning to Classify through Language Bottlenecks [59.97827889540685]
Humans can explain their predictions using succinct and intuitive descriptions.
We show that a vision model whose feature representations are text can effectively classify ImageNet images.
arXiv Detail & Related papers (2023-06-29T00:24:42Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
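A minimal sketch of such an occupancy pretext task, under the assumption that the backbone being pre-trained produces a scene latent: a small MLP classifies 3D query points as occupied or free, with labels derived from lidar visibility. The head below is illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class OccupancyHead(nn.Module):
    """Illustrative pretext head: classify a 3D query point as occupied or
    free, conditioned on a scene latent from the backbone being pre-trained.
    Labels can come from lidar visibility (free before the ray's hit point,
    occupied just behind it). Not the paper's actual architecture."""

    def __init__(self, latent_dim=128, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, scene_latent, query_xyz):
        # scene_latent: (batch, latent_dim); query_xyz: (batch, 3)
        return self.mlp(torch.cat([scene_latent, query_xyz], dim=-1))  # occupancy logit

# Pre-training loss (sketch):
# loss = nn.functional.binary_cross_entropy_with_logits(logit, occ_label)
```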
- TCNL: Transparent and Controllable Network Learning Via Embedding Human-Guided Concepts [10.890006696574803]
We propose a novel method, Transparent and Controllable Network Learning (TCNL), to overcome such challenges.
Towards the goal of improving transparency-interpretability, in TCNL, we define some concepts for specific classification tasks through scientific human-intuition study.
We also build the concept mapper to visualize features extracted by the concept extractor in a human-intuitive way.
arXiv Detail & Related papers (2022-10-07T01:18:37Z)
- SegDiscover: Visual Concept Discovery via Unsupervised Semantic Segmentation [29.809900593362844]
SegDiscover is a novel framework that discovers semantically meaningful visual concepts from imagery datasets with complex scenes without supervision.
Our method generates concept primitives from raw images, discovers concepts by clustering in the latent space of a self-supervised pretrained encoder, and refines concepts via neural network smoothing.
arXiv Detail & Related papers (2022-04-22T20:44:42Z)
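The clustering step could look roughly like the sketch below, with all parameters illustrative: patch embeddings from a self-supervised encoder are grouped by k-means, and each cluster serves as a concept primitive.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_primitives(patch_feats, num_concepts=32):
    """Assign each image patch a concept id by clustering its embedding
    from a self-supervised encoder; each cluster is a concept primitive."""
    km = KMeans(n_clusters=num_concepts, n_init=10, random_state=0)
    return km.fit_predict(patch_feats)  # (num_patches,) concept ids

# patch_feats: (num_patches, dim) array; the paper's refinement step
# (neural-network smoothing) is omitted from this sketch.
```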
- Expressive Explanations of DNNs by Combining Concept Analysis with ILP [0.3867363075280543]
We use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN).
We show that our explanation is faithful to the original black-box model.
arXiv Detail & Related papers (2021-05-16T07:00:27Z)
- Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning [78.13740873213223]
Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems.
We propose a new benchmark Bongard-LOGO for human-level concept learning and reasoning.
arXiv Detail & Related papers (2020-10-02T03:19:46Z)
- Self-Supervised Viewpoint Learning From Image Collections [116.56304441362994]
We propose a novel learning framework which incorporates an analysis-by-synthesis paradigm to reconstruct images in a viewpoint-aware manner.
We show that our approach performs competitively with fully-supervised approaches for several object categories like human faces, cars, buses, and trains.
arXiv Detail & Related papers (2020-04-03T22:01:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.