Concept Activation Vectors for Generating User-Defined 3D Shapes
- URL: http://arxiv.org/abs/2205.02102v1
- Date: Fri, 29 Apr 2022 13:09:18 GMT
- Title: Concept Activation Vectors for Generating User-Defined 3D Shapes
- Authors: Stefan Druc, Aditya Balu, Peter Wooldridge, Adarsh Krishnamurthy,
Soumik Sarkar
- Abstract summary: We explore the interpretability of 3D geometric deep learning models in the context of Computer-Aided Design (CAD).
We use a deep learning architecture to encode high-dimensional 3D shapes into a vectorized latent representation that can be used to describe arbitrary concepts.
- Score: 11.325580593182414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the interpretability of 3D geometric deep learning models in the
context of Computer-Aided Design (CAD). The field of parametric CAD can be
limited by the difficulty of expressing high-level design concepts in terms of
a few numeric parameters. In this paper, we use a deep learning architecture
to encode high-dimensional 3D shapes into a vectorized latent representation
that can be used to describe arbitrary concepts. Specifically, we train a
simple auto-encoder to parameterize a dataset of complex shapes. To understand
the latent encoded space, we use the idea of Concept Activation Vectors (CAV)
to reinterpret the latent space in terms of user-defined concepts. This allows
modification of a reference design to exhibit more or fewer characteristics of
a chosen concept or group of concepts. We also test the statistical
significance of the identified concepts and determine the sensitivity of a
physical quantity of interest across the dataset.
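The pipeline the abstract describes (encode shapes into latents, fit a linear classifier separating concept examples from random counterexamples, then steer a reference latent along the classifier's normal) can be sketched in plain NumPy. Note this is a minimal sketch: the latent vectors below are fabricated stand-ins rather than outputs of the paper's autoencoder, and the dimensionality, step size, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated latents: in the paper these would come from an autoencoder
# trained on 3D CAD shapes. Here one axis is made to correlate with the
# "concept" so the recovered direction can be checked.
d = 16
true_dir = np.zeros(d)
true_dir[0] = 1.0
concept_z = rng.normal(size=(100, d)) + 2.0 * true_dir  # concept examples
random_z = rng.normal(size=(100, d))                    # random counterexamples

# Fit a linear probe (logistic regression via gradient descent).
# The CAV is the unit normal of its decision boundary.
X = np.vstack([concept_z, random_z])
y = np.concatenate([np.ones(100), np.zeros(100)])
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)      # mean log-loss gradient step
cav = w / np.linalg.norm(w)

# Steer a reference latent toward or away from the concept; decoding
# the shifted latent would yield a design with more or fewer of the
# concept's characteristics.
z_ref = rng.normal(size=d)
z_more = z_ref + 1.5 * cav  # exhibits more of the concept
z_less = z_ref - 1.5 * cav  # exhibits less
```

Because the fabricated concept examples are shifted along the first axis, the recovered CAV should point predominantly in that direction; with real latents the direction is whatever the probe finds separates the two sets.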
Related papers
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Knowledge graphs for empirical concept retrieval [1.06378109904813]
Concept-based explainable AI is promising as a tool to improve the understanding of complex models at the premises of a given user.
Here, we present a workflow for user-driven data collection in both text and image domains.
We test the retrieved concept datasets on two concept-based explainability methods, namely concept activation vectors (CAVs) and concept activation regions (CARs)
arXiv Detail & Related papers (2024-04-10T13:47:22Z)
- GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers [63.41460219156508]
We argue that existing positional encoding schemes are suboptimal for 3D vision tasks.
We propose a geometry-aware attention mechanism that encodes the geometric structure of tokens as relative transformation.
We show that our attention, called Geometric Transform Attention (GTA), improves learning efficiency and performance of state-of-the-art transformer-based NVS models.
arXiv Detail & Related papers (2023-10-16T13:16:09Z)
- Discovering Design Concepts for CAD Sketches [13.140310747416983]
We propose a learning based approach that discovers the modular concepts by induction over raw sketches.
We demonstrate the design concept learning on a large scale CAD sketch dataset and show its applications for design intent interpretation and auto-completion.
arXiv Detail & Related papers (2022-10-26T03:53:33Z)
- 3D Concept Grounding on Neural Fields [99.33215488324238]
Existing visual reasoning approaches typically utilize supervised methods to extract 2D segmentation masks on which concepts are grounded.
Humans are capable of grounding concepts on the underlying 3D representation of images.
We propose to leverage the continuous, differentiable nature of neural fields to segment and learn concepts.
arXiv Detail & Related papers (2022-07-13T17:59:33Z)
- Concept Identification for Complex Engineering Datasets [0.0]
A novel concept quality measure is proposed, which provides an objective value for a given definition of concepts in a dataset.
It is demonstrated how these concepts can be used to select archetypal representatives of the dataset which exhibit characteristic features of each concept.
arXiv Detail & Related papers (2022-06-09T09:39:46Z)
- A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics [131.93113552146195]
We present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts.
In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images.
We undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3.
arXiv Detail & Related papers (2021-03-02T01:32:54Z)
- Interpretable Visual Reasoning via Induced Symbolic Space [75.95241948390472]
We study the problem of concept induction in visual reasoning, i.e., identifying concepts and their hierarchical relationships from question-answer pairs associated with images.
We first design a new framework named object-centric compositional attention model (OCCAM) to perform the visual reasoning task with object-level visual features.
We then come up with a method to induce concepts of objects and relations using clues from the attention patterns between objects' visual features and question words.
arXiv Detail & Related papers (2020-11-23T18:21:49Z)
- Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information [0.0]
We focus on simple superposition and more complex, structured representations involving convolutive powers to encode spatial information.
In two experiments, we find upper bounds for the number of concepts that can effectively be stored in a single vector.
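The superposition-and-binding encoding this summary refers to can be illustrated with a holographic-reduced-representation-style scheme, in which circular convolution binds role and filler vectors and circular correlation approximately unbinds them. The roles, fillers, and dimensionality below are illustrative assumptions, not the cited paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1024  # high dimension keeps random vectors near-orthogonal

def unit_vec():
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution via FFT: binds a role to a filler.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)

def unbind(a, b):
    # Circular correlation approximately inverts binding.
    return np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=d)

# Encode "red at left, blue at right" as one superposed vector.
red, blue, left, right = unit_vec(), unit_vec(), unit_vec(), unit_vec()
memory = bind(left, red) + bind(right, blue)

# Query: what is bound to 'left'? Clean up the noisy result by
# cosine similarity against the known item vectors.
guess = unbind(memory, left)
items = {"red": red, "blue": blue, "left": left, "right": right}
best = max(items, key=lambda k: items[k] @ guess / np.linalg.norm(guess))
```

Each additional bound pair superposed into `memory` adds crosstalk noise to every query, which is one way the kind of capacity bound the summary mentions arises: past some number of stored concepts, cleanup can no longer reliably pick out the correct filler.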
arXiv Detail & Related papers (2020-09-30T18:49:29Z)
- 3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior [50.73148041205675]
The goal of the Semantic Scene Completion (SSC) task is to simultaneously predict a completed 3D voxel representation of volumetric occupancy and semantic labels of objects in the scene from a single-view observation.
We propose to devise a new geometry-based strategy to embed depth information with low-resolution voxel representation.
Our proposed geometric embedding works better than the depth feature learning from habitual SSC frameworks.
arXiv Detail & Related papers (2020-03-31T09:33:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.