Granule Description based on Compound Concepts
- URL: http://arxiv.org/abs/2111.00004v1
- Date: Fri, 29 Oct 2021 01:56:29 GMT
- Title: Granule Description based on Compound Concepts
- Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang and Wanquan Liu
- Abstract summary: We propose two new types of compound concepts in this paper: bipolar concept and common-and-necessary concept.
We derive concise and unified equivalent conditions for describable granules and approaching description methods for indescribable granules for all five kinds of concepts.
- Score: 5.657202839641533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concise granule descriptions for describable granules and approaching
description methods for indescribable granules are challenging and important
issues in granular computing. Concepts built from common attributes alone
have been studied frequently. To handle granules with more specialized
requirements, we propose two new types of compound concepts in this paper:
the bipolar concept and the common-and-necessary concept. Based on the
definitions of the concept-forming operations, logical formulas are derived
for each of the following five types of concepts: formal concept, three-way
concept, object-oriented concept, bipolar concept and common-and-necessary
concept. Furthermore, by exploiting the logical relationships among these
concepts, we derive concise and unified equivalent conditions for describable
granules, together with approaching description methods for indescribable
granules, for all five kinds of concepts.
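For readers who want the machinery the abstract alludes to, the standard concept-forming (derivation) operators of formal concept analysis are recalled below. This is textbook notation, not necessarily the paper's own, and the three-way operator shown is the usual negative derivation.

```latex
% Standard derivation operators on a formal context (G, M, I),
% where G is the object set, M the attribute set, and I is the
% incidence relation between them. Notation follows common FCA
% convention and may differ from the paper's.
\[
X^{*} = \{\, m \in M \mid \forall g \in X : (g, m) \in I \,\}, \quad X \subseteq G,
\]
\[
B^{*} = \{\, g \in G \mid \forall m \in B : (g, m) \in I \,\}, \quad B \subseteq M.
\]
% A formal concept is a pair (X, B) with X* = B and B* = X; the extent X
% is the granule and the intent B supplies its conjunctive description.
% Three-way concepts additionally use the negative operator
\[
X^{\bar{*}} = \{\, m \in M \mid \forall g \in X : (g, m) \notin I \,\},
\]
% and a granule X is describable by a formal concept iff X^{**} = X.
```

A granule is thus describable exactly when it equals its double derivation; per the abstract, the paper derives analogous equivalent conditions for the other four concept types.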
Related papers
- On the Power and Limitations of Examples for Description Logic Concepts [6.776119962781556]
We investigate the power of labeled examples for describing description-logic concepts.
Specifically, we study the existence and efficient computability of finite characterisations.
arXiv Detail & Related papers (2024-12-23T07:17:58Z)
- OmniPrism: Learning Disentangled Visual Concept for Image Generation [57.21097864811521]
Creative visual concept generation often draws inspiration from specific concepts in a reference image to produce relevant outcomes.
We propose OmniPrism, a visual concept disentangling approach for creative image generation.
Our method learns disentangled concept representations guided by natural language and trains a diffusion model to incorporate these concepts.
arXiv Detail & Related papers (2024-12-16T18:59:52Z)
- Separable Multi-Concept Erasure from Diffusion Models [52.51972530398691]
We propose a Separable Multi-concept Eraser (SepME) to eliminate unsafe concepts from large-scale diffusion models.
SepME separates optimizable model weights, making each weight increment correspond to the erasure of a specific concept.
Extensive experiments indicate the efficacy of our approach in eliminating concepts, preserving model performance, and offering flexibility in the erasure or recovery of various concepts.
arXiv Detail & Related papers (2024-02-03T11:10:57Z)
- Concept Activation Regions: A Generalized Framework For Concept-Based Explanations [95.94432031144716]
Existing methods assume that the examples illustrating a concept are mapped in a fixed direction of the deep neural network's latent space.
In this work, we propose allowing concept examples to be scattered across different clusters in the DNN's latent space.
This concept activation region (CAR) formalism yields global concept-based explanations and local concept-based feature importance.
arXiv Detail & Related papers (2022-09-22T17:59:03Z)
- Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts.
We propose Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions.
We demonstrate that CG outperforms CAV on both toy examples and real-world datasets.
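Since this entry and the two surrounding ones revolve around CAVs, a minimal sketch of the linear-CAV baseline that CG generalizes may help. It assumes activations have already been extracted from a chosen layer of the model; the helper names, shapes, and synthetic data below are illustrative only, not the CG method itself.

```python
# Minimal sketch of a Concept Activation Vector (CAV): fit a linear
# separator between concept and non-concept activations; the unit
# normal of its decision boundary is the CAV.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Return the unit CAV separating concept from random activations."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

def concept_sensitivity(logit_grad, cav):
    """Directional derivative of a class logit along the CAV; positive
    values mean the concept pushes the prediction toward that class."""
    return float(np.dot(logit_grad, cav))

# Synthetic activations (dimension 64) standing in for a real layer.
rng = np.random.default_rng(0)
cav = compute_cav(rng.normal(1.0, 1.0, (100, 64)),
                  rng.normal(0.0, 1.0, (100, 64)))
print(concept_sensitivity(rng.normal(size=64), cav))
```

The linear probe is the assumption CG relaxes: CG replaces it with a general differentiable concept function.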
arXiv Detail & Related papers (2022-08-31T17:06:46Z)
- Automatic Concept Extraction for Concept Bottleneck-based Video Classification [58.11884357803544]
We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
arXiv Detail & Related papers (2022-06-21T06:22:35Z)
- ConceptDistil: Model-Agnostic Distillation of Concept Explanations [4.462334751640166]
Concept-based explanations aim to fill the model interpretability gap for non-technical humans-in-the-loop.
We propose ConceptDistil, a method to bring concept explanations to any black-box classifier using knowledge distillation.
We validate ConceptDistil in a real-world use case, showing that it is able to optimize both tasks.
arXiv Detail & Related papers (2022-05-07T08:58:54Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAV).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Concept and Attribute Reduction Based on Rectangle Theory of Formal Concept [5.657202839641533]
It is known that there are three types of formal concepts: core concepts, relative necessary concepts and unnecessary concepts.
We present new judgment results for relative necessary concepts and unnecessary concepts.
A fast algorithm for reducing attributes while preserving the extensions for a set of formal concepts is proposed.
arXiv Detail & Related papers (2021-10-29T02:10:08Z)
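Both the last entry and the main abstract rest on the classical derivation operators over a formal context. As a concrete illustration of the describable-granule test recalled earlier (a granule is describable by a formal concept iff it equals its double derivation), here is a toy sketch; the context, names, and data are invented for illustration and are not taken from the paper.

```python
# Toy formal context: objects mapped to the attribute sets they possess.
def intent(objs, context, all_attrs):
    """Attributes common to every object in objs (all attributes if empty)."""
    return set.intersection(*(context[g] for g in objs)) if objs else set(all_attrs)

def extent(attrs, context):
    """Objects that possess every attribute in attrs."""
    return {g for g, owned in context.items() if attrs <= owned}

ALL_ATTRS = {"a", "b", "c"}
context = {1: {"a", "b"}, 2: {"a"}, 3: {"b", "c"}}

# {1, 2} share attribute "a", and "a" picks out exactly {1, 2}:
granule = {1, 2}
print(extent(intent(granule, context, ALL_ATTRS), context) == granule)   # True

# {2, 3} share no attribute, so the closure blows up to all objects:
granule2 = {2, 3}
print(extent(intent(granule2, context, ALL_ATTRS), context) == granule2)  # False
```

The first granule is describable by its common attributes; the second is not, which is the situation where the paper's approaching description methods and compound concepts come into play.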
This list is automatically generated from the titles and abstracts of the papers on this site.