Cones: Concept Neurons in Diffusion Models for Customized Generation
- URL: http://arxiv.org/abs/2303.05125v1
- Date: Thu, 9 Mar 2023 09:16:04 GMT
- Title: Cones: Concept Neurons in Diffusion Models for Customized Generation
- Authors: Zhiheng Liu, Ruili Feng, Kai Zhu, Yifei Zhang, Kecheng Zheng, Yu Liu,
Deli Zhao, Jingren Zhou, Yang Cao
- Abstract summary: This paper finds a small cluster of neurons in a diffusion model corresponding to a particular subject.
The concept neurons demonstrate magnetic properties in interpreting and manipulating generation results.
For large-scale applications, the concept neurons are environmentally friendly as we only need to store a sparse cluster of integer indices instead of dense float32 values.
- Score: 41.212255848052514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human brains respond to semantic features of presented stimuli with different
neurons. It is then curious whether modern deep neural networks admit a similar
behavior pattern. Specifically, this paper finds a small cluster of neurons in
a diffusion model corresponding to a particular subject. We call those neurons
the concept neurons. They can be identified by statistics of network gradients
with respect to a stimulus associated with the given subject. The concept neurons
demonstrate magnetic properties in interpreting and manipulating generation
results. Shutting them can directly yield the related subject contextualized in
different scenes. Concatenating multiple clusters of concept neurons can
vividly generate all related concepts in a single image. A few steps of further
fine-tuning can enhance the multi-concept capability, which, to our knowledge, is
the first method to generate up to four different subjects in a single image. For
large-scale applications, the concept neurons are environmentally friendly as
we only need to store a sparse cluster of integer indices instead of dense
float32 parameter values, which reduces storage consumption by 90% compared
with previous subject-driven generation methods. Extensive qualitative and
quantitative studies on diverse scenarios show the superiority of our method in
interpreting and manipulating diffusion models.
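The abstract's selection-and-storage idea (score neurons by gradient statistics for a subject-related stimulus, keep the top fraction, and store only their integer indices) can be sketched as follows. This is a minimal illustration with synthetic gradients standing in for real ones; the scoring rule, keep ratio, and the shapes involved are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def find_concept_neurons(grad_samples, keep_ratio=0.01):
    """Rank neurons by an accumulated gradient statistic and keep the top fraction.

    grad_samples: (num_steps, num_neurons) array of per-step gradients taken
    w.r.t. a subject-related stimulus (random data here stands in for real
    diffusion-model gradients).
    """
    score = np.abs(grad_samples).mean(axis=0)       # mean |gradient| per neuron
    k = max(1, int(keep_ratio * score.size))
    return np.argsort(score)[-k:].astype(np.int32)  # sparse integer indices

rng = np.random.default_rng(0)
num_neurons = 100_000
grads = rng.normal(size=(8, num_neurons))           # 8 synthetic gradient samples
idx = find_concept_neurons(grads, keep_ratio=0.01)

# Storage comparison: dense float32 parameters vs. sparse int32 indices.
dense_bytes = num_neurons * 4   # one float32 per parameter
sparse_bytes = idx.size * 4     # one int32 per concept neuron
print(idx.size, dense_bytes, sparse_bytes)
```

With a 1% keep ratio, the sparse index list occupies 1% of the dense float32 storage, in the spirit of (though not identical to) the 90% reduction the abstract reports.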
Related papers
- Interpreting the Second-Order Effects of Neurons in CLIP [73.54377859089801]
We interpret the function of individual neurons in CLIP by automatically describing them using text.
We present the "second-order lens", analyzing the effect flowing from a neuron through the later attention heads, directly to the output.
Our results indicate that a scalable understanding of neurons can be used for model deception and for introducing new model capabilities.
arXiv Detail & Related papers (2024-06-06T17:59:52Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Identifying Interpretable Visual Features in Artificial and Biological Neural Systems [3.604033202771937]
Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features.
However, many neurons exhibit mixed selectivity, i.e., they represent multiple unrelated features.
We propose an automated method for quantifying visual interpretability and an approach for finding meaningful directions in network activation space.
arXiv Detail & Related papers (2023-10-17T17:41:28Z)
- Disentangling Neuron Representations with Concept Vectors [0.0]
The main contribution of this paper is a method to disentangle polysemantic neurons into concept vectors encapsulating distinct features.
Our evaluations show that the concept vectors found encode coherent, human-understandable features.
arXiv Detail & Related papers (2023-04-19T14:55:31Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in a 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference-processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
- NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks [18.62960153659548]
NeuroCartography is an interactive system that summarizes and visualizes concepts learned by neural networks.
It automatically discovers and groups neurons that detect the same concepts.
It describes how such neuron groups interact to form higher-level concepts and the subsequent predictions.
arXiv Detail & Related papers (2021-08-29T22:43:52Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well those predictions match reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
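The last entry, Compositional Explanations of Neurons, explains a neuron by matching its activation mask against logical compositions of concept masks, scored by intersection-over-union. A toy sketch of that idea, with synthetic boolean masks standing in for real activations and concept annotations (the concept names and data here are invented for illustration):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def best_explanation(neuron_mask, concepts):
    """Score single concepts and simple pairwise compositions (AND / OR /
    AND NOT) against a neuron's activation mask; return the best formula."""
    candidates = dict(concepts)
    names = list(concepts)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            candidates[f"({a} AND {b})"] = concepts[a] & concepts[b]
            candidates[f"({a} OR {b})"] = concepts[a] | concepts[b]
            candidates[f"({a} AND NOT {b})"] = concepts[a] & ~concepts[b]
    return max(candidates.items(), key=lambda kv: iou(neuron_mask, kv[1]))

# Synthetic demo: a neuron that fires exactly on "water OR sky" pixels.
rng = np.random.default_rng(1)
water = rng.random(10_000) < 0.2
sky = rng.random(10_000) < 0.2
concepts = {"water": water, "sky": sky, "grass": rng.random(10_000) < 0.2}
neuron = water | sky

formula, mask = best_explanation(neuron, concepts)
print(formula, iou(neuron, mask))
```

Because the synthetic neuron fires exactly on the union of two concepts, the search recovers the disjunctive formula with a perfect IoU; real neurons yield partial scores, and the real procedure searches much larger formula spaces.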
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.