Generative Neurosymbolic Machines
- URL: http://arxiv.org/abs/2010.12152v2
- Date: Sat, 6 Feb 2021 19:48:21 GMT
- Title: Generative Neurosymbolic Machines
- Authors: Jindong Jiang and Sungjin Ahn
- Abstract summary: Reconciling symbolic and distributed representations is a crucial challenge that can potentially resolve the limitations of current deep learning.
We propose Generative Neurosymbolic Machines, a generative model that combines the benefits of distributed and symbolic representations to support both structured representations of symbolic components and density-based generation.
- Score: 26.364503276512153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconciling symbolic and distributed representations is a crucial challenge
that can potentially resolve the limitations of current deep learning.
Remarkable advances in this direction have been achieved recently via
generative object-centric representation models. While these models can
learn, without supervision, a recognition model that infers object-centric
symbolic representations such as bounding boxes from raw images, none of them
provides another important ability of a generative model: generating
(sampling) according to the structure of the learned world density. In this
paper, we propose Generative
Neurosymbolic Machines, a generative model that combines the benefits of
distributed and symbolic representations to support both structured
representations of symbolic components and density-based generation. These two
crucial properties are achieved by a two-layer latent hierarchy with the global
distributed latent for flexible density modeling and the structured symbolic
latent map. To increase the model flexibility in this hierarchical structure,
we also propose the StructDRAW prior. In experiments, we show that the proposed
model significantly outperforms the previous structured representation models
as well as the state-of-the-art non-structured generative models in terms of
both structure accuracy and image generation quality. Our code, datasets, and
trained models are available at https://github.com/JindongJiang/GNM
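The two-layer hierarchy in the abstract can be pictured with a minimal sketch: a global distributed latent z_g is inferred from the image and decoded into a grid-structured latent map, which in turn renders the image. Everything below (layer sizes, the plain convolutional stacks, the omission of the StructDRAW prior) is an illustrative assumption; the authors' actual implementation is at the repository linked above.
```python
import torch
import torch.nn as nn

class TwoLayerLatentHierarchy(nn.Module):
    def __init__(self, z_g_dim=64, map_size=4, cell_dim=8):
        super().__init__()
        # image -> parameters of q(z_g | x), the global distributed latent
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, 2 * z_g_dim))
        # global latent -> structured latent map of map_size x map_size cells
        self.to_map = nn.Linear(z_g_dim, map_size * map_size * cell_dim)
        # structured latent map -> image
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(cell_dim, 64, 4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())
        self.map_size, self.cell_dim = map_size, cell_dim

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z_g = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z_map = self.to_map(z_g).view(-1, self.cell_dim, self.map_size, self.map_size)
        return self.dec(z_map), mu, logvar

recon, mu, logvar = TwoLayerLatentHierarchy()(torch.rand(2, 3, 64, 64))
print(recon.shape)  # torch.Size([2, 3, 64, 64])
```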
Related papers
- Hierarchical Clustering for Conditional Diffusion in Image Generation [12.618079575423868]
This paper introduces TreeDiffusion, a deep generative model that conditions Diffusion Models on hierarchical clusters to obtain high-quality, cluster-specific generations.
The proposed pipeline consists of two steps: a VAE-based clustering model that learns the hierarchical structure of the data, and a conditional diffusion model that generates realistic images for each cluster.
arXiv Detail & Related papers (2024-10-22T11:35:36Z)
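A hedged sketch of the two-step pipeline summarized above: a placeholder encoder assigns each input to a cluster, and a denoiser is conditioned on that cluster's embedding. The flat set of k clusters, the single denoising step, and all sizes are illustrative stand-ins for the paper's hierarchical clustering model and full diffusion process.
```python
import torch
import torch.nn as nn

k, d = 8, 32                               # number of clusters, embedding dim
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, d))
cluster_logits = nn.Linear(d, k)           # soft assignment over clusters
cluster_emb = nn.Embedding(k, d)           # one conditioning vector per cluster
denoiser = nn.Sequential(nn.Linear(3 * 32 * 32 + d, 512), nn.ReLU(),
                         nn.Linear(512, 3 * 32 * 32))

x = torch.rand(4, 3, 32, 32)
c = cluster_logits(encoder(x)).argmax(dim=-1)    # step 1: cluster the inputs
x_t = torch.randn(4, 3 * 32 * 32)                # noisy samples at some timestep
eps_hat = denoiser(torch.cat([x_t, cluster_emb(c)], dim=-1))  # step 2: cluster-conditioned denoising
print(eps_hat.shape)                             # torch.Size([4, 3072])
```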
- ProtoS-ViT: Visual foundation models for sparse self-explainable classifications [0.6249768559720122]
Prototypical networks aim to build intrinsically explainable models based on the linear summation of concepts.
This work first proposes an extensive set of quantitative and qualitative metrics for identifying the drawbacks of current prototypical networks.
It then introduces a novel architecture which provides compact explanations, outperforming current prototypical models in terms of explanation quality.
arXiv Detail & Related papers (2024-06-14T13:36:30Z)
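The "linear summation of concepts" idea can be illustrated with a toy prototype head: class scores are a linear combination of similarities between patch features and learned prototype vectors. The cosine similarity, max-pooling over tokens, and all sizes are assumptions for illustration, not the paper's exact architecture.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    def __init__(self, feat_dim=64, n_protos=10, n_classes=5):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_protos, feat_dim))
        self.classifier = nn.Linear(n_protos, n_classes, bias=False)

    def forward(self, feats):                   # feats: (B, N, D) patch tokens
        sim = F.cosine_similarity(              # similarity of each token to each prototype
            feats.unsqueeze(2),
            self.prototypes.view(1, 1, -1, feats.size(-1)), dim=-1)
        proto_act = sim.max(dim=1).values       # max-pool over tokens: (B, n_protos)
        return self.classifier(proto_act)       # linear sum of concept activations

tokens = torch.randn(2, 49, 64)                 # e.g. ViT patch embeddings
print(PrototypeHead()(tokens).shape)            # torch.Size([2, 5])
```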
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- SlotDiffusion: Object-Centric Generative Modeling with Diffusion Models [47.986381326169166]
We introduce SlotDiffusion -- an object-centric Latent Diffusion Model (LDM) designed for both image and video data.
Thanks to the powerful modeling capacity of LDMs, SlotDiffusion surpasses previous slot models in unsupervised object segmentation and visual generation.
Our learned object features can be utilized by existing object-centric dynamics models, improving video prediction quality and downstream temporal reasoning tasks.
arXiv Detail & Related papers (2023-05-18T19:56:20Z)
- Object-Centric Relational Representations for Image Generation [18.069747511100132]
This paper explores a novel method to condition image generation, based on object-centric relational representations.
We show that such architectural biases entail properties that facilitate the manipulation and conditioning of the generative process.
We also propose a novel benchmark for image generation consisting of a synthetic dataset of images paired with their relational representation.
arXiv Detail & Related papers (2023-03-26T11:17:17Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase it models the distribution of features associated with the nodes of the given graph; in the second it generates the edge features conditioned on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
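A minimal sketch of the two-phase factorization described above, assuming simple MLP generators: node features are sampled given a fixed graph skeleton, then edge features are sampled conditioned on their endpoints' node features.
```python
import torch
import torch.nn as nn

n_nodes, node_dim, edge_dim, noise_dim = 5, 8, 4, 16
node_gen = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, node_dim))
edge_gen = nn.Sequential(nn.Linear(2 * node_dim, 64), nn.ReLU(), nn.Linear(64, edge_dim))

upper = torch.rand(n_nodes, n_nodes).triu(diagonal=1) > 0.6    # random skeleton
adj = upper | upper.T                                          # symmetric, no self-loops

# Phase 1: node features, one noise vector per node of the fixed skeleton.
h = node_gen(torch.randn(n_nodes, noise_dim))                  # (n_nodes, node_dim)

# Phase 2: edge features conditioned on the endpoints' node features.
src, dst = adj.nonzero(as_tuple=True)
e = edge_gen(torch.cat([h[src], h[dst]], dim=-1))              # (n_edges, edge_dim)
print(h.shape, e.shape)
```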
- S2RMs: Spatially Structured Recurrent Modules [105.0377129434636]
We take a step towards models that exploit dynamic structure and are capable of simultaneously exploiting both modular and temporal structures.
We find our models to be robust to the number of available views and better at generalizing to novel tasks without additional training.
arXiv Detail & Related papers (2020-07-13T17:44:30Z)
- Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [55.35176938713946]
We develop a deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network.
We propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a downward generative model.
The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
arXiv Detail & Related papers (2020-06-15T22:22:56Z)
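A hedged sketch of the upward-downward pattern summarized above: a deterministic upward pass computes per-layer features, and the downward pass draws each layer's latent from a Weibull distribution (which supports reparameterized sampling) whose parameters depend on the upward feature and the layer above. The two-layer setup, parameter networks, and sizes are placeholders, not the paper's exact construction.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Weibull

dims = [100, 64, 32]                               # data dim, layer 1, layer 2
up1, up2 = nn.Linear(dims[0], dims[1]), nn.Linear(dims[1], dims[2])
down21 = nn.Linear(dims[2], dims[1])               # top latent -> layer-1 size
to_k2, to_lam2 = nn.Linear(dims[2], dims[2]), nn.Linear(dims[2], dims[2])
to_k1, to_lam1 = nn.Linear(dims[1], dims[1]), nn.Linear(dims[1], dims[1])

x = torch.rand(4, dims[0])                         # e.g. normalized word counts
h1 = torch.relu(up1(x))                            # upward pass: deterministic
h2 = torch.relu(up2(h1))

# Downward pass: stochastic, with reparameterizable Weibull draws.
theta2 = Weibull(F.softplus(to_lam2(h2)) + 1e-3,   # Weibull(scale, concentration)
                 F.softplus(to_k2(h2)) + 1e-3).rsample()
feat1 = h1 + torch.relu(down21(theta2))            # merge upward and downward info
theta1 = Weibull(F.softplus(to_lam1(feat1)) + 1e-3,
                 F.softplus(to_k1(feat1)) + 1e-3).rsample()
print(theta1.shape, theta2.shape)                  # per-layer latent topic weights
```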
- Network Bending: Expressive Manipulation of Deep Generative Models [0.2062593640149624]
We introduce a new framework for manipulating and interacting with deep generative models that we call network bending.
We show how it allows for the direct manipulation of semantically meaningful aspects of the generative process as well as allowing for a broad range of expressive outcomes.
arXiv Detail & Related papers (2020-05-25T21:48:45Z)
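The core idea, inserting a deterministic transformation into an intermediate layer of a trained generator, can be shown on a toy model. The tiny MLP "generator" and the flip-and-scale transform below are placeholders; the paper applies such inserted layers to real pretrained generative models.
```python
import torch
import torch.nn as nn

generator = nn.Sequential(                       # stand-in for a trained generator
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3 * 8 * 8))

class Bend(nn.Module):
    """Deterministic transform spliced into the computation graph."""
    def forward(self, t):
        return t.flip([-1]) * 1.5                # e.g. reverse and scale activations

bent = nn.Sequential(*generator[:2], Bend(), *generator[2:])

z = torch.randn(1, 16)
print((generator(z) - bent(z)).abs().mean())     # bending changes the output
```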
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information-Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns disentangled representation using VAE-based models, and distills the learned representation with an additional nuisance variable to the separate GAN-based generator for high-fidelity synthesis.
Despite its simplicity, we show that the proposed method is highly effective, achieving image generation quality comparable to state-of-the-art methods while using the disentangled representation.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)
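A hedged sketch of the distillation layout described above: a VAE-style encoder provides the (disentangled) code, and a separate generator receives that code plus an additional nuisance noise vector for high-fidelity detail. All modules and sizes are toy placeholders rather than the ID-GAN architecture.
```python
import torch
import torch.nn as nn

z_dim, n_dim, img = 10, 16, 3 * 32 * 32
vae_enc = nn.Sequential(nn.Flatten(), nn.Linear(img, 2 * z_dim))  # q(z|x) params
generator = nn.Sequential(nn.Linear(z_dim + n_dim, 256), nn.ReLU(),
                          nn.Linear(256, img), nn.Tanh())

x = torch.rand(4, 3, 32, 32)
mu, logvar = vae_enc(x).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # disentangled code
nuisance = torch.randn(4, n_dim)                       # fidelity-only noise
fake = generator(torch.cat([z, nuisance], dim=-1))     # distilled synthesis
print(fake.shape)                                      # torch.Size([4, 3072])
```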
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.