Logic Tensor Network-Enhanced Generative Adversarial Network
- URL: http://arxiv.org/abs/2601.03839v1
- Date: Wed, 07 Jan 2026 12:04:49 GMT
- Title: Logic Tensor Network-Enhanced Generative Adversarial Network
- Authors: Nijesh Upreti, Vaishak Belle
- Abstract summary: We introduce the Logic Tensor Network-Enhanced Generative Adversarial Network (LTN-GAN), a novel framework that enhances Generative Adversarial Networks (GANs) by incorporating Logic Tensor Networks (LTNs). We evaluate LTN-GAN across multiple datasets, including synthetic datasets (Gaussian, grid, rings) and the MNIST dataset. This work highlights the potential of neuro-symbolic approaches to enhance generative modeling in knowledge-intensive domains.
- Score: 2.3791444696448085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce Logic Tensor Network-Enhanced Generative Adversarial Network (LTN-GAN), a novel framework that enhances Generative Adversarial Networks (GANs) by incorporating Logic Tensor Networks (LTNs) to enforce domain-specific logical constraints during the sample generation process. Although GANs have shown remarkable success in generating realistic data, they often lack mechanisms to incorporate prior knowledge or enforce logical consistency, limiting their applicability in domains requiring rule adherence. LTNs provide a principled way to integrate first-order logic with neural networks, enabling models to reason over and satisfy logical constraints. By combining the strengths of GANs for realistic data synthesis with LTNs for logical reasoning, we gain valuable insights into how logical constraints influence the generative process while improving both the diversity and logical consistency of the generated samples. We evaluate LTN-GAN across multiple datasets, including synthetic datasets (Gaussian, grid, rings) and the MNIST dataset, demonstrating that our model significantly outperforms traditional GANs in terms of adherence to predefined logical constraints while maintaining the quality and diversity of generated samples. This work highlights the potential of neuro-symbolic approaches to enhance generative modeling in knowledge-intensive domains.
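The abstract describes augmenting a GAN's generator objective with a differentiable logical-satisfaction term. The sketch below illustrates that general idea only; the sigmoid predicate, the geometric-mean (product t-norm) aggregator, and the weight `lam` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def fuzzy_predicate(x, threshold=0.0):
    # Soft truth value in [0, 1] for the constraint "x > threshold",
    # using a sigmoid as a differentiable relaxation.
    return 1.0 / (1.0 + np.exp(-(x - threshold)))

def satisfaction(samples):
    # Aggregate per-sample truth values with a product-t-norm-style
    # geometric mean, keeping the result in [0, 1] for any batch size.
    truths = fuzzy_predicate(samples)
    return float(np.prod(truths) ** (1.0 / len(truths)))

def generator_loss(adv_loss, samples, lam=0.5):
    # Total generator loss = adversarial loss
    #                        + lam * (1 - logical satisfaction).
    return adv_loss + lam * (1.0 - satisfaction(samples))

samples = np.array([1.2, 0.8, 2.5])  # all roughly satisfy "x > 0"
loss = generator_loss(adv_loss=0.7, samples=samples)
```

Batches that violate the constraint (e.g. mostly negative samples here) receive a lower satisfaction score and therefore a higher loss, which is the mechanism by which such a penalty steers generation toward rule-adhering samples.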
Related papers
- Deriving Equivalent Symbol-Based Decision Models from Feedforward Neural Networks [0.0]
Despite their rapid adoption, the opacity of AI systems poses significant challenges to trust and acceptance.
This work focuses on the derivation of symbolic models, such as decision trees, from feed-forward neural networks (FNNs).
arXiv Detail & Related papers (2025-04-16T19:22:53Z)
- CaTs and DAGs: Integrating Directed Acyclic Graphs with Transformers and Fully-Connected Neural Networks for Causally Constrained Predictions [6.745494093127968]
We introduce Causal Fully-Connected Neural Networks (CFCNs) and Causal Transformers (CaTs).
CFCNs and CaTs operate under predefined causal constraints, as specified by a Directed Acyclic Graph (DAG).
These models retain the powerful function approximation abilities of traditional neural networks while adhering to the underlying structural constraints.
arXiv Detail & Related papers (2024-10-18T14:10:16Z)
- A Neuro-Symbolic Approach to Multi-Agent RL for Interpretability and Probabilistic Decision Making [42.503612515214044]
Multi-agent reinforcement learning (MARL) is well-suited for runtime decision-making in systems where multiple agents coexist and compete for shared resources.
Applying common deep learning-based MARL solutions to real-world problems suffers from issues of interpretability, sample efficiency, partial observability, etc.
We present an event-driven formulation, where decision-making is handled by distributed co-operative MARL agents using neuro-symbolic methods.
arXiv Detail & Related papers (2024-02-21T00:16:08Z)
- LogicMP: A Neuro-symbolic Approach for Encoding First-order Logic Constraints [46.60806942245395]
This paper proposes a novel neural layer, LogicMP, whose layers perform mean-field variational inference over an MLN.
It can be plugged into any off-the-shelf neural network to encode FOLCs while retaining modularity and efficiency.
Empirical results on three kinds of tasks over graphs, images, and text show that LogicMP outperforms advanced competitors in both performance and efficiency.
arXiv Detail & Related papers (2023-09-27T07:52:30Z)
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural-network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions.
The method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNNs).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
- Randomly Weighted, Untrained Neural Tensor Networks Achieve Greater Relational Expressiveness [3.5408022972081694]
We propose Randomly Weighted Tensor Networks (RWTNs), which incorporate randomly drawn, untrained tensors into a network with a trained decoder.
We show that RWTNs meet or surpass the performance of traditionally trained LTNs on Semantic Image Interpretation (SII) tasks.
We demonstrate that RWTNs can achieve similar performance as LTNs for object classification while using fewer parameters for learning.
arXiv Detail & Related papers (2020-06-01T19:36:29Z)
- Logical Natural Language Generation from Open-Domain Tables [107.04385677577862]
We propose a new task where a model is tasked with generating natural language statements that can be logically entailed by the facts.
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), which features a wide range of logical/symbolic inferences.
The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order.
arXiv Detail & Related papers (2020-04-22T06:03:10Z)
- Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making the industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.