Symbolic Techniques for Deep Learning: Challenges and Opportunities
- URL: http://arxiv.org/abs/2010.02727v1
- Date: Thu, 1 Oct 2020 23:02:45 GMT
- Title: Symbolic Techniques for Deep Learning: Challenges and Opportunities
- Authors: Belinda Fang, Elaine Yang, and Fei Xie
- Abstract summary: We look at some of the most popular deep learning frameworks being used today, including TensorFlow, Keras, PyTorch, and MXNet.
We focus this paper on symbolic techniques because they influence not only how neural networks are built but also the way in which they are executed.
- Score: 4.416697289927759
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the number of deep learning frameworks increases and certain ones gain
popularity, it spurs discussion of which methodologies these frameworks employ
and the reasoning behind them. The goal of this survey is to study
how symbolic techniques are utilized in deep learning. To do this, we look at
some of the most popular deep learning frameworks being used today, including
TensorFlow, Keras, PyTorch, and MXNet. While these frameworks greatly differ
from one another, many of them use symbolic techniques, whether it be symbolic
execution, graphs, or programming. We focus this paper on symbolic techniques
because they influence not only how neural networks are built but also the way
in which they are executed.
Limitations of symbolic techniques have led to efforts in integrating
symbolic and nonsymbolic aspects in deep learning, opening up new possibilities
for symbolic techniques. For example, the Gluon API by Apache MXNet bridges the
gap between imperative programming and symbolic execution through
hybridization. Frameworks such as JANUS attempt to translate imperative
programs into symbolic graphs, while approaches like DeepCheck attempt to use
symbolic execution to analyze and validate imperative neural network programs.
Symbolic analysis has also been paired with concrete execution in a technique
called concolic testing in order to better test deep neural networks. Our study
of these developments exemplifies just a few of the many ways the symbolic
techniques employed by popular frameworks have the opportunity to be altered
and utilized to achieve better performance.
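The contrast the abstract draws between imperative programming and symbolic execution can be made concrete with a minimal sketch. The following plain-Python toy (not Gluon's actual API; a hypothetical illustration with no framework dependency) shows the distinction hybridization bridges: an imperative function computes values eagerly, while a symbolic function only records a graph of operations that is evaluated later.

```python
# Hedged sketch: imperative vs. symbolic execution of the same
# one-unit "dense + ReLU" computation. Names here are illustrative,
# not taken from any framework.

def imperative_forward(x, w, b):
    # Imperative style: each operation runs immediately on concrete values.
    return max(0.0, x * w + b)

class Node:
    """A node in a tiny symbolic computation graph."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def evaluate(self, env):
        # Execute the recorded graph against concrete inputs.
        if self.op == "const":
            return self.value
        if self.op == "input":
            return env[self.value]
        args = [i.evaluate(env) for i in self.inputs]
        if self.op == "mul":
            return args[0] * args[1]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "relu":
            return max(0.0, args[0])
        raise ValueError(f"unknown op: {self.op}")

def symbolic_forward():
    # Symbolic style: building the graph performs no arithmetic;
    # it only records operations for later compilation/execution.
    x = Node("input", value="x")
    w, b = Node("const", value=2.0), Node("const", value=-1.0)
    wx = Node("mul", inputs=(x, w))
    return Node("relu", inputs=(Node("add", inputs=(wx, b)),))

print(imperative_forward(3.0, 2.0, -1.0))        # 5.0, computed eagerly
print(symbolic_forward().evaluate({"x": 3.0}))   # 5.0, via the recorded graph
```

A hybrid system in the spirit of Gluon starts in the imperative style for easy debugging, then traces the same computation into a graph like the one above so it can be optimized and executed symbolically.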
Related papers
- Symbolic Execution in Practice: A Survey of Applications in Vulnerability, Malware, Firmware, and Protocol Analysis [3.1844358655583846]
Symbolic execution is a powerful program analysis technique that allows for the systematic exploration of all program paths. This paper introduces a systematic taxonomy of strategies to enable symbolic execution on complex software systems. We survey applications of symbolic execution in several domains such as vulnerability analysis, malware analysis, firmware re-hosting, and network protocol analysis.
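The core idea behind symbolic execution, exploring every feasible program path and collecting the branch conditions along each one, can be sketched in a few lines. Real engines query an SMT solver for satisfying assignments; this hypothetical toy instead checks path feasibility by brute force over a small integer domain, purely for illustration.

```python
# Hedged toy illustration of symbolic execution: enumerate each path
# of a small program, keep its path condition, and find one concrete
# input (a "witness") per feasible path. A real engine would hand the
# path conditions to an SMT solver instead of brute-forcing a domain.

def explore(lo=-10, hi=10):
    # Program under analysis:
    #   if x > 5:
    #       if x % 2 == 0: return "A"
    #       else:          return "B"
    #   else:              return "C"
    # Each path is (label, path condition).
    paths = [
        ("A", lambda x: x > 5 and x % 2 == 0),
        ("B", lambda x: x > 5 and x % 2 != 0),
        ("C", lambda x: not (x > 5)),
    ]
    found = {}
    for label, condition in paths:
        # Any satisfying assignment is a concrete test input for that path.
        witnesses = [x for x in range(lo, hi + 1) if condition(x)]
        if witnesses:
            found[label] = witnesses[0]
    return found

print(explore())  # one concrete input per feasible path: {'A': 6, 'B': 7, 'C': -10}
```

The same path-condition machinery is what approaches like DeepCheck apply to neural network programs, and what concolic testing pairs with concrete runs.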
arXiv Detail & Related papers (2025-08-08T18:43:45Z) - Enhancing Symbolic Machine Learning by Subsymbolic Representations [2.4280350854512673]
We propose to enhance symbolic machine learning schemes by giving them access to neural embeddings. In experiments in three real-world domains, we show that this simple yet effective approach outperforms all other baseline methods in terms of the F1 score.
arXiv Detail & Related papers (2025-06-17T14:26:21Z) - Mapping the Neuro-Symbolic AI Landscape by Architectures: A Handbook on Augmenting Deep Learning Through Symbolic Reasoning [11.418327158608664]
Integrating symbolic techniques with statistical strengths is a long-standing problem in artificial intelligence.
Neuro-symbolic AI focuses on this integration in the case where the statistical methods are neural networks.
We present the first mapping of symbolic techniques into families of frameworks based on their architectures.
arXiv Detail & Related papers (2024-10-29T14:35:59Z) - Efficient Symbolic Reasoning for Neural-Network Verification [48.384446430284676]
We present a novel program reasoning framework for neural-network verification.
The key components of our framework are the use of the symbolic domain and the quadratic relation.
We believe that our framework can bring new theoretical insights and practical tools to verification problems for neural networks.
arXiv Detail & Related papers (2023-03-23T18:08:11Z) - Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search [63.3745291252038]
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods.
arXiv Detail & Related papers (2022-12-30T17:50:54Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - RICE: Refining Instance Masks in Cluttered Environments with Graph Neural Networks [53.15260967235835]
We propose a novel framework that refines the output of such methods by utilizing a graph-based representation of instance masks.
We train deep networks capable of sampling smart perturbations to the segmentations, and a graph neural network, which can encode relations between objects, to evaluate the segmentations.
We demonstrate an application that uses uncertainty estimates generated by our method to guide a manipulator, leading to efficient understanding of cluttered scenes.
arXiv Detail & Related papers (2021-06-29T20:29:29Z) - Representing Partial Programs with Blended Abstract Semantics [62.20775388513027]
We introduce a technique for representing partially written programs in a program synthesis engine.
We learn an approximate execution model implemented as a modular neural network.
We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs.
arXiv Detail & Related papers (2020-12-23T20:40:18Z) - Graphs for deep learning representations [1.0152838128195467]
We introduce a graph formalism based on the recent advances in Graph Signal Processing (GSP)
Namely, we use graphs to represent the latent spaces of deep neural networks.
We showcase that this graph formalism allows us to answer various questions, including: ensuring robustness, reducing the number of arbitrary choices in the design of the learning process, improving generalization to small perturbations added to the inputs, and reducing computational complexity.
arXiv Detail & Related papers (2020-12-14T11:51:23Z) - Learned Greedy Method (LGM): A Novel Neural Architecture for Sparse Coding and Beyond [24.160276545294288]
We propose an unfolded version of a greedy pursuit algorithm for the same goal.
A key feature of our Learned Greedy Method (LGM) is its ability to accommodate a dynamic number of unfolded layers.
arXiv Detail & Related papers (2020-10-14T13:17:02Z) - Synbols: Probing Learning Algorithms with Synthetic Datasets [112.45883250213272]
Synbols is a tool for rapidly generating new datasets with a rich composition of latent features rendered in low resolution images.
Our tool's high-level interface provides a language for rapidly generating new distributions on the latent features.
To showcase the versatility of Synbols, we use it to dissect the limitations and flaws in standard learning algorithms in various learning setups.
arXiv Detail & Related papers (2020-09-14T13:03:27Z) - Learning from Few Samples: A Survey [1.4146420810689422]
We study the existing few-shot meta-learning techniques in the computer vision domain based on their method and evaluation metrics.
We provide a taxonomy for the techniques and categorize them as data-augmentation, embedding, optimization and semantics based learning for few-shot, one-shot and zero-shot settings.
arXiv Detail & Related papers (2020-07-30T14:28:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.