Discrete and continuous representations and processing in deep learning: Looking forward
- URL: http://arxiv.org/abs/2201.01233v1
- Date: Tue, 4 Jan 2022 16:30:18 GMT
- Title: Discrete and continuous representations and processing in deep learning: Looking forward
- Authors: Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens
- Abstract summary: We argue that combining discrete and continuous representations and their processing will be essential to build systems that exhibit a general form of intelligence.
We suggest and discuss several avenues that could improve current neural networks with the inclusion of discrete elements to combine the advantages of both types of representations.
- Score: 18.28761409764605
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Discrete and continuous representations of content (e.g., of language or
images) have interesting properties to be explored for the understanding of or
reasoning with this content by machines. This position paper puts forward our
opinion on the role of discrete and continuous representations and their
processing in the deep learning field. Current neural network models compute
continuous-valued data. Information is compressed into dense, distributed
embeddings. By stark contrast, humans use discrete symbols in their
communication with language. Such symbols represent a compressed version of the
world that derives its meaning from shared contextual information.
Additionally, human reasoning involves symbol manipulation at a cognitive
level, which facilitates abstract reasoning, the composition of knowledge and
understanding, generalization and efficient learning. Motivated by these
insights, in this paper we argue that combining discrete and continuous
representations and their processing will be essential to build systems that
exhibit a general form of intelligence. We suggest and discuss several avenues
that could improve current neural networks with the inclusion of discrete
elements to combine the advantages of both types of representations.
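One concrete avenue for adding discrete elements to an otherwise continuous network, in the spirit of (though not prescribed by) this paper, is vector quantization as popularized by VQ-VAE-style models: a continuous embedding is snapped to its nearest entry in a codebook, yielding a discrete symbol index while downstream layers keep consuming vectors. A minimal NumPy sketch, with codebook size, dimensionality, and names chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and names -- assumptions for this sketch, not taken from the paper.
num_codes, dim = 16, 8                 # codebook entries, embedding width
codebook = rng.normal(size=(num_codes, dim))

def quantize(z):
    """Snap a continuous embedding z (shape: (dim,)) to its nearest code.

    Returns the integer symbol index (the discrete representation) and the
    corresponding codebook vector (the continuous stand-in that downstream
    layers consume).
    """
    dists = np.linalg.norm(codebook - z, axis=1)   # distance to each code
    idx = int(np.argmin(dists))                    # discrete symbol
    return idx, codebook[idx]

# Stand-in for a real encoder activation.
z_continuous = rng.normal(size=dim)
symbol, z_quantized = quantize(z_continuous)
print(symbol, np.round(np.linalg.norm(z_quantized - z_continuous), 3))
```

In a trainable model the codebook itself would be learned and a straight-through estimator would copy gradients across the non-differentiable argmin; the sketch only isolates the discretization step.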
Related papers
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
Via fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, enabling logic-induced network training (see the sketch after this list).
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- On the Generalization of Learned Structured Representations [5.1398743023989555]
We study methods that learn, with little or no supervision, representations of unstructured data that capture its hidden structure.
The second part of this thesis focuses on object-centric representations, which capture the compositional structure of the input in terms of symbol-like entities.
arXiv Detail & Related papers (2023-04-25T17:14:36Z)
- Learning in Factored Domains with Information-Constrained Visual Representations [14.674830543204317]
We present a model of human factored representation learning based on an altered form of a $\beta$-Variational Auto-encoder used in a visual learning task.
Results demonstrate a trade-off in the informational complexity of model latent dimension spaces, between the speed of learning and the accuracy of reconstructions.
arXiv Detail & Related papers (2023-03-30T16:22:10Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Constellation: Learning relational abstractions over objects for compositional imagination [64.99658940906917]
We introduce Constellation, a network that learns relational abstractions of static visual scenes.
This work is a first step towards explicitly representing visual relationships and using them for complex cognitive procedures.
arXiv Detail & Related papers (2021-07-23T11:59:40Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Interpreting Neural Networks as Gradual Argumentation Frameworks (Including Proof Appendix) [0.34265828682659694]
We show that an interesting class of feed-forward neural networks can be understood as quantitative argumentation frameworks.
This connection creates a bridge between research in Formal Argumentation and Machine Learning.
arXiv Detail & Related papers (2020-12-10T15:18:15Z)
- Interpretable Representations in Explainable AI: From Theory to Practice [7.031336702345381]
Interpretable representations are the backbone of many explainers that target black-box predictive systems.
We study properties of interpretable representations that encode the presence and absence of human-comprehensible concepts.
arXiv Detail & Related papers (2020-08-16T21:44:03Z)
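The LOGICSEG entry above mentions grounding logical formulae into neural computation via a fuzzy logic-based continuous relaxation. The sketch below is not taken from that paper; it only illustrates the general idea under common assumptions: truth values in [0, 1] replace Booleans, the relaxed connectives stay differentiable, and a violated rule becomes a penalty that could be added to a network's training loss. The rule, probabilities, and names are made up for illustration.

```python
import numpy as np

# A common fuzzy relaxation of Boolean connectives over truth values in [0, 1].
def f_not(a):        return 1.0 - a
def f_and(a, b):     return a * b
def f_or(a, b):      return a + b - a * b
def f_implies(a, b): return f_or(f_not(a), b)   # material implication, relaxed

# Toy rule: "if a pixel is labeled cat, it must also be labeled animal".
p_cat    = np.array([0.9, 0.2, 0.7])     # predicted per-pixel probabilities
p_animal = np.array([0.95, 0.1, 0.3])

truth = f_implies(p_cat, p_animal)        # degree to which the rule holds per pixel
logic_loss = float(np.mean(1.0 - truth))  # differentiable penalty for violations
print(np.round(truth, 3), round(logic_loss, 3))
```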
This list is automatically generated from the titles and abstracts of the papers on this site.