Learning Concepts Described by Weight Aggregation Logic
- URL: http://arxiv.org/abs/2009.10574v1
- Date: Tue, 22 Sep 2020 14:32:42 GMT
- Title: Learning Concepts Described by Weight Aggregation Logic
- Authors: Steffen van Bergerem, Nicole Schweikardt
- Abstract summary: We introduce an extension of first-order logic that allows us to aggregate weights of tuples, compare such aggregates, and use them to build more complex formulas.
We show that concepts definable in FOWA1 over a weighted background structure of at most polylogarithmic degree are agnostically PAC-learnable in polylogarithmic time after pseudo-linear time preprocessing.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider weighted structures, which extend ordinary relational structures
by assigning weights, i.e. elements from a particular group or ring, to tuples
present in the structure. We introduce an extension of first-order logic that
allows us to aggregate weights of tuples, compare such aggregates, and use them to
build more complex formulas. We provide locality properties of fragments of
this logic including Feferman-Vaught decompositions and a Gaifman normal form
for a fragment called FOW1, as well as a localisation theorem for a larger
fragment called FOWA1. This fragment can express concepts from various machine
learning scenarios. Using the locality properties, we show that concepts
definable in FOWA1 over a weighted background structure of at most
polylogarithmic degree are agnostically PAC-learnable in polylogarithmic time
after pseudo-linear time preprocessing.
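As a purely illustrative sketch (hypothetical notation, not the paper's formal syntax; the edge relation $E$, weight function $w$, and threshold $t$ are assumed for the example), consider a weighted graph in which every edge $(u,v)$ carries a rational weight $w(u,v)$. A formula in the spirit of FOWA1 can sum such weights over a definable set of tuples and compare the aggregate to a threshold, for instance
$\varphi(x) := \big( \sum_{y :\, E(x,y)} w(x,y) \big) \geq t$,
which says that the total weight of the edges incident to $x$ is at least $t$. Aggregating and comparing weights in this way is one route by which such a logic can express threshold-style concepts arising in machine learning scenarios.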
Related papers
- Sort & Slice: A Simple and Superior Alternative to Hash-Based Folding for Extended-Connectivity Fingerprints [0.873811641236639]
We introduce a general mathematical framework for the vectorisation of structural fingerprints via a formal operation called substructure pooling.
Sort & Slice is an easy-to-implement and bit-collision-free alternative to hash-based folding for the pooling of ECFP substructures.
arXiv Detail & Related papers (2024-03-10T16:49:04Z)
- Parrot Mind: Towards Explaining the Complex Task Reasoning of Pretrained Large Language Models with Template-Content Structure [66.33623392497599]
We show that a structure called template-content structure (T-C structure) can reduce the possible space from exponential level to linear level.
We demonstrate that models can achieve task composition, further reducing the space needed to learn from linear to logarithmic.
arXiv Detail & Related papers (2023-10-09T06:57:45Z)
- Query Structure Modeling for Inductive Logical Reasoning Over Knowledge Graphs [67.043747188954]
We propose a structure-modeled textual encoding framework for inductive logical reasoning over KGs.
It encodes linearized query structures and entities using pre-trained language models to find answers.
We conduct experiments on two inductive logical reasoning datasets and three transductive datasets.
arXiv Detail & Related papers (2023-05-23T01:25:29Z)
- Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
- Understanding the Covariance Structure of Convolutional Filters [86.0964031294896]
Recent ViT-inspired convolutional networks such as ConvMixer and ConvNeXt use large-kernel depthwise convolutions with notable structure.
We first observe that such learned filters have highly-structured covariance matrices, and we find that covariances calculated from small networks may be used to effectively initialize a variety of larger networks.
arXiv Detail & Related papers (2022-10-07T15:59:13Z)
- Weighted Model Counting in FO2 with Cardinality Constraints and Counting Quantifiers: A Closed Form Formula [4.18804572788063]
Weighted First-Order Model Counting (WFOMC) computes the weighted sum of the models of a first-order logic theory on a finite domain.
We introduce the concept of lifted interpretations as a tool for formulating closed-forms for WFOMC.
We then expand this closed-form to incorporate cardinality constraints, existential quantifiers, and counting quantifiers (a.k.a. C2) without losing domain-liftability.
arXiv Detail & Related papers (2021-10-12T13:28:50Z)
- Structural Learning of Probabilistic Sentential Decision Diagrams under Partial Closed-World Assumption [127.439030701253]
Probabilistic sentential decision diagrams are a class of structured-decomposable circuits.
We propose a new scheme based on a partial closed-world assumption: data implicitly provide the logical base of the circuit.
Preliminary experiments show that the proposed approach might properly fit training data, and generalize well to test data, provided that these remain consistent with the underlying logical base.
arXiv Detail & Related papers (2021-07-26T12:01:56Z)
- Finite-Function-Encoding Quantum States [52.77024349608834]
We introduce finite-function-encoding (FFE) states which encode arbitrary $d$-valued logic functions.
We investigate some of their structural properties.
arXiv Detail & Related papers (2020-12-01T13:53:23Z)
- Weighted Model Counting in the two variable fragment with Cardinality Constraints: A Closed Form Formula [7.766921168069532]
Weighted First-Order Model Counting (WFOMC) computes the weighted sum of the models of a first-order theory on a given finite domain.
We introduce the concept of lifted interpretations as a tool for formulating closed-forms for WFOMC.
We then expand this closed-form to incorporate existential quantifiers and cardinality constraints without losing domain-liftability.
arXiv Detail & Related papers (2020-09-25T13:50:18Z)
- Generalising Recursive Neural Models by Tensor Decomposition [12.069862650316262]
We introduce a general approach to model aggregation of structural context leveraging a tensor-based formulation.
We show how the exponential growth in the size of the parameter space can be controlled through an approximation based on the Tucker decomposition.
By this means, we can effectively regulate the trade-off between the expressivity of the encoding (controlled by the hidden size), computational complexity, and model generalisation.
arXiv Detail & Related papers (2020-06-17T17:28:19Z)
- Learning Concepts Definable in First-Order Logic with Counting [0.0]
We show that FOCN over classes of structures of polylogarithmic degree can be consistently learned in sublinear time.
We extend the result to probably approximately correct (PAC) learning for classes of structures of degree at most $(\log \log n)^c$ for some constant $c$.
arXiv Detail & Related papers (2019-09-09T12:57:29Z)