FACT: Learning Governing Abstractions Behind Integer Sequences
- URL: http://arxiv.org/abs/2209.09543v1
- Date: Tue, 20 Sep 2022 08:20:03 GMT
- Title: FACT: Learning Governing Abstractions Behind Integer Sequences
- Authors: Peter Belcák, Ard Kastrati, Flavio Schenker, Roger Wattenhofer
- Abstract summary: We introduce a novel view on the learning of concepts admitting complete finitary descriptions.
We lay down a set of benchmarking tasks aimed at conceptual understanding by machine learning models.
To further aid research in knowledge representation and reasoning, we present FACT, the Finitary Abstraction Comprehension Toolkit.
- Score: 7.895232155155041
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integer sequences are of central importance to the modeling of concepts
admitting complete finitary descriptions. We introduce a novel view on the
learning of such concepts and lay down a set of benchmarking tasks aimed at
conceptual understanding by machine learning models. These tasks indirectly
assess a model's ability to abstract, and challenge it to reason both
interpolatively and extrapolatively from the knowledge gained by observing
representative examples. To further aid research in knowledge representation
and reasoning, we present FACT, the Finitary Abstraction Comprehension Toolkit.
The toolkit bundles a large dataset of integer sequences comprising both
organic and synthetic entries, a library for data pre-processing and
generation, a set of model performance evaluation tools, and a collection of
baseline model implementations, enabling future advances to be made with
ease.
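The abstract does not spell out the toolkit's interface, so the following is a minimal, hypothetical Python sketch of the kind of benchmark it describes: synthetic integer sequences are generated from simple finitary rules and a predictor is scored on exact next-term extrapolation. Every name here (make_sequences, predict_next, evaluate_extrapolation) is invented for illustration; FACT's actual API may differ entirely.

```python
# Hypothetical sketch of a FACT-like benchmark loop; the toolkit's real API
# may differ entirely. Sequences come from simple finitary rules, and a
# model is scored on extrapolating the next term after seeing a prefix.
import random

def make_sequences(n=100, length=12, seed=0):
    """Generate synthetic integer sequences from simple closed-form rules."""
    rng = random.Random(seed)
    rules = [
        lambda a, d: [a + d * i for i in range(length)],      # arithmetic
        lambda a, r: [a * r ** i for i in range(length)],     # geometric
        lambda a, b: [a * i * i + b for i in range(length)],  # quadratic
    ]
    return [rng.choice(rules)(rng.randint(1, 5), rng.randint(2, 4))
            for _ in range(n)]

def predict_next(prefix):
    """Toy baseline: assume the sequence is arithmetic and continue it."""
    return prefix[-1] + (prefix[-1] - prefix[-2])

def evaluate_extrapolation(sequences, prefix_len=8):
    """Fraction of sequences whose next term the model predicts exactly."""
    hits = sum(predict_next(s[:prefix_len]) == s[prefix_len] for s in sequences)
    return hits / len(sequences)

# The arithmetic-only baseline should score roughly one third here.
print(evaluate_extrapolation(make_sequences()))
```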
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- mlr3summary: Concise and interpretable summaries for machine learning models [9.191045750996524]
This work introduces a novel R package for concise, informative summaries of machine learning models.
We take inspiration from the summary function for (generalized) linear models in R, but extend it in several directions.
arXiv Detail & Related papers (2024-04-25T08:57:35Z)
- Corpus Considerations for Annotator Modeling and Scaling [9.263562546969695]
We show that the commonly used user token model consistently outperforms more complex models.
Our findings shed light on the relationship between corpus statistics and annotator modeling performance.
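The "user token" model the entry refers to is usually realized by prepending an annotator-specific special token to each input so one shared model can condition on who produced the label. A minimal sketch under that assumption; the token format and helper below are invented, not the paper's exact formulation.

```python
# Minimal sketch of the "user token" idea: condition a shared model on the
# annotator by prepending a dedicated token to every input.
def add_annotator_token(text: str, annotator_id: int) -> str:
    """Prefix the input with a special per-annotator token."""
    return f"[ANNOTATOR_{annotator_id}] {text}"

examples = [
    ("this movie was fine", 3),
    ("this movie was fine", 7),  # same text, different annotator
]
for text, ann in examples:
    print(add_annotator_token(text, ann))
# A tokenizer would register the [ANNOTATOR_*] strings as single special
# tokens, letting the model learn one embedding per annotator.
```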
arXiv Detail & Related papers (2024-04-02T22:27:24Z)
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives; a toy sketch of this alternation follows the entry.
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
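A toy sketch of the alternation described for FEC, assuming a k-means-like reading of the grouping and update steps; FEC itself operates on deep features inside a network, which this stand-alone NumPy version does not reproduce.

```python
# Toy sketch of the cluster/update alternation described for FEC: group
# pixel features into clusters, then pull each feature toward its cluster
# representative. K-means-like and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 16))                        # 500 "pixels", 16-dim features
centers = feats[rng.choice(500, size=8, replace=False)]   # 8 initial representatives

for _ in range(10):
    # Grouping step: assign every pixel feature to its nearest representative.
    dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    # Update step: recompute representatives (keeping old ones for empty
    # clusters), then move each pixel feature toward its representative.
    centers = np.stack([feats[assign == k].mean(axis=0) if (assign == k).any()
                        else centers[k] for k in range(8)])
    feats = 0.5 * feats + 0.5 * centers[assign]
```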
- Self-Supervised Representation Learning with Meta Comprehensive Regularization [11.387994024747842]
We introduce a module called CompMod with Meta Comprehensive Regularization (MCR), embedded into existing self-supervised frameworks.
We update our proposed model through a bi-level optimization mechanism, enabling it to capture comprehensive features.
We provide theoretical support for our proposed method from information-theoretic and causal counterfactual perspectives.
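Bi-level optimization in this setting generally means an inner problem that fits model parameters under the current meta-parameters and an outer loop that adjusts the meta-parameters against a held-out objective. A toy numerical sketch of that mechanism, not of the paper's CompMod/MCR formulation:

```python
# Toy bi-level optimization: the inner problem fits a weight w on training
# data under an L2 penalty lam; the outer loop tunes lam on validation data
# by finite differences. Illustrative of the mechanism only, not of MCR.
import numpy as np

rng = np.random.default_rng(1)
x_tr, x_va = rng.normal(size=50), rng.normal(size=50)
y_tr = 2.0 * x_tr + rng.normal(0, 0.5, 50)
y_va = 2.0 * x_va

def inner_fit(lam):
    """Closed-form ridge solution: argmin_w ||y - w*x||^2 + lam * w^2."""
    return (x_tr @ y_tr) / (x_tr @ x_tr + lam)

def val_loss(lam):
    w = inner_fit(lam)                    # inner (lower-level) problem
    return ((y_va - w * x_va) ** 2).mean()

lam, eps, lr = 1.0, 1e-4, 0.1
for _ in range(100):                      # outer (upper-level) updates
    grad = (val_loss(lam + eps) - val_loss(lam - eps)) / (2 * eps)
    lam = max(lam - lr * grad, 0.0)       # keep the penalty non-negative
print(f"tuned lambda = {lam:.3f}, val loss = {val_loss(lam):.4f}")
```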
arXiv Detail & Related papers (2024-03-03T15:53:48Z)
- CAManim: Animating end-to-end network activation maps [0.2509487459755192]
We propose a novel XAI visualization method denoted CAManim that seeks to broaden and focus end-user understanding of CNN predictions.
We additionally propose a novel quantitative assessment that expands upon the Remove and Debias (ROAD) metric.
This builds upon prior research to address the increasing demand for interpretable, robust, and transparent model assessment methodology.
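ROAD-style metrics judge an attribution map by removing the inputs it ranks as most relevant and measuring how the model's output degrades. A minimal sketch of that removal-and-remeasure loop with a stand-in model and saliency map; CAManim's actual extension of ROAD is not reproduced here.

```python
# Sketch of a ROAD-style faithfulness check: progressively remove the
# pixels an attribution map ranks as most relevant and watch the model's
# score drop. The "model" and "attribution" are stand-ins.
import numpy as np

rng = np.random.default_rng(2)
image = rng.normal(size=(8, 8))
weights = rng.normal(size=(8, 8))

def model_score(img):
    return float((img * weights).sum())   # stand-in linear "classifier"

attribution = image * weights                      # stand-in saliency map
order = np.argsort(attribution, axis=None)[::-1]   # most relevant first

for frac in (0.0, 0.1, 0.3, 0.5):
    img = image.copy().ravel()
    k = int(frac * img.size)
    img[order[:k]] = 0.0   # crude removal; ROAD proper uses debiased imputation
    print(f"removed {frac:.0%}: score = {model_score(img.reshape(8, 8)):.3f}")
```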
arXiv Detail & Related papers (2023-12-19T01:07:36Z)
- One-Shot Open Affordance Learning with Foundation Models [54.15857111929812]
We introduce One-shot Open Affordance Learning (OOAL), where a model is trained with just one example per base object category.
We propose a vision-language framework with simple and effective designs that boost the alignment between visual features and affordance text embeddings.
Experiments on two affordance segmentation benchmarks show that the proposed method outperforms state-of-the-art models with less than 1% of the full training data; a toy sketch of the feature-text alignment follows the entry.
arXiv Detail & Related papers (2023-11-29T16:23:06Z)
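The alignment described for OOAL is, in broad strokes, a similarity between dense visual features and text embeddings of affordance labels, with per-pixel similarities acting as segmentation logits. A hypothetical sketch with random stand-in features, not the paper's architecture:

```python
# Sketch of vision-language affordance alignment: cosine similarity between
# per-pixel visual features and affordance text embeddings yields per-pixel
# affordance logits. All features here are random stand-ins.
import numpy as np

rng = np.random.default_rng(3)
pix = rng.normal(size=(32 * 32, 512))   # per-pixel visual features
txt = rng.normal(size=(3, 512))         # e.g. "grasp", "cut", "pour" embeddings

def normalize(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

logits = normalize(pix) @ normalize(txt).T      # (1024, 3) cosine similarities
seg = logits.argmax(axis=1).reshape(32, 32)     # per-pixel affordance map
print(np.bincount(seg.ravel(), minlength=3))    # pixels per affordance class
```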
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- SummVis: Interactive Visual Analysis of Models, Data, and Evaluation for Text Summarization [14.787106201073154]
SummVis is an open-source tool for visualizing abstractive summaries.
It enables fine-grained analysis of the models, data, and evaluation metrics associated with text summarization.
arXiv Detail & Related papers (2021-04-15T17:13:00Z)
- A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics [131.93113552146195]
We present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts.
In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images.
We undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3; a toy sketch of the length-based extrapolation split follows the entry.
arXiv Detail & Related papers (2021-03-02T01:32:54Z)
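HINT-style evaluation rests on splits that force systematic generalization, for instance training on short expressions and testing on strictly longer ones. A toy sketch of such a length-based extrapolation split on symbolic arithmetic; the handwritten images and sequence-to-sequence models are omitted.

```python
# Toy sketch of a HINT-style extrapolation split: train on short arithmetic
# expressions, test on strictly longer ones, so success requires
# generalizing the concepts rather than memorizing seen lengths.
import random

def make_expression(n_ops, rng):
    """Random integer expression with n_ops binary operators."""
    expr = str(rng.randint(0, 9))
    for _ in range(n_ops):
        expr += rng.choice([" + ", " - ", " * "]) + str(rng.randint(0, 9))
    return expr, eval(expr)  # ground-truth value via Python's evaluator

rng = random.Random(0)
train = [make_expression(rng.randint(1, 3), rng) for _ in range(1000)]
test = [make_expression(rng.randint(4, 6), rng) for _ in range(200)]
print(train[0], "| max train ops: 3, min test ops: 4")
# A seq2seq model would be fit on `train` and scored on `test`;
# extrapolative accuracy is exact match on the longer, unseen lengths.
```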
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)