Testing the Quantitative Spacetime Hypothesis using Artificial Narrative
Comprehension (I) : Bootstrapping Meaning from Episodic Narrative viewed as a
Feature Landscape
- URL: http://arxiv.org/abs/2010.08126v1
- Date: Wed, 23 Sep 2020 11:10:12 GMT
- Title: Testing the Quantitative Spacetime Hypothesis using Artificial Narrative
Comprehension (I) : Bootstrapping Meaning from Episodic Narrative viewed as a
Feature Landscape
- Authors: Mark Burgess
- Abstract summary: This work studies the problem of extracting meaningful parts of a sensory data stream, without prior training, by using symbolic sequences.
Using lightweight procedures that can be run in just a few seconds on a single CPU, this work studies the validity of the Semantic Spacetime Hypothesis.
The results suggest that what we consider important and interesting about sensory experience is not solely based on higher reasoning, but on simple spacetime process cues, and this may be how cognitive processing is bootstrapped in the beginning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of extracting important and meaningful parts of a sensory data
stream, without prior training, is studied for symbolic sequences, by using
textual narrative as a test case. This is part of a larger study concerning the
extraction of concepts from spacetime processes, and their knowledge
representations within hybrid symbolic-learning `Artificial Intelligence'. Most
approaches to text analysis make extensive use of the evolved human sense of
language and semantics. In this work, streams are parsed without knowledge of
semantics, using only measurable patterns (size and time) within the changing
stream of symbols -- as an event `landscape'. This is a form of interferometry.
Using lightweight procedures that can be run in just a few seconds on a single
CPU, this work studies the validity of the Semantic Spacetime Hypothesis, for
the extraction of concepts as process invariants. This `semantic preprocessor'
may then act as a front-end for more sophisticated long-term graph-based
learning techniques. The results suggest that what we consider important and
interesting about sensory experience is not solely based on higher reasoning,
but on simple spacetime process cues, and this may be how cognitive processing
is bootstrapped in the beginning.
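The abstract describes parsing a symbol stream using only measurable patterns (size and time), with no semantic knowledge. The paper's actual procedures are not reproduced here; as a minimal illustrative sketch (all names hypothetical), scoring recurring n-gram fragments of a symbol stream purely by fragment size and recurrence, so that stable repeats emerge as crude "process invariants", could look like:

```python
from collections import Counter

def fragment_scores(stream, min_n=2, max_n=6):
    """Score n-gram fragments of a symbol stream using only measurable
    statistics (fragment size and recurrence), with no semantic knowledge.
    Recurring fragments act as crude candidates for process invariants."""
    scores = {}
    for n in range(min_n, max_n + 1):
        counts = Counter(tuple(stream[i:i + n])
                         for i in range(len(stream) - n + 1))
        for frag, c in counts.items():
            if c > 1:  # keep only fragments that recur in the stream
                # weight recurrence by size: longer exact repeats are rarer,
                # so they carry more significance as invariants
                scores[frag] = c * n
    return scores

# Toy symbol stream: characters stand in for sensory events
stream = list("abcxabcyabcz")
scores = fragment_scores(stream)
best = max(scores, key=scores.get)
print(best)  # the fragment ('a', 'b', 'c') recurs three times
```

This is only the general flavour of training-free fragment extraction, not the paper's multiscale interferometry; the lightweight, single-CPU character of the approach is the point being illustrated.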
Related papers
- SLAck: Semantic, Location, and Appearance Aware Open-Vocabulary Tracking [89.43370214059955]
Open-vocabulary Multiple Object Tracking (MOT) aims to generalize trackers to novel categories not in the training set.
We present a unified framework that jointly considers semantics, location, and appearance priors in the early steps of association.
Our method eliminates complex post-processings for fusing different cues and boosts the association performance significantly for large-scale open-vocabulary tracking.
arXiv Detail & Related papers (2024-09-17T14:36:58Z)
- Disentangling Dense Embeddings with Sparse Autoencoders [0.0]
Sparse autoencoders (SAEs) have shown promise in extracting interpretable features from complex neural networks.
We present one of the first applications of SAEs to dense text embeddings from large language models.
We show that the resulting sparse representations maintain semantic fidelity while offering interpretability.
arXiv Detail & Related papers (2024-08-01T15:46:22Z)
- Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually-grounded text perturbation methods like typos and word order shuffling, resonating with human cognitive patterns, and enabling perturbation to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
arXiv Detail & Related papers (2024-02-13T02:46:45Z)
- Subspace Chronicles: How Linguistic Information Emerges, Shifts and Interacts during Language Model Training [56.74440457571821]
We analyze tasks covering syntax, semantics and reasoning, across 2M pre-training steps and five seeds.
We identify critical learning phases across tasks and time, during which subspaces emerge, share information, and later disentangle to specialize.
Our findings have implications for model interpretability, multi-task learning, and learning from limited data.
arXiv Detail & Related papers (2023-10-25T09:09:55Z)
- Topic-DPR: Topic-based Prompts for Dense Passage Retrieval [6.265789210037749]
We present Topic-DPR, a dense passage retrieval model that uses topic-based prompts.
We introduce a novel positive and negative sampling strategy, leveraging semi-structured data to boost dense retrieval efficiency.
arXiv Detail & Related papers (2023-10-10T13:45:24Z)
- Preliminary study on using vector quantization latent spaces for TTS/VC systems with consistent performance [55.10864476206503]
We investigate the use of quantized vectors to model the latent linguistic embedding.
By enforcing different policies over the latent spaces in the training, we are able to obtain a latent linguistic embedding.
Our experiments show that the voice cloning system built with vector quantization has only a small degradation in terms of perceptive evaluations.
arXiv Detail & Related papers (2021-06-25T07:51:35Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind -- Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- Testing the Quantitative Spacetime Hypothesis using Artificial Narrative Comprehension (II) : Establishing the Geometry of Invariant Concepts, Themes, and Namespaces [0.0]
This study contributes to an ongoing application of the Semantic Spacetime Hypothesis, and demonstrates the unsupervised analysis of narrative texts.
Data streams are parsed and fractionated into small constituents, by multiscale interferometry, in the manner of bioinformatic analysis.
Fragments of the input act as symbols in a hierarchy of alphabets that define new effective languages at each scale.
arXiv Detail & Related papers (2020-09-23T11:19:17Z)
- Visual Question Answering with Prior Class Semantics [50.845003775809836]
We show how to exploit additional information pertaining to the semantics of candidate answers.
We extend the answer prediction process with a regression objective in a semantic space.
Our method brings improvements in consistency and accuracy over a range of question types.
arXiv Detail & Related papers (2020-05-04T02:46:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.