Testing the Quantitative Spacetime Hypothesis using Artificial Narrative Comprehension (II): Establishing the Geometry of Invariant Concepts, Themes, and Namespaces
- URL: http://arxiv.org/abs/2010.08125v1
- Date: Wed, 23 Sep 2020 11:19:17 GMT
- Title: Testing the Quantitative Spacetime Hypothesis using Artificial Narrative Comprehension (II): Establishing the Geometry of Invariant Concepts, Themes, and Namespaces
- Authors: Mark Burgess
- Abstract summary: This study contributes to an ongoing application of the Semantic Spacetime Hypothesis, and demonstrates the unsupervised analysis of narrative texts.
Data streams are parsed and fractionated into small constituents, by multiscale interferometry, in the manner of bioinformatic analysis.
Fragments of the input act as symbols in a hierarchy of alphabets that define new effective languages at each scale.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a pool of observations selected from a sensor stream, input data can be
robustly represented, via a multiscale process, in terms of invariant concepts
and themes. Applying this to episodic natural language data, one may obtain a
graph geometry associated with the decomposition, which is a direct encoding of
spacetime relationships for the events. This study contributes to an ongoing
application of the Semantic Spacetime Hypothesis, and demonstrates the
unsupervised analysis of narrative texts using inexpensive computational
methods without knowledge of linguistics. Data streams are parsed and
fractionated into small constituents, by multiscale interferometry, in the
manner of bioinformatic analysis. Fragments may then be recombined to
reconstruct the original sensory episodes, or to form new narratives by a
chemistry of association and pattern reconstruction, based only on the four fundamental
spacetime relationships. There is a straightforward correspondence between
bioinformatic processes and this cognitive representation of natural language.
Features identifiable as 'concepts' and 'narrative themes' span three main
scales (micro, meso, and macro). Fragments of the input act as symbols in a
hierarchy of alphabets that define new effective languages at each scale.
Related papers
- Linguistic Structure from a Bottleneck on Sequential Information Processing [5.850665541267672]
We show that natural-language-like systematicity arises in codes that are constrained by predictive information.
We show that human languages are structured to have low predictive information at the levels of phonology, morphology, syntax, and semantics.
arXiv Detail & Related papers (2024-05-20T15:25:18Z)
- Complex systems approach to natural language [0.0]
This review summarizes the main methodological concepts used in studying natural language from the perspective of complexity science.
Three main complexity-related research trends in quantitative linguistics are covered.
arXiv Detail & Related papers (2024-01-05T12:01:26Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor in achieving compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- An Informational Space Based Semantic Analysis for Scientific Texts [62.997667081978825]
This paper introduces computational methods for semantic analysis and for quantifying the meaning of short scientific texts.
The representation of science-specific meaning is standardised by replacing situation representations rather than psychological properties.
The research in this paper lays the groundwork for a geometric representation of the meaning of texts.
arXiv Detail & Related papers (2022-05-31T11:19:32Z)
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- Bridging between Cognitive Processing Signals and Linguistic Features via a Unified Attentional Network [25.235060468310696]
We propose a data-driven method to investigate the relationship between cognitive processing signals and linguistic features.
We present a unified attentional framework that is composed of embedding, attention, encoding and predicting layers.
The proposed framework can be used to detect a wide range of linguistic features with a single cognitive dataset.
arXiv Detail & Related papers (2021-12-16T12:25:11Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Bird's Eye: Probing for Linguistic Graph Structures with a Simple Information-Theoretic Approach [23.66191446048298]
We propose a new information-theoretic probe, Bird's Eye, for detecting if and how representations encode the information in linguistic graphs.
We also propose an approach to use our probe to investigate localized linguistic information in the linguistic graphs using perturbation analysis.
arXiv Detail & Related papers (2021-05-06T13:01:57Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Intrinsic Probing through Dimension Selection [69.52439198455438]
Most modern NLP systems make use of pre-trained contextual representations that attain astonishingly high performance on a variety of tasks.
Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it.
In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted.
arXiv Detail & Related papers (2020-10-06T15:21:08Z)
- Testing the Quantitative Spacetime Hypothesis using Artificial Narrative Comprehension (I): Bootstrapping Meaning from Episodic Narrative viewed as a Feature Landscape [0.0]
This work studies the problem of extracting meaningful parts of a sensory data stream, without prior training, by using symbolic sequences.
Using lightweight procedures that can be run in just a few seconds on a single CPU, this work studies the validity of the Semantic Spacetime Hypothesis.
The results suggest that what we consider important and interesting about sensory experience is not solely based on higher reasoning, but on simple spacetime process cues, and this may be how cognitive processing is bootstrapped in the beginning.
arXiv Detail & Related papers (2020-09-23T11:10:12Z)