Mobile Sequencers
- URL: http://arxiv.org/abs/2405.06710v1
- Date: Thu, 9 May 2024 12:39:50 GMT
- Title: Mobile Sequencers
- Authors: Cem Bozsahin
- Abstract summary: The article is an attempt to contribute to explorations of a common origin for language and planned-collaborative action.
It gives `semantics of change' center stage in the synthesis, from its history and recordkeeping to its development, its syntax, delivery and reception.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The article is an attempt to contribute to explorations of a common origin for language and planned-collaborative action. It gives `semantics of change' center stage in the synthesis, from its history and recordkeeping to its development, its syntax, delivery and reception, including substratal aspects. It is suggested that to arrive at a common core, linguistic semantics must be understood as studying, through syntax, a mobile agent's representing, tracking and coping with change and no change. The semantics of actions can be conceived the same way, but through plans instead of syntax. The key point is the following: sequencing itself, of words and of action sequences, brings more structural interpretation to the sequence than what is immediately evident from the sequents themselves. Mobile sequencers can be understood as subjects structuring the reporting, understanding and tracking of change and no change. The idea invites rethinking of the notion of category, both in language and in planning. Understanding how mobile agents understand change is suggested to be about human extended practice, not extended-human practice. That is why linguistics is as important as computer science in the synthesis. It must rely on a representational history of acts, thoughts and expressions, personal and public, crosscutting the overtness and covertness of these phenomena. This has implications for anthropology through the notion of extended practice, which is covered briefly.
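The key point about sequencing can be made concrete with a toy sketch (ours, not the paper's; all names are illustrative assumptions): model each sequent, whether a word or an action, as a function from states to states, a semantics of change and no change, and interpret a sequence by composing its steps left to right. The same sequents in a different order receive a different interpretation, so the sequence carries structure that is not evident from the sequents alone.

```python
# Toy sketch, not from the paper: sequents as state-change functions.
from functools import reduce

def pick_up(state):
    # "pick up the key": a change, plus no change to everything else.
    return {**state, "holding": "key", "key_on_table": False}

def unlock(state):
    # "unlock" is interpretable only relative to what earlier steps
    # established: its meaning depends on its place in the sequence.
    if state.get("holding") == "key":
        return {**state, "door_locked": False}
    return state  # no change: the step has no effect in this state

def interpret(sequence, state):
    """Interpret a sequence by composing its steps left to right."""
    return reduce(lambda s, step: step(s), sequence, state)

start = {"key_on_table": True, "door_locked": True}
print(interpret([pick_up, unlock], start))  # door ends up unlocked
print(interpret([unlock, pick_up], start))  # same sequents, door stays locked
```

The order-sensitivity is the point: the interpretation of the whole is a composition, not a mere collection of the parts' meanings.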
Related papers
- Situated Instruction Following [87.37244711380411]
We propose situated instruction following, which embraces the inherent underspecification and ambiguity of real-world communication.
The meaning of situated instructions naturally unfolds through the past actions and the expected future behaviors of the human involved.
Our experiments indicate that state-of-the-art Embodied Instruction Following (EIF) models lack holistic understanding of situated human intention.
arXiv Detail & Related papers (2024-07-15T19:32:30Z)
- Survey in Characterization of Semantic Change [0.1474723404975345]
Understanding the meaning of words is vital for interpreting texts from different cultures.
Semantic changes can potentially impact the quality of the outcomes of computational linguistics algorithms.
arXiv Detail & Related papers (2024-02-29T12:13:50Z)
- Verbal behavior without syntactic structures: beyond Skinner and Chomsky [0.0]
We must rediscover the extent to which language is like any other human behavior.
Recent psychological, computational, neurobiological, and evolutionary insights into the shaping and structure of behavior may point us toward a new, viable account of language.
arXiv Detail & Related papers (2023-03-11T00:01:21Z)
- How Do Transformers Learn Topic Structure: Towards a Mechanistic Understanding [56.222097640468306]
We provide a mechanistic understanding of how transformers learn "semantic structure".
We show, through a combination of mathematical analysis and experiments on Wikipedia data, that the embedding layer and the self-attention layer encode the topical structure.
arXiv Detail & Related papers (2023-03-07T21:42:17Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor to achieve compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Comprehending and Ordering Semantics for Image Captioning [124.48670699658649]
We propose a new recipe for Transformer-style structure, namely Comprehending and Ordering Semantics Networks (COS-Net).
COS-Net unifies an enriched semantic comprehension process and a learnable semantic ordering process in a single architecture.
arXiv Detail & Related papers (2022-06-14T15:51:14Z)
- A Paradigm Change for Formal Syntax: Computational Algorithms in the Grammar of English [0.0]
We turn to programming languages as models for a process-based syntax of English.
The combination of a functional word and a content word was chosen as the topic of modeling.
The fit of the model was tested by deriving three functional characteristics crucial for the algorithm and checking their presence in English grammar.
arXiv Detail & Related papers (2022-05-24T07:28:47Z)
- Target Languages (vs. Inductive Biases) for Learning to Act and Plan [13.820550902006078]
I articulate a different learning approach where representations do not emerge from biases in a neural architecture but are learned over a given target language with a known semantics.
The goals of the paper and talk are to make these ideas explicit, to place them in a broader context where the design of the target language is crucial, and to illustrate them in the context of learning to act and plan.
arXiv Detail & Related papers (2021-09-15T10:24:13Z)
- Grounding Spatio-Temporal Language with Transformers [22.46291815734606]
We introduce a novel spatio-temporal language grounding task to learn the meaning of behavioral traces of an embodied agent.
This is achieved by training a function that predicts whether a description matches a given history of observations (a toy sketch of this setup appears after this list).
To study the role of architectural biases in generalization on this task, we train several models, including multimodal Transformer architectures.
arXiv Detail & Related papers (2021-06-16T15:28:22Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z)
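As a companion to the Grounding Spatio-Temporal Language entry above, here is a minimal sketch (our illustration, not the paper's code) of the training signal it describes: a binary classifier scoring whether a description matches a history of observations. The mean-pooled encoders below stand in for the paper's multimodal Transformers; every dimension and name is an assumption.

```python
# Minimal sketch of match prediction between a description and an
# observation history; the encoders are simplified stand-ins.
import torch
import torch.nn as nn

class MatchModel(nn.Module):
    def __init__(self, obs_dim=16, vocab=100, dim=32):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, dim)  # embed each observation step
        self.txt_emb = nn.Embedding(vocab, dim)  # embed description tokens
        self.scorer = nn.Linear(2 * dim, 1)      # joint match score (a logit)

    def forward(self, obs_seq, desc_ids):
        # Pool the observation trace and the description, then score the pair.
        h_obs = self.obs_proj(obs_seq).mean(dim=1)
        h_txt = self.txt_emb(desc_ids).mean(dim=1)
        return self.scorer(torch.cat([h_obs, h_txt], dim=-1)).squeeze(-1)

model = MatchModel()
obs = torch.randn(4, 10, 16)             # 4 histories of 10 observations
desc = torch.randint(0, 100, (4, 7))     # 4 descriptions of 7 token ids
labels = torch.tensor([1., 0., 1., 0.])  # does the description match?
logits = model(obs, desc)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()  # trained end to end on match / no-match pairs
```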
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.