Verbal behavior without syntactic structures: beyond Skinner and Chomsky
- URL: http://arxiv.org/abs/2303.08080v1
- Date: Sat, 11 Mar 2023 00:01:21 GMT
- Title: Verbal behavior without syntactic structures: beyond Skinner and Chomsky
- Authors: Shimon Edelman
- Abstract summary: We must rediscover the extent to which language is like any other human behavior.
Recent psychological, computational, neurobiological, and evolutionary insights into the shaping and structure of behavior may point us toward a new, viable account of language.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: What does it mean to know language? Since the Chomskian revolution, one
popular answer to this question has been: to possess a generative grammar that
exclusively licenses certain syntactic structures. Decades later, not even an
approximation to such a grammar, for any language, has been formulated; the
idea that grammar is universal and innately specified has proved barren; and
attempts to show how it could be learned from experience invariably come up
short. To move on from this impasse, we must rediscover the extent to which
language is like any other human behavior: dynamic, social, multimodal,
patterned, and purposive, its purpose being to promote desirable actions (or
thoughts) in others and self. Recent psychological, computational,
neurobiological, and evolutionary insights into the shaping and structure of
behavior may then point us toward a new, viable account of language.
Related papers
- Mobile Sequencers [0.0]
The article is an attempt to contribute to explorations of a common origin for language and planned-collaborative action.
It gives 'semantics of change' center stage in the synthesis, from its history and recordkeeping to its development, its syntax, delivery and reception.
arXiv Detail & Related papers (2024-05-09T12:39:50Z)
- Universal Syntactic Structures: Modeling Syntax for Various Natural Languages [0.0]
We aim to provide an explanation for how the human brain might connect words for sentence formation.
A novel approach to modeling syntactic representation is introduced, potentially showing the existence of universal syntactic structures for all natural languages.
arXiv Detail & Related papers (2023-12-28T20:44:26Z)
- The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs [50.32802502923367]
We study how language drives and influences social reasoning in a probabilistic goal-inference domain.
We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios.
Our model closely matches human response patterns and predicts human judgements better than an LLM alone.
arXiv Detail & Related papers (2023-06-25T19:38:01Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- Why can neural language models solve next-word prediction? A mathematical perspective [53.807657273043446]
We study a class of formal languages that can be used to model real-world examples of English sentences.
Our proof highlights the different roles of the embedding layer and the fully connected component within the neural language model.
arXiv Detail & Related papers (2023-06-20T10:41:23Z)
- Word class representations spontaneously emerge in a deep neural network trained on next word prediction [7.240611820374677]
How do humans learn language, and can the first language be learned at all?
These fundamental questions are still hotly debated.
To address these questions, we train an artificial deep neural network to predict the next word.
We find that the internal representations of nine-word input sequences cluster according to the word class of the tenth word to be predicted as output.
arXiv Detail & Related papers (2023-02-15T11:02:50Z)
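The entry above describes a simple probing recipe: train a network on next-word prediction, then cluster the hidden representations of nine-word contexts and check whether the clusters line up with the word class of the tenth word being predicted. The sketch below is a minimal, hypothetical illustration of that recipe on synthetic template sentences; it assumes a small scikit-learn feed-forward predictor and a toy lexicon rather than the paper's actual architecture or data.

```python
# Hypothetical toy setup (not the paper's actual experiment): train a small
# next-word predictor on synthetic 10-word sequences, then test whether the
# hidden representations of the 9-word contexts cluster by the part of speech
# of the 10th word. Requires numpy and scikit-learn.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Tiny lexicon: each part of speech owns a few word ids.
pos_words = {
    "DET":  [0, 1],
    "ADJ":  [2, 3, 4],
    "NOUN": [5, 6, 7, 8],
    "VERB": [9, 10, 11],
}
cycle = ["DET", "ADJ", "NOUN", "VERB"]   # fixed word-order template
vocab_size = 12
seq_len = 10

def make_sequence():
    """Sample a 10-word sequence following the template from a random offset."""
    offset = rng.integers(len(cycle))
    pos_seq = [cycle[(offset + i) % len(cycle)] for i in range(seq_len)]
    words = [rng.choice(pos_words[p]) for p in pos_seq]
    return words, pos_seq[-1]            # word ids, POS of the 10th word

def one_hot_context(words):
    """Concatenate one-hot vectors for the first nine words."""
    x = np.zeros(9 * vocab_size)
    for i, w in enumerate(words[:9]):
        x[i * vocab_size + w] = 1.0
    return x

data = [make_sequence() for _ in range(4000)]
X = np.stack([one_hot_context(w) for w, _ in data])
y_word = np.array([w[9] for w, _ in data])   # next-word prediction target
y_pos = np.array([p for _, p in data])       # word class of the target (held out)

# Small feed-forward next-word predictor, standing in for the paper's deep net.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X, y_word)

# Recompute hidden-layer activations (ReLU is MLPClassifier's default).
hidden = np.maximum(0.0, X @ clf.coefs_[0] + clf.intercepts_[0])

# Cluster the context representations and compare clusters to the target's POS.
clusters = KMeans(n_clusters=len(cycle), n_init=10, random_state=0).fit_predict(hidden)
print("Adjusted Rand index vs. next word's class:",
      round(adjusted_rand_score(y_pos, clusters), 3))
```

A high adjusted Rand index in this toy setting would mean the hidden states of the nine-word contexts group by the class of the upcoming word, the kind of spontaneous word-class structure the paper reports for its own network.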
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- How to talk so your robot will learn: Instructions, descriptions, and pragmatics [14.289220844201695]
We study how a human might communicate preferences over behaviors.
We show that in traditional reinforcement learning settings, pragmatic social learning can integrate with and accelerate individual learning.
Our findings suggest that social learning from a wider range of language is a promising approach for value alignment and reinforcement learning more broadly.
arXiv Detail & Related papers (2022-06-16T01:33:38Z)
- Emergent Communication for Understanding Human Language Evolution: What's Missing? [1.2891210250935146]
We discuss three important phenomena with respect to the emergence and benefits of compositionality.
We argue that one possible reason for the mismatches between emergent communication and human language is that key cognitive and communicative constraints of humans are not yet integrated into these simulations.
arXiv Detail & Related papers (2022-04-22T09:21:53Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
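The entry above describes a connectionist system that communicates through a discrete, symbol-like channel. As a purely illustrative sketch (the paper's actual environment, architecture, and training procedure are not given here), the following one-symbol referential game uses a Gumbel-softmax bottleneck so that a sender and a receiver network can learn a shared discrete "vocabulary" end to end; it assumes PyTorch.

```python
# Hypothetical sketch (not the paper's model): a referential game in which a
# sender must describe an object's category through a single discrete symbol
# and a receiver must recover the category. Gumbel-softmax keeps the discrete
# "word" differentiable during training. Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES, VOCAB_SIZE, STEPS = 8, 8, 2000
torch.manual_seed(0)

sender = nn.Linear(N_CLASSES, VOCAB_SIZE)    # observation -> symbol logits
receiver = nn.Linear(VOCAB_SIZE, N_CLASSES)  # symbol -> predicted category
optim = torch.optim.Adam(
    list(sender.parameters()) + list(receiver.parameters()), lr=1e-2)

for step in range(STEPS):
    labels = torch.randint(0, N_CLASSES, (64,))
    objects = F.one_hot(labels, N_CLASSES).float()   # toy "environment"
    # Nearly one-hot discrete message, with straight-through gradients.
    symbols = F.gumbel_softmax(sender(objects), tau=1.0, hard=True)
    logits = receiver(symbols)
    loss = F.cross_entropy(logits, labels)
    optim.zero_grad()
    loss.backward()
    optim.step()

# After training, each category should map onto a (mostly) consistent symbol.
with torch.no_grad():
    labels = torch.arange(N_CLASSES)
    objects = F.one_hot(labels, N_CLASSES).float()
    words = sender(objects).argmax(dim=-1)
print("category -> symbol mapping:", dict(zip(labels.tolist(), words.tolist())))
```

With matched vocabulary and category sizes, the two networks typically converge on a near one-to-one category-to-symbol code, a toy analogue of the spontaneous, semantic protocol the entry refers to.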
- Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions? [62.74872383104381]
We investigate the effectiveness of natural language interventions for reading-comprehension systems.
We propose a new language understanding task, Linguistic Ethical Interventions (LEI), where the goal is to amend a question-answering (QA) model's unethical behavior.
arXiv Detail & Related papers (2021-06-02T20:57:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.