Language Cognition and Language Computation -- Human and Machine
Language Understanding
- URL: http://arxiv.org/abs/2301.04788v1
- Date: Thu, 12 Jan 2023 02:37:00 GMT
- Authors: Shaonan Wang, Nai Ding, Nan Lin, Jiajun Zhang, Chengqing Zong
- Abstract summary: Language understanding is a key scientific issue in the fields of cognitive and computer science.
Can a combination of the disciplines offer new insights for building intelligent language models?
- Score: 51.56546543716759
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language understanding is a key scientific issue in the fields of cognitive
and computer science. However, the two disciplines differ substantially in their
specific research questions. Cognitive science focuses on analyzing specific
brain mechanisms and investigating the brain's responses to language; few
studies have examined the brain's language system as a whole. By
contrast, computer scientists focus on the efficiency of practical applications
when choosing research questions but may ignore the most essential laws of
language. Given these differences, can a combination of the disciplines offer
new insights for building intelligent language models and studying language
cognitive mechanisms? In the following text, we first review the research
questions, history, and methods of language understanding in cognitive and
computer science, focusing on the current progress and challenges. We then
compare and contrast the research of language understanding in cognitive and
computer sciences. Finally, we review existing work that combines insights from
language cognition and language computation and offer prospects for future
development trends.
Related papers
- Decoding Linguistic Representations of Human Brain [21.090956290947275]
We present a taxonomy of brain-to-language decoding of both textual and speech formats.
This work integrates two types of research: neuroscience focusing on language understanding and deep learning-based brain decoding.
arXiv Detail & Related papers (2024-07-30T07:55:44Z)
- Lost in Translation: The Algorithmic Gap Between LMs and the Brain [8.799971499357499]
Language Models (LMs) have achieved impressive performance on various linguistic tasks, but their relationship to human language processing in the brain remains unclear.
This paper examines the gaps and overlaps between LMs and the brain at different levels of analysis.
We discuss how insights from neuroscience, such as sparsity, modularity, internal states, and interactive learning, can inform the development of more biologically plausible language models.
arXiv Detail & Related papers (2024-07-05T17:43:16Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Tapping into the Natural Language System with Artificial Languages when Learning Programming [7.5520627446611925]
This study investigates whether learning programming can be enhanced by activating language-learning mechanisms through training in an artificial language.
We observed that the training of the artificial language can be easily integrated into our curriculum.
However, within the context of our study, we did not find a significant benefit for programming competency when students learned an artificial language first.
arXiv Detail & Related papers (2024-01-12T07:08:55Z) - Causal Graph in Language Model Rediscovers Cortical Hierarchy in Human
Narrative Processing [0.0]
Previous studies have demonstrated that the features of language models can be mapped to fMRI brain activity.
This raises the question: is there a commonality between information processing in language models and the human brain?
To estimate information flow patterns in a language model, we examined the causal relationships between different layers.
arXiv Detail & Related papers (2023-11-17T10:09:12Z) - Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism
of Language Models [49.39276272693035]
Large-scale pre-trained language models have shown remarkable memorizing ability.
Vanilla neural networks without pre-training have long been observed to suffer from the catastrophic forgetting problem.
We find that 1) vanilla language models are forgetful; 2) pre-training leads to retentive language models; and 3) knowledge relevance and diversification significantly influence memory formation.
arXiv Detail & Related papers (2023-05-16T03:50:38Z) - A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z) - Perception Point: Identifying Critical Learning Periods in Speech for
Bilingual Networks [58.24134321728942]
We compare and identify cognitive aspects of deep neural network-based visual lip-reading models.
We observe a strong correlation between these theories in cognitive psychology and our modeling approach.
arXiv Detail & Related papers (2021-10-13T05:30:50Z) - Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z) - Data-driven models and computational tools for neurolinguistics: a
language technology perspective [12.082438928980087]
We present a review of brain-imaging-based neurolinguistic studies with a focus on natural language representations.
Mutual enrichment of neurolinguistics and language technologies leads to the development of brain-aware natural language representations.
arXiv Detail & Related papers (2020-03-23T20:41:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.