Imitation versus Innovation: What children can do that large language
and language-and-vision models cannot (yet)?
- URL: http://arxiv.org/abs/2305.07666v1
- Date: Mon, 8 May 2023 18:26:39 GMT
- Title: Imitation versus Innovation: What children can do that large language
and language-and-vision models cannot (yet)?
- Authors: Eunice Yiu, Eliza Kosoy and Alison Gopnik
- Abstract summary: We argue that large language models and language-and-vision models are efficient imitation engines.
Our findings suggest that machines may need more than large scale language and images to achieve what a child can do.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much discussion about large language models and language-and-vision models
has focused on whether these models are intelligent agents. We present an
alternative perspective. We argue that these artificial intelligence models are
cultural technologies that enhance cultural transmission in the modern world,
and are efficient imitation engines. We explore what AI models can tell us
about imitation and innovation by evaluating their capacity to design new tools
and discover novel causal structures, and contrast their responses with those
of human children. Our work serves as a first step in determining which
particular representations and competences, as well as which kinds of knowledge
or skill, can be derived from particular learning techniques and data.
Critically, our findings suggest that machines may need more than large scale
language and images to achieve what a child can do.
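As a concrete illustration of the kind of probe the abstract describes, the sketch below scores whether a model, given a tool-innovation vignette, picks a functionally relevant object over a superficially similar one. The vignette wording and the `query_model` stub are illustrative assumptions, not the authors' actual stimuli or evaluation code.

```python
# A minimal sketch of a tool-innovation probe in the spirit of the paper.
# The vignette and stub below are hypothetical, not the authors' materials.

def query_model(prompt: str) -> str:
    """Stand-in for a call to any text-completion model."""
    return "ruler"  # a purely imitative answer echoes surface similarity

def score_tool_innovation(query) -> bool:
    superficial, functional = "ruler", "teapot"
    vignette = (
        "You want to draw a perfect circle, but there is no compass. "
        f"Which object would you use instead: a {superficial} (looks like "
        f"a typical drawing tool) or a round-bottomed {functional} (its "
        "base can be traced)? Answer with one word."
    )
    answer = query(vignette).strip().lower()
    # Innovation credit only for choosing the functionally relevant object.
    return functional in answer

if __name__ == "__main__":
    innovated = score_tool_innovation(query_model)
    print("functional choice" if innovated else "superficial choice")
```

Swapping `query_model` for a real model call, and comparing hit rates against children given the same vignettes, is the shape of the imitation-versus-innovation contrast the abstract draws.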
Related papers
- Proceedings of the First International Workshop on Next-Generation Language Models for Knowledge Representation and Reasoning (NeLaMKRR 2024)
Reasoning is an essential component of human intelligence as it plays a fundamental role in our ability to think critically.
The recent leap forward in natural language processing, driven by the emergence of transformer-based language models, hints at the possibility that these models exhibit reasoning abilities.
Despite ongoing discussion about what reasoning in language models amounts to, it remains difficult to pin down the extent to which these models are actually capable of reasoning.
arXiv Detail & Related papers (2024-10-07T02:31:47Z)
- Visual Knowledge in the Big Model Era: Retrospect and Prospect
Visual knowledge is a new form of knowledge representation that can encapsulate visual concepts and their relations in a succinct, comprehensive, and interpretable manner.
As knowledge about the visual world has been identified as an indispensable component of human cognition and intelligence, visual knowledge is poised to play a pivotal role in establishing machine intelligence.
arXiv Detail & Related papers (2024-04-05T07:31:24Z)
- LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models
We present a novel interactive application aimed at understanding the internal mechanisms of large vision-language models.
Our interface is designed to enhance the interpretability of image patches, which are instrumental in generating an answer.
We present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA.
arXiv Detail & Related papers (2024-04-03T23:57:34Z)
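The patch-level interpretability described in the entry above can be pictured with a generic relevance map. The sketch below is not LVLM-Interpret's actual method, only an assumed illustration: it averages the attention that generated answer tokens pay to image-patch tokens and reshapes the result into a grid over the image.

```python
# A generic illustration of patch-level relevance, not LVLM-Interpret's
# actual algorithm: average attention into image-patch tokens, then map
# the scores back onto the spatial patch grid.
import numpy as np

def patch_relevance(attn: np.ndarray, n_patches: int, grid: int) -> np.ndarray:
    """attn: (heads, answer_tokens, sequence) attention weights, where the
    first `n_patches` sequence positions are image-patch tokens."""
    to_patches = attn[:, :, :n_patches]       # keep attention into patches
    relevance = to_patches.mean(axis=(0, 1))  # average over heads and tokens
    return relevance.reshape(grid, grid)      # spatial map over the image

# Toy example: 8 heads, 5 answer tokens, 24x24 = 576 patches in a 600-token sequence.
rng = np.random.default_rng(0)
attn = rng.random((8, 5, 600))
attn /= attn.sum(axis=-1, keepdims=True)      # normalize rows like softmax output
heatmap = patch_relevance(attn, n_patches=576, grid=24)
print(heatmap.shape)  # (24, 24)
```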
- Video as the New Language for Real-World Decision Making
Video data captures important information about the physical world that is difficult to express in language.
Video can serve as a unified interface that can absorb internet knowledge and represent diverse tasks.
We identify major impact opportunities in domains such as robotics, self-driving, and science.
arXiv Detail & Related papers (2024-02-27T02:05:29Z)
- Large Language Models for Scientific Synthesis, Inference and Explanation
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment its "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z)
- Are Emergent Abilities in Large Language Models just In-Context Learning?
We present a novel theory that explains emergent abilities, taking into account their potential confounding factors.
Our findings suggest that purported emergent abilities are not truly emergent, but result from a combination of in-context learning, model memory, and linguistic knowledge.
arXiv Detail & Related papers (2023-09-04T20:54:11Z)
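The confound the entry above points to can be made concrete with a small prompt comparison: the same task posed zero-shot versus with in-context exemplars. The task text and the `model` stub below are hypothetical; any text-completion function would slot in.

```python
# A sketch of the in-context-learning confound: the identical task posed
# with and without exemplars in the prompt. The stub is a stand-in for a
# real model call.

def model(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "negative" if "Exemplars:" in prompt else "unsure"

TASK = "Label the sentiment of: 'The plot dragged terribly.'\nLabel:"
EXEMPLARS = (
    "Exemplars:\n"
    "'A joyous triumph.' -> positive\n"
    "'A tedious mess.' -> negative\n\n"
)

zero_shot = model(TASK)
few_shot = model(EXEMPLARS + TASK)

# If the ability shows up only with exemplars in the prompt, in-context
# learning, rather than emergence with scale, is the simpler explanation.
print(f"zero-shot: {zero_shot!r}, few-shot: {few_shot!r}")
```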
- On the application of Large Language Models for language teaching and assessment technology
We look at the potential for incorporating large language models in AI-driven language teaching and assessment systems.
We find that larger language models offer improvements over previous models in text generation.
For automated grading and grammatical error correction, tasks whose progress is tracked on well-known benchmarks, early investigations indicate that large language models on their own do not improve on state-of-the-art results.
arXiv Detail & Related papers (2023-07-17T11:12:56Z)
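The benchmark scoring mentioned in the entry above can be illustrated with the edit-based metric conventionally used for grammatical error correction, F0.5, which weights precision over recall. The edit representation below is a simplified assumption; real scorers such as ERRANT extract and align edits far more carefully.

```python
# A minimal sketch of edit-based GEC scoring. Edits are simplified here to
# (start, end, replacement) tuples; benchmark scorers align edits properly.

def f_beta(hyp_edits: set, ref_edits: set, beta: float = 0.5) -> float:
    matched = len(hyp_edits & ref_edits)
    if matched == 0:
        return 0.0
    precision = matched / len(hyp_edits)
    recall = matched / len(ref_edits)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Reference corrects two spans; the system finds one of them plus a spurious edit.
ref = {(0, 1, "He"), (3, 4, "went")}
hyp = {(0, 1, "He"), (5, 6, "too")}
print(round(f_beta(hyp, ref), 3))  # precision 0.5, recall 0.5 -> F0.5 = 0.5
```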
- Turning large language models into cognitive models
We show that large language models can be turned into cognitive models.
These models offer accurate representations of human behavior, even outperforming traditional cognitive models in two decision-making domains.
Taken together, these results suggest that large, pre-trained models can be adapted to become generalist cognitive models.
arXiv Detail & Related papers (2023-06-06T18:00:01Z)
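One plausible reading of the adaptation described in the entry above, sketched under assumptions rather than from the paper's actual recipe: freeze a pre-trained model's embeddings of decision problems and fit a small readout that predicts human choices. The random vectors below stand in for real embeddings.

```python
# A sketch of adapting a pre-trained model into a cognitive model: fit a
# logistic readout on frozen embeddings to predict human choices. Random
# vectors stand in for real embeddings; the paper's recipe may differ.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))   # embeddings of 200 decision-problem texts
true_w = rng.normal(size=64)
y = (X @ true_w + rng.normal(size=200) > 0).astype(float)  # human choices (0/1)

w = np.zeros(64)
for _ in range(500):             # plain logistic regression by gradient descent
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"choice-prediction accuracy: {accuracy:.2f}")
```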
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Vygotskian Autotelic Artificial Intelligence: Language and Culture Internalization for Human-Like AI
This perspective paper proposes a new AI paradigm in the quest for artificial lifelong skill discovery.
We focus on language especially, and how its structure and content may support the development of new cognitive functions in artificial agents.
We justify the approach by uncovering examples of new artificial cognitive functions that emerge from interactions between language and embodiment.
arXiv Detail & Related papers (2022-06-02T16:35:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all of the above) and is not responsible for any consequences of its use.