Concept-Oriented Deep Learning with Large Language Models
- URL: http://arxiv.org/abs/2306.17089v2
- Date: Tue, 19 Sep 2023 21:15:52 GMT
- Title: Concept-Oriented Deep Learning with Large Language Models
- Authors: Daniel T. Chang
- Abstract summary: Large Language Models (LLMs) have been successfully used in many natural-language tasks and applications, including text generation and AI chatbots.
They are also a promising new technology for concept-oriented deep learning (CODL).
We discuss conceptual understanding in visual-language LLMs, the most important multimodal LLMs, and major uses of them for CODL, including concept extraction from images, concept graph extraction from images, and concept learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have been successfully used in many
natural-language tasks and applications including text generation and AI
chatbots. They are also a promising new technology for concept-oriented deep
learning (CODL). However, the prerequisite is that LLMs understand concepts and
ensure conceptual consistency. We discuss these in this paper, as well as major
uses of LLMs for CODL, including concept extraction from text, concept graph
extraction from text, and concept learning. Human knowledge consists of both
symbolic (conceptual) knowledge and embodied (sensory) knowledge. Text-only
LLMs, however, can represent only symbolic (conceptual) knowledge. Multimodal
LLMs, on the other hand, are capable of representing the full range (conceptual
and sensory) of human knowledge. We discuss conceptual understanding in
visual-language LLMs, the most important multimodal LLMs, and major uses of
them for CODL, including concept extraction from images, concept graph extraction
from images, and concept learning. While uses of LLMs for CODL are valuable
standalone, they are particularly valuable as part of LLM applications such as
AI chatbots.
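As a concrete illustration of these use cases, below is a minimal sketch of concept extraction and concept-graph extraction from text via LLM prompting. The `complete` function is a hypothetical stand-in for any chat-completion API (stubbed here so the example runs), and the prompt wording and JSON schema are illustrative assumptions, not the paper's method.

```python
import json

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat-completion call.
    Stubbed with a canned response so the sketch is runnable."""
    return json.dumps({
        "concepts": ["large language model", "concept", "deep learning"],
        "edges": [["large language model", "used-for", "deep learning"],
                  ["concept", "part-of", "deep learning"]],
    })

PROMPT = (
    "Extract the key concepts from the text below, then list directed "
    "relations between them as [head, relation, tail] triples. "
    "Answer as JSON with keys 'concepts' and 'edges'.\n\nText: {text}"
)

def extract_concept_graph(text: str) -> dict:
    """Concept extraction and concept-graph extraction in a single LLM call."""
    raw = complete(PROMPT.format(text=text))
    graph = json.loads(raw)  # may raise if the LLM ignores the schema
    assert set(graph) >= {"concepts", "edges"}, "unexpected LLM output"
    return graph

print(extract_concept_graph("LLMs are a promising technology for CODL."))
```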
Related papers
- Can Large Language Models Understand DL-Lite Ontologies? An Empirical Study [10.051572826948762]
Large language models (LLMs) have shown significant achievements in solving a wide range of tasks.
We empirically analyze LLMs' capability of understanding Description Logic (DL-Lite) ontologies.
We find that LLMs understand the formal syntax and model-theoretic semantics of concepts and roles.
arXiv Detail & Related papers (2024-06-25T13:16:34Z)
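To illustrate the kind of probing described in the entry above (a sketch, not the paper's protocol), one can compare an LLM's answer about a DL-Lite subsumption with the entailment computed from the axioms; `complete` is a hypothetical completion call, stubbed here so the example runs.

```python
# A toy DL-Lite TBox: atomic concept inclusions of the form A ⊑ B.
TBOX = [("Professor", "FacultyMember"), ("FacultyMember", "Employee")]

def entails(tbox, sub, sup):
    """Ground truth: reflexive-transitive closure over the inclusions."""
    if sub == sup:
        return True
    return any(a == sub and entails(tbox, b, sup) for a, b in tbox)

def ask_llm(tbox, sub, sup, complete):
    """Ask an LLM whether the subsumption follows from the axioms."""
    axioms = "; ".join(f"{a} SubClassOf {b}" for a, b in tbox)
    prompt = (f"Given the DL-Lite axioms: {axioms}. "
              f"Does {sub} SubClassOf {sup} follow? Answer Yes or No.")
    return complete(prompt).strip().lower().startswith("yes")

# Stub LLM that always answers Yes; a real study would call an actual model.
stub = lambda prompt: "Yes"
for sub, sup in [("Professor", "Employee"), ("Employee", "Professor")]:
    print(sub, sup, "llm:", ask_llm(TBOX, sub, sup, stub),
          "truth:", entails(TBOX, sub, sup))
```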
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show that a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by the resulting LLM-based Symbolic Programs (LSPs) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
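A minimal sketch of the LSP idea from the entry above, under stated assumptions: each LLM prompt acts as an interpretable module mapping raw input to a natural-language concept, and a small symbolic rule combines the module outputs. The prompts, concepts, and routing rule are invented for illustration, and the LLM is stubbed with keyword matching.

```python
def llm_module(prompt: str, text: str) -> bool:
    """An LLM-backed predicate: 'does the text exhibit this concept?'
    Stubbed with keyword matching so the sketch runs without an API."""
    keyword = prompt.split("'")[1]  # e.g. extracts 'refund' from the prompt
    return keyword in text.lower()

# Interpretable modules: each is just a natural-language prompt.
MODULES = {
    "asks_for_refund": "Does the message mention 'refund'?",
    "is_angry":        "Does the message mention 'angry'?",
}

def route(message: str) -> str:
    """Symbolic rule over LLM-concept outputs (an if-then-else program)."""
    concepts = {name: llm_module(p, message) for name, p in MODULES.items()}
    if concepts["asks_for_refund"] and concepts["is_angry"]:
        return "escalate_to_human"
    if concepts["asks_for_refund"]:
        return "refund_workflow"
    return "default_reply"

print(route("I am angry and I want a refund now."))  # -> escalate_to_human
```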
- A Concept-Based Explainability Framework for Large Multimodal Models [52.37626977572413]
We propose a dictionary-learning-based approach applied to the representations of tokens.
We show that these concepts are semantically well grounded in both vision and text.
We show that the extracted multimodal concepts are useful for interpreting the representations of test samples.
arXiv Detail & Related papers (2024-06-12T10:48:53Z)
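A sketch of the general recipe in the entry above (not necessarily the authors' exact pipeline): fit a sparse dictionary over a matrix of token representations so that each learned atom can be inspected as a candidate concept. Random data stands in here for hidden states from a multimodal model.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Stand-in for token representations: 200 tokens x 64 hidden dimensions.
X = rng.normal(size=(200, 64))

# Learn a small dictionary of candidate "concept" atoms with sparse codes.
dl = DictionaryLearning(n_components=10, transform_algorithm="lasso_lars",
                        transform_alpha=0.1, random_state=0)
codes = dl.fit_transform(X)  # (200, 10): sparse concept activations per token
atoms = dl.components_       # (10, 64): one learned direction per concept

# The tokens that activate an atom most strongly ground that concept.
top_tokens_for_atom_0 = np.argsort(-np.abs(codes[:, 0]))[:5]
print(top_tokens_for_atom_0)
```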
- Reasoning about concepts with LLMs: Inconsistencies abound [13.042591838719936]
Large language models (LLMs) often display significant inconsistencies in their knowledge.
In particular, we have been able to significantly enhance the conceptual-reasoning performance of LLMs of various sizes with openly available weights.
arXiv Detail & Related papers (2024-05-30T15:38:54Z)
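One simple way to surface the inconsistencies described above (a sketch; the paper's methodology may differ) is to ask the same yes/no conceptual question in several paraphrased forms and measure how often the answers agree. The questions and the stub model are invented for illustration.

```python
def consistency(question_variants, complete):
    """Ask paraphrases of one yes/no question and measure answer agreement."""
    answers = [complete(q).strip().lower().startswith("yes")
               for q in question_variants]
    agreement = max(answers.count(True), answers.count(False)) / len(answers)
    return answers, agreement

variants = [
    "Is a penguin a bird? Answer Yes or No.",
    "Does the concept 'bird' subsume 'penguin'? Answer Yes or No.",
    "True or false: every penguin is a bird. Answer Yes or No.",
]
# Stub LLM that wobbles on the reworded form, as real models sometimes do.
stub = lambda q: "No" if "subsume" in q else "Yes"
print(consistency(variants, stub))  # ([True, False, True], 0.666...)
```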
- Large Knowledge Model: Perspectives and Challenges [37.42721596964844]
Large Language Models (LLMs) epitomize the pre-training of extensive, sequence-based world knowledge into neural networks.
This article explores large models through the lens of "knowledge".
Considering the intricate nature of human knowledge, we advocate for the creation of Large Knowledge Models (LKMs).
arXiv Detail & Related papers (2023-12-05T12:07:30Z)
- Enabling Large Language Models to Learn from Rules [99.16680531261987]
We are inspired by the fact that humans can learn new tasks or knowledge in another way: by learning from rules.
We propose rule distillation, which first uses the strong in-context abilities of LLMs to extract knowledge from textual rules.
Our experiments show that making LLMs learn from rules by our method is much more efficient than example-based learning in terms of both sample size and generalization ability.
arXiv Detail & Related papers (2023-11-15T11:42:41Z)
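A sketch of the two-stage idea above, under loose assumptions: the LLM first applies the textual rule in-context to label unlabeled inputs, and the resulting (input, label) pairs then train a student; the majority-vote student here merely stands in for fine-tuning an LLM on the generated pairs.

```python
from collections import Counter

def teacher(rule: str, x: str, complete) -> str:
    """Stage 1: an LLM applies a textual rule in-context to label an input."""
    return complete(f"Rule: {rule}\nInput: {x}\nLabel (pos/neg):").strip()

rule = "If the number is even, the label is pos; otherwise neg."
inputs = [str(n) for n in range(10)]

# Stub teacher that follows the rule; a real call would hit an LLM API.
stub = lambda p: ("pos" if int(p.split("Input: ")[1].split("\n")[0]) % 2 == 0
                  else "neg")
dataset = [(x, teacher(rule, x, stub)) for x in inputs]

# Stage 2: "distill" into a tiny parametric student (a per-parity majority
# vote, standing in for fine-tuning an LLM on the generated pairs).
votes = {0: Counter(), 1: Counter()}
for x, y in dataset:
    votes[int(x) % 2][y] += 1
student = {parity: c.most_common(1)[0][0] for parity, c in votes.items()}
print(student)  # {0: 'pos', 1: 'neg'}
```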
- Towards Concept-Aware Large Language Models [56.48016300758356]
Concepts play a pivotal role in various human cognitive functions, including learning, reasoning and communication.
There is very little work on endowing machines with the ability to form and reason with concepts.
In this work, we analyze how well contemporary large language models (LLMs) capture human concepts and their structure.
arXiv Detail & Related papers (2023-11-03T12:19:22Z)
- TouchStone: Evaluating Vision-Language Models by Language Models [91.69776377214814]
We propose an evaluation method that uses strong large language models as judges to comprehensively evaluate the various abilities of LVLMs.
We construct TouchStone, a comprehensive visual dialogue dataset consisting of open-world images and questions that covers five major categories of abilities and 27 subtasks.
We demonstrate that powerful LLMs, such as GPT-4, can effectively score dialogue quality by leveraging their textual capabilities alone.
arXiv Detail & Related papers (2023-08-31T17:52:04Z)
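A sketch of the judging setup above (the actual TouchStone prompts and protocol differ): a text-only judge model receives the question, a textual description of the image, and the candidate answer, and returns a numeric score. The prompt and stub are illustrative assumptions.

```python
JUDGE_PROMPT = """You are grading a vision-language model's answer.
Image (described in text): {description}
Question: {question}
Model answer: {answer}
Score the answer from 1 (poor) to 10 (excellent). Reply with the number only."""

def judge(description, question, answer, complete) -> int:
    """Have a text-only LLM score a vision-language answer."""
    reply = complete(JUDGE_PROMPT.format(description=description,
                                         question=question, answer=answer))
    return int(reply.strip().split()[0])  # may raise if the judge rambles

# Stub judge; a real call would go to a strong text-only LLM such as GPT-4.
stub = lambda prompt: "8"
print(judge("A cat sleeping on a red sofa.",
            "What color is the sofa?", "The sofa is red.", stub))
```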
- Link-Context Learning for Multimodal LLMs [40.923816691928536]
Link-context learning (LCL) emphasizes "reasoning from cause and effect" to augment the learning capabilities of multimodal LLMs (MLLMs).
LCL guides the model to discern not only the analogy but also the underlying causal associations between data points.
To facilitate the evaluation of this novel approach, we introduce the ISEKAI dataset.
arXiv Detail & Related papers (2023-08-15T17:33:24Z)
- COPEN: Probing Conceptual Knowledge in Pre-trained Language Models [60.10147136876669]
Conceptual knowledge is fundamental to human cognition and knowledge bases.
Existing knowledge probing work focuses only on the factual knowledge of pre-trained language models (PLMs) and ignores conceptual knowledge.
We design three tasks to probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in contexts.
For these tasks, we collect and annotate 24k data instances covering 393 concepts, forming COPEN, a COnceptual knowledge Probing bENchmark.
arXiv Detail & Related papers (2022-11-08T08:18:06Z)
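A sketch of the first probing task above (organizing entities by conceptual similarity), with an invented item format: the model must pick the candidate entity conceptually closest to the query entity. A hand-made concept table stands in for PLM-derived similarity scores.

```python
def probe_conceptual_similarity(query, candidates, score):
    """Pick the candidate the model finds most conceptually similar to `query`.
    `score(a, b)` stands in for a PLM-derived similarity (e.g. from logits)."""
    return max(candidates, key=lambda c: score(query, c))

# Toy item in the spirit of COPEN: 'violin' should group with 'cello'
# (instruments) rather than with the merely co-occurring 'concert hall'.
item = {"query": "violin", "candidates": ["cello", "concert hall", "pizza"]}

# Stub similarity from a tiny hand-made concept table; a real probe would
# compare PLM representations or answer likelihoods instead.
CONCEPTS = {"violin": "instrument", "cello": "instrument",
            "concert hall": "venue", "pizza": "food"}
stub = lambda a, b: 1.0 if CONCEPTS[a] == CONCEPTS[b] else 0.0

print(probe_conceptual_similarity(item["query"], item["candidates"], stub))
```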