Latent Lab: Large Language Models for Knowledge Exploration
- URL: http://arxiv.org/abs/2311.13051v1
- Date: Tue, 21 Nov 2023 23:23:16 GMT
- Title: Latent Lab: Large Language Models for Knowledge Exploration
- Authors: Kevin Dunnell, Trudy Painter, Andrew Stoddard, Andy Lippman
- Abstract summary: We present "Latent Lab," an interactive tool for discovering connections among MIT Media Lab research projects.
The work offers insights into collaborative AI systems by addressing the challenges of organizing, searching, and synthesizing content.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper investigates the potential of AI models, particularly large
language models (LLMs), to support knowledge exploration and augment human
creativity during ideation. We present "Latent Lab" an interactive tool for
discovering connections among MIT Media Lab research projects, emphasizing
"exploration" over search. The work offers insights into collaborative AI
systems by addressing the challenges of organizing, searching, and synthesizing
content. In a user study, the tool's success was evaluated based on its ability
to introduce users to an unfamiliar knowledge base, ultimately setting the
groundwork for the ongoing advancement of human-AI knowledge exploration
systems.
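The abstract describes "exploration over search" across a corpus of research projects. A minimal sketch of how such exploration might work, assuming an embedding-similarity approach (the project names and toy 3-d vectors below are illustrative stand-ins, not data from Latent Lab):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def neighbors(query_vec, project_vecs, k=2):
    """Rank projects by similarity to a query embedding, returning the top k."""
    ranked = sorted(project_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy 3-d "embeddings" standing in for real model outputs.
projects = {
    "wearable-sensing": [0.9, 0.1, 0.0],
    "generative-music": [0.1, 0.9, 0.2],
    "civic-media":      [0.2, 0.2, 0.9],
}
print(neighbors([0.8, 0.2, 0.1], projects, k=2))
```

Surfacing nearby-but-unfamiliar neighbors, rather than exact query matches, is one plausible way to bias a browsing interface toward exploration.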
Related papers
- User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study [5.775094401949666]
This study is situated in the Human-Centered Artificial Intelligence (HCAI) domain.
It focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms.
arXiv Detail & Related papers (2024-10-21T12:32:39Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Synthesizing Evolving Symbolic Representations for Autonomous Systems [2.4233709516962785]
This paper presents an open-ended learning system able to synthesize from scratch its experience into a PPDDL representation and update it over time.
The system explores the environment and iteratively: (a) discovers options, (b) explores the environment using those options, (c) abstracts the collected knowledge, and (d) plans.
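The four-step loop above can be sketched as follows. Everything here (the toy environment, the `abstract` and `plan` placeholders) is a hypothetical stand-in, not the paper's PPDDL implementation:

```python
class ToyEnv:
    """Stand-in environment: 'options' are just action names."""
    def __init__(self):
        self.step = 0

    def discover_options(self, known):
        self.step += 1
        return [f"option-{self.step}"]

    def explore(self, options):
        return {opt: f"effect-of-{opt}" for opt in options}

def abstract(observations):
    """Summarize raw observations into symbolic facts (placeholder)."""
    return sorted(observations.values())

def plan(knowledge):
    """A trivial 'plan': act on the most recent abstraction."""
    return knowledge[-1][:1]

def learning_cycle(env, n_iterations=3):
    options, knowledge, plans = [], [], []
    for _ in range(n_iterations):
        options.extend(env.discover_options(options))   # (a) discover options
        observations = env.explore(options)             # (b) explore using options
        knowledge.append(abstract(observations))        # (c) abstract the knowledge
        plans.append(plan(knowledge))                   # (d) plan
    return knowledge, plans

knowledge, plans = learning_cycle(ToyEnv())
```

The point of the sketch is the control flow: each iteration grows the option set, and abstraction and planning always operate on the accumulated knowledge rather than on raw observations.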
arXiv Detail & Related papers (2024-09-18T07:23:26Z)
- Automating Knowledge Discovery from Scientific Literature via LLMs: A Dual-Agent Approach with Progressive Ontology Prompting [59.97247234955861]
We introduce a novel framework based on large language models (LLMs) that combines a progressive prompting algorithm with a dual-agent system, named LLM-Duo.
Our method identifies 2,421 interventions from 64,177 research articles in the speech-language therapy domain.
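A dual-agent extraction pipeline of this general shape can be sketched as a propose-and-verify loop. The function names and the keyword-spotting "agents" below are illustrative assumptions, not the LLM-Duo API:

```python
def extract_with_duo(sentence, identifier, verifier, max_rounds=3):
    """Dual-agent loop: an identifier agent proposes candidate interventions,
    a verifier agent accepts or rejects each one within a bounded number
    of rounds."""
    accepted = []
    for candidate in identifier(sentence):
        for _ in range(max_rounds):
            if verifier(sentence, candidate):
                accepted.append(candidate)
                break
    return accepted

# Toy agents: keyword spotting in place of real LLM calls.
def toy_identifier(sentence):
    words = sentence.lower().replace(".", "").split()
    return [w for w in words if w.endswith("therapy")]

def toy_verifier(sentence, candidate):
    return candidate in sentence.lower()

found = extract_with_duo("Patients received articulation therapy.",
                         toy_identifier, toy_verifier)
```

Separating proposal from verification is the design choice of interest: the second agent acts as a check on the first, which is one way to keep precision up when scaling extraction to tens of thousands of articles.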
arXiv Detail & Related papers (2024-08-20T16:42:23Z) - Enhancing Exploratory Learning through Exploratory Search with the Emergence of Large Language Models [3.1997856595607024]
This study attempts to unpack this complexity by combining exploratory search strategies with the theories of exploratory learning.
Our work adapts Kolb's learning model by incorporating high-frequency exploration and feedback loops, aiming to promote deep cognitive and higher-order cognitive skill development in students.
arXiv Detail & Related papers (2024-08-09T04:30:16Z) - WESE: Weak Exploration to Strong Exploitation for LLM Agents [95.6720931773781]
This paper proposes a novel approach, Weak Exploration to Strong Exploitation (WESE) to enhance LLM agents in solving open-world interactive tasks.
WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.
A knowledge graph-based strategy is then introduced to store the acquired knowledge and extract task-relevant knowledge, enhancing the stronger agent in success rate and efficiency for the exploitation task.
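The decoupled weak-explore / strong-exploit flow can be sketched as below. This is an illustrative reading of the abstract, not the paper's implementation: the "knowledge graph" is reduced to a list of triples with a keyword filter, and both agents are toy functions:

```python
def wese(task, weak_agent, strong_agent):
    """Weak-to-strong sketch: a cheap weak agent explores and records
    (entity, relation, entity) triples; a simple term filter retrieves the
    task-relevant subset; the strong agent exploits only that subset."""
    # 1. Weak exploration: gather global knowledge as a triple store.
    triples = weak_agent(task)
    # 2. Knowledge-graph-style retrieval: keep triples mentioning task terms.
    terms = set(task.lower().split())
    relevant = [t for t in triples
                if terms & {t[0].lower(), t[2].lower()}]
    # 3. Strong exploitation with only the filtered knowledge.
    return strong_agent(task, relevant)

def toy_weak(task):
    return [("kitchen", "contains", "apple"),
            ("garage", "contains", "hammer")]

def toy_strong(task, knowledge):
    return f"use {knowledge[0][2]}" if knowledge else "explore more"

result = wese("find the apple", toy_weak, toy_strong)
```

The efficiency argument rests on step 3: the expensive strong agent never sees the full exploration trace, only the retrieved slice relevant to its task.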
arXiv Detail & Related papers (2024-04-11T03:31:54Z)
- Large Language Models for Generative Information Extraction: A Survey [89.71273968283616]
Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation.
We present an extensive overview by categorizing these works in terms of various IE subtasks and techniques.
We empirically analyze the most advanced methods and discover the emerging trend of IE tasks with LLMs.
arXiv Detail & Related papers (2023-12-29T14:25:22Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Exploiting Language Models as a Source of Knowledge for Cognitive Agents [4.557963624437782]
Large language models (LLMs) provide capabilities far beyond sentence completion, including question answering, summarization, and natural-language inference.
While many of these capabilities have potential applications to cognitive systems, our research exploits language models as a source of task knowledge for cognitive agents, i.e., agents realized via a cognitive architecture.
arXiv Detail & Related papers (2023-09-05T15:18:04Z)
- Language Models as a Knowledge Source for Cognitive Agents [9.061356032792954]
Language models (LMs) are sentence-completion engines trained on massive corpora.
This paper outlines the challenges and opportunities for using language models as a new knowledge source for cognitive systems.
It also identifies possible ways to improve knowledge extraction from language models using the capabilities provided by cognitive systems.
arXiv Detail & Related papers (2021-09-17T01:12:34Z)
- A Review on Intelligent Object Perception Methods Combining Knowledge-based Reasoning and Machine Learning [60.335974351919816]
Object perception is a fundamental sub-field of Computer Vision.
Recent works seek ways to integrate knowledge engineering in order to expand the level of intelligence of the visual interpretation of objects.
arXiv Detail & Related papers (2019-12-26T13:26:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.