Ask and You Shall Receive (a Graph Drawing): Testing ChatGPT's Potential
to Apply Graph Layout Algorithms
- URL: http://arxiv.org/abs/2303.08819v1
- Date: Fri, 3 Mar 2023 04:26:43 GMT
- Authors: Sara Di Bartolomeo, Giorgio Severi, Victor Schetinger, Cody Dunne
- Abstract summary: Large language models (LLMs) have recently taken the world by storm.
LLMs' ability to learn from vast amounts of data and apply complex operations may lead to interesting graph drawing results.
Natural language specification would make data visualization more accessible and user-friendly for a wider range of users.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large language models (LLMs) have recently taken the world by storm. They can
generate coherent text, hold meaningful conversations, and be taught concepts
and basic sets of instructions - such as the steps of an algorithm. In this
context, we are interested in exploring the application of LLMs to graph
drawing algorithms by performing experiments on ChatGPT. These algorithms are
used to improve the readability of graph visualizations. The probabilistic
nature of LLMs presents challenges to implementing algorithms correctly, but we
believe that LLMs' ability to learn from vast amounts of data and apply complex
operations may lead to interesting graph drawing results. For example, we could
enable users with limited coding backgrounds to use simple natural language to
create effective graph visualizations. Natural language specification would
make data visualization more accessible and user-friendly for a wider range of
users. Exploring LLMs' capabilities for graph drawing can also help us better
understand how to formulate complex algorithms for LLMs; a type of knowledge
that could transfer to other areas of computer science. Overall, our goal is to
shed light on the exciting possibilities of using LLMs for graph drawing while
providing a balanced assessment of the challenges and opportunities they
present. A free copy of this paper with all supplemental materials required to
reproduce our results is available on
https://osf.io/n5rxd/?view_only=f09cbc2621f44074810b7d843f1e12f9
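To make concrete the kind of graph layout algorithm the paper asks ChatGPT to apply, here is a minimal force-directed layout sketch in the style of Fruchterman-Reingold, one of the classic readability-improving algorithms in graph drawing. The graph, parameter values, and function name are illustrative assumptions, not taken from the paper or its supplemental materials.

```python
import math
import random

def force_directed_layout(nodes, edges, width=1.0, height=1.0,
                          iterations=50, seed=0):
    """Minimal Fruchterman-Reingold-style layout sketch.

    nodes: list of hashable node ids; edges: list of (u, v) pairs.
    Returns a dict mapping each node to an (x, y) position in the frame.
    """
    rng = random.Random(seed)
    pos = {v: [rng.uniform(0, width), rng.uniform(0, height)] for v in nodes}
    k = math.sqrt(width * height / len(nodes))  # ideal edge length

    for step in range(iterations):
        disp = {v: [0.0, 0.0] for v in nodes}
        # Repulsive force between every pair of nodes.
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                dist = math.hypot(dx, dy) or 1e-9
                f = k * k / dist
                disp[u][0] += dx / dist * f
                disp[u][1] += dy / dist * f
                disp[v][0] -= dx / dist * f
                disp[v][1] -= dy / dist * f
        # Attractive force along each edge.
        for u, v in edges:
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            dist = math.hypot(dx, dy) or 1e-9
            f = dist * dist / k
            disp[u][0] -= dx / dist * f
            disp[u][1] -= dy / dist * f
            disp[v][0] += dx / dist * f
            disp[v][1] += dy / dist * f
        # Limit movement with a cooling temperature and clamp to the frame.
        t = 0.1 * (1 - step / iterations)
        for v in nodes:
            d = math.hypot(*disp[v]) or 1e-9
            pos[v][0] = min(width, max(0.0, pos[v][0] + disp[v][0] / d * min(d, t)))
            pos[v][1] = min(height, max(0.0, pos[v][1] + disp[v][1] / d * min(d, t)))
    return {v: tuple(p) for v, p in pos.items()}

# Example: lay out a 4-cycle.
layout = force_directed_layout(["a", "b", "c", "d"],
                               [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])
```

The paper's experiments ask whether an LLM can follow step-by-step instructions like these (repulsion, attraction, cooling) purely from a natural-language description, rather than from code.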
Related papers
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs)
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
arXiv Detail & Related papers (2024-04-28T18:36:59Z) - Can we Soft Prompt LLMs for Graph Learning Tasks? [22.286189757942054]
GraphPrompter is a framework designed to align graph information with Large Language Models (LLMs) via soft prompts.
The framework unveils the substantial capabilities of LLMs as predictors in graph-related tasks.
arXiv Detail & Related papers (2024-02-15T23:09:42Z) - Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z) - Integrating Graphs with Large Language Models: Methods and Prospects [68.37584693537555]
Large language models (LLMs) have emerged as frontrunners, showcasing unparalleled prowess in diverse applications.
Merging the capabilities of LLMs with graph-structured data has been a topic of keen interest.
This paper bifurcates such integrations into two predominant categories.
arXiv Detail & Related papers (2023-10-09T07:59:34Z) - GraphText: Graph Reasoning in Text Space [32.00258972022153]
GraphText is a framework that translates graphs into natural language.
GraphText can achieve performance on par with, or even surpassing, that of supervised-trained graph neural networks.
It paves the way for interactive graph reasoning, allowing both humans and LLMs to communicate with the model seamlessly using natural language.
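The core idea behind translating graphs into natural language can be sketched as serializing nodes and edges into sentences an LLM can reason over. The template below is a hypothetical illustration, not GraphText's actual format.

```python
def graph_to_text(nodes, edges, labels=None):
    """Serialize a small graph into a natural-language description.

    A hypothetical template for illustration; real graph-to-text
    frameworks use richer, carefully engineered formats.
    """
    labels = labels or {}
    lines = [f"The graph has {len(nodes)} nodes: "
             + ", ".join(map(str, nodes)) + "."]
    for v in nodes:
        if v in labels:
            lines.append(f"Node {v} has label '{labels[v]}'.")
    for u, v in edges:
        lines.append(f"Node {u} is connected to node {v}.")
    return "\n".join(lines)

# Build a prompt describing a 3-node path graph with one labeled node.
prompt = graph_to_text(["a", "b", "c"], [("a", "b"), ("b", "c")],
                       labels={"a": "source"})
```

The resulting text can be placed directly into an LLM prompt, which is what enables the interactive, natural-language graph reasoning the abstract describes.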
arXiv Detail & Related papers (2023-10-02T11:03:57Z) - Exploring the Potential of Large Language Models (LLMs) in Learning on
Graphs [59.74814230246034]
Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities.
We investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors.
arXiv Detail & Related papers (2023-07-07T05:31:31Z) - Can Language Models Solve Graph Problems in Natural Language? [51.28850846990929]
Large language models (LLMs) are increasingly adopted for a variety of tasks with implicit graphical structures.
We propose NLGraph, a benchmark of graph-based problem solving designed in natural language.
arXiv Detail & Related papers (2023-05-17T08:29:21Z) - Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via
Prompt Augmented by ChatGPT [10.879701971582502]
We aim to develop a large language model (LLM) with the reasoning ability on complex graph data.
Inspired by the latest ChatGPT and Toolformer models, we propose the Graph-ToolFormer framework to teach LLMs themselves with prompts augmented by ChatGPT to use external graph reasoning API tools.
arXiv Detail & Related papers (2023-04-10T05:25:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.