Show, Don't Tell: Evaluating Large Language Models Beyond Textual Understanding with ChildPlay
- URL: http://arxiv.org/abs/2407.11068v4
- Date: Thu, 30 Jan 2025 01:04:40 GMT
- Title: Show, Don't Tell: Evaluating Large Language Models Beyond Textual Understanding with ChildPlay
- Authors: Gonçalo Hora de Carvalho, Oscar Knap, Robert Pollice
- Abstract summary: We develop a benchmark to test the generalization of state-of-the-art large language models on broader problems beyond linguistic tasks.
Using well-known simple games like Tic-Tac-Toe, Connect Four, and Battleship, we test their strategic capabilities and spatial reasoning.
Results show GPT models perform poorly in these games, failing to anticipate losing moves, play correctly, or recognize spatial relationships.
- Abstract: We develop a systematic benchmark set to test the generalization of state-of-the-art large language models on broader problems beyond linguistic tasks and evaluate it on a systematic progression of GPT models (GPT-3.5, GPT-4, GPT-4o, GPT-4o-mini). Using well-known simple games like Tic-Tac-Toe, Connect Four, and Battleship, all encoded in ASCII, we test their strategic capabilities and spatial reasoning. To probe generalization, we introduce three new games: LEGO Connect Language (LCL) for spatial logic, a shape recognition game, and Guess-the-SMILES (GtS), an advanced spatial logic benchmark in chemistry. Results show that, despite proficiency in standard benchmarks, GPT models perform poorly in these games, failing to anticipate losing moves, play correctly, or recognize spatial relationships. Except for Tic-Tac-Toe and GtS, a systematic progression in gameplay performance as models are formally improved (GPT-3.5, GPT-4, GPT-4o) is not observed. GPT-4 succeeds in shape recognition, but all models consistently struggle with LCL and GtS. This suggests that while GPT models can emulate conversational proficiency and basic rule comprehension, they have limited cognitive flexibility and generalization in strategy and spatial reasoning. Our findings, highlighted with our benchmark suite (ChildPlay GitHub Repository), caution against claims of emergent intelligence in GPT models, which appear more specialized than general.
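The abstract does not specify an interface, but the setup it describes is concrete enough to sketch: a board serialized to ASCII, the model prompted for a move, and the reply checked for legality and for missed threats. Below is a minimal Python sketch of that protocol for Tic-Tac-Toe; all names are hypothetical and `query_model` is a stub, not the API of the ChildPlay repository.

```python
# Minimal sketch of a ChildPlay-style ASCII game probe (hypothetical names;
# the actual benchmark lives in the ChildPlay GitHub repository).
from typing import List

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def render_ascii(board: List[str]) -> str:
    """Serialize a 9-cell Tic-Tac-Toe board to the ASCII form shown to the model."""
    rows = [" | ".join(board[i:i + 3]) for i in (0, 3, 6)]
    return "\n---------\n".join(rows)

def winning_cells(board: List[str], player: str) -> List[int]:
    """Cells where `player` would complete three in a row on the next move."""
    cells = []
    for line in WIN_LINES:
        marks = [board[i] for i in line]
        if marks.count(player) == 2 and marks.count(" ") == 1:
            cells.append(line[marks.index(" ")])
    return cells

def is_blunder(board: List[str], move: int, model_mark: str = "X") -> bool:
    """True if the model ignored a cell that wins the game for the opponent next turn."""
    opponent = "O" if model_mark == "X" else "X"
    threats = winning_cells(board, opponent)
    return bool(threats) and move not in threats

board = ["X", " ", " ",
         "O", "O", " ",
         " ", "X", " "]
prompt = (f"You are X. The board is:\n{render_ascii(board)}\n"
          "Reply with the index (0-8) of your move.")
# move = int(query_model(prompt))  # hypothetical LLM call
move = 2                           # stand-in answer for the sketch
print("legal:", board[move] == " ", "| blunder:", is_blunder(board, move))
```

A move can then be scored on the two axes the abstract mentions: whether it is legal at all, and whether it fails to block an opponent's immediate win, i.e. a losing move the model should have anticipated.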
Related papers
- Causal World Representation in the GPT Model [4.629721760278161]
Generative pre-trained transformer (GPT) models are tested on real-world games played with the intention of winning.
We find that GPT models tend to generate next moves that adhere to the game rules for sequences for which the attention mechanism encodes a causal structure with high confidence.
In general, in cases for which the GPT model generates moves that do not adhere to the game rules, it also fails to capture any causal structure.
arXiv Detail & Related papers (2024-12-10T12:05:03Z)
- Are Large Language Models Strategic Decision Makers? A Study of Performance and Bias in Two-Player Non-Zero-Sum Games [56.70628673595041]
Large Language Models (LLMs) have been increasingly used in real-world settings, yet their strategic decision-making abilities remain largely unexplored.
This work investigates the performance and merits of LLMs in canonical game-theoretic two-player non-zero-sum games, Stag Hunt and the Prisoner's Dilemma.
Our structured evaluation of GPT-3.5, GPT-4-Turbo, GPT-4o, and Llama-3-8B shows that these models, when making decisions in these games, are affected by at least one systematic bias; a payoff-matrix sketch of both games follows this entry.
arXiv Detail & Related papers (2024-07-05T12:30:02Z)
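For concreteness, the two games above can be written as payoff matrices; the numbers below are one conventional parameterization, not necessarily the values used in the paper.

```python
# Conventional payoff matrices (row player's payoff, column player's payoff).
# Values are illustrative; the paper may use different magnitudes.
STAG_HUNT = {("stag", "stag"): (4, 4), ("stag", "hare"): (0, 3),
             ("hare", "stag"): (3, 0), ("hare", "hare"): (3, 3)}

PRISONERS_DILEMMA = {("coop", "coop"): (3, 3), ("coop", "defect"): (0, 5),
                     ("defect", "coop"): (5, 0), ("defect", "defect"): (1, 1)}

def best_response(game: dict, opponent_action: str) -> str:
    """Row player's payoff-maximizing reply to a fixed opponent action."""
    actions = {row for row, _ in game}
    return max(actions, key=lambda a: game[(a, opponent_action)][0])

# In the Stag Hunt, matching the opponent is optimal; in the Prisoner's
# Dilemma, defection dominates regardless of the opponent's action.
print(best_response(STAG_HUNT, "stag"), best_response(STAG_HUNT, "hare"))
print(best_response(PRISONERS_DILEMMA, "coop"), best_response(PRISONERS_DILEMMA, "defect"))
```

The contrast is what makes the pair diagnostic: in the Stag Hunt the best reply depends on the opponent (a coordination problem), while in the Prisoner's Dilemma defection is dominant, so biased or inconsistent play is easy to detect.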
- Adaptable Logical Control for Large Language Models [68.27725600175013]
Ctrl-G is an adaptable framework that facilitates tractable and flexible control of model generation at inference time.
We show that Ctrl-G, when applied to a TULU2-7B model, outperforms GPT-3.5 and GPT-4 on the task of interactive text editing.
arXiv Detail & Related papers (2024-06-19T23:47:59Z)
- GameBench: Evaluating Strategic Reasoning Abilities of LLM Agents [4.209869303518743]
We introduce GameBench, a cross-domain benchmark for evaluating strategic reasoning abilities of large language models.
Our evaluations use GPT-3 and GPT-4 in their base form along with two scaffolding frameworks designed to enhance strategic reasoning ability: Chain-of-Thought (CoT) prompting and Reasoning Via Planning (RAP); a minimal CoT prompt sketch follows this entry.
Our results show that none of the tested models match human performance, and at worst GPT-4 performs worse than random action.
arXiv Detail & Related papers (2024-06-07T00:28:43Z)
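Of the two scaffolds named above, CoT is simple enough to sketch; RAP additionally builds an explicit search tree over world-model rollouts and is omitted here. The template below is a generic CoT pattern, not GameBench's exact prompt.

```python
def cot_move_prompt(game_rules: str, state: str, legal_moves: list) -> str:
    """Generic Chain-of-Thought scaffold: ask for reasoning before the move.

    Mirrors the CoT pattern GameBench evaluates; the benchmark's actual
    template may differ.
    """
    return (
        f"Rules:\n{game_rules}\n\n"
        f"Current state:\n{state}\n\n"
        f"Legal moves: {legal_moves}\n"
        "Think step by step about the consequences of each legal move, "
        "then end your answer with a line of the form 'MOVE: <move>'."
    )

def parse_move(answer: str) -> str:
    """Extract the final 'MOVE: ...' line from the model's reasoning."""
    for line in reversed(answer.strip().splitlines()):
        if line.startswith("MOVE:"):
            return line.removeprefix("MOVE:").strip()
    raise ValueError("no MOVE line found")

# Example use with any chat LLM:
# answer = query_model(cot_move_prompt(rules, ascii_board, moves))  # hypothetical call
# move = parse_move(answer)
```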
- DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models [92.6951708781736]
This work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5.
We find that GPT models can be easily misled to generate toxic and biased outputs and leak private information.
Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps.
arXiv Detail & Related papers (2023-06-20T17:24:23Z)
- GPT-4: A Review on Advancements and Opportunities in Natural Language Processing [0.0]
Generative Pre-trained Transformer 4 (GPT-4) is the fourth-generation language model in the GPT series, developed by OpenAI.
GPT-4 has a larger model size (reportedly more than one trillion parameters), better multilingual capabilities, improved contextual understanding, and stronger reasoning capabilities than GPT-3.
Some of the potential applications of GPT-4 include chatbots, personal assistants, language translation, text summarization, and question-answering.
arXiv Detail & Related papers (2023-05-04T22:46:43Z)
- Analyzing the Performance of GPT-3.5 and GPT-4 in Grammatical Error Correction [28.58384091374763]
GPT-3 and GPT-4 models are powerful, achieving high performance on a variety of Natural Language Processing tasks.
We perform experiments testing the capabilities of a GPT-3.5 model (text-davinci-003) and a GPT-4 model (gpt-4-0314) on major GEC benchmarks.
We report the performance of our best prompt on the BEA-2019 and JFLEG datasets, finding that the GPT models can perform well in a sentence-level revision setting; a minimal prompting sketch follows this entry.
arXiv Detail & Related papers (2023-03-25T03:08:49Z)
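A minimal sketch of the sentence-level revision setting described above, written against the current `openai` Python client (the paper used the 2023-era API). The instruction text is illustrative, not the paper's reported best prompt, and the `gpt-4-0314` snapshot may no longer be served.

```python
# Sketch of sentence-level GEC prompting; the system instruction is
# illustrative, not the prompt reported in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def correct_sentence(sentence: str, model: str = "gpt-4-0314") -> str:
    response = client.chat.completions.create(
        model=model,  # the snapshot evaluated in the paper; may be retired
        messages=[
            {"role": "system",
             "content": "Correct the grammar of the user's sentence. "
                        "Reply with only the corrected sentence."},
            {"role": "user", "content": sentence},
        ],
        temperature=0,  # near-deterministic output for evaluation
    )
    return response.choices[0].message.content.strip()

print(correct_sentence("She go to school every days."))
```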
- Sparks of Artificial General Intelligence: Early experiments with GPT-4 [66.1188263570629]
GPT-4, developed by OpenAI, was trained using an unprecedented scale of compute and data.
We demonstrate that GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more.
We believe GPT-4 could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.
arXiv Detail & Related papers (2023-03-22T16:51:28Z)
- A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models [71.42197262495056]
GPT series models have gained considerable attention due to their exceptional natural language processing capabilities.
We select six representative models, comprising two GPT-3 series models and four GPT-3.5 series models.
We evaluate their performance on nine natural language understanding (NLU) tasks using 21 datasets.
Our experiments reveal that the overall ability of GPT series models on NLU tasks does not increase gradually as the models evolve.
arXiv Detail & Related papers (2023-03-18T14:02:04Z)
- Prompting GPT-3 To Be Reliable [117.23966502293796]
This work decomposes reliability into four facets: generalizability, fairness, calibration, and factuality; a small calibration-error sketch follows this entry.
We find that GPT-3 outperforms smaller-scale supervised models by large margins on all these facets.
arXiv Detail & Related papers (2022-10-17T14:52:39Z)
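Of the four facets above, calibration is the most mechanical to operationalize; a standard expected calibration error (ECE) computation is sketched below. The 10-bin scheme is the common convention, not necessarily the paper's exact setup.

```python
# Standard 10-bin expected calibration error (ECE); a common measure of the
# calibration facet. Binning details may differ from the paper's setup.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """Weighted average gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: per-answer confidences and 0/1 correctness labels.
print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [1, 1, 0, 1]))
```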