LAMP: A Language Model on the Map
- URL: http://arxiv.org/abs/2403.09059v2
- Date: Tue, 12 Nov 2024 06:15:50 GMT
- Title: LAMP: A Language Model on the Map
- Authors: Pasquale Balsebre, Weiming Huang, Gao Cong
- Abstract summary: Large Language Models (LLMs) are poised to play an increasingly important role in our lives, providing assistance across a wide array of tasks.
This study introduces a novel framework for fine-tuning a pre-trained model on city-specific data to enable it to provide accurate recommendations.
- Score: 13.75316123602933
- License:
- Abstract: Large Language Models (LLMs) are poised to play an increasingly important role in our lives, providing assistance across a wide array of tasks. In the geospatial domain, LLMs have demonstrated the ability to answer generic questions, such as identifying a country's capital; nonetheless, their utility is hindered when it comes to answering fine-grained questions about specific places, such as grocery stores or restaurants, which constitute essential aspects of people's everyday lives. This is mainly because the places in our cities have not been systematically fed into LLMs for them to understand and memorize. This study introduces a novel framework for fine-tuning a pre-trained model on city-specific data to enable it to provide accurate recommendations while minimizing hallucinations. We share our model, LAMP, and the data used to train it. We conduct experiments to analyze its ability to correctly retrieve spatial objects, and compare it to well-known open- and closed-source language models, such as GPT-4. Finally, we explore its emerging capabilities through a case study on day planning.
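To make the fine-tuning recipe concrete, below is a minimal sketch of turning city-specific place records into instruction-tuning pairs; the field names, example place, and prompt format are illustrative assumptions, not LAMP's actual schema.
```python
# Hypothetical sketch: converting city-specific place (POI) records into
# instruction-tuning pairs for a recommendation-oriented fine-tune.
# Field names and prompt wording are assumptions, not LAMP's schema.

def poi_to_example(poi: dict) -> dict:
    """Build one instruction/response pair from a single place record."""
    instruction = f"Recommend a {poi['category']} near {poi['neighborhood']}."
    response = f"{poi['name']}, located at {poi['address']}."
    return {"instruction": instruction, "output": response}

pois = [
    {"name": "Tekka Centre", "category": "hawker centre",
     "neighborhood": "Little India", "address": "665 Buffalo Road"},
]
dataset = [poi_to_example(p) for p in pois]
print(dataset[0])
```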
Related papers
- What can LLM tell us about cities? [6.405546719612814]
This study explores the capabilities of large language models (LLMs) in providing knowledge about cities and regions on a global scale.
Experiments reveal that LLMs embed a broad but varying degree of knowledge across global cities, with ML models trained on LLM-derived features consistently leading to improved predictive accuracy.
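As a concrete illustration of that setup, here is a toy sketch of fitting a standard regressor on LLM-derived city features; every value below is a placeholder, not data from the paper.
```python
# Toy sketch: train a predictor on features an LLM was asked to provide
# for each city (e.g. walkability, transit-quality scores in [0, 1]).
# Feature values and targets are invented for illustration.
from sklearn.ensemble import RandomForestRegressor

X = [[0.8, 0.9], [0.3, 0.2], [0.6, 0.5]]  # LLM-derived features per city
y = [7800.0, 450.0, 2100.0]               # target, e.g. population density

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
print(model.predict([[0.7, 0.6]]))
```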
arXiv Detail & Related papers (2024-11-25T09:07:56Z) - Undesirable Memorization in Large Language Models: A Survey [5.659933808910005]
We present a Systematization of Knowledge (SoK) on the topic of memorization in Large Language Models (LLMs).
Memorization is the effect that a model tends to store and reproduce phrases or passages from the training data.
We discuss the metrics and methods used to measure memorization, followed by an analysis of the factors that contribute to the memorization phenomenon.
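One common metric in this space is extractability: a training sequence counts as memorized if, given its k-token prefix, greedy decoding reproduces the true continuation verbatim. A minimal sketch of that check follows; the `generate` callable is a placeholder for any model interface.
```python
# Minimal extractability check: memorized = model reproduces the
# continuation of a training sequence verbatim from its k-token prefix.

def is_memorized(generate, tokens: list, k: int = 50) -> bool:
    """`generate(prefix, max_new_tokens)` returns a greedy continuation."""
    prefix, target = tokens[:k], tokens[k:]
    continuation = generate(prefix, max_new_tokens=len(target))
    return continuation[:len(target)] == target

# Toy usage: a mock "model" that echoes a fixed corpus is fully memorized.
corpus = list(range(100))
mock = lambda prefix, max_new_tokens: corpus[len(prefix):len(prefix) + max_new_tokens]
print(is_memorized(mock, corpus))  # True
```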
arXiv Detail & Related papers (2024-10-03T16:34:46Z)
- Generalization v.s. Memorization: Tracing Language Models' Capabilities Back to Pretraining Data [76.90128359866462]
We introduce an extended concept of memorization, distributional memorization, which measures the correlation between the output probabilities and the pretraining data frequency.
This study demonstrates that memorization plays a larger role in simpler, knowledge-intensive tasks, while generalization is the key for harder, reasoning-based tasks.
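The core quantity can be sketched as a rank correlation between pretraining-data frequency and model output probability; the numbers below are invented, and the exact estimator used in the paper may differ.
```python
# Sketch of distributional memorization: correlate how often candidate
# outputs appear in pretraining data with the model's log-probabilities.
# All numbers are made up for illustration.
from scipy.stats import spearmanr

pretraining_counts = [120, 45, 3, 870, 12]    # n-gram frequency per output
model_logprobs     = [-1.2, -2.5, -6.0, -0.4, -3.9]

rho, pvalue = spearmanr(pretraining_counts, model_logprobs)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.3f})")  # rho = 1.00 here
```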
arXiv Detail & Related papers (2024-07-20T21:24:40Z)
- Loose LIPS Sink Ships: Asking Questions in Battleship with Language-Informed Program Sampling [80.64715784334936]
We study tradeoffs in a classic grounded question-asking task based on the board game Battleship.
Our model uses large language models (LLMs) to generate natural language questions, translate them into symbolic programs, and evaluate their expected information gain.
We find that with a surprisingly modest resource budget, this simple Monte Carlo optimization strategy yields informative questions that mirror human performance.
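For a yes/no question with deterministic answers, the expected-information-gain objective reduces to the entropy of the answer distribution over sampled board hypotheses; the toy board encoding below is an assumption, not the paper's representation.
```python
# Sketch of expected information gain (EIG) for a binary question,
# estimated over Monte Carlo samples of hidden board states.
import math

def entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def eig(boards, question) -> float:
    """For a deterministic yes/no question, EIG = entropy of the answer."""
    p_yes = sum(question(b) for b in boards) / len(boards)
    return entropy(p_yes)

# Toy hypotheses: each "board" is just the column index of a 1-cell ship.
boards = [0, 1, 2, 3]
print(eig(boards, lambda b: b >= 2))  # 1.0 bit: splits hypotheses evenly
```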
arXiv Detail & Related papers (2024-02-29T18:58:15Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
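One plausible reading of point 2, sketched below, is injecting a discriminative model's prediction and confidence into the prompt; the prompt format is ours, not the paper's.
```python
# Hedged sketch: surface a supervised classifier's signal inside an LLM
# prompt. `clf_label` and `clf_conf` would come from a task-specific model.

def build_prompt(text: str, clf_label: str, clf_conf: float) -> str:
    return (
        f"Passage: {text}\n"
        f"A supervised classifier predicts '{clf_label}' "
        f"with confidence {clf_conf:.2f}.\n"
        "Taking this signal into account, give your final label."
    )

print(build_prompt("The movie was a delight.", "positive", 0.93))
```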
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- MechGPT, a language-based strategy for mechanics and materials modeling that connects knowledge across scales, disciplines and modalities [0.0]
We use a Large Language Model (LLM) to distill question-answer pairs from raw sources, followed by fine-tuning.
The resulting MechGPT LLM foundation model is used in a series of computational experiments to explore its capacity for knowledge retrieval, various language tasks, hypothesis generation, and connecting knowledge across disparate areas.
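The distillation step might look like the sketch below, where `ask_llm` is a placeholder for whatever completion API is available, not MechGPT's actual tooling.
```python
# Sketch of LLM-based QA distillation: a teacher model turns raw text
# chunks into question-answer pairs for later fine-tuning.

def distill_qa(ask_llm, chunk: str) -> dict:
    question = ask_llm(
        f"Write one exam-style question answerable from this text:\n{chunk}"
    )
    answer = ask_llm(f"Text:\n{chunk}\n\nQuestion: {question}\nAnswer:")
    return {"question": question, "answer": answer}

# Toy usage with a stub in place of a real completion API.
stub_llm = lambda prompt: "stub response"
print(distill_qa(stub_llm, "Stress is force per unit area."))
```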
arXiv Detail & Related papers (2023-10-16T14:29:35Z)
- Chatmap : Large Language Model Interaction with Cartographic Data [0.0]
OpenStreetMap (OSM) is the most ambitious open-source global initiative offering detailed urban and rural geographic data.
In this study, we demonstrate the proof of concept and details of the process of fine-tuning a relatively small-scale (1B parameters) Large Language Model (LLM) with a relatively small artificial dataset curated by a more capable teacher model.
The study aims to provide an initial guideline for such generative artificial intelligence (AI) adaptations and demonstrate early signs of useful emerging abilities in this context.
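A hedged sketch of how such OSM-derived training samples could be serialized follows; the tag keys are standard OSM conventions, but the prompt/answer format is an assumption, not Chatmap's.
```python
# Illustrative sketch: serialize an OpenStreetMap element's tags into a
# prompt/answer pair a small model could be fine-tuned on.

osm_element = {
    "amenity": "cafe",
    "name": "Kopi Corner",
    "addr:street": "Orchard Road",
    "opening_hours": "Mo-Su 08:00-22:00",
}

def tags_to_sample(tags: dict) -> dict:
    described = ", ".join(f"{k}={v}" for k, v in tags.items())
    street = tags["addr:street"]
    return {
        "prompt": f"Describe the map feature with tags: {described}",
        "answer": f"{tags['name']} is a {tags['amenity']} on {street}, "
                  f"open {tags['opening_hours']}.",
    }

print(tags_to_sample(osm_element)["answer"])
```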
arXiv Detail & Related papers (2023-09-28T15:32:36Z)
- GPT4GEO: How a Language Model Sees the World's Geography [31.215906518290883]
We investigate the degree to which GPT-4 has acquired factual geographic knowledge.
This knowledge is especially important for applications that involve geographic data.
We provide a broad characterisation of what GPT-4 knows about the world, highlighting potentially surprising capabilities but also limitations.
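Such a characterisation can be driven by a simple probing harness; in the sketch below, `ask_llm` is a placeholder model interface and the two facts are real, but the harness itself is an illustration rather than the paper's evaluation code.
```python
# Minimal probing harness: score a model's answers against known
# geographic ground truth via substring matching.

facts = {
    "What is the capital of Australia?": "Canberra",
    "Which river flows through Cairo?": "Nile",
}

def probe(ask_llm) -> float:
    correct = sum(
        truth.lower() in ask_llm(q).lower() for q, truth in facts.items()
    )
    return correct / len(facts)

# Toy usage with a hard-coded stand-in for the model.
print(probe(lambda q: "Canberra" if "Australia" in q else "the Nile"))
```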
arXiv Detail & Related papers (2023-05-30T18:28:04Z)
- Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models [75.75038268227554]
Self-Checker is a framework comprising a set of plug-and-play modules that facilitate fact-checking.
This framework provides a fast and efficient way to construct fact-checking systems in low-resource environments.
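The plug-and-play idea can be sketched as a pipeline of swappable callables; the trivial stubs below merely stand in for the LLM-backed modules the paper describes.
```python
# Sketch of a modular fact-checking pipeline: claim extraction,
# evidence retrieval, and verification are each pluggable callables.

def check(text, extract_claims, retrieve, verify):
    results = []
    for claim in extract_claims(text):
        evidence = retrieve(claim)
        results.append((claim, verify(claim, evidence)))
    return results

# Trivial stubs standing in for LLM-backed modules.
verdicts = check(
    "Paris is the capital of France.",
    extract_claims=lambda t: [t],
    retrieve=lambda c: ["Paris has been France's capital since 987."],
    verify=lambda c, e: "SUPPORTED",
)
print(verdicts)
```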
arXiv Detail & Related papers (2023-05-24T01:46:07Z)
- Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond [48.70557995528463]
This guide aims to provide researchers and practitioners with valuable insights and best practices for working with Large Language Models.
We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios.
arXiv Detail & Related papers (2023-04-26T17:52:30Z)
- Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning [104.58874584354787]
In recent years, pre-trained large language models (LLMs) have demonstrated remarkable efficiency in achieving an inference-time few-shot learning capability known as in-context learning.
This study aims to examine the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models.
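The latent-variable view is commonly written as a marginalization over a latent task variable inferred from the demonstrations; the notation below is a standard rendering, not necessarily the paper's exact formulation.
```latex
% In-context learning as implicit Bayesian inference: the model's
% predictive distribution marginalizes over a latent task variable
% \theta inferred from the demonstration set D = {(x_i, y_i)}.
\[
  P(y \mid x, D) = \int P(y \mid x, \theta)\, P(\theta \mid D)\, d\theta
\]
```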
arXiv Detail & Related papers (2023-01-27T18:59:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.