An Exploratory Assessment of LLM's Potential Toward Flight Trajectory
Reconstruction Analysis
- URL: http://arxiv.org/abs/2401.06204v1
- Date: Thu, 11 Jan 2024 17:59:18 GMT
- Title: An Exploratory Assessment of LLM's Potential Toward Flight Trajectory
Reconstruction Analysis
- Authors: Qilei Zhang and John H. Mott
- Abstract summary: The study focuses on reconstructing flight trajectories using Automatic Dependent Surveillance-Broadcast (ADS-B) data.
The findings demonstrate the model's proficiency in filtering noise and estimating both linear and curved flight trajectories.
The study's insights underscore the promise of LLMs in flight trajectory reconstruction and open new avenues for their broader application across the aviation and transportation sectors.
- Score: 3.3903227320938436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) hold transformative potential in aviation,
particularly in reconstructing flight trajectories. This paper investigates
this potential, grounded in the notion that LLMs excel at processing sequential
data and deciphering complex data structures. Utilizing the LLaMA 2 model, a
pre-trained open-source LLM, the study focuses on reconstructing flight
trajectories using Automatic Dependent Surveillance-Broadcast (ADS-B) data with
irregularities inherent in real-world scenarios. The findings demonstrate the
model's proficiency in filtering noise and estimating both linear and curved
flight trajectories. However, the analysis also reveals challenges in managing
longer data sequences, which may be attributed to the token length limitations
of LLM models. The study's insights underscore the promise of LLMs in flight
trajectory reconstruction and open new avenues for their broader application
across the aviation and transportation sectors.
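The paper describes feeding ADS-B reports with real-world irregularities to a pre-trained LLaMA 2 model as sequential text, but its exact prompt format is not reproduced here. As a minimal sketch under assumed field names, rounding, and prompt wording (none of which are from the paper), the following Python shows one plausible way such reports could be serialized for a decoder-only LLM, with truncation reflecting the token-length limits the abstract mentions:

```python
# Minimal sketch (not the authors' released code): serializing noisy ADS-B
# reports into a text sequence that a causal LLM such as LLaMA 2 could be
# prompted or fine-tuned on. Field names, rounding, and the prompt wording
# are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class AdsbPoint:
    t: float        # seconds since the first report
    lat: float      # latitude, degrees
    lon: float      # longitude, degrees
    alt_ft: float   # barometric altitude, feet

def serialize_track(points: List[AdsbPoint], max_points: int = 64) -> str:
    """Turn a possibly noisy, irregularly sampled track into prompt text.

    Long tracks are truncated because decoder-only LLMs have a fixed
    context window; the paper reports weaker behaviour on longer
    sequences, consistent with this token-length limit.
    """
    rows = [
        f"t={p.t:.0f}s lat={p.lat:.4f} lon={p.lon:.4f} alt={p.alt_ft:.0f}ft"
        for p in points[:max_points]
    ]
    header = ("The following ADS-B reports contain measurement noise. "
              "Reconstruct the smoothed flight trajectory.\n")
    return header + "\n".join(rows)

if __name__ == "__main__":
    track = [AdsbPoint(t=10.0 * i, lat=40.0 + 0.01 * i, lon=-86.9,
                       alt_ft=3000 + 50 * i) for i in range(5)]
    print(serialize_track(track))
```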
Related papers
- Large Language Models for Single-Step and Multi-Step Flight Trajectory Prediction [5.666505394825739]
This study pioneers the use of large language models (LLMs) for flight trajectory prediction by reframing it as a language modeling problem.
Specifically, we extract features describing the aircraft's status from ADS-B flight data to construct a prompt-based dataset (see the data-construction sketch after the related-papers list).
The dataset is then employed to fine-tune LLMs, enabling them to learn complex temporal patterns for accurate predictions.
arXiv Detail & Related papers (2025-01-29T07:35:56Z) - Clear Minds Think Alike: What Makes LLM Fine-tuning Robust? A Study of Token Perplexity [61.48338027901318]
We show that fine-tuning with LLM-generated data improves target task performance and reduces out-of-domain degradation.
This is the first mechanistic explanation for the superior OOD robustness conferred by LLM-generated training data.
arXiv Detail & Related papers (2025-01-24T08:18:56Z) - Evaluating the Effectiveness of Large Language Models in Representing and Understanding Movement Trajectories [1.3658544194443192]
This research focuses on assessing the ability of AI foundation models to represent movement trajectories.
We utilize one of the large language models (LLMs) to encode the string format of trajectories and then evaluate the effectiveness of the LLM-based representation for trajectory data analysis.
arXiv Detail & Related papers (2024-08-31T02:57:25Z) - Intermediate Distillation: Data-Efficient Distillation from Black-Box LLMs for Information Retrieval [7.441679541836913]
Intermediate Distillation treats large language models as black boxes and distills their knowledge via an innovative LLM-ranker-retriever pipeline.
Our proposed method can significantly improve the performance of retriever models with only 1,000 training instances.
arXiv Detail & Related papers (2024-06-18T00:41:41Z) - From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems [59.40480894948944]
Large language model (LLM) empowered agents are able to solve decision-making problems in the physical world.
Under this model, the LLM Planner navigates a partially observable Markov decision process (POMDP) by iteratively generating language-based subgoals via prompting.
We prove that the pretrained LLM Planner effectively performs Bayesian aggregated imitation learning (BAIL) through in-context learning.
arXiv Detail & Related papers (2024-05-30T09:42:54Z) - Characterization of Large Language Model Development in the Datacenter [55.9909258342639]
Large Language Models (LLMs) have presented impressive performance across several transformative tasks.
However, it is non-trivial to efficiently utilize large-scale cluster resources to develop LLMs.
We present an in-depth characterization study of a six-month LLM development workload trace collected from our GPU datacenter Acme.
arXiv Detail & Related papers (2024-03-12T13:31:14Z) - Are You Being Tracked? Discover the Power of Zero-Shot Trajectory
Tracing with LLMs! [3.844253028598048]
This study introduces LLMTrack, a model that illustrates how LLMs can be leveraged for Zero-Shot Trajectory Recognition.
We evaluate the model on real-world datasets designed to challenge it with distinct trajectories spanning indoor and outdoor scenarios.
arXiv Detail & Related papers (2024-03-10T12:50:35Z) - Towards Modeling Learner Performance with Large Language Models [7.002923425715133]
This paper investigates whether the pattern recognition and sequence modeling capabilities of LLMs can be extended to the domain of knowledge tracing.
We compare two approaches to using LLMs for this task, zero-shot prompting and model fine-tuning, with existing, non-LLM approaches to knowledge tracing.
While LLM-based approaches do not achieve state-of-the-art performance, fine-tuned LLMs surpass the performance of naive baseline models and perform on par with standard Bayesian Knowledge Tracing approaches.
arXiv Detail & Related papers (2024-02-29T14:06:34Z) - Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning [79.32236399694077]
Low-quality data in the training set are usually detrimental to instruction tuning.
We propose a novel method, termed "reflection-tuning", which utilizes an oracle LLM to recycle the original training data by introspecting and enhancing the quality of instructions and responses in the data.
arXiv Detail & Related papers (2023-10-18T05:13:47Z) - TRACE: A Comprehensive Benchmark for Continual Learning in Large
Language Models [52.734140807634624]
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety.
Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs.
We introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs.
arXiv Detail & Related papers (2023-10-10T16:38:49Z) - An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning [70.48605869773814]
Catastrophic forgetting (CF) is a phenomenon that occurs in machine learning when a model forgets previously learned information.
This study empirically evaluates the forgetting phenomenon in large language models during continual instruction tuning.
arXiv Detail & Related papers (2023-08-17T02:53:23Z)