Large Language Models for Forecasting and Anomaly Detection: A
Systematic Literature Review
- URL: http://arxiv.org/abs/2402.10350v1
- Date: Thu, 15 Feb 2024 22:43:02 GMT
- Authors: Jing Su, Chufeng Jiang, Xin Jin, Yuxin Qiao, Tingsong Xiao, Hongda Ma,
Rong Wei, Zhi Jing, Jiajun Xu, Junhong Lin
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This systematic literature review comprehensively examines the application of
Large Language Models (LLMs) in forecasting and anomaly detection, highlighting
the current state of research, inherent challenges, and prospective future
directions. LLMs have demonstrated significant potential in parsing and
analyzing extensive datasets to identify patterns, predict future events, and
detect anomalous behavior across various domains. However, this review
identifies several critical challenges that impede their broader adoption and
effectiveness, including the reliance on vast historical datasets, issues with
generalizability across different contexts, the phenomenon of model
hallucinations, limitations within the models' knowledge boundaries, and the
substantial computational resources required. Through detailed analysis, this
review discusses potential solutions and strategies to overcome these
obstacles, such as integrating multimodal data, advancements in learning
methodologies, and emphasizing model explainability and computational
efficiency. Moreover, this review outlines critical trends that are likely to
shape the evolution of LLMs in these fields, including the push toward
real-time processing, the importance of sustainable modeling practices, and the
value of interdisciplinary collaboration. In conclusion, this review underscores
the transformative impact LLMs could have on forecasting and anomaly detection
while emphasizing the need for continuous innovation, ethical considerations,
and practical solutions to realize their full potential.
Related papers
- Investigating the Role of Instruction Variety and Task Difficulty in Robotic Manipulation Tasks [50.75902473813379]
This work introduces a comprehensive evaluation framework that systematically examines the role of instructions and inputs in the generalisation abilities of multimodal models for robotic manipulation.
The proposed framework uncovers the resilience of multimodal models to extreme instruction perturbations and their vulnerability to observational changes.
arXiv Detail & Related papers (2024-07-04T14:36:49Z)
- Video Anomaly Detection in 10 Years: A Survey and Outlook [10.143205531474907]
Video anomaly detection (VAD) holds immense importance across diverse domains such as surveillance, healthcare, and environmental monitoring.
This survey explores deep learning-based VAD, expanding beyond traditional supervised training paradigms to encompass emerging weakly supervised, self-supervised, and unsupervised approaches.
arXiv Detail & Related papers (2024-05-29T17:56:31Z)
- Chain-of-Thought Prompting for Demographic Inference with Large Multimodal Models [58.58594658683919]
Large multimodal models (LMMs) have shown transformative potential across various research tasks.
Our findings indicate LMMs possess advantages in zero-shot learning, interpretability, and handling uncurated 'in-the-wild' inputs.
We propose a Chain-of-Thought augmented prompting approach, which effectively mitigates the off-target prediction issue.
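The core idea of chain-of-thought augmented prompting can be illustrated in a few lines: rather than asking the model for a label directly, the prompt elicits step-by-step reasoning before the final answer. This is a minimal generic sketch; the task, wording, and template below are illustrative, not taken from the paper.

```python
def build_cot_prompt(question: str, answer_choices: list[str]) -> str:
    """Wrap a question in a chain-of-thought template that asks the
    model to reason step by step before committing to an answer."""
    choices = "\n".join(f"- {c}" for c in answer_choices)
    return (
        f"Question: {question}\n"
        f"Possible answers:\n{choices}\n"
        "Let's think step by step, then state the final answer "
        "on its own line as 'Answer: <choice>'."
    )

prompt = build_cot_prompt(
    "Which age group does this profile most likely belong to?",
    ["18-25", "26-40", "41-65"],
)
print(prompt)
```

Constraining the answer to a fixed `Answer: <choice>` line is one common way to reduce off-target predictions, since the output can then be parsed against the known choice set.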
arXiv Detail & Related papers (2024-05-24T16:26:56Z)
- LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
This survey explores the scope, applications, methodologies, and evaluation metrics of strategic reasoning with Large Language Models.
It underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Towards Modeling Learner Performance with Large Language Models [7.002923425715133]
This paper investigates whether the pattern recognition and sequence modeling capabilities of LLMs can be extended to the domain of knowledge tracing.
We compare two approaches to using LLMs for this task, zero-shot prompting and model fine-tuning, with existing, non-LLM approaches to knowledge tracing.
While LLM-based approaches do not achieve state-of-the-art performance, fine-tuned LLMs surpass the performance of naive baseline models and perform on par with standard Bayesian Knowledge Tracing approaches.
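For context, the standard Bayesian Knowledge Tracing baseline mentioned above can be sketched in a few lines: a Bayesian update of the probability that a learner has mastered a skill, given each observed response. The parameter values here are illustrative defaults, not those used in the paper.

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: update the probability
    that the learner knows the skill after observing one response."""
    if correct:
        # Posterior P(known | correct): a known skill answered without a slip,
        # or an unknown skill answered by guessing.
        posterior = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess
        )
    else:
        # Posterior P(known | incorrect): a slip despite mastery,
        # or an unknown skill with no lucky guess.
        posterior = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess)
        )
    # Learning transition: the skill may be acquired after this practice step.
    return posterior + (1 - posterior) * p_learn

# Trace estimated mastery over a sequence of observed responses.
p = 0.3  # prior probability the skill is already known
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
```

Knowledge tracing asks a model to predict the next response from this evolving mastery estimate, which is the task the LLM-based approaches are compared against.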
arXiv Detail & Related papers (2024-02-29T14:06:34Z)
- Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
A promising approach to rectify these flaws is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
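The self-correction pattern described here amounts to a generate-critique-revise loop. A minimal sketch of that control flow follows; the `toy_generate` and `toy_critique` callables are purely illustrative stand-ins for LLM calls, not any API from the surveyed work.

```python
from typing import Callable, Optional

def self_correct(
    prompt: str,
    generate: Callable[[str], str],
    critique: Callable[[str, str], Optional[str]],
    max_rounds: int = 3,
) -> str:
    """Iteratively ask a model to revise its own output.

    `critique` returns feedback text, or None when the draft passes.
    Both callables stand in for LLM calls; only the loop is real.
    """
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(prompt, draft)
        if feedback is None:
            break  # the critic accepted the draft
        draft = generate(
            f"{prompt}\n\nPrevious answer:\n{draft}\n\n"
            f"Feedback:\n{feedback}\n\nPlease revise the answer."
        )
    return draft

# Toy stand-ins: the "model" fixes its draft once it sees feedback.
def toy_generate(p: str) -> str:
    return "revised answer" if "Feedback" in p else "first draft"

def toy_critique(p: str, d: str) -> Optional[str]:
    return None if d == "revised answer" else "the draft is incomplete"

result = self_correct("Summarise the paper.", toy_generate, toy_critique)
```

Whether the critique comes from the same model, a second model, or an external tool is one of the main axes along which the surveyed strategies differ.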
arXiv Detail & Related papers (2023-08-06T18:38:52Z)
- Structure in Deep Reinforcement Learning: A Survey and Open Problems [22.77618616444693]
Reinforcement Learning (RL), bolstered by the expressive capabilities of Deep Neural Networks (DNNs) for function approximation, has demonstrated considerable success in numerous applications.
However, its practicality in addressing various real-world scenarios, characterized by diverse and unpredictable dynamics, remains limited.
This limitation stems from poor data efficiency, limited generalization capabilities, a lack of safety guarantees, and the absence of interpretability.
arXiv Detail & Related papers (2023-06-28T08:48:40Z)
- GLUECons: A Generic Benchmark for Learning Under Constraints [102.78051169725455]
In this work, we create a benchmark comprising nine tasks spanning natural language processing and computer vision.
We model external knowledge as constraints, specify the sources of the constraints for each task, and implement various models that use these constraints.
arXiv Detail & Related papers (2023-02-16T16:45:36Z)
- Methods for Estimating and Improving Robustness of Language Models [0.0]
Large language models (LLMs) suffer notorious flaws related to their preference for simple, surface-level textual relations over full semantic complexity.
This proposal identifies a common denominator of these flaws: the models' weak ability to generalise outside of the training domain.
We find that incorporating some of these measures in the training objectives leads to enhanced distributional robustness of neural models.
arXiv Detail & Related papers (2022-06-16T21:02:53Z)
- Self-Supervised Anomaly Detection in Computer Vision and Beyond: A Survey and Outlook [9.85256783464329]
Anomaly detection plays a crucial role in various domains, including cybersecurity, finance, and healthcare.
In recent years, significant progress has been made in this field due to the remarkable growth of deep learning models.
The advent of self-supervised learning has sparked the development of novel AD algorithms that outperform the existing state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-10T21:16:14Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are becoming the challenges of the existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.