Bridging Causal Discovery and Large Language Models: A Comprehensive
Survey of Integrative Approaches and Future Directions
- URL: http://arxiv.org/abs/2402.11068v1
- Date: Fri, 16 Feb 2024 20:48:53 GMT
- Authors: Guangya Wan, Yuqi Wu, Mengxuan Hu, Zhixuan Chu, Sheng Li
- Abstract summary: Causal discovery (CD) and Large Language Models (LLMs) represent two emerging fields of study with significant implications for artificial intelligence.
This paper presents a comprehensive survey of the integration of LLMs, such as GPT-4, into CD tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal discovery (CD) and Large Language Models (LLMs) represent two emerging
fields of study with significant implications for artificial intelligence.
Despite their distinct origins, with CD focusing on uncovering cause-effect
relationships from data and LLMs on processing and generating human-like text,
the convergence of these domains offers novel insights and methodologies for
understanding complex systems. This paper presents a comprehensive survey of
the integration of LLMs, such as GPT-4, into CD tasks. We systematically review
and compare existing approaches that leverage LLMs for various CD tasks and
highlight their innovative use of metadata and natural language to infer causal
structures. Our analysis reveals the strengths and potential of LLMs both in
enhancing traditional CD methods and in serving as imperfect experts, alongside
the challenges and limitations inherent in current practices. Furthermore, we
identify gaps in the literature and propose future research directions aimed at
harnessing the full potential of LLMs in causality research. To our knowledge,
this is the first survey to offer a unified and detailed examination of the
synergy between LLMs and CD, setting the stage for future advancements in the
field.
Related papers
- Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions (arXiv, 2024-09-25)
  The article highlights application areas that could have a positive impact on society, along with the associated ethical considerations. It covers responsible development considerations, algorithmic improvements, ethical challenges, and societal implications.
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models (arXiv, 2024-09-19)
  Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems. The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches. This survey offers a nuanced exposition of current methodologies for CMR with LLMs, classifying them into a detailed three-tiered taxonomy.
- Risks, Causes, and Mitigations of Widespread Deployments of Large Language Models (LLMs): A Survey (arXiv, 2024-08-01)
  Large Language Models (LLMs) have transformed Natural Language Processing (NLP) with their outstanding abilities in text generation, summarization, and classification. Their widespread adoption introduces numerous challenges, including academic integrity, copyright, environmental impacts, and ethical considerations such as data bias, fairness, and privacy. This paper offers a comprehensive survey of the literature on these subjects, systematically gathered and synthesized from Google Scholar.
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities (arXiv, 2024-07-17)
  Retrieval enhancement can be extended to a broader spectrum of machine learning (ML) tasks. This work introduces a formal framework for the paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across ML domains under a consistent notation that is missing from the current literature. The goal is to equip researchers across disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, fostering interdisciplinary future research.
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models (arXiv, 2024-05-10)
  Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation. Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge. Retrieval-augmented LLMs (RA-LLMs) have emerged to harness external, authoritative knowledge bases rather than relying solely on the model's internal knowledge.
- Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models (arXiv, 2024-01-30)
  This comprehensive survey delves into recent strides in hate speech (HS) moderation. It highlights the burgeoning role of large language models (LLMs) and large multimodal models (LMMs) in this area, and identifies existing gaps in research, particularly in the context of underrepresented languages and cultures.
- A Survey on Detection of LLMs-Generated Content (arXiv, 2023-10-24)
  The ability to detect LLM-generated content has become of paramount importance. This survey provides a detailed overview of existing detection strategies and benchmarks, and posits the necessity of a multi-faceted approach to defend against various attacks.
- A Comprehensive Overview of Large Language Models (arXiv, 2023-07-12)
  Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks. This article provides an overview of the existing literature on a broad range of LLM-related concepts.
This list is automatically generated from the titles and abstracts of the papers in this site.