Causality Extraction from Nuclear Licensee Event Reports Using a Hybrid Framework
- URL: http://arxiv.org/abs/2404.05656v2
- Date: Mon, 22 Apr 2024 15:25:20 GMT
- Title: Causality Extraction from Nuclear Licensee Event Reports Using a Hybrid Framework
- Authors: Shahidur Rahoman Sohag, Sai Zhang, Min Xian, Shoukun Sun, Fei Xu, Zhegang Ma,
- Abstract summary: This paper proposes a hybrid framework for causality detection and extraction from nuclear licensee event reports.
We compiled an LER corpus with 20,129 text samples for causality analysis, developed an interactive tool for labeling cause-effect pairs, and built a deep-learning-based approach for causal relation detection.
- Score: 3.1139106894905972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Industry-wide nuclear power plant operating experience is a critical source of raw data for performing parameter estimations in reliability and risk models. Much operating experience information pertains to failure events and is stored as reports containing unstructured data, such as narratives. Event reports are essential for understanding how failures are initiated and propagated, including the numerous causal relations involved. Causal relation extraction using deep learning represents a significant frontier in the field of natural language processing (NLP), and is crucial since it enables the interpretation of intricate narratives and connections contained within vast amounts of written information. This paper proposes a hybrid framework for causality detection and extraction from nuclear licensee event reports. The main contributions include: (1) we compiled an LER corpus with 20,129 text samples for causality analysis, (2) developed an interactive tool for labeling cause-effect pairs, (3) built a deep-learning-based approach for causal relation detection, and (4) developed a knowledge-based cause-effect extraction approach.
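Contribution (4) pairs the learned detector with a knowledge-based cause-effect extraction step. As a minimal sketch of what such a rule-driven step can look like (the connective list, function name, and role assignments below are illustrative assumptions, not the paper's actual rule set):

```python
import re

# Hypothetical sketch of a knowledge-based extraction step: split a sentence
# on a known causal connective and assign the cause/effect roles accordingly.
PATTERNS = [
    # "<effect> due to <cause>" style connectives
    re.compile(r"(?P<effect>.+?)\s+(?:due to|caused by|as a result of)\s+(?P<cause>.+)", re.I),
    # "<cause> led to <effect>" style connectives
    re.compile(r"(?P<cause>.+?)\s+(?:resulted in|led to|causing)\s+(?P<effect>.+)", re.I),
]

def extract_cause_effect(sentence: str):
    """Return a (cause, effect) pair if a known connective matches, else None."""
    text = sentence.rstrip(".")
    for pattern in PATTERNS:
        match = pattern.search(text)
        if match:
            return match.group("cause").strip(), match.group("effect").strip()
    return None
```

For example, `extract_cause_effect("The pump tripped due to a degraded bearing.")` returns `("a degraded bearing", "The pump tripped")`. A real system would use a much richer rule base and operate only on sentences the deep-learning detector has already flagged as causal.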
Related papers
- Learning Traffic Crashes as Language: Datasets, Benchmarks, and What-if Causal Analyses [76.59021017301127]
We propose a large-scale traffic crash language dataset, named CrashEvent, summarizing 19,340 real-world crash reports.
We formulate crash event feature learning as a novel text reasoning problem and fine-tune various large language models (LLMs) to predict detailed accident outcomes.
Our experimental results show that our LLM-based approach not only predicts the severity of accidents but also classifies different types of accidents and predicts injury outcomes.
arXiv Detail & Related papers (2024-06-16T03:10:16Z) - RealTCD: Temporal Causal Discovery from Interventional Data with Large Language Model [15.416325455014462]
Temporal causal discovery aims to identify temporal causal relationships between variables directly from observations.
Existing methods mainly focus on synthetic datasets with heavy reliance on intervention targets.
We propose the RealTCD framework, which is able to leverage domain knowledge to discover temporal causal relationships without interventional targets.
arXiv Detail & Related papers (2024-04-23T06:52:40Z) - Discovery of the Hidden World with Large Language Models [100.38157787218044]
We introduce COAT: Causal representatiOn AssistanT.
COAT incorporates LLMs as a factor proposer that extracts the potential causal factors from unstructured data.
LLMs can also be instructed to provide additional information used to collect data values.
arXiv Detail & Related papers (2024-02-06T12:18:54Z) - Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z) - Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction [15.553367375330843]
We propose a novel approach for few-shot relation extraction using large language models.
CoT-ER first induces large language models to generate evidence using task-specific and concept-level knowledge.
arXiv Detail & Related papers (2023-11-10T08:12:00Z) - Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z) - Causal Document-Grounded Dialogue Pre-training [81.16429056652483]
We present a causally-complete dataset construction strategy for building million-level DocGD pre-training corpora.
Experiments on three benchmark datasets demonstrate that our causal pre-training achieves considerable and consistent improvements under fully-supervised, low-resource, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-18T12:39:25Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - A Survey on Extraction of Causal Relations from Natural Language Text [9.317718453037667]
Cause-effect relations appear frequently in text, and curating cause-effect relations from text helps in building causal networks for predictive tasks.
Existing causality extraction techniques include knowledge-based, statistical machine learning (ML)-based, and deep-learning-based approaches.
arXiv Detail & Related papers (2021-01-16T10:49:39Z) - Causal BERT: Language models for causality detection between events expressed in text [1.0756038762528868]
Causality understanding between events is helpful in many areas, including health care, business risk management and finance.
Extracting "cause-effect" relationships between natural language events remains a challenge because such relationships are often expressed implicitly.
Our proposed methods achieve state-of-the-art performance across three different data distributions and can be leveraged for the extraction of a causal diagram.
arXiv Detail & Related papers (2020-12-10T04:59:12Z)
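Causal BERT frames this task as deciding whether a sentence expresses a cause-effect link between events. As a toy stand-in for such a detector, using a hand-picked lexical cue list in place of an actual fine-tuned BERT model (the cue set, threshold, and function name are illustrative assumptions):

```python
# Toy causality-detection stand-in: flag sentences containing lexical causal
# cues. A real system would replace this with inference from a fine-tuned
# classifier; the cue list below is illustrative only.
CAUSAL_CUES = {"because", "due", "caused", "therefore", "resulted", "led"}

def detect_causality(sentence: str, threshold: int = 1) -> bool:
    """Return True if the sentence contains at least `threshold` causal cues."""
    tokens = {token.strip(".,;:").lower() for token in sentence.split()}
    return len(tokens & CAUSAL_CUES) >= threshold
```

The hard cases the abstract points to are exactly those this sketch misses: implicitly expressed causality with no surface cue, which is why learned models are needed.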
This list is automatically generated from the titles and abstracts of the papers in this site.