An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods
- URL: http://arxiv.org/abs/2207.08286v1
- Date: Sun, 17 Jul 2022 21:02:04 GMT
- Title: An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods
- Authors: William Hogan
- Abstract summary: Relation Extraction is a foundational task of natural language processing.
The history of RE methods can be roughly organized into four phases: pattern-based RE, statistical-based RE, neural-based RE, and large language model-based RE.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relation Extraction (RE) is a foundational task of natural language
processing. RE seeks to transform raw, unstructured text into structured
knowledge by identifying relational information between entity pairs found in
text. RE has numerous uses, such as knowledge graph completion, text
summarization, question-answering, and search querying. The history of RE
methods can be roughly organized into four phases: pattern-based RE,
statistical-based RE, neural-based RE, and large language model-based RE. This
survey begins with an overview of a few exemplary works in the earlier phases
of RE, highlighting limitations and shortcomings to contextualize progress.
Next, we review popular benchmarks and critically examine metrics used to
assess RE performance. We then discuss distant supervision, a paradigm that has
shaped the development of modern RE methods. Lastly, we review recent RE works
focusing on denoising and pre-training methods.
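To ground the distant-supervision paradigm before the related work below, here is a minimal Python sketch of the classic labeling heuristic; the triples, sentences, and relation names are invented for illustration. Any sentence that mentions both entities of a knowledge-base triple is treated as a (noisy) positive example of that triple's relation.

```python
# Minimal sketch of the distant-supervision labeling heuristic:
# a sentence containing both entities of a knowledge-base triple
# is labeled with that triple's relation, noise included.

KB_TRIPLES = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Google", "founded_by", "Larry Page"),
]

SENTENCES = [
    "Barack Obama was born in Honolulu, Hawaii.",
    "Barack Obama visited Honolulu last week.",  # matches, but is not a born_in statement
    "Larry Page co-founded Google in 1998.",
]

def distant_label(sentences, triples):
    """Yield (sentence, head, relation, tail) for every sentence that
    mentions both the head and tail entity of some KB triple."""
    for sentence in sentences:
        for head, relation, tail in triples:
            if head in sentence and tail in sentence:
                yield sentence, head, relation, tail

for example in distant_label(SENTENCES, KB_TRIPLES):
    print(example)
```

The second sentence is the paradigm's characteristic failure mode: it mentions both entities without expressing the relation, which is precisely the label noise that the denoising methods reviewed in this survey target.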
Related papers
- Empowering Few-Shot Relation Extraction with The Integration of Traditional RE Methods and Large Language Models [48.846159555253834]
Few-Shot Relation Extraction (FSRE) has attracted growing attention from researchers in Natural Language Processing (NLP).
The recent emergence of Large Language Models (LLMs) has prompted numerous researchers to explore FSRE through In-Context Learning (ICL).
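As a rough illustration of that ICL setup, the sketch below builds a few-shot prompt for relation classification; the prompt format, relation labels, and examples are assumptions for this illustration, not details from the paper.

```python
# Hypothetical few-shot in-context learning (ICL) prompt for RE:
# labeled demonstrations are concatenated ahead of the query instance,
# and an LLM is asked to emit the relation label as a completion.

FEW_SHOT_EXAMPLES = [
    ("Steve Jobs founded Apple in 1976.", "Steve Jobs", "Apple", "founder_of"),
    ("Paris is the capital of France.", "Paris", "France", "capital_of"),
]

def build_icl_prompt(examples, sentence, head, tail):
    parts = ["Classify the relation between the two entities."]
    for sent, h, t, relation in examples:
        parts.append(f"Sentence: {sent}\nHead: {h}\nTail: {t}\nRelation: {relation}")
    parts.append(f"Sentence: {sentence}\nHead: {head}\nTail: {tail}\nRelation:")
    return "\n\n".join(parts)

print(build_icl_prompt(FEW_SHOT_EXAMPLES,
                       "Marie Curie was born in Warsaw.",
                       "Marie Curie", "Warsaw"))
```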
arXiv Detail & Related papers (2024-07-12T03:31:11Z)
- Reward-based Input Construction for Cross-document Relation Extraction [11.52832308525974]
Cross-document relation extraction (RE) is a fundamental task in natural language processing.
We propose REward-based Input Construction (REIC), the first learning-based sentence selector for cross-document RE.
REIC extracts sentences based on relational evidence, enabling the RE module to effectively infer relations.
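A simplified stand-in for that selector is sketched below; REIC trains its scorer with reinforcement signals, whereas the entity-mention heuristic here merely takes the place of the learned reward model.

```python
# Simplified stand-in for a learned, reward-trained sentence selector:
# score candidate sentences by how much relational evidence they carry
# for the entity pair, then keep the top-k for the RE module.

def evidence_score(sentence, head, tail):
    # Heuristic proxy for a learned reward: +1 per entity mentioned.
    return float(head in sentence) + float(tail in sentence)

def select_sentences(documents, head, tail, k=3):
    sentences = [s.strip() for doc in documents for s in doc.split(".") if s.strip()]
    ranked = sorted(sentences, key=lambda s: evidence_score(s, head, tail), reverse=True)
    return ranked[:k]

documents = [
    "Ada Lovelace collaborated with Charles Babbage. She wrote the first program.",
    "Charles Babbage designed the Analytical Engine in London.",
]
print(select_sentences(documents, "Ada Lovelace", "Charles Babbage", k=2))
```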
arXiv Detail & Related papers (2024-05-31T07:30:34Z)
- RaFe: Ranking Feedback Improves Query Rewriting for RAG [83.24385658573198]
We propose a framework for training query rewriting models free of annotations.
By leveraging a publicly available reranker, our framework provides feedback well aligned with the rewriting objectives.
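The general idea can be sketched as follows, with a toy lexical-overlap function standing in for the public reranker; the scoring stub, queries, and documents are illustrative only.

```python
# Sketch of reranker-based feedback for query rewriting: each candidate
# rewrite is scored by the best reranker score its retrieved documents
# achieve, giving a training signal without human annotations.

def reranker_score(query, document):
    # Toy stand-in for a real cross-encoder reranker.
    return len(set(query.lower().split()) & set(document.lower().split()))

def rank_rewrites(rewrites, documents):
    """Order candidate rewrites by the strongest reranker score they obtain."""
    scored = [(max(reranker_score(r, d) for d in documents), r) for r in rewrites]
    return sorted(scored, reverse=True)

documents = [
    "Relation extraction turns raw text into structured triples.",
    "Distant supervision aligns knowledge bases with unlabeled text.",
]
rewrites = ["what is relation extraction", "structured triples from text"]
print(rank_rewrites(rewrites, documents))
```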
arXiv Detail & Related papers (2024-05-23T11:00:19Z)
- Intrinsic Task-based Evaluation for Referring Expression Generation [9.322715583523928]
Referring Expressions (REs) generated by state-of-the-art neural models were indistinguishable not only from the REs in WebNLG but also from the REs generated by a simple rule-based system.
Here, we argue that this limitation could stem from the use of a purely ratings-based human evaluation.
We propose an intrinsic task-based evaluation for REG models in which, in addition to rating the quality of REs, participants are asked to accomplish two meta-level tasks.
arXiv Detail & Related papers (2024-02-12T06:21:35Z)
- Whether you can locate or not? Interactive Referring Expression Generation [12.148963878497243]
We propose an Interactive REG (IREG) model that can interact with a real REC model.
IREG outperforms previous state-of-the-art methods on popular evaluation metrics.
arXiv Detail & Related papers (2023-08-19T10:53:32Z)
- A Comprehensive Survey on Relation Extraction: Recent Advances and New Frontiers [76.51245425667845]
Relation extraction (RE) involves identifying the relations between entities from underlying content.
Deep neural networks have dominated the field of RE and made noticeable progress.
This survey is expected to facilitate researchers' collaborative efforts to address the challenges of real-world RE systems.
arXiv Detail & Related papers (2023-06-03T08:39:25Z)
- How Fragile is Relation Extraction under Entity Replacements? [70.34001923252711]
Relation extraction (RE) aims to extract the relations between entity names from the textual context.
Existing work has found that RE models rely on entity name patterns to make predictions while ignoring the textual context.
This motivates us to raise the question: are RE models robust to entity replacements?
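A minimal version of such an entity-replacement probe is sketched below; the type-matched replacement lists and the stub predictor are invented for the example.

```python
# Perturbation probe: swap entity names for other same-type names and
# check whether the RE prediction survives. A context-sensitive model
# should keep its prediction; a name-memorizing model will flip.
import random

REPLACEMENTS = {"PERSON": ["John Smith", "Wei Zhang"], "CITY": ["Oslo", "Lagos"]}

def is_robust(predict, sentence, head, tail, head_type, tail_type, trials=5):
    rng = random.Random(0)
    original = predict(sentence, head, tail)
    for _ in range(trials):
        new_head = rng.choice(REPLACEMENTS[head_type])
        new_tail = rng.choice(REPLACEMENTS[tail_type])
        perturbed = sentence.replace(head, new_head).replace(tail, new_tail)
        if predict(perturbed, new_head, new_tail) != original:
            return False
    return True

# Stub model that (undesirably) keys on the entity name, not the context.
predict = lambda sent, h, t: "born_in" if h == "Barack Obama" else "no_relation"
print(is_robust(predict, "Barack Obama was born in Honolulu.",
                "Barack Obama", "Honolulu", "PERSON", "CITY"))  # False
```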
arXiv Detail & Related papers (2023-05-22T23:53:32Z)
- Summarization as Indirect Supervision for Relation Extraction [23.98136192661566]
We present SuRE, which converts relation extraction (RE) into a summarization formulation.
We develop sentence and relation conversion techniques that essentially bridge the formulation of summarization and RE tasks.
Experiments on three datasets demonstrate the effectiveness of SuRE in both full-dataset and low-resource settings.
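A rough sketch of the conversion idea follows; the verbalization templates and the lexical-overlap scorer are illustrative stand-ins, not SuRE's actual templates or summarization model. Each candidate relation is verbalized as a short summary, and the relation whose summary best fits the sentence is predicted.

```python
# RE recast as summarization: verbalize each candidate relation into a
# template "summary" and pick the relation whose summary a (stub)
# summarization scorer rates highest for the input sentence.

TEMPLATES = {
    "born_in": "{head} was born in {tail}.",
    "founded_by": "{head} was founded by {tail}.",
    "no_relation": "{head} and {tail} are not related.",
}

def score_summary(source, summary):
    # Stand-in for a seq2seq model's likelihood of `summary` given `source`.
    return len(set(source.lower().split()) & set(summary.lower().split()))

def predict_relation(sentence, head, tail):
    verbalized = {r: t.format(head=head, tail=tail) for r, t in TEMPLATES.items()}
    return max(verbalized, key=lambda r: score_summary(sentence, verbalized[r]))

print(predict_relation("Barack Obama was born in Honolulu.",
                       "Barack Obama", "Honolulu"))  # born_in
```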
arXiv Detail & Related papers (2022-05-19T20:25:29Z)
- Should We Rely on Entity Mentions for Relation Extraction? Debiasing Relation Extraction with Counterfactual Analysis [60.83756368501083]
We propose the CORE (Counterfactual Analysis based Relation Extraction) debiasing method for sentence-level relation extraction.
Our CORE method is model-agnostic to debias existing RE systems during inference without changing their training processes.
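A minimal sketch of that inference-time debiasing follows; the entity-only counterfactual input, the bias scale alpha, and the three-relation stub model are assumptions for the example. The logits obtained from entity mentions alone are subtracted from the full-sentence logits before the final prediction.

```python
# Counterfactual debiasing at inference: subtract the logits produced
# by a context-masked, entities-only input from the full-sentence
# logits, discounting what the model predicts from names alone.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def debiased_distribution(model, sentence, entities_only, alpha=1.0):
    full = model(sentence)        # logits from the original input
    bias = model(entities_only)   # logits from the counterfactual input
    return softmax([f - alpha * b for f, b in zip(full, bias)])

# Stub model over three relations [born_in, founded_by, no_relation].
model = lambda text: [float(text.count("born")), float(text.count("founded")), 1.0]
print(debiased_distribution(model,
                            "Barack Obama was born in Honolulu.",
                            "Barack Obama [MASK] Honolulu."))
```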
arXiv Detail & Related papers (2022-05-08T05:13:54Z)
- Learning from Context or Names? An Empirical Study on Neural Relation Extraction [112.06614505580501]
We study the effect of two main information sources in text: textual context and entity mentions (names).
We propose an entity-masked contrastive pre-training framework for relation extraction (RE).
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
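A minimal sketch of the entity-masking step behind such pre-training follows; the mask token, masking probability, and example spans are assumed defaults rather than the paper's exact configuration. Randomly blanking entity mentions forces the encoder to use the surrounding context instead of memorized names.

```python
# Entity masking for contrastive RE pre-training: with probability p,
# replace each entity mention with a placeholder so representations
# must come from the textual context rather than the entity name.
import random

def mask_entities(tokens, entity_spans, mask_token="[BLANK]", p=0.7, seed=0):
    """entity_spans: (start, end) token-index pairs, end exclusive."""
    rng = random.Random(seed)
    out = list(tokens)
    for start, end in entity_spans:
        if rng.random() < p:
            out[start:end] = [mask_token] * (end - start)  # length-preserving
    return out

tokens = "Barack Obama was born in Honolulu".split()
print(mask_entities(tokens, [(0, 2), (5, 6)]))
```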
arXiv Detail & Related papers (2020-10-05T11:21:59Z)