Empirical Analysis for Unsupervised Universal Dependency Parse Tree Aggregation
- URL: http://arxiv.org/abs/2403.19183v2
- Date: Wed, 3 Apr 2024 05:53:38 GMT
- Title: Empirical Analysis for Unsupervised Universal Dependency Parse Tree Aggregation
- Authors: Adithya Kulkarni, Oliver Eulenstein, Qi Li
- Abstract summary: Dependency parsing is an essential task in NLP, and the quality of dependency parsers is crucial for many downstream tasks.
In various NLP tasks, aggregation methods are used as a post-processing step and have been shown to combat the issue of varying quality.
We compare different unsupervised post-processing aggregation methods to identify the most suitable dependency tree structure aggregation method.
- Score: 9.075353955444518
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dependency parsing is an essential task in NLP, and the quality of dependency parsers is crucial for many downstream tasks. Parsers' quality often varies depending on the domain and the language involved. Therefore, it is essential to combat the issue of varying quality to achieve stable performance. In various NLP tasks, aggregation methods are used as a post-processing step and have been shown to combat the issue of varying quality. However, such post-processing aggregation methods have not been sufficiently studied for dependency parsing tasks. In an extensive empirical study, we compare different unsupervised post-processing aggregation methods to identify the most suitable dependency tree structure aggregation method.
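The abstract does not specify the aggregation methods compared; as a minimal illustration of what post-processing aggregation over dependency trees means, the hypothetical sketch below combines the outputs of several parsers by a token-wise majority vote over predicted head indices (note this simple vote does not guarantee the result is a well-formed tree):

```python
from collections import Counter

def aggregate_heads(parses):
    """Aggregate dependency parses from several parsers by majority vote.

    Each parse is a list of head indices, one per token (0 = root).
    A token-wise vote is a minimal stand-in for post-processing
    aggregation; it does not guarantee a valid tree structure.
    """
    n_tokens = len(parses[0])
    aggregated = []
    for i in range(n_tokens):
        votes = Counter(parse[i] for parse in parses)
        aggregated.append(votes.most_common(1)[0][0])
    return aggregated

# Three parsers disagree on token 2's head; the vote resolves it.
parses = [
    [2, 0, 2],   # parser A
    [2, 0, 2],   # parser B
    [2, 0, 1],   # parser C
]
print(aggregate_heads(parses))  # → [2, 0, 2]
```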
Related papers
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
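The paper's exact algorithm is not given in this summary; as a hedged sketch of the Upper Confidence Bound idea it builds on, the snippet below implements plain UCB1 selection, treating each candidate weight vector as an arm (the arm indices and utilities here are illustrative, not from the paper):

```python
import math

def ucb1(counts, means, t, c=2.0):
    """Return the index of the arm (e.g. a candidate weight vector w) with
    the highest upper confidence bound: empirical mean plus an exploration
    bonus that shrinks as the arm is pulled more. Unpulled arms go first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    scores = [m + math.sqrt(c * math.log(t) / n)
              for n, m in zip(counts, means)]
    return max(range(len(scores)), key=scores.__getitem__)

# After one pull each, the exploration bonuses are equal, so the arm with
# the best empirical utility is chosen.
print(ucb1(counts=[1, 1, 1], means=[0.9, 0.1, 0.5], t=3))  # → 0
```

A rarely pulled arm can still win on its exploration bonus, which is what drives the search toward under-explored weight vectors.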
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Variable Importance in High-Dimensional Settings Requires Grouping [19.095605415846187]
Conditional Permutation Importance (CPI) bypasses the limitations of Permutation Importance (PI) in such cases.
Grouping variables, either statistically via clustering or based on prior knowledge, recovers some statistical power.
We show that the approach extended with stacking controls the type-I error even with highly-correlated groups.
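For background on the baseline this paper improves upon (not its CPI method), here is a minimal sketch of plain Permutation Importance: shuffle one feature column and measure the drop in a model's score; the `acc` model below is a hypothetical example, not from the paper:

```python
import random

def permutation_importance(score, X, y, j, n_repeats=10, seed=0):
    """Permutation Importance (PI): average drop in a model's score when
    feature column j is randomly shuffled, breaking its link to y."""
    rng = random.Random(seed)
    base = score(X, y)
    total = 0.0
    for _ in range(n_repeats):
        col = [row[j] for row in X]
        rng.shuffle(col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        total += base - score(X_perm, y)
    return total / n_repeats

# A "model" whose score depends only on feature 0: shuffling feature 1
# changes nothing, so feature 1's importance is exactly zero.
def acc(X, y):
    return sum(int(row[0] == t) for row, t in zip(X, y)) / len(y)

X = [[0, 1], [1, 0], [0, 0], [1, 1]]
y = [0, 1, 0, 1]
print(permutation_importance(acc, X, y, j=1))  # → 0.0
```

With highly correlated features this score is misleading (permuting one feature leaves its correlated twin intact), which is the limitation CPI and grouping address.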
arXiv Detail & Related papers (2023-12-18T00:21:47Z)
- Topic-driven Distant Supervision Framework for Macro-level Discourse Parsing [72.14449502499535]
The task of analyzing the internal rhetorical structure of texts is a challenging problem in natural language processing.
Despite the recent advances in neural models, the lack of large-scale, high-quality corpora for training remains a major obstacle.
Recent studies have attempted to overcome this limitation by using distant supervision.
arXiv Detail & Related papers (2023-05-23T07:13:51Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Solving Continuous Control via Q-learning [54.05120662838286]
We show that a simple modification of deep Q-learning largely alleviates issues with actor-critic methods.
By combining bang-bang action discretization with value decomposition, framing single-agent control as cooperative multi-agent reinforcement learning (MARL), this simple critic-only approach matches the performance of state-of-the-art continuous actor-critic methods.
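To make the decomposition idea concrete, here is an illustrative sketch (not the paper's implementation) of why bang-bang discretization plus value decomposition keeps the greedy action cheap: with one small Q-table per action dimension, each dimension's {-1, +1} choice is maximised independently:

```python
def greedy_bang_bang(q_tables, state):
    """Decomposed critic: one Q-table per action dimension, each over the
    two bang-bang actions {-1, +1}. When Q(s, a) is modelled as a sum of
    per-dimension terms, the joint greedy action maximises each dimension
    independently, so the argmax is linear (not exponential) in the
    number of action dimensions."""
    action = []
    for q in q_tables:              # q: {state: (q_minus, q_plus)}
        q_minus, q_plus = q[state]
        action.append(1 if q_plus >= q_minus else -1)
    return action

# Two action dimensions, one state: dim 0 prefers +1, dim 1 prefers -1.
q_tables = [{0: (0.2, 0.8)}, {0: (0.9, 0.1)}]
print(greedy_bang_bang(q_tables, 0))  # → [1, -1]
```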
arXiv Detail & Related papers (2022-10-22T22:55:50Z)
- Probing for Labeled Dependency Trees [25.723591566201343]
DepProbe is a linear probe which can extract labeled and directed dependency parse trees from embeddings.
Across 13 languages, our proposed method identifies the best source treebank of the time.
arXiv Detail & Related papers (2022-03-24T10:21:07Z)
- The BP Dependency Function: a Generic Measure of Dependence between Random Variables [0.0]
Measuring and quantifying dependencies between random variables (RV's) can give critical insights into a data-set.
In common data-analysis practice, the Pearson correlation coefficient (PCC) is the default tool for quantifying dependence between RV's.
We propose a new dependency function that meets the requirements a general-purpose dependency measure should satisfy.
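The BP function itself is not specified in this summary; to illustrate the limitation of the PCC that motivates it, the sketch below computes the PCC from its textbook definition and shows it reporting zero for a deterministic but nonlinear relationship:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    sample covariance divided by the product of sample standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [-2, -1, 0, 1, 2]
print(pearson(xs, xs))                    # ≈ 1.0: perfect linear dependence
print(pearson(xs, [x * x for x in xs]))   # → 0.0: deterministic, yet "uncorrelated"
```

The second case (y = x² on a symmetric sample) is exactly the kind of dependence a PCC-only analysis misses.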
arXiv Detail & Related papers (2022-03-23T11:14:40Z)
- Compositional Generalization in Dependency Parsing [15.953482168182003]
Dependency parsing, however, lacks a compositional generalization benchmark.
We find that increasing compound divergence degrades dependency parsing performance, although not as dramatically as semantic parsing performance.
We identify a number of syntactic structures that drive dependency parsers' lower performance on the most challenging splits.
arXiv Detail & Related papers (2021-10-13T16:32:24Z)
- Diversity-Aware Batch Active Learning for Dependency Parsing [12.579809393060858]
We show that selecting diverse batches with DPPs is superior to strong selection strategies that do not enforce batch diversity.
Our diversity-aware strategy is robust under a corpus duplication setting, where diversity-agnostic sampling strategies exhibit significant degradation.
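The paper uses determinantal point processes (DPPs) for batch selection; as a much simpler stand-in that conveys the same intuition of enforcing batch diversity, the hypothetical sketch below greedily picks each next point to maximise its minimum distance to the points already selected:

```python
def diverse_batch(points, k, dist):
    """Greedily select k points, each maximising its minimum distance to
    the points already chosen -- a farthest-point heuristic standing in
    for DPP-based diverse batch sampling (not the paper's method)."""
    selected = [points[0]]
    while len(selected) < k:
        best = max((p for p in points if p not in selected),
                   key=lambda p: min(dist(p, s) for s in selected))
        selected.append(best)
    return selected

# Squared Euclidean distance; duplicated/near-duplicate points are avoided.
pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
d = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
print(diverse_batch(pts, 3, d))  # → [(0, 0), (10, 0), (5, 6)]
```

Note how the near-duplicate (0, 1) of the first point is never chosen, which mirrors the robustness to corpus duplication described above.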
arXiv Detail & Related papers (2021-04-28T18:00:05Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
- A Survey of Unsupervised Dependency Parsing [62.16714720135358]
Unsupervised dependency parsing aims to learn a dependency parser from sentences that have no annotation of their correct parse trees.
Despite its difficulty, unsupervised parsing is an interesting research direction because of its capability of utilizing almost unlimited unannotated text data.
arXiv Detail & Related papers (2020-10-04T10:51:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.