Instance-Based Neural Dependency Parsing
- URL: http://arxiv.org/abs/2109.13497v1
- Date: Tue, 28 Sep 2021 05:30:52 GMT
- Title: Instance-Based Neural Dependency Parsing
- Authors: Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki
Kuribayashi, Masashi Yoshikawa, Kentaro Inui
- Abstract summary: We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
- Score: 56.63500180843504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretable rationales for model predictions are crucial in practical
applications. We develop neural models that possess an interpretable inference
process for dependency parsing. Our models adopt instance-based inference,
where dependency edges are extracted and labeled by comparing them to edges in
a training set. The training edges are used explicitly for prediction, so the contribution of each edge to a prediction is easy to grasp. Our experiments show that our instance-based models achieve accuracy competitive with standard neural models and yield reasonably plausible instance-based explanations.
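As a rough illustration of this inference style, the sketch below labels a candidate dependency edge by its similarity to labeled training edges. It is a minimal sketch under stated assumptions, not the authors' exact model: the concatenation-based edge representation, the cosine similarity, and all names are illustrative choices.

```python
# A minimal sketch of instance-based edge labeling; the edge
# representation and similarity function are illustrative assumptions,
# not the paper's exact design.
import numpy as np

def edge_repr(head_vec: np.ndarray, dep_vec: np.ndarray) -> np.ndarray:
    """Represent a candidate edge by concatenating the head and
    dependent token vectors produced by some neural encoder."""
    return np.concatenate([head_vec, dep_vec])

def instance_based_label_scores(candidate: np.ndarray,
                                train_edges: np.ndarray,
                                train_labels: np.ndarray) -> dict:
    """Score each dependency label by summing cosine similarities
    between the candidate edge and training edges with that label."""
    sims = train_edges @ candidate
    sims /= np.linalg.norm(train_edges, axis=1) * np.linalg.norm(candidate) + 1e-12
    return {label: float(sims[train_labels == label].sum())
            for label in np.unique(train_labels)}
```

The predicted label is the argmax over these scores, and the most similar training edges themselves serve as the rationale for the prediction, which is what makes the inference process interpretable.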
Related papers
- Generative vs. Discriminative modeling under the lens of uncertainty quantification [0.929965561686354]
In this paper, we undertake a comparative analysis of generative and discriminative approaches.
We compare the ability of both approaches to leverage information from various sources in uncertainty-aware inference.
We propose a general sampling scheme enabling supervised learning for both approaches, as well as semi-supervised learning when compatible with the considered modeling approach.
arXiv Detail & Related papers (2024-06-13T14:32:43Z)
- Training Survival Models using Scoring Rules [9.330089124239086]
Survival Analysis provides critical insights for incomplete time-to-event data.
It is also an important example of probabilistic machine learning.
We establish different parametric and non-parametric sub-frameworks that allow different degrees of flexibility.
We show that our framework recovers various parametric models and that optimization performs on par with likelihood-based methods.
arXiv Detail & Related papers (2024-03-19T20:58:38Z)
- Deep Grey-Box Modeling With Adaptive Data-Driven Models Toward Trustworthy Estimation of Theory-Driven Models [88.63781315038824]
We present a framework that enables us to analyze a regularizer's behavior empirically with a slight change in the neural net's architecture and the training objective.
arXiv Detail & Related papers (2022-10-24T10:42:26Z)
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models with few examples exhibit strong prediction bias across labels.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that the models gain their performance improvement by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- On the Lack of Robust Interpretability of Neural Text Classifiers [14.685352584216757]
We assess the robustness of interpretations of neural text classifiers based on pretrained Transformer encoders.
Our tests show surprising deviations from expected behavior, raising questions about how much insight practitioners can draw from such interpretations.
arXiv Detail & Related papers (2021-06-08T18:31:02Z)
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the fine-tuned model more robust to adversarial inputs (a minimal sketch of this retrieval idea appears after this list).
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of existing score estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence (a simple baseline sketch of score estimation appears after this list).
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
- Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition [48.06319154279427]
We present a method of instance-based learning that learns similarities between spans.
Our method makes it possible to build models with high interpretability without sacrificing performance.
arXiv Detail & Related papers (2020-04-29T23:32:42Z)
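As noted in the k nearest neighbor entry above, the retrieval step it describes can be made concrete in a few lines. This is a hedged sketch under assumptions: the choice of Euclidean distance and all names are illustrative, not the paper's exact setup.

```python
# A minimal sketch of kNN-based example attribution: retrieve the
# training examples whose representations lie closest to a test
# example's. The distance metric is an illustrative assumption.
import numpy as np

def nearest_training_examples(test_repr: np.ndarray,
                              train_reprs: np.ndarray,
                              k: int = 5) -> np.ndarray:
    """Return indices of the k training examples whose representations
    are closest (by Euclidean distance) to the test representation."""
    dists = np.linalg.norm(train_reprs - test_repr, axis=1)
    return np.argsort(dists)[:k]
```

Inspecting the labels and surface features of the retrieved neighbors is one way to surface the learned spurious associations that the summary mentions.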
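For the nonparametric score estimation entry, the sketch below is deliberately a baseline rather than the paper's iterative-regularization estimators: it fits a Gaussian kernel density estimate to the samples and returns the gradient of its log-density as the score. The bandwidth h and all names are illustrative assumptions.

```python
# A baseline score estimator: the gradient of the log-density of a
# Gaussian KDE fitted to the samples. NOT the paper's method; it only
# makes the task of score estimation concrete.
import numpy as np

def kde_score(x: np.ndarray, samples: np.ndarray, h: float = 0.5) -> np.ndarray:
    """Estimate the score grad_x log p(x) at a point x from samples of
    the unknown distribution p.

    For a Gaussian KDE, grad log p(x) = sum_i w_i (x_i - x) / h^2,
    where w_i are the kernel values at x, normalized to sum to one."""
    diffs = x - samples                                      # shape (n, d)
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * h ** 2))   # kernel values
    grad = -(w[:, None] * diffs).sum(axis=0) / h ** 2
    return grad / (w.sum() + 1e-12)
```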
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.