Exploring Probabilistic Soft Logic as a framework for integrating
top-down and bottom-up processing of language in a task context
- URL: http://arxiv.org/abs/2004.07000v1
- Date: Wed, 15 Apr 2020 11:00:07 GMT
- Title: Exploring Probabilistic Soft Logic as a framework for integrating
top-down and bottom-up processing of language in a task context
- Authors: Johannes Dellert
- Abstract summary: The architecture integrates existing NLP components to produce candidate analyses on eight levels of linguistic modeling.
The architecture builds on Universal Dependencies (UD) as its representation formalism on the form level and on Abstract Meaning Representations (AMRs) to represent semantic analyses of learner answers.
- Score: 0.6091702876917279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This technical report describes a new prototype architecture designed to
integrate top-down and bottom-up analysis of non-standard linguistic input,
where a semantic model of the context of an utterance is used to guide the
analysis of the non-standard surface forms, including their automated
normalization in context. While the architecture is generally applicable, as a
concrete use case we target the generation of semantically informed target
hypotheses for answers written by German learners
in response to reading comprehension questions, where the reading context and
possible target answers are given.
The architecture integrates existing NLP components to produce candidate
analyses on eight levels of linguistic modeling, all of which are broken down
into atomic statements and connected into a large graphical model using
Probabilistic Soft Logic (PSL) as a framework. Maximum a posteriori inference
on the resulting graphical model then assigns a belief distribution to
candidate target hypotheses. The current version of the architecture builds on
Universal Dependencies (UD) as its representation formalism on the form level
and on Abstract Meaning Representations (AMRs) to represent semantic analyses
of learner answers and the context information provided by the target answers.
These general choices will make it comparatively straightforward to apply the
architecture to other tasks and other languages.
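To make the inference step concrete, below is a minimal sketch of PSL-style MAP inference over a hinge-loss Markov random field, the probabilistic model underlying PSL. The predicates (UdMatch, AmrMatch, Hypothesis), rule weights, and evidence values are hypothetical illustrations chosen for this sketch; they are not the rule set of the prototype described above.

```python
# Minimal sketch of PSL-style MAP inference as a hinge-loss Markov
# random field. Predicates, weights, and evidence are hypothetical
# illustrations, not the rule set of the report's prototype.
import numpy as np
from scipy.optimize import minimize

# Observed soft truth values produced by upstream NLP components:
UD_MATCH = 0.9   # form-level (UD) agreement with target hypothesis 1
AMR_MATCH = 0.7  # semantic (AMR) agreement with target hypothesis 2

def hinge(body, head):
    """Lukasiewicz distance to satisfaction of the rule body -> head."""
    return max(0.0, sum(body) - (len(body) - 1) - head)

def objective(y):
    h1, h2 = y  # soft truth values of the two candidate hypotheses
    return (
        # 2.0: UdMatch(answer, t1) -> Hypothesis(answer, t1) ^2
        2.0 * hinge([UD_MATCH], h1) ** 2
        # 1.5: AmrMatch(answer, t2) -> Hypothesis(answer, t2) ^2
        + 1.5 * hinge([AMR_MATCH], h2) ** 2
        # 1.0: competing hypotheses, Hypothesis(answer, t1) -> !Hypothesis(answer, t2) ^2
        + 1.0 * hinge([h1], 1.0 - h2) ** 2
    )

# MAP inference: minimize the weighted hinge losses over [0, 1] values.
res = minimize(objective, x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print({"Hypothesis1": round(res.x[0], 3), "Hypothesis2": round(res.x[1], 3)})
```

PSL relaxes each logical rule into a Lukasiewicz hinge loss, so MAP inference reduces to convex optimization over soft truth values in [0, 1]; the squared hinges used here correspond to PSL's squared potentials, which trade off competing rules more smoothly than linear ones.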
Related papers
- Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries [54.325172923155414]
We introduce Michelangelo: a minimal, synthetic, and unleaked long-context reasoning evaluation for large language models.
This evaluation is derived via a novel, unifying framework for evaluations over arbitrarily long contexts.
arXiv Detail & Related papers (2024-09-19T10:38:01Z)
- An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
arXiv Detail & Related papers (2023-12-08T09:32:26Z)
- Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling [70.23876429382969]
We propose a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks.
Disco-Bench consists of 9 document-level test sets in the literature domain, which contain rich discourse phenomena.
For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge.
arXiv Detail & Related papers (2023-07-16T15:18:25Z)
- A Machine Learning Approach to Classifying Construction Cost Documents into the International Construction Measurement Standard [0.0]
We introduce the first automated models for classifying natural language descriptions provided in cost documents called "Bills of Quantities".
We learn from a dataset of more than 50 thousand descriptions of items retrieved from 24 large infrastructure construction projects across the United Kingdom.
arXiv Detail & Related papers (2022-10-24T11:35:53Z)
- Decoupled Context Processing for Context Augmented Language Modeling [33.89636308731306]
Language models can be augmented with a context retriever to incorporate knowledge from large external databases.
By leveraging retrieved context, the neural network does not have to memorize the massive amount of world knowledge within its internal parameters, leading to better efficiency, interpretability and modularity.
arXiv Detail & Related papers (2022-10-11T20:05:09Z)
- Compositional Generalization in Grounded Language Learning via Induced Model Sparsity [81.38804205212425]
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
arXiv Detail & Related papers (2022-07-06T08:46:27Z)
- A Knowledge-Enhanced Adversarial Model for Cross-lingual Structured Sentiment Analysis [31.05169054736711]
The cross-lingual structured sentiment analysis task aims to transfer knowledge from a source language to a target language.
We propose a Knowledge-Enhanced Adversarial Model (KEAM) with both implicit distributed and explicit structural knowledge.
We conduct experiments on five datasets and compare KEAM with both supervised and unsupervised methods.
arXiv Detail & Related papers (2022-05-31T03:07:51Z)
- CUGE: A Chinese Language Understanding and Generation Evaluation Benchmark [144.05723617401674]
General-purpose language intelligence evaluation has been a longstanding goal for natural language processing.
We argue that for general-purpose language intelligence evaluation, the benchmark itself needs to be comprehensive and systematic.
We propose CUGE, a Chinese Language Understanding and Generation Evaluation benchmark.
arXiv Detail & Related papers (2021-12-27T11:08:58Z)
- Architectures of Meaning, A Systematic Corpus Analysis of NLP Systems [0.0]
The framework is validated on the full corpus of SemEval tasks.
It provides a systematic mechanism to interpret a largely dynamic and exponentially growing field.
arXiv Detail & Related papers (2021-07-16T21:10:43Z)
- Learning Universal Representations from Word to Sentence [89.82415322763475]
This work introduces and explores universal representation learning, i.e., embedding linguistic units of different levels in a uniform vector space.
We present our approach of constructing analogy datasets in terms of words, phrases and sentences.
We empirically verify that well pre-trained Transformer models, combined with appropriate training settings, can effectively yield universal representations.
arXiv Detail & Related papers (2020-09-10T03:53:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.