Argumentative Topology: Finding Loop(holes) in Logic
- URL: http://arxiv.org/abs/2011.08952v1
- Date: Tue, 17 Nov 2020 21:23:58 GMT
- Title: Argumentative Topology: Finding Loop(holes) in Logic
- Authors: Sarah Tymochko, Zachary New, Lucius Bynum, Emilie Purvine, Timothy
Doster, Julien Chaput, Tegan Emerson
- Abstract summary: Topological Word Embeddings is a framework that leverages mathematical techniques from dynamical system analysis and data-driven shape extraction (topological data analysis).
We show that using a topological delay embedding we are able to capture and extract a different, shape-based notion of logic.
- Score: 3.977669302067367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in natural language processing have resulted in increased
capabilities with respect to multiple tasks. One of the possible causes of the
observed performance gains is the introduction of increasingly sophisticated
text representations. While many of the new word embedding techniques can be
shown to capture particular notions of sentiment or associative structures, we
explore the ability of two different word embeddings to uncover or capture the
notion of logical shape in text. To this end we present a novel framework that
we call Topological Word Embeddings which leverages mathematical techniques in
dynamical system analysis and data driven shape extraction (i.e. topological
data analysis). In this preliminary work we show that using a topological delay
embedding we are able to capture and extract a different, shape-based notion of
logic aimed at answering the question "Can we find a circle in a circular
argument?"
Related papers
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
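To make the fuzzy-logic relaxation concrete, here is a generic sketch (not LOGICSEG's actual formulation): an implication rule such as "car implies vehicle" is grounded on a network's per-pixel class probabilities with a product t-norm, giving a differentiable penalty that could be added to the usual segmentation loss. The class indices, tensor shapes, and the choice of t-norm are assumptions.
```python
import torch

def implication_penalty(probs, antecedent_idx, consequent_idx):
    """Fuzzy violation of 'antecedent -> consequent' averaged over pixels.

    probs: (batch, num_classes, H, W) softmax outputs.
    With the product t-norm, the rule is violated to degree a * (1 - b).
    """
    a = probs[:, antecedent_idx]   # e.g. P(car)
    b = probs[:, consequent_idx]   # e.g. P(vehicle)
    return (a * (1.0 - b)).mean()

# Toy usage: random logits stand in for a segmentation network's output.
logits = torch.randn(2, 5, 8, 8)
probs = logits.softmax(dim=1)
logic_loss = implication_penalty(probs, antecedent_idx=0, consequent_idx=1)
print(float(logic_loss))  # would be added to the standard segmentation loss
```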
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general-purpose knowledge graphs (KGs) with millions of entities and thousands of relation types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z)
- From axioms over graphs to vectors, and back again: evaluating the properties of graph-based ontology embeddings [78.217418197549]
One approach to generating embeddings is by introducing a set of nodes and edges for named entities and logical axioms structure.
Methods that embed in graphs (graph projections) have different properties related to the type of axioms they utilize.
arXiv Detail & Related papers (2023-03-29T08:21:49Z)
- Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence).
We propose logic structural-constraint modeling to solve logical reasoning QA and introduce discourse-aware graph networks (DAGNs).
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features.
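A much-simplified sketch of that pipeline (not the DAGN architecture itself): split a passage into units at discourse connectives, let each connective label an edge between adjacent units, and refine node features with a crude neighbor-averaging step standing in for the edge-reasoning mechanism. The connective list and the update rule are assumptions.
```python
import re
import numpy as np

CONNECTIVES = r"\b(because|therefore|however|thus|although|so)\b"

def build_logic_graph(passage):
    """Split a passage at discourse connectives; connectives label the edges."""
    parts = [p.strip(" ,") for p in re.split(CONNECTIVES, passage) if p.strip(" ,")]
    units, edges = [], []
    for part in parts:
        if re.fullmatch(CONNECTIVES, part):
            if units:  # needs a left-hand unit to attach the edge
                edges.append((len(units) - 1, len(units), part))
        else:
            units.append(part)
    return units, edges

def propagate(features, edges, steps=2):
    """Crude stand-in for edge reasoning: average connected node features."""
    feats = features.copy()
    for _ in range(steps):
        new = feats.copy()
        for i, j, _label in edges:
            mixed = (feats[i] + feats[j]) / 2.0
            new[i], new[j] = mixed, mixed
        feats = new
    return feats

units, edges = build_logic_graph(
    "The argument is circular because the conclusion is assumed, "
    "therefore it proves nothing."
)
feats = np.random.default_rng(0).normal(size=(len(units), 16))
print(units, edges, propagate(feats, edges).shape)
```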
arXiv Detail & Related papers (2022-07-04T14:38:49Z)
- Topological Data Analysis for Word Sense Disambiguation [0.0]
We develop and test a novel unsupervised algorithm for word sense induction and disambiguation.
Our approach relies on mathematical concepts from topology, which provide a richer conceptualization of clusters for word sense induction tasks.
This shows the promise of topological algorithms for natural language processing, and we advocate for further work in this area.
arXiv Detail & Related papers (2022-03-01T15:41:54Z)
- Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves state-of-the-art performance, and both the logic-driven context extension framework and the data augmentation algorithm help improve accuracy.
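One plausible, heavily simplified reading of that idea (a sketch, not the paper's actual pipeline): detect simple "if X, then Y" statements in the context and append their contrapositives before answering. The surface pattern and the placeholder negation are assumptions.
```python
import re

IF_THEN = re.compile(r"if (?P<ante>[^,]+), (?:then )?(?P<cons>[^.]+)\.", re.IGNORECASE)

def negate(clause):
    # Crude placeholder; a real system needs proper negation semantics.
    return f"it is not the case that {clause.strip()}"

def extend_context(context):
    """Append the contrapositive of every 'if X, (then) Y.' statement found."""
    extensions = [
        f"If {negate(m.group('cons'))}, then {negate(m.group('ante'))}."
        for m in IF_THEN.finditer(context)
    ]
    return context + " " + " ".join(extensions) if extensions else context

print(extend_context("If the premise holds, the conclusion follows."))
```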
arXiv Detail & Related papers (2021-05-08T10:09:36Z)
- A Note on Argumentative Topology: Circularity and Syllogisms as Unsolved Problems [0.0]
We show that the problem of connecting logic, topology and text is still very much unsolved.
We conclude that there is no clear answer to the question: "Can we find a circle in a circular argument?"
arXiv Detail & Related papers (2021-02-07T18:30:37Z)
- Topological Data Analysis in Text Classification: Extracting Features with Additive Information [2.1410799064827226]
Topological Data Analysis is challenging to apply to high dimensional numeric data.
Topological features carry some exclusive information not captured by conventional text mining methods.
Adding topological features to the conventional features in ensemble models improves the classification results.
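A minimal sketch of that combination step, with assumed details (the classifier choice and the synthetic stand-in topological features are placeholders): conventional TF-IDF features and per-document topological features are concatenated and fed to an ensemble classifier.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "a circular argument assumes its conclusion",
    "a sound argument supports its conclusion with evidence",
    "the premises entail the conclusion",
    "the conclusion is assumed rather than shown",
]
labels = [1, 0, 0, 1]  # toy labels for illustration only

tfidf = TfidfVectorizer().fit_transform(docs).toarray()

# Stand-in topological features (e.g. persistence summaries), one row per document.
topo = np.random.default_rng(0).normal(size=(len(docs), 4))

features = np.hstack([tfidf, topo])          # conventional + topological
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
print(clf.predict(features[:1]))
```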
arXiv Detail & Related papers (2020-03-29T21:02:09Z)
- A Novel Method of Extracting Topological Features from Word Embeddings [2.4063592468412267]
We introduce a novel algorithm to extract topological features from word embedding representation of text.
We show that our topological features may outperform conventional text mining features.
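A hedged sketch of one way such features could be computed (not necessarily the authors' algorithm): treat a document's word vectors as a point cloud, compute persistence diagrams with the `ripser` package (assumed installed), and summarize each diagram with a few simple statistics; the statistics chosen here are assumptions.
```python
import numpy as np
from ripser import ripser  # persistent homology backend (assumed installed)

def topological_features(word_vectors, maxdim=1):
    """Return [count, total persistence, max persistence] per homology dimension."""
    dgms = ripser(np.asarray(word_vectors, dtype=float), maxdim=maxdim)["dgms"]
    feats = []
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]          # drop the infinite H0 bar
        lifetimes = finite[:, 1] - finite[:, 0]
        feats.extend([
            float(len(finite)),
            float(lifetimes.sum()) if len(finite) else 0.0,
            float(lifetimes.max()) if len(finite) else 0.0,
        ])
    return np.array(feats)

# Toy usage: random vectors stand in for a document's word embeddings.
doc = np.random.default_rng(1).normal(size=(20, 50))
print(topological_features(doc))  # six numbers: three per homology dimension
```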
arXiv Detail & Related papers (2020-03-29T16:55:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.