From Weighted Conditionals of Multilayer Perceptrons to a Gradual
Argumentation Semantics
- URL: http://arxiv.org/abs/2110.03643v1
- Date: Thu, 7 Oct 2021 17:33:10 GMT
- Title: From Weighted Conditionals of Multilayer Perceptrons to a Gradual
Argumentation Semantics
- Authors: Laura Giordano
- Abstract summary: A fuzzy multipreference semantics has been recently proposed for weighted conditional knowledge bases and used to develop a logical semantics for Multilayer Perceptrons.
This semantics suggests some gradual argumentation semantics, which are related to the family of gradual semantics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fuzzy multipreference semantics has been recently proposed for weighted
conditional knowledge bases, and used to develop a logical semantics for
Multilayer Perceptrons, by regarding a deep neural network (after training) as
a weighted conditional knowledge base. This semantics, in its different
variants, suggests some gradual argumentation semantics, which are related to
the family of gradual semantics. The relationships between weighted
conditional knowledge bases and MLPs extend to the proposed gradual semantics,
which captures the stationary states of MLPs, so that a deep neural network can
also be seen as a weighted argumentation graph.
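As a rough illustration of this correspondence (a sketch, not the paper's formal fuzzy multipreference construction), the snippet below reads a trained MLP as a weighted argumentation graph: each unit is an argument, positive incoming weights act as supports and negative ones as attacks, and a sigmoid-based aggregation is iterated until the strengths stop changing. The function names, the update rule, and the toy network are illustrative assumptions; the fixed point of the iteration plays the role of the stationary states mentioned in the abstract.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gradual_strengths(weights, biases, n_iter=200, tol=1e-9):
    """Iterate a sigmoid-based gradual semantics on the weighted
    argumentation graph read off a trained MLP (illustrative sketch only).

    weights[i][j, k]: weight from unit k of layer i to unit j of layer i+1
    biases[i][j]:     bias of unit j of layer i+1
    Returns one strength value per non-input unit; a fixed point of the
    update corresponds to a stationary state of the network.
    """
    # Initial strengths: input "arguments" fixed at 0.5, the others at
    # the activation of their bias alone.
    strengths = [sigmoid(b) for b in biases]
    inputs = np.full(weights[0].shape[1], 0.5)
    for _ in range(n_iter):
        prev = [s.copy() for s in strengths]
        below = inputs
        for i, (W, b) in enumerate(zip(weights, biases)):
            # Positive weights behave as supports, negative ones as attacks;
            # the sigmoid aggregates them into a strength in (0, 1).
            strengths[i] = sigmoid(W @ below + b)
            below = strengths[i]
        if all(np.max(np.abs(s - p)) < tol for s, p in zip(strengths, prev)):
            break  # reached a fixed point (stationary state)
    return strengths

# Toy example: a 2-3-1 MLP viewed as a weighted argumentation graph.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
bs = [rng.normal(size=3), rng.normal(size=1)]
print(gradual_strengths(Ws, bs))
```

For a feedforward network the iteration settles after one sweep per layer, so the fixed point coincides with the usual forward pass; the gradual-semantics reading becomes more interesting when the underlying graph contains cycles.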
Related papers
- Reasoning with the Theory of Mind for Pragmatic Semantic Communication [62.87895431431273]
A pragmatic semantic communication framework is proposed in this paper.
It enables effective goal-oriented information sharing between two intelligent agents.
Numerical evaluations demonstrate the framework's ability to achieve efficient communication with a reduced amount of bits.
arXiv Detail & Related papers (2023-11-30T03:36:19Z) - A preferential interpretation of MultiLayer Perceptrons in a conditional
logic with typicality [2.3103579794296736]
Weighted knowledge bases for a simple description logic with typicality are considered under a (many-valued) "concept-wise" multipreference semantics.
The semantics is used to provide a preferential interpretation of MultiLayer Perceptrons (MLPs).
A model checking and an entailment based approach are exploited in the verification of conditional properties of MLPs.
arXiv Detail & Related papers (2023-04-29T17:15:36Z) - Dual Box Embeddings for the Description Logic EL++ [16.70961576041243]
Like Knowledge Graphs (KGs), Description Logic (DL) ontologies are often incomplete, and maintaining and constructing them has proved challenging.
As with KGs, a promising approach is to learn embeddings in a latent vector space, while additionally ensuring they adhere to the semantics of the underlying DL.
We propose a novel ontology embedding method named Box$^2$EL for the DL EL++, which represents both concepts and roles as boxes.
arXiv Detail & Related papers (2023-01-26T14:13:37Z) - Disentangling Learnable and Memorizable Data via Contrastive Learning
for Semantic Communications [81.10703519117465]
A novel machine reasoning framework is proposed to disentangle source data so as to make it semantic-ready.
In particular, a novel contrastive learning framework is proposed, whereby instance and cluster discrimination are performed on the data.
Deep semantic clusters of highest confidence are considered learnable, semantic-rich data.
Our simulation results showcase the superiority of our contrastive learning approach in terms of semantic impact and minimalism.
arXiv Detail & Related papers (2022-12-18T12:00:12Z) - Imitation Learning-based Implicit Semantic-aware Communication Networks:
Multi-layer Representation and Collaborative Reasoning [68.63380306259742]
Despite their promising potential, semantic communications and semantic-aware networking are still in their infancy.
We propose a novel reasoning-based implicit semantic-aware communication network architecture that allows multiple tiers of CDC and edge servers to collaborate.
We introduce a new multi-layer representation of semantic information taking into consideration both the hierarchical structure of implicit semantics as well as the personalized inference preference of individual users.
arXiv Detail & Related papers (2022-10-28T13:26:08Z) - Fuzzy Labeling Semantics for Quantitative Argumentation [0.0]
We provide a novel quantitative method called fuzzy labeling for fuzzy argumentation systems.
A triple of acceptability, rejectability, and undecidability degrees is used to evaluate argument strength.
arXiv Detail & Related papers (2022-07-15T08:31:36Z) - An ASP approach for reasoning on neural networks under a finitely
many-valued semantics for weighted conditional knowledge bases [0.0]
We consider conditional ALC knowledge bases with typicality in the finitely many-valued case.
We exploit ASP and "asprin" for reasoning with the concept-wise multipreferences.
arXiv Detail & Related papers (2022-02-02T16:30:28Z) - Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlapping frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions on their positions.
Experiments on Semantic Textual Similarity show NDD to be more sensitive to various semantic differences, especially on highly overlapped paired texts.
arXiv Detail & Related papers (2021-10-04T03:59:15Z) - Weighted defeasible knowledge bases and a multipreference semantics for
a deep neural network model [0.0]
We investigate the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and a deep neural network model.
Weighted knowledge bases for description logics are considered under a "concept-wise" multipreference semantics.
arXiv Detail & Related papers (2020-12-24T19:04:51Z) - A Chain Graph Interpretation of Real-World Neural Networks [58.78692706974121]
We propose an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure.
The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models.
We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques.
arXiv Detail & Related papers (2020-06-30T14:46:08Z) - Mechanisms for Handling Nested Dependencies in Neural-Network Language
Models and Humans [75.15855405318855]
We studied whether a modern artificial neural network trained with "deep learning" methods mimics a central aspect of human sentence processing.
Although the network was solely trained to predict the next word in a large corpus, analysis showed the emergence of specialized units that successfully handled local and long-distance syntactic agreement.
We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns.
arXiv Detail & Related papers (2020-06-19T12:00:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.