Abstract Dialectical Frameworks are Boolean Networks (full version)
- URL: http://arxiv.org/abs/2407.02055v1
- Date: Tue, 2 Jul 2024 08:37:05 GMT
- Title: Abstract Dialectical Frameworks are Boolean Networks (full version)
- Authors: Jesse Heyninck, Matthias Knorr, João Leite
- Abstract summary: We study the relation between these two formalisms, revealing their commonalities as well as their differences, and introduce a correspondence that allows us to establish novel results for the individual formalisms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dialectical frameworks are a unifying model of formal argumentation, where argumentative relations between arguments are represented by assigning acceptance conditions to atomic arguments. Their generality allows them to cover a number of different approaches with varying forms of representing the argumentation structure. Boolean regulatory networks are used to model the dynamics of complex biological processes, taking into account the interactions of biological compounds such as proteins or genes. These models have proven highly useful for understanding such biological processes, making it possible to reproduce known behaviour and to test new hypotheses and predictions in silico, for example in the context of new medical treatments. While the two approaches stem from entirely different communities, there are striking similarities in their appearance. In this paper, we study the relation between these two formalisms, revealing their commonalities as well as their differences, and introduce a correspondence that allows us to establish novel results for the individual formalisms.
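To make the parallel concrete, here is a minimal sketch of the idea (the three-argument example and its acceptance conditions are invented for illustration, not taken from the paper): an ADF assigns each atomic argument a Boolean acceptance condition over the other arguments, and reading those conditions as node update functions yields a Boolean network whose fixed points can be enumerated directly.

```python
from itertools import product

# Hypothetical three-argument ADF, invented for illustration:
# each argument carries a Boolean acceptance condition over the others.
acceptance = {
    "a": lambda s: True,                   # a is unconditionally accepted
    "b": lambda s: not s["a"] and s["c"],  # b: accepted iff a rejected and c accepted
    "c": lambda s: s["b"] or not s["a"],   # c: accepted iff b accepted or a rejected
}

def step(state):
    """One synchronous update: the acceptance conditions act as the
    update functions of a Boolean network over the same nodes."""
    return {arg: bool(cond(state)) for arg, cond in acceptance.items()}

def fixed_points():
    """Enumerate all 2^n states and keep those the update leaves unchanged."""
    args = list(acceptance)
    for bits in product([False, True], repeat=len(args)):
        state = dict(zip(args, bits))
        if step(state) == state:
            yield state

for fp in fixed_points():
    print({arg: "accepted" if val else "rejected" for arg, val in fp.items()})
```

Under this reading, arguments play the role of network nodes and acceptance conditions the role of Boolean update functions, so fixed points of the synchronous dynamics coincide with two-valued interpretations satisfying all acceptance conditions; this is the flavour of correspondence the paper develops formally.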
Related papers
- Causal Representation Learning from Multimodal Biological Observations [57.00712157758845]
We aim to develop flexible identification conditions for multimodal data.
We establish identifiability guarantees for each latent component, extending the subspace identification results from prior work.
Our key theoretical ingredient is the structural sparsity of the causal connections among distinct modalities.
arXiv Detail & Related papers (2024-11-10T16:40:27Z)
- Toward Understanding In-context vs. In-weight Learning [50.24035812301655]
We identify simplified distributional properties that give rise to the emergence and disappearance of in-context learning.
We then extend the study to a full large language model, showing how fine-tuning on various collections of natural language prompts can elicit similar in-context and in-weight learning behaviour.
arXiv Detail & Related papers (2024-10-30T14:09:00Z)
- A generative framework to bridge data-driven models and scientific theories in language neuroscience [84.76462599023802]
We present generative explanation-mediated validation, a framework for generating concise explanations of language selectivity in the brain.
We show that explanatory accuracy is closely related to the predictive power and stability of the underlying statistical models.
arXiv Detail & Related papers (2024-10-01T15:57:48Z)
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate such limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Agentività e telicità in GilBERTo: implicazioni cognitive (Agentivity and telicity in GilBERTo: cognitive implications) [77.71680953280436]
The goal of this study is to investigate whether a Transformer-based neural language model infers lexical semantics.
The semantic properties considered are telicity (also combined with definiteness) and agentivity.
arXiv Detail & Related papers (2023-07-06T10:52:22Z)
- Analyzing Diffusion as Serial Reproduction [12.389541192789167]
Diffusion models learn to synthesize samples by inverting a diffusion process that gradually maps data into noise.
Our work highlights how classic paradigms in cognitive science can shed light on state-of-the-art machine learning problems.
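As a toy illustration of that forward process (the linear noise schedule and dimensions below are invented, not taken from the paper), data can be pushed towards Gaussian noise in closed form:

```python
import numpy as np

# Toy sketch of a forward diffusion: data is gradually mapped into noise.
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 2e-2, T)   # illustrative linear schedule
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal retention

def q_sample(x0, t):
    """Sample x_t ~ N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = rng.standard_normal(8)  # stand-in "data" vector
print(q_sample(x0, 10))      # early step: still close to the data
print(q_sample(x0, T - 1))   # late step: essentially pure noise
```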
arXiv Detail & Related papers (2022-09-29T14:35:28Z)
- A Neural Approach for Detecting Morphological Analogies [7.89271130004391]
Analogical proportions are statements of the form "A is to B as C is to D".
We propose a deep learning approach to detect morphological analogies.
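For contrast with the neural detector, a minimal symbolic baseline (invented here, not the paper's model) checks a proportion by deriving a single suffix rewrite from A : B and applying it to C:

```python
def suffix_rule(a: str, b: str):
    """Derive a suffix-rewrite rule turning a into b (longest common prefix kept)."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[i:], b[i:]

def holds(a: str, b: str, c: str, d: str) -> bool:
    """Check "a is to b as c is to d" under a single suffix rewrite."""
    old, new = suffix_rule(a, b)
    return c.endswith(old) and c[: len(c) - len(old)] + new == d

print(holds("walk", "walked", "talk", "talked"))  # True
print(holds("walk", "walked", "sing", "sang"))    # False: irregular morphology
```

Such rewrite baselines fail exactly on irregular morphology, which is one motivation for learning the detection neurally.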
arXiv Detail & Related papers (2021-08-09T11:21:55Z)
- Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models [40.83377935276978]
This study applies causal mediation analysis to pre-trained neural language models.
We investigate the magnitude of models' preferences for grammatical inflections.
We observe two distinct mechanisms for producing subject-verb agreement depending on the syntactic structure.
arXiv Detail & Related papers (2021-06-10T23:50:51Z)
- Modelling Compositionality and Structure Dependence in Natural Language [0.12183405753834563]
Drawing on linguistics and set theory, a formalisation of these ideas is presented in the first half of this thesis.
We see how cognitive systems that process language must operate under certain functional constraints.
Drawing on advances in word embedding techniques, we simulate a model of relational learning.
arXiv Detail & Related papers (2020-11-22T17:28:50Z)
- High-order Semantic Role Labeling [86.29371274587146]
This paper introduces a high-order graph structure for the neural semantic role labeling model.
It enables the model to explicitly consider not only isolated predicate-argument pairs but also the interactions between them.
Experimental results on 7 languages of the CoNLL-2009 benchmark show that high-order structural learning techniques benefit strong-performing SRL models.
arXiv Detail & Related papers (2020-10-09T15:33:54Z)
- On the Relationship Between Active Inference and Control as Inference [62.997667081978825]
Active Inference (AIF) is an emerging framework in the brain sciences which suggests that biological agents act to minimise a variational bound on model evidence.
Control-as-Inference (CAI) is a framework within reinforcement learning which casts decision making as a variational inference problem.
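The common mathematical core can be sketched as minimising a variational free energy that upper-bounds negative log model evidence (standard notation, not taken from either paper):

```latex
\[
  F(q) = \mathbb{E}_{q(s)}\big[\log q(s) - \log p(o, s)\big]
       = \mathrm{KL}\big[q(s)\,\|\,p(s \mid o)\big] - \log p(o)
       \;\ge\; -\log p(o),
\]
% where o are observations, s latent states, and q a variational posterior;
% AIF and CAI differ in how actions enter p and q.
```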
arXiv Detail & Related papers (2020-06-23T13:03:58Z)
- Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language? [41.649440404203595]
We introduce a method for evaluating whether neural models can learn systematicity of monotonicity inference in natural language.
We consider four aspects of monotonicity inferences and test whether the models can systematically interpret lexical and logical phenomena on different training/test splits.
arXiv Detail & Related papers (2020-04-30T14:48:39Z)