Information exchange, meaning and redundancy generation in anticipatory
systems: self-organization of expectations -- the case of Covid-19
- URL: http://arxiv.org/abs/2106.07432v1
- Date: Tue, 25 May 2021 05:01:38 GMT
- Title: Information exchange, meaning and redundancy generation in anticipatory
systems: self-organization of expectations -- the case of Covid-19
- Authors: Inga A. Ivanova
- Abstract summary: The paper focuses on the link between the dynamics of information and system evolution.
The paper demonstrates how the dynamics of information and meaning can be incorporated in the description of Covid-19 infectious propagation.
- Score: 1.4467794332678539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When studying the evolution of complex systems one refers to model
representations comprising various descriptive parameters. There is hardly any
research in which system evolution is described on the basis of information
flows in the system. The paper focuses on the link between the dynamics of
information and system evolution. Information exchanged between different
parts of the system is first provided with meaning by the system before being
processed. Meanings are generated from the perspective of hindsight, i.e.
against the arrow of time. The same information can be interpreted differently
by different parts of the system (i.e., provided with different meanings), so
that the number of options for possible system development proliferates. Some
options eventually turn into observable system states, so that the system's
evolutionary dynamics can be considered a result of information processing
within the system. This process is considered here in a model representation.
The model under study is the Triple Helix (TH) model, which was earlier used
to describe interactions between university, industry and government to foster
innovation. In the TH model the system comprises three interacting parts, each
of which processes information in a different way. The model is not limited to
the sphere of innovation and can be used in a broader perspective. Here TH is
conceptualized in the framework of the three-compartment model used to
describe infectious disease. The paper demonstrates how the dynamics of
information and meaning can be incorporated into the description of Covid-19
infectious propagation. The results show correspondence of model predictions
with observable infection dynamics.
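As an illustration of the compartmental framing mentioned above, below is a
minimal sketch of a standard three-compartment (SIR) model of infectious
propagation. The parameter values, initial fractions, and the plain SIR
formulation itself are illustrative assumptions; the paper's additional
information- and meaning-related terms are not reproduced here.

```python
# Minimal sketch of a standard three-compartment (SIR) infection model.
# beta (contact rate) and gamma (recovery rate) are illustrative values,
# not parameters taken from the paper.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Right-hand side of the SIR equations for population fractions."""
    s, i, r = y
    ds = -beta * s * i                 # susceptible -> infected by contact
    di = beta * s * i - gamma * i      # net change of the infected fraction
    dr = gamma * i                     # infected -> recovered
    return [ds, di, dr]

beta, gamma = 0.3, 0.1                 # assumed rates per day
y0 = [0.999, 0.001, 0.0]               # initial S, I, R fractions
t = np.linspace(0, 160, 161)           # time grid in days

trajectory = odeint(sir, y0, t, args=(beta, gamma))
print(trajectory[-1])                  # final S, I, R fractions
```

The paper's TH-based extension would couple such compartments with information
and meaning exchange between the system's parts; the plain SIR system above
only fixes the compartmental skeleton.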
Related papers
- Causal Graph ODE: Continuous Treatment Effect Modeling in Multi-agent
Dynamical Systems [70.84976977950075]
Real-world multi-agent systems are often dynamic and continuous, where the agents co-evolve and undergo changes in their trajectories and interactions over time.
We propose a novel model that captures the continuous interaction among agents using a Graph Neural Network (GNN) as the ODE function.
The key innovation of our model is to learn time-dependent representations of treatments and incorporate them into the ODE function, enabling precise predictions of potential outcomes.
arXiv Detail & Related papers (2024-02-29T23:07:07Z) - Using Causal Threads to Explain Changes in a Dynamic System [0.0]
We explore developing rich semantic models of systems.
Specifically, we consider structured causal explanations about state changes in those systems.
We construct a model of the causal threads for geological changes proposed by the Snowball Earth theory.
arXiv Detail & Related papers (2023-11-19T14:32:06Z) - From system models to class models: An in-context learning paradigm [0.0]
We introduce a novel paradigm for system identification, addressing two primary tasks: one-step-ahead prediction and multi-step simulation.
We learn a meta model that represents a class of dynamical systems.
For one-step prediction, a GPT-like decoder-only architecture is utilized, whereas the simulation problem employs an encoder-decoder structure.
arXiv Detail & Related papers (2023-08-25T13:50:17Z) - Communication of information in systems of heterogenious agents and
systems' dynamics [0.0]
Communication of information in complex systems can be considered a major driver of system evolution.
Informational exchange in a system of heterogeneous agents is more complex than a simple input-output model.
The mechanisms of meaning and information processing can be evaluated analytically in a model framework.
arXiv Detail & Related papers (2023-04-27T08:09:04Z) - Graph-informed simulation-based inference for models of active matter [5.533353383316288]
We show that simulation-based inference can be used to robustly infer active matter parameters from system observations.
Our work highlights that high-level system information is contained within the relational structure of a collective system.
arXiv Detail & Related papers (2023-04-05T09:39:17Z) - Improving Coherence and Consistency in Neural Sequence Models with
Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Dependency Decomposition and a Reject Option for Explainable Models [4.94950858749529]
Recent deep learning models perform extremely well in various inference tasks.
Recent advances offer methods to visualize features and describe attribution of the input.
We present the first analysis of dependencies regarding the probability distribution over the desired image classification outputs.
arXiv Detail & Related papers (2020-12-11T17:39:33Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z) - Causal Discovery in Physical Systems from Videos [123.79211190669821]
Causal discovery is at the core of human cognition.
We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure.
arXiv Detail & Related papers (2020-07-01T17:29:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.