Surveying the space of descriptions of a composite system with machine learning
- URL: http://arxiv.org/abs/2411.18579v1
- Date: Wed, 27 Nov 2024 18:24:13 GMT
- Title: Surveying the space of descriptions of a composite system with machine learning
- Authors: Kieran A. Murphy, Yujing Zhang, Dani S. Bassett
- Abstract summary: We study the continuous space of possible descriptions of a composite system as a window into its organizational structure.
We introduce a machine learning framework to optimize descriptions that extremize key information theoretic quantities used to characterize organization.
By integrating machine learning into a fine-grained information theoretic analysis of composite random variables, our framework opens new avenues for probing the structure of real-world complex systems.
- Score: 5.473530151011081
- License:
- Abstract: Multivariate information theory provides a general and principled framework for understanding how the components of a complex system are connected. Existing analyses are coarse in nature -- built up from characterizations of discrete subsystems -- and can be computationally prohibitive. In this work, we propose to study the continuous space of possible descriptions of a composite system as a window into its organizational structure. A description consists of specific information conveyed about each of the components, and the space of possible descriptions is equivalent to the space of lossy compression schemes of the components. We introduce a machine learning framework to optimize descriptions that extremize key information theoretic quantities used to characterize organization, such as total correlation and O-information. Through case studies on spin systems, Sudoku boards, and letter sequences from natural language, we identify extremal descriptions that reveal how system-wide variation emerges from individual components. By integrating machine learning into a fine-grained information theoretic analysis of composite random variables, our framework opens new avenues for probing the structure of real-world complex systems.
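The quantities the abstract names can be computed directly for small discrete systems. The sketch below (illustrative only; not the paper's learned-compression framework) computes total correlation, TC = sum_i H(X_i) - H(X_1..X_n), and O-information, the difference between TC and dual total correlation, for a toy three-bit XOR system, the canonical example of synergy:

```python
from itertools import product
from math import log2

def entropy(p):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def marginal(joint, idxs):
    """Marginalize a joint distribution {tuple: prob} onto the given indices."""
    out = {}
    for x, q in joint.items():
        key = tuple(x[i] for i in idxs)
        out[key] = out.get(key, 0.0) + q
    return out

def total_correlation(joint, n):
    """TC = sum_i H(X_i) - H(X_1..X_n); zero iff components are independent."""
    return sum(entropy(marginal(joint, [i])) for i in range(n)) - entropy(joint)

def o_information(joint, n):
    """O-information = TC - DTC; positive means redundancy-dominated, negative means synergy."""
    H = entropy(joint)
    # DTC = H(X) - sum_i H(X_i | X_{-i}), with H(X_i | X_{-i}) = H(X) - H(X_{-i})
    dtc = H - sum(H - entropy(marginal(joint, [j for j in range(n) if j != i]))
                  for i in range(n))
    return total_correlation(joint, n) - dtc

# Toy system: X1, X2 uniform independent bits, X3 = X1 XOR X2 (pure synergy).
joint = {(a, b, a ^ b): 0.25 for a, b in product([0, 1], repeat=2)}
print(total_correlation(joint, 3))  # 1.0
print(o_information(joint, 3))      # -1.0 (negative: synergy-dominated)
```

The negative O-information correctly flags the XOR triple as synergy-dominated: no single bit carries information about the others, yet any two determine the third.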
Related papers
- Local Compositional Complexity: How to Detect a Human-readable Message [0.0]
We focus on a particular sense of complexity that is high if the data is structured in a way that could serve to communicate a message.
We describe a general framework for measuring data complexity based on dividing the shortest description of the data into a structured and an unstructured portion.
We derive a more precise and computable definition geared towards human communication, by proposing local compositionality as an appropriate specific structure.
arXiv Detail & Related papers (2025-01-07T10:04:01Z)
- Finding structure in logographic writing with library learning [55.63800121311418]
We develop a computational framework for discovering structure in a writing system.
Our framework discovers known linguistic structures in the Chinese writing system.
We demonstrate how a library learning approach may help reveal the fundamental computational principles that underlie the creation of structures in human cognition.
arXiv Detail & Related papers (2024-05-11T04:23:53Z)
- On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games [55.2480439325792]
In a sequential decision-making problem, the information structure is the description of how events in the system occurring at different points in time affect each other.
By contrast, real-world sequential decision-making problems typically involve a complex and time-varying interdependence of system variables.
We formalize a novel reinforcement learning model which explicitly represents the information structure.
arXiv Detail & Related papers (2024-03-01T21:28:19Z)
- Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings [60.698130703909804]
Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset.
We propose SQ-Transformer that explicitly encourages systematicity in the embeddings and attention layers.
We show that SQ-Transformer achieves stronger compositional generalization than the vanilla Transformer on multiple low-complexity semantic parsing and machine translation datasets.
arXiv Detail & Related papers (2024-02-09T15:53:15Z)
- Discovering modular solutions that generalize compositionally [55.46688816816882]
We show that identification up to linear transformation purely from demonstrations is possible without having to learn an exponential number of module combinations.
We further demonstrate empirically that meta-learning from finite data can discover modular policies that generalize compositionally in a number of complex environments.
arXiv Detail & Related papers (2023-12-22T16:33:50Z)
- Information decomposition in complex systems via machine learning [4.189643331553922]
We use machine learning to decompose the information contained in a set of measurements by jointly optimizing a lossy compression of each measurement.
We focus our analysis on two paradigmatic complex systems: a circuit and an amorphous material undergoing plastic deformation.
arXiv Detail & Related papers (2023-07-10T17:57:32Z)
- Communication of information in systems of heterogeneous agents and systems' dynamics [0.0]
Communication of information in complex systems can be considered a major driver of system evolution.
Informational exchange in a system of heterogeneous agents is more complex than a simple input-output model.
The mechanisms of meaning and information processing can be evaluated analytically in a model framework.
arXiv Detail & Related papers (2023-04-27T08:09:04Z)
- The Distributed Information Bottleneck reveals the explanatory structure of complex systems [1.52292571922932]
The Information Bottleneck (IB) is an information theoretic framework for understanding a relationship between an input and an output.
We show that a crucial modification -- distributing bottlenecks across multiple components of the input -- opens fundamentally new avenues for interpretable deep learning in science.
We demonstrate the Distributed IB's explanatory utility in systems drawn from applied mathematics and condensed matter physics.
arXiv Detail & Related papers (2022-04-15T17:59:35Z)
- Discrete-Valued Neural Communication [85.3675647398994]
We show that restricting the transmitted information among components to discrete representations is a beneficial bottleneck.
Even though individuals have different understandings of what a "cat" is based on their specific experiences, the shared discrete token makes it possible for communication among individuals to be unimpeded by individual differences in internal representation.
We extend the quantization mechanism from the Vector-Quantized Variational Autoencoder to multi-headed discretization with shared codebooks and use it for discrete-valued neural communication.
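The multi-headed discretization described above can be sketched in a few lines: split a vector into heads and snap each head to its nearest entry in a codebook shared across heads. This is an illustrative stand-in only; the function name and codebook values are hypothetical, and the actual method learns the codebook end-to-end (with straight-through gradient estimation), which this sketch omits:

```python
def quantize_multihead(vector, codebook, num_heads):
    """Split `vector` into `num_heads` equal sub-vectors ("heads") and replace
    each head with its nearest codebook entry (Euclidean distance).
    The codebook is shared across all heads. Returns (token indices, quantized vector).
    """
    d = len(vector) // num_heads
    tokens, quantized = [], []
    for h in range(num_heads):
        head = vector[h * d:(h + 1) * d]
        # nearest-neighbour lookup in the shared codebook
        idx = min(range(len(codebook)),
                  key=lambda k: sum((a - b) ** 2 for a, b in zip(head, codebook[k])))
        tokens.append(idx)
        quantized.extend(codebook[idx])
    return tokens, quantized

# 4-dim vector, 2 heads, shared codebook of 2-dim prototypes
codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
tokens, q = quantize_multihead([0.9, 1.1, 0.1, -0.2], codebook, num_heads=2)
print(tokens)  # [1, 0]
```

The discrete token indices are what gets transmitted between components, which is the "beneficial bottleneck" the summary refers to.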
arXiv Detail & Related papers (2021-07-06T03:09:25Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.