A simplicity bubble problem and zemblanity in digitally intermediated societies
- URL: http://arxiv.org/abs/2304.10681v3
- Date: Fri, 18 Oct 2024 15:05:54 GMT
- Title: A simplicity bubble problem and zemblanity in digitally intermediated societies
- Authors: Felipe S. Abrahão, Ricardo P. Cavassane, Michael Winter, Mariana Vitti Rodrigues, Itala M. L. D'Ottaviano,
- Abstract summary: We discuss the ubiquity of Big Data and machine learning in society.
We show that there is a ceiling above which formal knowledge cannot further decrease the probability of zemblanitous findings.
- Score: 1.4380443010065829
- Abstract: In this article, we discuss the ubiquity of Big Data and machine learning in society and propose that it evinces the need for further investigation of their fundamental limitations. We extend the "too much information tends to behave like very little information" phenomenon to formal knowledge about lawlike universes and arbitrary collections of computably generated datasets. This gives rise to the simplicity bubble problem, which refers to a learning algorithm equipped with a formal theory that can be deceived by a dataset into finding a locally optimal model which it deems to be the global one. In the context of lawlike (computable) universes and formal learning systems, we show that there is a ceiling above which formal knowledge cannot further decrease the probability of zemblanitous findings, should the randomly generated data made available to the formal learning system be sufficiently large in comparison to their joint complexity. Zemblanity, the opposite of serendipity, is defined as an undesirable but expected finding that reveals an underlying problem or negative consequence in a given model or theory, and which is in principle predictable if the formal theory contains sufficient information. We also argue that this is an epistemological limitation that may generate unpredictable problems in digitally intermediated societies.
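To make the shape of this limitation concrete, here is a schematic rendering in LaTeX of the kind of bound the abstract describes. The notation is an illustrative placeholder, not the paper's: L stands for the learning algorithm, F for its formal theory, D for the dataset, and K(.) for prefix algorithmic (Kolmogorov) complexity.

```latex
% Schematic sketch only: the symbols and the constant c are illustrative
% placeholders, not the paper's exact statement or notation.
\[
  \Pr\Big[\, \mathbf{L} \text{ equipped with } \mathbf{F}
  \text{ settles on a locally optimal model of } D
  \text{ that it deems global} \,\Big]
  \;\ge\; c > 0
  \quad \text{whenever} \quad
  |D| \,\gg\, K(\mathbf{L}) + K(\mathbf{F}).
\]
```

In words: once algorithmically random data grow sufficiently large relative to the joint complexity of the learner and its formal theory, the probability of a zemblanitous finding has a floor that no amount of additional formal knowledge can push lower.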
Related papers
- Hardness of Learning Neural Networks under the Manifold Hypothesis [3.2635082758250693]
The manifold hypothesis presumes that high-dimensional data lies on or near a low-dimensional manifold.
We investigate the hardness of learning under the manifold hypothesis.
We show that additional assumptions on the volume of the data manifold alleviate these fundamental limitations (a toy numerical illustration of the hypothesis follows this entry).
arXiv Detail & Related papers (2024-06-03T15:50:32Z)
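As a toy illustration of the entry above (my sketch, not code from the paper): the following Python snippet embeds a 1-dimensional manifold, a noisy circle, in a 50-dimensional ambient space, then checks via the singular value spectrum that nearly all variance concentrates in just two directions.

```python
# Toy illustration of the manifold hypothesis: nominally 50-dimensional
# data that actually lies on (near) a 1-D manifold, a circle.
import numpy as np

rng = np.random.default_rng(0)
n, ambient_dim = 1000, 50

t = rng.uniform(0.0, 2.0 * np.pi, size=n)   # one intrinsic degree of freedom
X = np.zeros((n, ambient_dim))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)     # circle embedded in R^50
X += 0.01 * rng.standard_normal(X.shape)    # small off-manifold noise

# PCA via SVD: the spectrum collapses after two components, since the
# circle spans only a 2-D linear subspace of the ambient space.
sing_vals = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
explained = sing_vals**2 / np.sum(sing_vals**2)
print(np.round(explained[:5], 4))           # approx. [0.5, 0.5, 0, 0, 0]
```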
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds promise for building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- Skews in the Phenomenon Space Hinder Generalization in Text-to-Image Generation [59.138470433237615]
We introduce statistical metrics that quantify both the linguistic and visual skew of a dataset for relational learning.
We show, in systematically controlled experiments, that these metrics are strongly predictive of generalization performance.
This work points to an important direction: enhancing data diversity or balance, beyond simply scaling up absolute dataset size (a minimal skew metric is sketched after this entry).
arXiv Detail & Related papers (2024-03-25T03:18:39Z)
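The skew metrics in the entry above are more involved than what follows; this Python sketch is a minimal stand-in of my own construction, assuming relational captions represented as plain strings. It scores a dataset by one minus the normalized Shannon entropy of caption frequencies: 0 means perfectly balanced, values near 1 mean highly skewed.

```python
# Minimal stand-in for a dataset skew metric (my construction, not the
# paper's): 1 - normalized Shannon entropy of relational-caption counts.
from collections import Counter
import math

def skew(captions):
    counts = Counter(captions)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return 1.0 - entropy / max_entropy       # 0 = balanced, ~1 = skewed

balanced = ["dog left of cat", "cat left of dog"] * 50
skewed = ["dog left of cat"] * 95 + ["cat left of dog"] * 5
print(skew(balanced))   # 0.0
print(skew(skewed))     # ~0.71
```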
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning [80.1018596899899]
We argue that neural network models share this same preference, formalized using Kolmogorov complexity.
Our experiments show that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences.
These observations justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models (a compression-based proxy for complexity is sketched after this entry).
arXiv Detail & Related papers (2023-04-11T17:22:22Z)
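A note on the entry above: Kolmogorov complexity is uncomputable, so empirical work commonly substitutes compressed size as an upper-bound proxy. The Python sketch below (my illustration, not the paper's code) compares a trivially patterned bit string with a pseudorandom one of the same length.

```python
# Compressed size as a computable upper-bound proxy for Kolmogorov
# complexity (illustration only; K(x) itself is uncomputable).
import gzip
import random

def compressed_size(s: str) -> int:
    return len(gzip.compress(s.encode()))

random.seed(0)
low_complexity = "01" * 500                                   # simple pattern
high_complexity = "".join(random.choice("01") for _ in range(1000))

print(compressed_size(low_complexity))    # small: the pattern compresses well
print(compressed_size(high_complexity))   # larger: randomness resists compression
```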
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- A Simplicity Bubble Problem in Formal-Theoretic Learning Systems [1.7996150751268578]
We show that current approaches to machine learning can always be deceived, naturally or artificially, by sufficiently large datasets.
We discuss the framework and additional empirical conditions to be met in order to circumvent this deceptive phenomenon.
arXiv Detail & Related papers (2021-12-22T23:44:47Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences (a toy SCM is sketched after this entry).
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
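To unpack the SCM definition in the entry above: each endogenous variable is produced by a deterministic mechanism applied to its parents plus an exogenous noise source, and an intervention do(X = x) replaces a mechanism wholesale. The Python snippet below is a toy two-variable SCM of my own construction, not the paper's neural causal model.

```python
# Toy structural causal model: mechanisms plus exogenous noise sources.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, do_x=None):
    u_x = rng.standard_normal(n)                   # exogenous noise U_X
    u_y = rng.standard_normal(n)                   # exogenous noise U_Y
    x = u_x if do_x is None else np.full(n, do_x)  # f_X, or intervention do(X=x)
    y = 2.0 * x + u_y                              # mechanism f_Y(X, U_Y)
    return x, y

# Observational vs. interventional distributions differ in general:
_, y_obs = sample(100_000)                # observational regime
_, y_do = sample(100_000, do_x=1.0)       # under do(X = 1)
print(round(float(y_obs.mean()), 2))      # ~0.0
print(round(float(y_do.mean()), 2))       # ~2.0
```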
- Parsimonious Inference [0.0]
Parsimonious inference is an information-theoretic formulation of inference over arbitrary architectures.
Our approaches combine efficient encodings with prudent sampling strategies to construct predictive ensembles without cross-validation.
arXiv Detail & Related papers (2021-03-03T04:13:14Z)
- Random thoughts about Complexity, Data and Models [0.0]
Data science and machine learning have grown rapidly over the past decade.
We investigate the subtle relation between "data and models".
A key issue in appraising the relation between algorithmic complexity and algorithmic learning concerns the concepts of compressibility, determinism, and predictability.
arXiv Detail & Related papers (2020-04-16T14:27:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.