Comment on Is Complexity an Illusion?
- URL: http://arxiv.org/abs/2411.08897v1
- Date: Tue, 29 Oct 2024 02:40:05 GMT
- Title: Comment on Is Complexity an Illusion?
- Authors: Gabriel Simmons
- Abstract summary: "Is Complexity an Illusion?" (Bennett, 2024) provides a formalism for complexity, learning, inference, and generalization.
This reply shows that correct policies do not exist for a simple task of supervised multi-class classification.
- Score: 5.439020425819001
- Abstract: The paper "Is Complexity an Illusion?" (Bennett, 2024) provides a formalism for complexity, learning, inference, and generalization, and introduces a formal definition for a "policy". This reply shows that correct policies do not exist for a simple task of supervised multi-class classification, via mathematical proof and exhaustive search. Implications of this result are discussed, as well as possible responses and amendments to the theory.
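The reply's exhaustive-search step can be mimicked in a few lines. The sketch below is a loose illustration only: Bennett (2024)'s formal definitions of "policy" and "correctness" are not reproduced in this listing, so the toy task, labels, and correctness predicate here are invented for the example. It enumerates every input-to-label map for a tiny multi-class task and counts how many satisfy the predicate; under the reply's stricter, formalism-specific predicate, that count comes out zero.

```python
from itertools import product

# Toy supervised multi-class task: 3 inputs, 3 class labels.
# NOTE: an illustrative stand-in, not Bennett (2024)'s formalism.
inputs = ["a", "b", "c"]
labels = [0, 1, 2]
training = {"a": 0, "b": 1}   # labelled examples the policy must fit
held_out = {"c": 2}           # the completion we additionally demand

def correct(policy):
    """A policy is 'correct' here iff it fits both training and held-out pairs."""
    return all(policy[x] == y for x, y in {**training, **held_out}.items())

# Exhaustive search: every function from inputs to labels is a candidate policy.
policies = [dict(zip(inputs, assignment))
            for assignment in product(labels, repeat=len(inputs))]
survivors = [p for p in policies if correct(p)]
print(len(policies), len(survivors))  # 27 candidate maps; exactly 1 survives here
```

With only 27 candidate maps the search is trivially feasible, which is what makes exhaustive search a viable complement to the mathematical proof for small tasks.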
Related papers
- The complexity of entanglement embezzlement [0.0]
We study the circuit complexity of embezzlement using sequences of states that enable arbitrary precision for the process.
Our results imply that circuit complexity acts as a physical obstruction to perfect embezzlement.
arXiv Detail & Related papers (2024-10-24T18:00:33Z)
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Is Complexity an Illusion? [0.0]
We show that all constraints can take equally simple forms, regardless of weakness.
If a function is represented using a finite subset of forms, then we can force a correlation between simplicity and generalisation.
arXiv Detail & Related papers (2024-03-31T13:36:55Z)
- A Reply to Makelov et al. (2023)'s "Interpretability Illusion" Arguments [59.87080148922358]
We argue that the phenomena Makelov et al. (2023) observe in practice are artifacts of their training and evaluation paradigms.
Though we disagree with their core characterization, Makelov et al. (2023)'s examples and discussion have undoubtedly pushed the field of interpretability forward.
arXiv Detail & Related papers (2024-01-23T10:27:42Z)
- Discovering modular solutions that generalize compositionally [55.46688816816882]
We show that identification up to linear transformation purely from demonstrations is possible without having to learn an exponential number of module combinations.
We further demonstrate empirically that meta-learning from finite data can discover modular policies that generalize compositionally in a number of complex environments.
arXiv Detail & Related papers (2023-12-22T16:33:50Z)
- Succinct Representations for Concepts [12.134564449202708]
Foundation models like ChatGPT have demonstrated remarkable performance on various tasks.
However, for many questions, they may produce false answers that look accurate.
In this paper, we introduce succinct representations of concepts based on category theory.
arXiv Detail & Related papers (2023-03-01T12:11:23Z)
- On the Complexity of Representation Learning in Contextual Linear Bandits [110.84649234726442]
We show that representation learning is fundamentally more complex than linear bandits.
In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set.
arXiv Detail & Related papers (2022-12-19T13:08:58Z)
- On the Complexity of Bayesian Generalization [141.21610899086392]
We consider concept generalization at a large scale in the diverse and natural visual spectrum.
We study two modes when the problem space scales up and the complexity of concepts becomes diverse.
arXiv Detail & Related papers (2022-11-20T17:21:37Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Majority Voting and the Condorcet's Jury Theorem [3.3048031280378556]
"Condorcet's jury theorem" states that majorities are more likely to choose correctly when individual votes are often correct and independent.
"Strength of Weak Learnability" (1990) describes a method for converting a weak learning algorithm into one that achieves arbitrarily high accuracy.
We offer a simple, more accessible derivation of the theorem.
arXiv Detail & Related papers (2020-02-08T12:28:11Z)
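The jury theorem in the entry above admits a short numeric check. A minimal sketch (the function name and parameters are chosen here for illustration): for n independent voters, each correct with probability p > 1/2, the probability that a strict majority is correct is a binomial tail sum, and it grows with n.

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a strict majority of n independent voters,
    each correct with probability p, picks the right answer (n odd)."""
    k_min = n // 2 + 1  # smallest number of correct votes that forms a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# With p = 0.6, majority accuracy rises toward 1 as the jury grows.
for n in (1, 11, 101):
    print(n, round(majority_correct(0.6, n), 4))
```

This monotone improvement is the same intuition behind converting a weak learner into a strong one, as in "Strength of Weak Learnability" (1990).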
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.