Generative Models of Huge Objects
- URL: http://arxiv.org/abs/2302.12823v1
- Date: Fri, 24 Feb 2023 18:58:11 GMT
- Title: Generative Models of Huge Objects
- Authors: Lunjia Hu, Inbal Livni-Navon, Omer Reingold
- Abstract summary: Indistinguishability from a single object is motivated by the study of generative models in learning theory and regularity lemmas in graph theory.
We provide a learning algorithm for huge indistinguishable objects in several natural settings.
The results rely on notions and techniques from a variety of areas including learning theory, complexity theory, cryptography, and game theory.
- Score: 6.7086472013021305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work initiates the systematic study of explicit distributions that are
indistinguishable from a single exponential-size combinatorial object. Here we
extend the work of Goldreich, Goldwasser, and Nussboim (SICOMP 2010), which
focused on the implementation of huge objects that are indistinguishable from
the uniform distribution, satisfying some global properties (which they coined
truthfulness). Indistinguishability from a single object is motivated by the
study of generative models in learning theory and regularity lemmas in graph
theory. Problems that are well understood in the setting of pseudorandomness
present significant challenges and at times are impossible when considering
generative models of huge objects.
We demonstrate the versatility of this study by providing a learning
algorithm for huge indistinguishable objects in several natural settings
including: dense functions and graphs with a truthfulness requirement on the
number of ones in the function or edges in the graphs, and a version of the
weak regularity lemma for sparse graphs that satisfy some global properties.
These and other results generalize basic pseudorandom objects as well as
notions introduced in algorithmic fairness. The results rely on notions and
techniques from a variety of areas including learning theory, complexity
theory, cryptography, and game theory.
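As a toy illustration (not the paper's construction) of the setting in the abstract: a "huge" Boolean function can be implemented lazily via query access, never materializing its exponential-size truth table, while a truthfulness constraint (here, the global density of ones) is enforced in expectation. The hash-based generator below stands in for a cryptographic PRF; all names and parameters are illustrative assumptions.

```python
import hashlib
import random

DOMAIN_BITS = 60          # the object has 2^60 entries and is never materialized
TARGET_DENSITY = 0.25     # "truthfulness": the global fraction of ones

def f(seed: bytes, x: int) -> int:
    """Query access to position x: a pseudorandom bit with density TARGET_DENSITY.
    A hash stands in for a PRF here; this is an illustrative sketch only."""
    h = hashlib.sha256(seed + x.to_bytes(8, "big")).digest()
    u = int.from_bytes(h[:8], "big") / 2**64   # roughly uniform in [0, 1)
    return 1 if u < TARGET_DENSITY else 0

def estimate_density(seed: bytes, samples: int = 10_000) -> float:
    """A bounded 'distinguisher': sample random positions and estimate the density."""
    rng = random.Random(0)
    hits = sum(f(seed, rng.randrange(2**DOMAIN_BITS)) for _ in range(samples))
    return hits / samples

density = estimate_density(b"demo-seed")
```

A polynomially bounded observer making such sampled queries sees a density close to the target, even though the full object is far too large to write down; the paper's challenge is achieving this against all efficient distinguishers while maintaining global truthfulness exactly.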
Related papers
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- Likelihood Based Inference in Fully and Partially Observed Exponential Family Graphical Models with Intractable Normalizing Constants [4.532043501030714]
Probabilistic graphical models that encode an underlying Markov random field are fundamental building blocks of generative modeling.
This paper demonstrates that full likelihood-based analysis of these models is computationally feasible.
arXiv Detail & Related papers (2024-04-27T02:58:22Z)
- Homomorphism Counts for Graph Neural Networks: All About That Basis [8.25219440625445]
We argue for a more fine-grained approach, which incorporates the homomorphism counts of all structures in the "basis" of the target pattern.
This yields strictly more expressive architectures without incurring any additional overhead in terms of computational complexity.
arXiv Detail & Related papers (2024-02-13T16:57:06Z) - Generative Learning of Continuous Data by Tensor Networks [45.49160369119449]
We introduce a new family of tensor network generative models for continuous data.
We benchmark the performance of this model on several synthetic and real-world datasets.
Our methods give important theoretical and empirical evidence of the efficacy of quantum-inspired methods for the rapidly growing field of generative learning.
arXiv Detail & Related papers (2023-10-31T14:37:37Z) - Object-centric architectures enable efficient causal representation
learning [51.6196391784561]
We show that when the observations are of multiple objects, the generative function is no longer injective and disentanglement fails in practice.
We develop an object-centric architecture that leverages weak supervision from sparse perturbations to disentangle each object's properties.
This approach is more data-efficient in the sense that it requires significantly fewer perturbations than a comparable approach that encodes to a Euclidean space.
arXiv Detail & Related papers (2023-10-29T16:01:03Z) - The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning [80.1018596899899]
We argue that neural network models share this same preference, formalized using Kolmogorov complexity.
Our experiments show that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences.
These observations justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
arXiv Detail & Related papers (2023-04-11T17:22:22Z) - Robust Model Selection of Gaussian Graphical Models [16.933125281564163]
Noise-corrupted samples present significant challenges in graphical model selection.
We propose an algorithm which provably recovers the underlying graph up to the identified ambiguity.
This information is useful in a range of real-world problems, including power grids, social networks, protein-protein interactions, and neural structures.
arXiv Detail & Related papers (2022-11-10T16:50:50Z) - Generalization of Neural Combinatorial Solvers Through the Lens of
Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - Model-agnostic multi-objective approach for the evolutionary discovery
of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
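As a rough illustration of the edge-independent (dyadic independence) models mentioned in the last entry above: in a block approximation, each edge appears independently with a probability that depends only on the block membership of its endpoints. The block structure and probabilities below are illustrative assumptions, not taken from the paper; a naive quadratic loop is used for clarity, whereas scalable implementations sample edges per block pair instead.

```python
import random

def sample_block_graph(blocks, probs, rng):
    """Sample a graph with independent edges: P(u~v) = probs[b(u)][b(v)].
    Naive O(n^2) sketch; real large-scale methods avoid iterating all dyads."""
    n = len(blocks)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < probs[blocks[u]][blocks[v]]:
                edges.append((u, v))
    return edges

blocks = [0] * 50 + [1] * 50          # two illustrative blocks of 50 nodes each
probs = [[0.30, 0.02], [0.02, 0.30]]  # dense within blocks, sparse across
edges = sample_block_graph(blocks, probs, random.Random(1))
```

Because every dyad is independent, the expected edge count is just the sum of the dyad probabilities, which is what makes likelihood computations and sampling tractable for this class of distributions.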
This list is automatically generated from the titles and abstracts of the papers in this site.