AI without networks
- URL: http://arxiv.org/abs/2106.03354v3
- Date: Thu, 6 Jun 2024 02:50:06 GMT
- Title: AI without networks
- Authors: Partha P Mitra, Clément Sire
- Abstract summary: We develop a network-free framework for AI incorporating generative modeling.
We demonstrate this framework with examples from three different disciplines - ethology, control theory, and mathematics.
We also propose an easily computed method of credit assignment based on this framework, to help address ethical-legal challenges raised by generative AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Contemporary Artificial Intelligence (AI) stands on two legs: large training data corpora and many-parameter artificial neural networks (ANNs). The data corpora are needed to represent the complexity and heterogeneity of the world. The role of the networks is less transparent due to the obscure dependence of the network parameters and outputs on the training data and inputs. This raises problems, ranging from technical-scientific to legal-ethical. We hypothesize that a transparent approach to machine learning is possible without using networks at all. By generalizing a parameter-free, statistically consistent data interpolation method, which we analyze theoretically in detail, we develop a network-free framework for AI incorporating generative modeling. We demonstrate this framework with examples from three different disciplines - ethology, control theory, and mathematics. Our generative Hilbert framework applied to the trajectories of small groups of swimming fish outperformed state-of-the-art traditional mathematical behavioral models and current ANN-based models. We demonstrate pure data-interpolation-based control by stabilizing an inverted pendulum and a driven logistic map around unstable fixed points. Finally, we present a mathematical application by predicting zeros of the Riemann Zeta function, achieving performance comparable to that of a transformer network. We do not suggest that the proposed framework will always outperform networks, as over-parameterized networks can also interpolate. However, our framework is theoretically sound, transparent, deterministic, and parameter-free: remarkably, it does not require any compute-expensive training, does not involve optimization, has no model selection, and is easily reproduced and ported. We also propose an easily computed method of credit assignment based on this framework, to help address ethical-legal challenges raised by generative AI.
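For concreteness, a minimal sketch of the parameter-free interpolation scheme behind the "generative Hilbert framework": the Hilbert kernel regression estimate of Devroye, Györfi and Krzyżak, whose weights scale as the inverse d-th power of the distance to each training point (d = input dimension), so the estimator interpolates the data exactly with no tunable parameters. The function name and toy data below are illustrative, not taken from the paper.

```python
import numpy as np

def hilbert_kernel_predict(X_train, y_train, x, eps=1e-12):
    """Hilbert kernel regression estimate at a query point x.

    Weights are proportional to ||x - x_i||^(-d), d = input dimension,
    so predictions interpolate the training data exactly and there are
    no parameters to train or select.
    """
    d = X_train.shape[1]
    dist = np.linalg.norm(X_train - x, axis=1)
    if np.any(dist < eps):                 # query coincides with a sample:
        return y_train[np.argmin(dist)]    # return its label (interpolation)
    w = dist ** (-d)
    return np.dot(w, y_train) / w.sum()

# toy usage: 1-D regression on noisy samples of sin(x)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(hilbert_kernel_predict(X, y, np.array([np.pi / 2])))  # close to 1.0
```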
Related papers
- pyhgf: A neural network library for predictive coding [0.2150989251218736]
pyhgf is a Python package for creating, manipulating and sampling dynamic networks for predictive coding.
We improve over other frameworks by enclosing the network components as transparent, modular and malleable variables in the message-passing steps.
The transparency of core variables can also translate into inference processes that leverage self-organisation principles.
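As a generic illustration of the message-passing idea described above (this is not pyhgf's API; the two-level structure, learning rate and precisions are all assumptions), one predictive-coding update step might look like this:

```python
def predictive_coding_step(mu, observation, lr=0.1, precisions=(1.0, 1.0)):
    """One sweep of a two-level predictive-coding chain.

    mu = [mu0, mu1]: level 1 predicts level 0, level 0 predicts the
    observation. Each belief descends the gradient of its
    precision-weighted prediction errors.
    """
    pi_obs, pi_1 = precisions
    eps0 = observation - mu[0]   # bottom-up prediction error
    eps1 = mu[0] - mu[1]         # error in level 1's prediction of level 0
    mu[0] += lr * (pi_obs * eps0 - pi_1 * eps1)  # balance both errors
    mu[1] += lr * pi_1 * eps1                    # track the level below
    return mu

beliefs = [0.0, 0.0]
for u in [0.5, 0.52, 0.48, 0.55, 0.51]:  # a stream of observations
    beliefs = predictive_coding_step(beliefs, u)
print(beliefs)
```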
arXiv Detail & Related papers (2024-10-11T19:21:38Z)
- Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AMS mappings.
arXiv Detail & Related papers (2024-06-22T11:17:50Z)
- GreenLightningAI: An Efficient AI System with Decoupled Structural and Quantitative Knowledge [0.0]
Training powerful and popular deep neural networks comes at very high economic and environmental costs.
This work takes a radically different approach by proposing GreenLightningAI.
The new AI system stores the information required to select the system subset for a given sample.
We show experimentally that the structural information can be kept unmodified when re-training the AI system with new samples.
arXiv Detail & Related papers (2023-12-15T17:34:11Z)
- A Free Lunch with Influence Functions? Improving Neural Network Estimates with Concepts from Semiparametric Statistics [41.99023989695363]
We explore the potential for semiparametric theory to be used to improve neural networks and machine learning algorithms.
We propose a new neural network method MultiNet, which seeks the flexibility and diversity of an ensemble using a single architecture.
arXiv Detail & Related papers (2022-02-18T09:35:51Z)
- How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis [93.37576644429578]
This work establishes the first theoretical analysis for the known iterative self-training paradigm.
We prove the benefits of unlabeled data in both training convergence and generalization ability.
Experiments from shallow neural networks to deep neural networks are also provided to justify the correctness of our established theoretical insights on self-training.
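In its generic form the iterative paradigm is easy to state in code. A sketch of one pseudo-labeling loop, using scikit-learn and an assumed confidence threshold (neither is from the paper, which analyzes a one-hidden-layer network):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=5, threshold=0.9):
    """Iterative self-training: fit on the labeled set, pseudo-label the
    unlabeled points the model is confident about, add them, refit."""
    X, y, pool = X_lab, y_lab, X_unlab
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X, y)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold   # confident predictions only
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, model.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
    return model

# toy usage: 40 labeled and 400 unlabeled samples from the same distribution
X_all, y_all = make_classification(n_samples=440, random_state=0)
clf = self_train(X_all[:40], y_all[:40], X_all[40:])
```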
arXiv Detail & Related papers (2022-01-21T02:16:52Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
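Teacher-student self-ensembling is commonly implemented by making the teacher an exponential moving average (EMA) of the student. A sketch of that single ingredient, assuming PyTorch; the adversarial (discriminator) part of SE-GAN is omitted here:

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, decay=0.999):
    """Teacher weights track an exponential moving average of the student's,
    so the teacher evolves slowly and stabilizes training."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1 - decay)

student = torch.nn.Linear(8, 2)
teacher = torch.nn.Linear(8, 2)
teacher.load_state_dict(student.state_dict())  # start from identical weights
ema_update(teacher, student)                   # call once per training step
```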
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
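A toy version of that routing idea, with a soft (differentiable) router mixing a small set of learned functions per token; this is a deliberate simplification for illustration, not the actual Neural Interpreters architecture:

```python
import torch
import torch.nn as nn

class SoftRouting(nn.Module):
    """Mixes the outputs of several learned functions per token, with
    mixing weights produced by a learned router (end-to-end trainable)."""
    def __init__(self, dim=64, n_functions=4):
        super().__init__()
        self.functions = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(n_functions)])
        self.router = nn.Linear(dim, n_functions)

    def forward(self, x):                        # x: (batch, tokens, dim)
        scores = self.router(x).softmax(dim=-1)  # (batch, tokens, n_functions)
        outs = torch.stack([f(x) for f in self.functions], dim=-1)
        return (outs * scores.unsqueeze(-2)).sum(dim=-1)

layer = SoftRouting()
out = layer(torch.randn(2, 10, 64))  # (batch, tokens, dim) in, same shape out
```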
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Modeling Item Response Theory with Stochastic Variational Inference [8.369065078321215]
We introduce a variational Bayesian inference algorithm for Item Response Theory (IRT).
Applying this method to five large-scale item response datasets yields higher log likelihoods and higher accuracy in imputing missing data.
The algorithm's implementation is open-source and easy to use.
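A minimal stochastic-variational-inference setup for IRT, here a Rasch (1PL) model with a mean-field Gaussian posterior over person abilities, written in PyTorch. The model variant, toy data and hyperparameters are assumptions, not necessarily the paper's:

```python
import torch

# Rasch (1PL) IRT: P(person i answers item j correctly) = sigmoid(theta_i - b_j)
def elbo(responses, mu, log_sigma, b, n_samples=8):
    """ELBO with q(theta_i) = Normal(mu_i, sigma_i) and a N(0, 1) prior."""
    sigma = log_sigma.exp()
    theta = mu + sigma * torch.randn(n_samples, *mu.shape)  # reparameterization
    logits = theta.unsqueeze(-1) - b                        # (samples, people, items)
    ll = torch.distributions.Bernoulli(logits=logits).log_prob(responses).sum((-1, -2)).mean()
    kl = (0.5 * (mu**2 + sigma**2 - 1) - log_sigma).sum()   # KL(q || N(0, 1))
    return ll - kl

torch.manual_seed(0)
responses = torch.bernoulli(torch.full((50, 20), 0.6))      # toy 50 people x 20 items
mu = torch.zeros(50, requires_grad=True)
log_sigma = torch.zeros(50, requires_grad=True)
b = torch.zeros(20, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma, b], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -elbo(responses, mu, log_sigma, b)
    loss.backward()
    opt.step()
```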
arXiv Detail & Related papers (2021-08-26T05:00:27Z)
- Understanding Dynamics of Nonlinear Representation Learning and Its Application [12.697842097171119]
We study the dynamics of implicit nonlinear representation learning.
We show that the data-architecture alignment condition is sufficient for the global convergence.
We derive a new training framework, which satisfies the data-architecture alignment condition without assuming it.
arXiv Detail & Related papers (2021-06-28T16:31:30Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
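One plausible reading of that setup, sketched below with assumed architecture and losses (the VAE's sampling and KL terms are omitted for brevity): an auxiliary head must predict the latent code of the augmented input from the original code and the augmentation parameters, making the transformation outcome predictable in latent space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAugmentedAE(nn.Module):
    """Toy autoencoder whose auxiliary head predicts the latent code of the
    augmented input from (original code, augmentation parameters)."""
    def __init__(self, dim=784, z=32, n_aug_params=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, z))
        self.dec = nn.Sequential(nn.Linear(z, 256), nn.ReLU(), nn.Linear(256, dim))
        self.aux = nn.Linear(z + n_aug_params, z)

    def loss(self, x, x_aug, aug_params):
        z, z_aug = self.enc(x), self.enc(x_aug)
        recon = F.mse_loss(self.dec(z), x)                   # reconstruction term
        pred = self.aux(torch.cat([z, aug_params], dim=-1))  # predicted outcome
        return recon + F.mse_loss(pred, z_aug)               # predictability term

model = ContextAugmentedAE()
x = torch.rand(16, 784)
loss = model.loss(x, x.roll(1, dims=-1), torch.rand(16, 2))  # stand-in augmentation
```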
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
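The alignment step, in its hard-assignment special case (a permutation rather than a general transport plan), can be sketched as follows; the function name and toy example are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layers(Wa, Wb):
    """Align the neurons (rows) of Wb to those of Wa, then average.
    Matching rows by Euclidean cost is the hard-assignment special
    case of the optimal-transport alignment."""
    cost = np.linalg.norm(Wa[:, None, :] - Wb[None, :, :], axis=-1)
    _, perm = linear_sum_assignment(cost)  # perm[i]: Wb row matched to Wa row i
    return 0.5 * (Wa + Wb[perm])

# toy usage: Wb's neurons are a permuted, slightly noisy copy of Wa's
rng = np.random.default_rng(1)
Wa = rng.standard_normal((4, 8))
Wb = Wa[[2, 0, 3, 1]] + 0.01 * rng.standard_normal((4, 8))
print(np.allclose(fuse_layers(Wa, Wb), Wa, atol=0.1))  # True: alignment undoes the permutation
```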
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.