Intelligence, physics and information -- the tradeoff between accuracy
and simplicity in machine learning
- URL: http://arxiv.org/abs/2001.03780v2
- Date: Mon, 20 Jan 2020 17:51:09 GMT
- Title: Intelligence, physics and information -- the tradeoff between accuracy
and simplicity in machine learning
- Authors: Tailin Wu
- Abstract summary: I believe that viewing intelligence in terms of its many integral aspects, together with a universal two-term tradeoff between task performance and complexity, provides two feasible perspectives.
In this thesis, I address several key questions in some of these aspects of intelligence and study the phase transitions in the two-term tradeoff.
- Score: 5.584060970507507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How can we enable machines to make sense of the world, and become better at
learning? To approach this goal, I believe viewing intelligence in terms of
many integral aspects, and also a universal two-term tradeoff between task
performance and complexity, provides two feasible perspectives. In this thesis,
I address several key questions in some aspects of intelligence, and study the
phase transitions in the two-term tradeoff, using strategies and tools from
physics and information. Firstly, how can we make learning models more
flexible and efficient, so that agents can learn quickly from fewer examples?
Inspired by how physicists model the world, we introduce a paradigm and an AI
Physicist agent for simultaneously learning many small specialized models
(theories) together with the domains in which they are accurate; these can then
be simplified, unified and stored, facilitating few-shot learning in a
continual way.
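As a rough illustration of this divide-and-conquer idea, here is a minimal sketch (not the thesis implementation: linear models stand in for neural-network theories, and a hard best-predictor assignment stands in for the paper's training objective):

```python
# Hedged sketch of the AI Physicist's divide-and-conquer idea: alternately
# (1) assign each example to the theory that predicts it best and
# (2) refit each theory only on its own domain.
import numpy as np

def fit_specialized_theories(X, y, n_theories=2, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Random initial parameters for each linear theory y ~ X @ w + b.
    W = rng.normal(size=(n_theories, X.shape[1]))
    b = rng.normal(size=n_theories)
    for _ in range(n_iters):
        # Domain assignment: each example goes to its most accurate theory.
        preds = X @ W.T + b                    # (n_samples, n_theories)
        errs = (preds - y[:, None]) ** 2
        domain = errs.argmin(axis=1)
        # Refit each theory on the examples it explains best.
        for k in range(n_theories):
            mask = domain == k
            if mask.sum() >= X.shape[1] + 1:
                A = np.hstack([X[mask], np.ones((mask.sum(), 1))])
                sol, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
                W[k], b[k] = sol[:-1], sol[-1]
    return W, b, domain

# Toy piecewise data: two regimes, each governed by a different linear law.
X = np.linspace(-2, 2, 200)[:, None]
y = np.where(X[:, 0] < 0, 3 * X[:, 0] + 1, -2 * X[:, 0] + 1)
W, b, domain = fit_specialized_theories(X, y)
```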
Secondly, for representation learning, when can we learn a good representation,
and how does learning depend on the structure of the dataset? We approach this
question by studying phase transitions when tuning the tradeoff hyperparameter.
In the information bottleneck, we theoretically show that these phase
transitions are predictable and reveal structure in the relationships between
the data, the model, the learned representation and the loss landscape.
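For concreteness, the two-term objective whose hyperparameter is tuned here is the standard information bottleneck Lagrangian, with β as the accuracy-simplicity tradeoff; the phase transitions occur at critical values of β:

```latex
% Information bottleneck: compress the input X into a representation Z
% while preserving information about the target Y.
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta\, I(Z;Y)
```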
Thirdly, how can agents discover causality from observations? We address part
of this question by introducing an algorithm that combines prediction with
minimizing information from the input, for exploratory causal discovery from
observational time series.
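To give the flavor of this combination on a toy example, the sketch below uses an L1 penalty on input coefficients as a crude stand-in for the information-minimization term (a Granger-style simplification, not the thesis algorithm):

```python
# Hedged sketch: exploratory causal discovery on a toy time series by
# combining prediction with a penalty that discourages using input
# information (here, an L1 penalty on the input coefficients).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, n = 500, 3
x = np.zeros((T, n))
for t in range(T - 1):                      # ground truth: x0 -> x1 -> x2
    x[t + 1, 0] = 0.9 * x[t, 0] + rng.normal(scale=0.5)
    x[t + 1, 1] = 0.8 * x[t, 0] + rng.normal(scale=0.5)
    x[t + 1, 2] = 0.8 * x[t, 1] + rng.normal(scale=0.5)

# For each target j, predict x_j(t+1) from all x_i(t) under sparsity
# pressure; surviving coefficients are the inferred causal parents.
parents = {}
for j in range(n):
    model = Lasso(alpha=0.05).fit(x[:-1], x[1:, j])
    parents[j] = np.flatnonzero(np.abs(model.coef_) > 0.1).tolist()
print(parents)  # expected roughly {0: [0], 1: [0], 2: [1]}
```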
Fourthly, to make models more robust to label noise, we introduce Rank
Pruning, a robust algorithm for classification with noisy labels. I believe
that, building on the work of this thesis, we will be one step closer to
enabling more intelligent machines that can make sense of the world.
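To give the flavor of pruning by rank of confidence, here is a simplified sketch (an illustration only: the actual Rank Pruning algorithm estimates the noise rates from the data, whereas a fixed prune_frac is assumed here):

```python
# Hedged sketch of the idea behind Rank Pruning for binary classification
# with noisy labels: rank examples by a preliminary classifier's confidence,
# prune those whose given label disagrees most with the model, then retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_prune_fit(X, y_noisy, prune_frac=0.2):
    # Step 1: fit a preliminary model on the noisy labels.
    prelim = LogisticRegression().fit(X, y_noisy)
    p1 = prelim.predict_proba(X)[:, 1]          # P(label = 1 | x)
    # Step 2: confidence that each given label is correct.
    conf = np.where(y_noisy == 1, p1, 1.0 - p1)
    # Step 3: prune the least trustworthy fraction of each class.
    keep = np.ones(len(y_noisy), dtype=bool)
    for c in (0, 1):
        idx = np.where(y_noisy == c)[0]
        n_prune = int(prune_frac * len(idx))
        if n_prune:
            worst = idx[np.argsort(conf[idx])[:n_prune]]
            keep[worst] = False
    # Step 4: retrain on the pruned, presumably cleaner, subset.
    return LogisticRegression().fit(X[keep], y_noisy[keep])

# Toy usage: linearly separable data with 20% of the labels flipped.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
flip = rng.random(400) < 0.2
clf = rank_prune_fit(X, np.where(flip, 1 - y, y))
```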
Related papers
- A Dual Approach to Imitation Learning from Observations with Offline Datasets [19.856363985916644]
Demonstrations are an effective alternative to task specification for learning agents in settings where designing a reward function is difficult.
We derive DILO, an algorithm that can leverage arbitrary suboptimal data to learn imitating policies without requiring expert actions.
arXiv Detail & Related papers (2024-06-13T04:39:42Z)
- Breaking the Curse of Dimensionality in Deep Neural Networks by Learning Invariant Representations [1.9580473532948401]
This thesis explores the theoretical foundations of deep learning by studying the relationship between the architecture of these models and the inherent structures found within the data they process.
We ask what drives the efficacy of deep learning algorithms and allows them to beat the so-called curse of dimensionality.
Our methodology takes an empirical approach to deep learning, combining experimental studies with physics-inspired toy models.
arXiv Detail & Related papers (2023-10-24T19:50:41Z)
- Accelerating exploration and representation learning with offline pre-training [52.6912479800592]
We show that exploration and representation learning can be improved by separately learning two different models from a single offline dataset.
We show that learning a state representation using noise-contrastive estimation and a model of auxiliary reward can significantly improve the sample efficiency on the challenging NetHack benchmark.
arXiv Detail & Related papers (2023-03-31T18:03:30Z)
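The noise-contrastive estimation mentioned in the entry above can be illustrated with a generic InfoNCE loss; the sketch below is a toy stand-in, not that paper's implementation:

```python
# Toy illustration of noise-contrastive representation learning: embeddings
# of matching (state, next-state) pairs are pulled together and contrasted
# against the other pairs in the batch.
import numpy as np

def info_nce_loss(z_anchor, z_positive, temperature=0.1):
    """InfoNCE: each anchor must identify its own positive among the batch."""
    z_a = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    z_p = z_positive / np.linalg.norm(z_positive, axis=1, keepdims=True)
    logits = z_a @ z_p.T / temperature             # (batch, batch) similarity
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))          # positives on the diagonal

rng = np.random.default_rng(0)
states = rng.normal(size=(8, 16))
positives = states + 0.05 * rng.normal(size=(8, 16))   # augmented views
print(info_nce_loss(states, positives))
```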
- Ignorance is Bliss: Robust Control via Information Gating [60.17644038829572]
Informational parsimony provides a useful inductive bias for learning representations that achieve better generalization by being robust to noise and spurious correlations.
We propose information gating as a way to learn parsimonious representations that identify the minimal information required for a task.
arXiv Detail & Related papers (2023-03-10T18:31:50Z)
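The gating idea in the entry above can be illustrated by learning per-feature gates that trade task loss against a penalty on how much information is let through; the linear task and L1-style gate penalty below are assumptions for illustration, not the paper's method:

```python
# Hedged sketch of information gating: per-feature gates in [0, 1] multiply
# the input; training trades prediction error against a penalty on open gates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2]          # only features 0 and 2 matter

w = np.zeros(5)                             # task weights
s = np.zeros(5)                             # gate logits; gate = sigmoid(s)
lr, lam = 0.1, 0.05
for _ in range(2000):
    g = 1.0 / (1.0 + np.exp(-s))
    err = (X * g) @ w - y
    grad_w = (X * g).T @ err / len(y)
    grad_g = (X * w).T @ err / len(y) + lam  # task gradient + gate penalty
    s -= lr * grad_g * g * (1 - g)           # chain rule through the sigmoid
    w -= lr * grad_w
print(np.round(1.0 / (1.0 + np.exp(-s)), 2))  # irrelevant gates shrink to ~0
```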
- Hindsight States: Blending Sim and Real Task Elements for Efficient Reinforcement Learning [61.3506230781327]
In robotics, one approach to generating training data builds on simulations based on dynamics models derived from first principles.
Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently.
We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm.
arXiv Detail & Related papers (2023-03-03T21:55:04Z)
- Understanding Self-Predictive Learning for Reinforcement Learning [61.62067048348786]
We study the learning dynamics of self-predictive learning for reinforcement learning.
We propose a novel self-predictive algorithm that learns two representations simultaneously.
arXiv Detail & Related papers (2022-12-06T20:43:37Z)
- Solving Reasoning Tasks with a Slot Transformer [7.966351917016229]
We present the Slot Transformer, an architecture that leverages slot attention, transformers and iterative variational inference on video scene data to infer representations.
We evaluate the effectiveness of key components of the architecture, the model's representational capacity and its ability to predict from incomplete input.
arXiv Detail & Related papers (2022-10-20T16:40:30Z)
- Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability [0.0]
We study the tradeoff between model performance and explainability of machine learning algorithms.
We find that the tradeoff is much less gradual in the end user's perception.
Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.
arXiv Detail & Related papers (2022-06-20T08:32:38Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gain in the presence of noisy and limited labels.
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
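The single-stage online distillation in the entry above can be illustrated by the mutual distillation term two peer models would exchange during joint training; the symmetric KL form and temperature below are generic choices, not necessarily the paper's exact loss:

```python
# Hedged sketch of online knowledge distillation in the spirit of
# Distill-on-the-Go: two peers train together, and each adds a KL term
# pulling its softened predictions toward the other's. Loss computation
# only; the architectures and contrastive objective are omitted.
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - (z / T).max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mutual_distillation_loss(logits_a, logits_b, T=4.0):
    """Symmetric KL between the two peers' temperature-softened outputs."""
    p_a, p_b = softmax(logits_a, T), softmax(logits_b, T)
    kl_ab = np.sum(p_a * (np.log(p_a) - np.log(p_b)), axis=1).mean()
    kl_ba = np.sum(p_b * (np.log(p_b) - np.log(p_a)), axis=1).mean()
    return (T ** 2) * (kl_ab + kl_ba) / 2   # T^2 keeps gradients well-scaled

rng = np.random.default_rng(0)
print(mutual_distillation_loss(rng.normal(size=(8, 10)),
                               rng.normal(size=(8, 10))))
```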
- Laplacian Denoising Autoencoder [114.21219514831343]
We propose to learn data representations with a novel type of denoising autoencoder.
The noisy input data is generated by corrupting latent clean data in the gradient domain.
Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
arXiv Detail & Related papers (2020-03-30T16:52:39Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching (supporting trust calibration and enabling rich forms of teaching feedback) and potential drawbacks (an anchoring effect on model judgment and added cognitive workload).
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.